CN114788288A - Transform information encoding/decoding method and apparatus, and bit stream storage medium


Info

Publication number: CN114788288A
Application number: CN202080086337.2A
Authority: CN (China)
Prior art keywords: block, transform, information, target block, prediction
Legal status: Pending
Other languages: Chinese (zh)
Inventors: Jung-Won Kang (姜晶媛), Sung-Chang Lim (林成昶), Jin-Ho Lee (李镇浩), Ha-Hyun Lee (李河贤)
Current Assignee: Electronics and Telecommunications Research Institute (ETRI)
Original Assignee: Electronics and Telecommunications Research Institute (ETRI)
Application filed by Electronics and Telecommunications Research Institute (ETRI)
Priority claimed from PCT/KR2020/013879 (WO2021071342A1)
Publication of CN114788288A

Classifications

    All classifications fall under H04N 19/00 (methods or arrangements for coding, decoding, compressing or decompressing digital video signals):
    • H04N19/61: transform coding in combination with predictive coding
    • H04N19/119: adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/12: selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/124: quantisation
    • H04N19/132: sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/184: adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N19/503: predictive coding involving temporal prediction
    • H04N19/513: processing of motion vectors (motion estimation or motion compensation)
    • H04N19/593: predictive coding involving spatial prediction techniques
    • H04N19/60: transform coding
    • H04N19/70: syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • Physics & Mathematics
  • Discrete Mathematics
  • General Physics & Mathematics
  • Compression Or Coding Systems Of Tv Signals

Abstract

Disclosed are a transform information encoding/decoding method and apparatus, and a storage medium. The transform includes a primary transform and a secondary transform. The primary transform type and the secondary transform type are each selected from among multiple different types. The target block is transformed according to the selected primary transform type and secondary transform type. The primary transform type and the secondary transform type to be applied to the target block are signaled using transform information, such as a primary transform method index and a secondary transform method index.

Description

Transform information encoding/decoding method and apparatus, and bit stream storage medium
The present application claims the benefit of Korean Patent Application Nos. 10-2019-0126242, 10-2019-0173708 (filed on December 24, 2019), 10-2020-0004424 (filed on January 13, 2020), and 10-2020-0131168 (filed on October 12, 2020), which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure generally relates to a method, apparatus, and storage medium for image encoding/decoding. More particularly, the present disclosure relates to a method, apparatus, and storage medium for transform information encoding/decoding.
Background
With the continuous development of the information and communication industry, broadcasting services supporting High Definition (HD) resolution have been popularized throughout the world. With this popularity, a large number of users have become accustomed to high resolution and high definition images and/or videos.
In order to meet users' demand for high definition, many institutions have accelerated the development of next-generation imaging devices. In addition to High-Definition TV (HDTV) and Full High-Definition (FHD) TV, user interest in Ultra-High-Definition (UHD) TV, whose resolution is more than four times that of FHD TV, has also increased. With this increase in interest, image encoding/decoding techniques for images having higher resolution and higher definition are required.
Various image compression techniques exist, such as inter prediction, intra prediction, transform and quantization, and entropy coding.
The inter prediction technique is a technique for predicting the values of pixels included in the current picture using a picture before and/or a picture after the current picture. The intra prediction technique is a technique for predicting the values of pixels included in the current picture using information about pixels in the current picture. The transform and quantization techniques may be techniques for compacting the energy of the residual signal. The entropy coding technique is a technique for assigning short codewords to frequently occurring values and long codewords to less frequently occurring values.
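As a toy illustration only (not the entropy coder of any particular standard), the following Python sketch shows the codeword-assignment idea: frequent symbols receive short codewords, and rare symbols receive long ones. The code table here is hypothetical.

```python
# Hypothetical prefix-free code table: shorter codewords for more
# frequent symbols, longer codewords for rarer ones.
code_table = {"0": "0", "1": "10", "2": "110", "3": "111"}

def encode(symbols):
    # Concatenate the variable-length codewords for a symbol sequence.
    return "".join(code_table[s] for s in symbols)

# A skewed source where "0" dominates stays short: 9 bits here,
# versus 12 bits with a fixed 2-bit code for 6 symbols.
print(encode(["0", "0", "1", "0", "2", "0"]))  # '001001100'
```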
By using these image compression techniques, data on an image can be efficiently compressed, transmitted, and stored.
Disclosure of Invention
Technical problem
Embodiments are directed to providing an apparatus and method for encoding/decoding transform information.
Embodiments are directed to providing an apparatus and method for encoding/decoding a target block.
Technical Solution
According to an aspect, there is provided a decoding method comprising: determining an inverse transform method for the target block; and performing an inverse transform on the target block using the inverse transform method.
The inverse transform may include a secondary inverse transform and a primary inverse transform.
A secondary inverse transform method corresponding to the secondary inverse transform may be determined based on the encoding parameter for the target block.
The encoding parameters may include information about a tree of the target block.
The encoding parameter may be a tree type.
The secondary inverse transform method may be one of a variety of methods.
The secondary inverse transform method index may indicate the secondary inverse transform method.
The secondary inverse transform method index may be included in a bitstream when the encoding parameter has a specific value.
When the secondary inverse transform method index is not included in the bitstream, the secondary inverse transform method index may be derived as a first value indicating that a secondary inverse transform is not applied.
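A minimal decoding-side sketch of the signaling rule above, in Python. The tree-type values, the truncated-unary binarization, and the function names are assumptions for illustration, not the actual syntax of the disclosure.

```python
NO_SECONDARY_TRANSFORM = 0  # the "first value": no secondary inverse transform

def read_secondary_transform_index(bits, tree_type):
    # "bits" is an iterator of 0/1 values; "tree_type" stands in for the
    # coding parameter (information about the tree of the target block).
    if tree_type != "SINGLE_TREE":       # the "specific value" condition (assumed)
        return NO_SECONDARY_TRANSFORM    # index absent: derived, not parsed
    index = 0                            # truncated-unary read (assumed binarization)
    while index < 2 and next(bits) == 1:
        index += 1
    return index

print(read_secondary_transform_index(iter([1, 0]), "SINGLE_TREE"))   # 1
print(read_secondary_transform_index(iter([]), "DUAL_TREE_CHROMA"))  # 0
```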
The target block may be partitioned into a plurality of sub-blocks by intra sub-partitioning.
The inverse transform may include a secondary inverse transform and a primary inverse transform.
The same secondary inverse transform method and the same primary inverse transform method may be applied to the plurality of sub-blocks.
Whether a secondary inverse transform is to be performed on the plurality of sub-blocks may be determined based on the encoding parameters for the target block.
The encoding parameters may include information about a tree of the target block.
According to another aspect, there is provided an encoding method comprising: determining a transform method for the target block; and performing a transform on the target block using the transform method.
The transforms may include a primary transform and a secondary transform.
A secondary transform method corresponding to the secondary transform may depend on encoding parameters for the target block.
The encoding parameters may include information about a tree of the target block.
The encoding parameter may be a tree type.
The secondary transformation method may be one of a variety of methods.
The secondary transform method index may indicate the secondary transform method.
The secondary transform method index may be included in a bitstream when the encoding parameter has a specific value.
The target block may be partitioned into a plurality of sub-blocks by intra sub-partitioning.
The transforms may include a primary transform and a secondary transform.
The same primary transform method and the same secondary transform method may be applied to the plurality of subblocks.
Whether a secondary transform is to be performed on the plurality of sub-blocks may depend on encoding parameters for the target block.
The encoding parameters may include information about a tree of the target block.
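A minimal sketch of the encoder-side ISP behavior described above, assuming hypothetical transform kernels: one primary transform method and one secondary transform method are selected once per target block and applied identically to every sub-block, so the methods need to be signaled only once.

```python
def primary_transform(block, method):
    return block   # placeholder for a primary transform kernel (e.g., DCT-II)

def secondary_transform(coeffs, method):
    return coeffs  # placeholder for a secondary (non-separable) transform

def transform_isp_block(sub_blocks, primary_method, secondary_method):
    # The same (primary_method, secondary_method) pair is reused for all
    # sub-blocks produced by intra sub-partitioning.
    return [secondary_transform(primary_transform(b, primary_method),
                                secondary_method)
            for b in sub_blocks]
```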
According to another aspect, there is provided a storage medium storing a bitstream generated by the encoding method.
According to yet another aspect, there is provided a computer-readable storage medium storing a bitstream for decoding an image, wherein the bitstream includes encoding information on a target block, decoding of the target block is performed using the encoding information, an inverse transform method for the target block is determined, and an inverse transform is performed on the target block using the inverse transform method.
The inverse transform may include a secondary inverse transform and a primary inverse transform.
A secondary inverse transform method corresponding to the secondary inverse transform may be determined based on the encoding parameter for the target block.
The encoding parameters may include information about a tree of the target block.
The encoding parameter may be a tree type.
The secondary inverse transform method may be one of a variety of methods.
The secondary inverse transform method index may indicate the secondary inverse transform method.
The secondary inverse transform method index may be included in a bitstream when the encoding parameter has a specific value.
When the secondary inverse transform method index is not included in the bitstream, the secondary inverse transform method index may be derived as a first value indicating that a secondary inverse transform is not applied.
The target block may be partitioned into a plurality of sub-blocks by intra sub-partitioning.
The inverse transform may include a secondary inverse transform and a primary inverse transform.
The same secondary inverse transform method and the same primary inverse transform method may be applied to the plurality of sub-blocks.
Advantageous effects
An apparatus and method for encoding/decoding transform information are provided.
An apparatus and method for encoding/decoding a target block are provided.
Drawings
Fig. 1 is a block diagram showing a configuration of an embodiment of an encoding apparatus to which the present disclosure is applied;
fig. 2 is a block diagram showing a configuration of an embodiment of a decoding apparatus to which the present disclosure is applied;
Fig. 3 is a diagram schematically showing a partition structure of an image when the image is encoded and decoded;
fig. 4 is a diagram illustrating a form of a prediction unit that a coding unit can include;
fig. 5 is a diagram showing a form of a transform unit that can be included in an encoding unit;
FIG. 6 illustrates partitioning of a block according to an example;
FIG. 7 is a diagram for explaining an embodiment of an intra prediction process;
fig. 8 is a diagram illustrating reference samples used in an intra prediction process;
FIG. 9 is a diagram for explaining an embodiment of an inter prediction process;
FIG. 10 illustrates spatial candidates according to an embodiment;
fig. 11 illustrates an order of adding motion information of spatial candidates to a merge list according to an embodiment;
FIG. 12 illustrates a transform and quantization process according to an example;
FIG. 13 illustrates a diagonal scan according to an example;
fig. 14 shows a horizontal scan according to an example;
FIG. 15 shows a vertical scan according to an example;
fig. 16 is a configuration diagram of an encoding device according to an embodiment;
fig. 17 is a configuration diagram of a decoding apparatus according to an embodiment;
FIG. 18 illustrates an ISP for partitioning a target block into two sub-blocks, according to an example;
fig. 19 illustrates an ISP for partitioning a target block into four sub-blocks, according to an example;
FIG. 20 is a flow diagram of an encoding method according to an embodiment; and
fig. 21 is a flowchart of a decoding method according to an embodiment.
Detailed Description
The present invention may be variously modified and may have various embodiments, and specific embodiments will be described in detail below with reference to the accompanying drawings. It should be understood, however, that these embodiments are not intended to limit the invention to the particular forms disclosed, but to include all changes, equivalents, and modifications encompassed within the spirit and scope of the invention.
The following exemplary embodiments will be described in detail with reference to the accompanying drawings, which show specific embodiments. These embodiments are described in sufficient detail that those of ordinary skill in the art to which the present disclosure pertains can easily implement them. It should be noted that the various embodiments differ from one another, but need not be mutually exclusive. For example, particular shapes, structures, and characteristics described in connection with one embodiment may be implemented in another embodiment without departing from the spirit and scope of the embodiments. Further, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the embodiments. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the exemplary embodiments is defined only by the appended claims and equivalents thereof, when properly interpreted.
In the drawings, like numerals are used to designate the same or similar functions in various respects. The shapes, sizes, and the like of components in the drawings may be exaggerated for clarity of the description.
Terms such as "first" and "second" may be used to describe various components, but the components are not limited by the terms. The terms are only used to distinguish one component from another component. For example, a first component may be termed a second component without departing from the scope of the present description. Similarly, the second component may be referred to as the first component. The term "and/or" may include a combination of multiple related items or any one of multiple related items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, the two elements may be directly connected or coupled to each other, or intervening elements may be present between them. On the other hand, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements between the two.
Further, components described in the embodiments are independently illustrated to indicate different feature functions, but it does not mean that each component is formed of a separate piece of hardware or software. That is, a plurality of components are individually arranged and included for convenience of description. For example, at least two of the plurality of components may be integrated into a single component. Instead, one component may be divided into a plurality of components. Embodiments in which a plurality of components are integrated or embodiments in which some components are separated are included in the scope of the present specification as long as they do not depart from the essence of the present specification.
Furthermore, in exemplary embodiments, the expression that a component "includes" a specific component means that another component may be included within the scope of the practical or technical spirit of the exemplary embodiments, but does not exclude the presence of components other than the specific component.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Singular references include plural references unless the context specifically indicates the contrary. In this specification, it should be understood that terms such as "including" or "having" are only intended to indicate the presence of features, numbers, steps, operations, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, operations, components, parts, or combinations thereof, will be present or added. That is, in the present invention, the expression that a component is described as "including" a specific component means that another component may be included in the scope of the practice of the present invention or the technical spirit of the present invention, but does not exclude the presence of components other than the specific component.
Some components of the present invention are not essential components for performing essential functions but may be optional components only for improving performance. An embodiment may be implemented using only the necessary components to implement the essence of the embodiment. For example, a structure including only necessary components (not including only optional components for improving performance) is also included in the scope of the embodiments.
The embodiments will be described in detail below with reference to the accompanying drawings so that those skilled in the art to which the embodiments pertain can easily implement the embodiments. In the following description of the embodiments, a detailed description of known functions or configurations incorporated herein will be omitted. In addition, the same reference numerals are used to designate the same components throughout the drawings, and the repetitive description of the same components will be omitted.
Hereinafter, "image" may represent a single picture constituting a video, or may represent the video itself. For example, "encoding and/or decoding of an image" may mean "encoding and/or decoding of a video", and may also mean "encoding and/or decoding of any one of a plurality of images constituting a video".
Hereinafter, the terms "video" and "moving picture" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the target image may be an encoding target image that is a target to be encoded and/or a decoding target image that is a target to be decoded. Further, the target image may be an input image input to the encoding apparatus or an input image input to the decoding apparatus. Also, the target image may be a current image, i.e., a target that is currently to be encoded and/or decoded. For example, the terms "target image" and "current image" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "image", "picture", "frame", and "screen" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the target block may be an encoding target block (i.e., a target to be encoded) and/or a decoding target block (i.e., a target to be decoded). Furthermore, the target block may be a current block, i.e. a target that is currently to be encoded and/or decoded. Here, the terms "target block" and "current block" may be used to have the same meaning and may be used interchangeably with each other. The current block may represent an encoding target block that is an encoding target during encoding and/or a decoding target block that is a decoding target during decoding. Further, the current block may be at least one of an encoding block, a prediction block, a residual block, and a transform block.
Hereinafter, the terms "block" and "unit" may be used to have the same meaning and may be used interchangeably with each other. Alternatively, a "block" may represent a particular unit.
Hereinafter, the terms "region" and "fragment" are used interchangeably with each other.
In the following embodiments, particular information, data, flags, indices, elements, and attributes may have their respective values. A value of "0" corresponding to each of the information, data, flags, indices, elements, and attributes may indicate false, logical false, or a first predefined value. In other words, the values "0", false, logical false, and the first predefined value may be used interchangeably with each other. A value of "1" corresponding to each of the information, data, flags, indices, elements, and attributes may indicate true, logically true, or a second predefined value. In other words, the values "1", true, logically true, and second predefined values may be used interchangeably with each other.
When a variable such as i or j is used to indicate a row, column, or index, the value i may be an integer 0 or greater than 0, or may be an integer 1 or greater than 1. In other words, in embodiments, each of the rows, columns, and indices may count from 0, or may count from 1.
In embodiments, the expression "one or more" or "at least one" may mean "a plurality". The expressions "one or more" and "at least one" may be used interchangeably with "a plurality".
Next, terms to be used in the embodiments will be described.
An encoder: the encoder represents an apparatus for performing encoding. That is, the encoder may represent an encoding apparatus.
A decoder: the decoder represents means for performing decoding. That is, the decoder may represent a decoding apparatus.
A unit: the unit may represent a unit of image encoding and decoding. The terms "unit" and "block" may be used to have the same meaning and may be used interchangeably with each other.
A unit may be an M × N array of samples. M and N may each be a positive integer. A unit may generally represent a two-dimensional array of samples.
During the encoding and decoding of an image, a "unit" may be a region generated by partitioning the image. In other words, a "unit" may be a region designated in one image. A single image may be partitioned into multiple units. Alternatively, one image may be partitioned into sub-parts, and a unit may represent each partitioned sub-part when encoding or decoding is performed on that sub-part.
During the encoding and decoding of an image, predefined processing may be performed on each unit according to the type of the unit.
Unit types may be classified into macro-units, Coding Units (CUs), Prediction Units (PUs), residual units, Transform Units (TUs), etc., according to functions. Alternatively, a unit may represent a block, a macroblock, a coding tree unit, a coding tree block, a coding unit, a coding block, a prediction unit, a prediction block, a residual unit, a residual block, a transform unit, a transform block, and the like, according to functions. For example, a target unit that is a target of encoding and/or decoding may be at least one of a CU, a PU, a residual unit, and a TU.
The term "unit" may denote information including a luminance (luma) component block, a chrominance (chroma) component block corresponding to the luminance component block, and syntax elements for the respective blocks, such that the unit is designated to be distinguished from the blocks.
The size and shape of the cells can be implemented differently. Further, the cells may have any of a variety of sizes and shapes. Specifically, the shape of the cell may include not only a square but also geometric shapes such as a rectangle, a trapezoid, a triangle, and a pentagon, which may be represented in two dimensions (2D).
In addition, the unit information may include one or more of a type of the unit, a size of the unit, a depth of the unit, an encoding order of the unit, a decoding order of the unit, and the like. For example, the type of the unit may indicate one of a CU, a PU, a residual unit, and a TU.
A unit may be partitioned into sub-units, each sub-unit having a size smaller than the size of the associated unit.
Depth: the depth may represent the degree to which a unit is partitioned. Further, the depth of a unit may indicate the level at which the corresponding unit exists when units are represented in a tree structure.
Unit partition information may include a depth indicating the depth of the unit. The depth may indicate the number of times the unit has been partitioned and/or the degree to which the unit is partitioned.
In a tree structure, the depth of the root node may be considered the smallest, and the depth of a leaf node the largest. The root node may be the highest (top) node, and a leaf node may be the lowest node.
A single unit may be hierarchically partitioned into a plurality of sub-units while having depth information based on a tree structure. In other words, a unit and a sub-unit generated by partitioning the unit may correspond to a node and a child node of that node, respectively. Each partitioned sub-unit may have a unit depth. Since the depth indicates the number of times the unit has been partitioned and/or the degree to which the unit is partitioned, the partition information of a sub-unit may include information about the size of the sub-unit.
In a tree structure, the top node may correspond to the initial node before partitioning. The top node may be referred to as the "root node". Further, the root node may have the minimum depth value. Here, the depth of the top node may be level "0".
A node with a depth of level "1" may represent a unit generated when the initial unit is partitioned once. A node with a depth of level "2" may represent a unit generated when the initial unit is partitioned twice.
A leaf node with a depth of level "n" may represent a unit generated when the initial unit has been partitioned n times.
A leaf node may be the bottom node, which cannot be partitioned further. The depth of a leaf node may be the maximum level. For example, the predefined value of the maximum level may be 3.
QT depth may represent a depth for a quad partition. BT depth may represent a depth for a binary partition. TT depth may represent a depth for a ternary partition.
Sample: a sample may be a base unit constituting a block. According to the bit depth (Bd), a sample may be represented by a value from 0 to 2^Bd - 1 (e.g., 0 to 255 when Bd is 8).
The samples may be pixels or pixel values.
In the following, the terms "pixel" and "sample" may be used with the same meaning and may be used interchangeably with each other.
Coding Tree Unit (CTU): a CTU may be composed of a single luma component coding tree block (i.e., a Y coding tree block) and two chroma component coding tree blocks (i.e., a Cb coding tree block and a Cr coding tree block) related to the luma component coding tree block. In addition, the CTU may represent information including the above-described blocks and syntax elements for each block.
Each Coding Tree Unit (CTU) may be partitioned using one or more partitioning methods, such as quadtree (QT), binary tree (BT), and ternary tree (TT), in order to configure sub-units, such as coding units, prediction units, and transform units. Here, "quadtree" denotes a quaternary tree. Further, each coding tree unit may be partitioned using a multi-type tree (MTT) that combines one or more of these partitioning methods.
"CTU" may be used as a term designating a pixel block as a processing unit in an image decoding and encoding process, such as in the case of partitioning an input image.
Coding Tree Block (CTB): "CTB" may be used as a term designating any one of a Y coding tree block, a Cb coding tree block, and a Cr coding tree block.
Adjacent block: an adjacent block (or neighbor block) may represent a block adjacent to the target block.
Hereinafter, the terms "adjacent block" and "neighbor block" may be used to have the same meaning and may be used interchangeably with each other.
An adjacent block may represent a reconstructed adjacent block.
Spatially adjacent blocks: the spatially neighboring block may be a block spatially adjacent to the target block. The neighboring blocks may include spatially neighboring blocks.
The target block and the spatially neighboring blocks may be comprised in the target picture.
Spatially neighboring blocks may represent blocks whose boundaries are in contact with the target block or blocks which are located within a predetermined distance from the target block.
The spatially neighboring blocks may represent blocks adjacent to the vertex of the target block. Here, the blocks adjacent to the vertex of the target block may represent blocks vertically adjacent to an adjacent block horizontally adjacent to the target block or blocks horizontally adjacent to an adjacent block vertically adjacent to the target block.
Temporal neighboring blocks: the temporally adjacent block may be a block temporally adjacent to the target block. The neighboring blocks may include temporally neighboring blocks.
The temporally adjacent blocks may comprise co-located blocks (col blocks).
The col block may be a block in a previously reconstructed co-located picture (col picture). The location of the col block in the col picture may correspond to the location of the target block in the target picture. Alternatively, the location of the col block in the col picture may be equal to the location of the target block in the target picture. The col picture may be a picture included in the reference picture list.
The temporal neighboring blocks may be blocks temporally adjacent to spatially neighboring blocks of the target block.
Prediction mode: the prediction mode may be information indicating a mode in which encoding and/or decoding is performed for intra prediction or a mode in which encoding and/or decoding is performed for inter prediction.
A prediction unit: the prediction unit may be a basic unit for prediction such as inter prediction, intra prediction, inter compensation, intra compensation, and motion compensation.
A single prediction unit may be divided into multiple partitions or sub-prediction units of smaller size. The plurality of partitions may also be basic units in performing prediction or compensation. The partition generated by dividing the prediction unit may also be the prediction unit.
Prediction unit partitioning: the prediction unit partition may be a shape into which the prediction unit is divided.
Reconstructed neighboring unit: a reconstructed neighboring unit may be a unit that neighbors the target unit and has already been decoded and reconstructed.
A reconstructed neighboring unit may be a unit spatially adjacent to the target unit or temporally adjacent to the target unit.
A reconstructed spatially neighboring unit may be a unit that is included in the target picture and has already been reconstructed through encoding and/or decoding.
A reconstructed temporally neighboring unit may be a unit that is included in a reference image and has already been reconstructed through encoding and/or decoding. The location of the reconstructed temporally neighboring unit in the reference image may be the same as, or may correspond to, the location of the target unit in the target picture. Further, a reconstructed temporally neighboring unit may be a block neighboring the corresponding block in the reference image, where the location of the corresponding block in the reference image may correspond to the location of the target block in the target image. Here, the fact that the locations of blocks correspond to each other may mean that the locations are the same, that one block is included in another block, or that one block occupies a specific location in another block.
Sub-picture: a picture may be divided into one or more sub-pictures. A sub-picture may be composed of one or more parallel-block rows and one or more parallel-block columns.
A sub-picture may be a region having a square or rectangular shape in a picture. Further, a sub-picture may include one or more CTUs.
A single sub-picture may include one or more parallel blocks (tiles), one or more partitions (bricks), and/or one or more stripes (slices).
Parallel block: a parallel block may be a region having a square or rectangular shape in a picture.
A parallel block may comprise one or more CTUs.
A parallel block can be partitioned into one or more partitions.
Partition: a partition may represent one or more CTU rows in a parallel block.
A parallel block may be partitioned into one or more partitions. Each partition may include one or more rows of CTUs.
A parallel block that is not partitioned into two or more partitions may also represent a partition.
Stripe: a stripe may include one or more parallel blocks in a picture. Alternatively, a stripe may include one or more partitions in a parallel block.
Parameter set: the parameter set may correspond to header information in an internal structure of the bitstream.
The parameter set may include at least one of a Video Parameter Set (VPS), a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), an Adaptive Parameter Set (APS), a Decoding Parameter Set (DPS), and the like.
The information signaled by each parameter set may be applied to a picture referring to the corresponding parameter set. For example, information in the VPS may be applied to pictures that reference the VPS. Information in the SPS may be applied to pictures that reference the SPS. Information in the PPS may be applied to pictures that reference the PPS.
Each parameter set may refer to a higher parameter set. For example, a PPS may reference an SPS. SPS may refer to VPS.
Further, a parameter set may include parallel-block group information, slice header information, and parallel-block header information. A parallel-block group may be a group including a plurality of parallel blocks. Further, the meaning of "parallel block group" may be the same as that of "stripe".
Rate-distortion optimization: an encoding device may use rate-distortion optimization to provide high encoding efficiency by utilizing combinations of the size of a Coding Unit (CU), a prediction mode, the size of a Prediction Unit (PU), motion information, and the size of a Transform Unit (TU).
The rate-distortion optimization scheme may calculate rate-distortion costs for the respective combinations to select an optimal combination from the combinations. The rate-distortion cost may be calculated using the equation "D + λ R". In general, the combination that minimizes the rate-distortion cost can be selected as the optimal combination under the rate-distortion optimization scheme.
D may represent distortion. D may be the mean of the squares of the differences between the original transform coefficients and the reconstructed transform coefficients in a transform unit (i.e., the mean squared error).
R may represent the rate, i.e., a bit rate estimated using the relevant context information.
λ may represent a Lagrange multiplier. R may include not only coding parameter information, such as the prediction mode, motion information, and a coded block flag, but also the bits generated as a result of coding the transform coefficients.
The coding device may perform processes such as inter-and/or intra-prediction, transformation, quantization, entropy coding, inverse quantization (dequantization) and/or inverse transformation in order to calculate the exact D and R. These processes can add significant complexity to the encoding device.
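A minimal sketch of this selection loop: compute J = D + λ·R for each candidate combination and keep the minimum. The candidate structure and λ value used here are illustrative.

```python
def mse(orig, recon):
    # D: mean of squared differences between original and reconstruction.
    return sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)

def best_combination(candidates, orig, lam):
    # Each candidate stands for one combination of CU size, prediction mode,
    # PU/TU sizes, etc., with its reconstruction and rate in bits.
    return min(candidates,
               key=lambda c: mse(orig, c["recon"]) + lam * c["rate"])

candidates = [{"recon": [10, 12, 9], "rate": 20},
              {"recon": [11, 11, 11], "rate": 8}]
print(best_combination(candidates, orig=[10, 11, 10], lam=0.5))  # cheaper one wins
```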
Bit stream: the bitstream may represent a stream of bits including encoded image information.
Parsing: parsing may be the process of determining the value of a syntax element by performing entropy decoding on a bitstream. Alternatively, the term "parsing" may denote such entropy decoding itself.
Symbol: the symbol may be at least one of a syntax element, a coding parameter, and a transform coefficient of the encoding target unit and/or the decoding target unit. Further, the symbol may be a target of entropy encoding or a result of entropy decoding.
Reference picture: the reference picture may be an image that is referenced by a unit in order to perform inter prediction or motion compensation. Alternatively, the reference picture may be an image including a reference unit that is referred to by the target unit in order to perform inter prediction or motion compensation.
Hereinafter, the terms "reference picture" and "reference image" may be used to have the same meaning and may be used interchangeably with each other.
Reference picture list: the reference picture list may be a list including one or more reference pictures used for inter prediction or motion compensation.
The types of the reference picture list may include a combined list (LC), list 0 (L0), list 1 (L1), list 2 (L2), list 3 (L3), and the like.
For inter prediction, one or more reference picture lists may be used.
Inter prediction indicator: the inter prediction indicator may indicate an inter prediction direction for the target unit. The inter prediction may be one of unidirectional prediction and bidirectional prediction. Alternatively, the inter prediction indicator may represent the number of reference pictures used to generate the prediction unit of the target unit. Alternatively, the inter prediction indicator may represent the number of prediction blocks used for inter prediction or motion compensation of the target unit.
Prediction list utilization flag: the prediction list utilization flag may indicate whether to use at least one reference picture in a particular reference picture list to generate the prediction unit.
The inter prediction indicator may be derived using the prediction list utilization flag. Conversely, the prediction list utilization flag may be derived using the inter prediction indicator. For example, a case where the prediction list utilization flag indicates "0" (a first value) may indicate that, for the target unit, a prediction block is not generated using a reference picture in the corresponding reference picture list. A case where the prediction list utilization flag indicates "1" (a second value) may indicate that, for the target unit, a prediction unit is generated using the corresponding reference picture list.
Reference picture index: the reference picture index may be an index indicating a specific reference picture in the reference picture list.
Picture Order Count (POC): the POC value of a picture may represent an order in which the corresponding picture is displayed.
Motion Vector (MV): the motion vector may be a 2D vector for inter prediction or motion compensation. The motion vector may represent an offset between the target image and the reference image.
For example, an MV may be represented in a form such as (mv_x, mv_y), where mv_x may indicate the horizontal component and mv_y may indicate the vertical component.
Search range: the search range may be a 2D region in which a search for an MV is performed during inter prediction. For example, the size of the search range may be M × N, where M and N may each be a positive integer.
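For illustration, a brute-force motion search over the search range defined above, using the sum of absolute differences (SAD) as the matching cost; practical encoders use faster search patterns and sub-pel refinement. All names here are illustrative.

```python
def sad(cur, ref, bx, by, rx, ry, w, h):
    # Sum of absolute differences between a w x h block of the current
    # picture at (bx, by) and a candidate region of the reference at (rx, ry).
    return sum(abs(cur[by + j][bx + i] - ref[ry + j][rx + i])
               for j in range(h) for i in range(w))

def full_search(cur, ref, bx, by, w, h, search):
    best = (0, 0, float("inf"))  # (mv_x, mv_y, cost)
    for mv_y in range(-search, search + 1):
        for mv_x in range(-search, search + 1):
            rx, ry = bx + mv_x, by + mv_y
            if 0 <= rx <= len(ref[0]) - w and 0 <= ry <= len(ref) - h:
                cost = sad(cur, ref, bx, by, rx, ry, w, h)
                if cost < best[2]:
                    best = (mv_x, mv_y, cost)
    return best  # the MV is the offset of the best-matching region
```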
Motion vector candidates: the motion vector candidate may be a block that is a prediction candidate when the motion vector is predicted or a motion vector of a block that is a prediction candidate.
The motion vector candidate may be comprised in a motion vector candidate list.
Motion vector candidate list: the motion vector candidate list may be a list configured using one or more motion vector candidates.
Motion vector candidate index: the motion vector candidate index may be an indicator for indicating a motion vector candidate in the motion vector candidate list. Alternatively, the motion vector candidate index may be an index of a motion vector predictor.
Motion information: the motion information may be information including at least one of a reference picture list, a reference picture, a motion vector candidate index, a merge candidate, and a merge index, in addition to a motion vector, a reference picture index, and an inter prediction indicator.
Merge candidate list: the merge candidate list may be a list configured using one or more merge candidates.
Merge candidate: a merge candidate may be a spatial merge candidate, a temporal merge candidate, a combined bi-predictive merge candidate, a history-based candidate, a candidate based on the average of two candidates, a zero-merge candidate, or the like. A merge candidate may include motion information such as prediction type information, a reference picture index for each list, a motion vector, a prediction list utilization flag, and an inter prediction indicator.
Merge index: the merge index may be an indicator for indicating a merge candidate in the merge candidate list.
The merging index may indicate a reconstruction unit used for deriving the merging candidate among reconstruction units spatially neighboring the target unit and reconstruction units temporally neighboring the target unit.
The merge index may indicate at least one of pieces of motion information of the merge candidates.
A transformation unit: the transform unit may be a basic unit of residual signal encoding and/or residual signal decoding, such as transform, inverse transform, quantization, inverse quantization, transform coefficient encoding, and transform coefficient decoding. A single transform unit may be partitioned into multiple sub-transform units having smaller sizes. Here, the transform may include one or more of a primary transform and a secondary transform, and the inverse transform may include one or more of a primary inverse transform and a secondary inverse transform.
Scaling: scaling may refer to a process of multiplying a transform coefficient level by a factor.
Transform coefficients may be generated as a result of scaling the transform coefficient level. Scaling may also be referred to as "inverse quantization".
Quantization Parameter (QP): the quantization parameter may be a value used to generate a transform coefficient level for a transform coefficient in quantization. Alternatively, the quantization parameter may also be a value used to generate a transform coefficient by scaling the transform coefficient level in inverse quantization. Alternatively, the quantization parameter may be a value mapped to a quantization step.
Delta quantization parameter: the delta quantization parameter may represent the difference between the quantization parameter of the target unit and a predicted quantization parameter.
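A toy sketch of the quantization/scaling relation described above. The QP-to-step mapping used here (the step doubles every 6 QP values) mirrors common practice but is illustrative, not the exact mapping of the disclosure or of any standard.

```python
def qstep(qp):
    # Assumed mapping: the quantization step doubles every 6 QP values.
    return 2 ** (qp / 6)

def quantize(coeff, qp):
    return round(coeff / qstep(qp))   # transform coefficient -> quantized level

def dequantize(level, qp):
    return level * qstep(qp)          # scaling: level -> reconstructed coefficient

level = quantize(100, 24)             # qstep(24) = 16, so level = 6
print(level, dequantize(level, 24))   # 6 96.0 (quantization is lossy)
```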
Scanning: scanning may represent a method of arranging the order of coefficients in a unit, a block, or a matrix. For example, a method of arranging a 2D array in the form of a one-dimensional (1D) array may be referred to as "scanning". Alternatively, a method of arranging a 1D array in the form of a 2D array may also be referred to as "scanning" or "inverse scanning".
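For illustration, an up-right diagonal scan that rearranges a 2D coefficient array into a 1D order, as defined above; horizontal and vertical scans would instead traverse row by row or column by column.

```python
def diagonal_scan(block):
    # Walk each anti-diagonal from bottom-left to top-right, producing a
    # 1D array from the 2D coefficient array.
    h, w = len(block), len(block[0])
    out = []
    for s in range(h + w - 1):
        for y in range(h - 1, -1, -1):
            x = s - y
            if 0 <= x < w:
                out.append(block[y][x])
    return out

print(diagonal_scan([[1, 2, 3],
                     [4, 5, 6],
                     [7, 8, 9]]))  # [1, 4, 2, 7, 5, 3, 8, 6, 9]
```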
Transform coefficients: the transform coefficient may be a coefficient value generated when the encoding apparatus performs the transform. Alternatively, the transform coefficient may be a coefficient value generated when the decoding apparatus performs at least one of entropy decoding and inverse quantization.
Quantized levels generated by applying quantization to transform coefficients or residual signals or quantized transform coefficient levels may also be included in the meaning of the term "transform coefficients".
Level of quantization: the level of quantization may be a value generated when the encoding apparatus performs quantization on the transform coefficient or the residual signal. Alternatively, the quantized level may be a value that is a target of inverse quantization when the decoding apparatus performs inverse quantization.
The quantized transform coefficient levels as a result of the transform and quantization may also be included in the meaning of quantized levels.
Non-zero transform coefficients: the non-zero transform coefficient may be a transform coefficient having a value other than 0 or may be a transform coefficient level having a value other than 0. Alternatively, the non-zero transform coefficient may be a transform coefficient whose value is not 0 in magnitude, or may be a transform coefficient level whose value is not 0 in magnitude.
Quantization matrix: the quantization matrix may be a matrix used in a quantization process or an inverse quantization process in order to improve subjective image quality or objective image quality of an image. The quantization matrix may also be referred to as a "scaling list".
Quantization matrix coefficients: the quantization matrix coefficient may be each element in the quantization matrix. The quantized matrix coefficients may also be referred to as "matrix coefficients".
A default matrix: the default matrix may be a quantization matrix predefined by the encoding device and the decoding device.
Non-default matrix: the non-default matrix may be a quantization matrix that is not predefined by the encoding device and the decoding device. The non-default matrix may represent a quantization matrix signaled by a user from an encoding device to a decoding device.
Most Probable Mode (MPM): an MPM may represent an intra prediction mode that is highly likely to be used for intra prediction for the target block.
The encoding apparatus and the decoding apparatus may determine one or more MPMs based on the encoding parameters related to the target block and the attributes of the entity related to the target block.
The encoding device and the decoding device may determine the one or more MPMs based on an intra prediction mode of the reference block. The reference block may include a plurality of reference blocks. The plurality of reference blocks may include a spatially adjacent block adjacent to a left side of the target block and a spatially adjacent block adjacent to an upper side of the target block. In other words, one or more different MPMs may be determined according to which intra prediction modes have been used for the reference block.
One or more MPMs may be determined in the same way in both the encoding device and the decoding device. That is, the encoding apparatus and the decoding apparatus may share the same MPM list including one or more MPMs.
List of MPMs: the MPM list may be a list including one or more MPMs. The number of one or more MPMs in the MPM list may be predefined.
MPM indicator: the MPM indicator may indicate an MPM to be used for intra prediction for the target block among one or more MPMs in the MPM list. For example, the MPM indicator may be an index for an MPM list.
Since the MPM list is determined in the same manner in both the encoding device and the decoding device, it may not be necessary to transmit the MPM list itself from the encoding device to the decoding device.
The MPM indicator may be signaled from the encoding device to the decoding device. Since the MPM indicator is signaled, the decoding apparatus may determine an MPM to be used for intra prediction for the target block among MPMs in the MPM list.
MPM usage indicator: the MPM usage indicator may indicate whether an MPM usage mode is to be used for prediction for the target block. The MPM use mode may be a mode that determines an MPM to be used for intra prediction for the target block using the MPM list.
The MPM usage indicator may be signaled from the encoding device to the decoding device.
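An illustrative construction of an MPM list from the intra prediction modes of the left and above reference blocks, as described above. The fill rules and mode numbers are hypothetical; the encoder and decoder would apply identical rules so that only the MPM indicator needs to be signaled.

```python
PLANAR, DC, HOR, VER = 0, 1, 18, 50   # example intra prediction mode numbers

def build_mpm_list(left_mode, above_mode, size=3):
    # Candidate order: reference-block modes first, then default modes.
    mpm = []
    for m in (left_mode, above_mode, PLANAR, DC, VER, HOR):
        if m is not None and m not in mpm:
            mpm.append(m)
        if len(mpm) == size:
            break
    return mpm

print(build_mpm_list(VER, VER))  # [50, 0, 1]: duplicates are skipped
```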
Signaling: "signaling" may mean that information is sent from an encoding device to a decoding device. Alternatively, "signaling" may mean that information is included in a bitstream or a recording medium. The information signaled by the encoding device may be used by the decoding device.
The encoding device may generate the encoded information by performing an encoding of the information to be signaled. The encoded information may be transmitted from the encoding device to the decoding device. The decoding apparatus may obtain the information by decoding the transmitted encoded information. Here, the encoding may be entropy encoding, and the decoding may be entropy decoding.
Statistical value: a variable, a coding parameter, a constant, etc. may have computable values. A statistical value may be a value generated by performing a calculation (operation) on the values of a specified target. For example, a statistical value may indicate one or more of the average, weighted sum, minimum, maximum, mode, median, and interpolated value of the values of a particular variable, a particular coding parameter, a particular constant, and the like.
Fig. 1 is a block diagram showing a configuration of an embodiment of an encoding apparatus to which the present disclosure is applied.
The encoding device 100 may be an encoder, a video encoding device, or an image encoding device. A video may comprise one or more images (pictures). The encoding apparatus 100 may sequentially encode one or more images of a video.
Referring to fig. 1, the encoding apparatus 100 includes an inter prediction unit 110, an intra prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization (inverse quantization) unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
The encoding apparatus 100 may perform encoding on a target image using an intra mode and/or an inter mode. In other words, the prediction mode of the target block may be one of an intra mode and an inter mode.
Hereinafter, the terms "intra mode", "intra prediction mode", "intra mode", and "intra prediction mode" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "inter mode", "inter prediction mode", "inter mode", and "inter prediction mode" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the term "image" may indicate only a partial image, or may indicate a block. Further, the processing of an "image" may indicate sequential processing of a plurality of blocks.
Further, the encoding apparatus 100 may generate a bitstream including encoded information by encoding the target image, and may output and store the generated bitstream. The generated bitstream may be stored in a computer-readable storage medium and may be streamed over a wired and/or wireless transmission medium.
When the intra mode is used as the prediction mode, the switch 115 may switch to the intra mode. When the inter mode is used as the prediction mode, the switch 115 may switch to the inter mode.
The encoding apparatus 100 may generate a prediction block of a target block. Also, after the prediction block has been generated, the encoding apparatus 100 may encode a residual block for the target block using a residual between the target block and the prediction block.
When the prediction mode is the intra mode, the intra prediction unit 120 may use pixels of a neighboring block, adjacent to the target block and previously encoded/decoded, as reference samples. The intra prediction unit 120 may perform spatial prediction on the target block using the reference samples, and may generate prediction samples for the target block via the spatial prediction. The prediction samples may represent samples in the prediction block.
The inter prediction unit 110 may include a motion prediction unit and a motion compensation unit.
When the prediction mode is the inter mode, the motion prediction unit may search a reference image for the region that best matches the target block during motion prediction, and may derive a motion vector between the target block and the found region. Here, the motion prediction unit may use a search range as the target region for the search.
The reference image may be stored in the reference picture buffer 190. More specifically, when encoding and/or decoding of a reference image has been processed, the encoded and/or decoded reference image may be stored in the reference picture buffer 190.
The reference picture buffer 190 may be a Decoded Picture Buffer (DPB) since decoded pictures are stored.
The motion compensation unit may generate a prediction block for the target block by performing motion compensation using the motion vector. Here, the motion vector may be a two-dimensional (2D) vector for inter prediction. Further, the motion vector may indicate an offset between the target image and the reference image.
When the motion vector has a value other than an integer, the motion prediction unit and the motion compensation unit may generate the prediction block by applying an interpolation filter to a partial region of the reference image. In order to perform inter prediction or motion compensation, which of the skip mode, the merge mode, the Advanced Motion Vector Prediction (AMVP) mode, and the current-picture reference mode is used as the method for predicting and compensating for the motion of a PU included in a CU may be determined on a CU basis, and inter prediction or motion compensation may be performed according to the determined mode.
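As a minimal sketch of the interpolation step just described, the code below performs motion compensation with a quarter-pel motion vector using a 2-tap bilinear filter; practical codecs apply longer filters (e.g., 8-tap), and all names here are illustrative assumptions.

```python
# Sketch (illustrative, not normative): motion compensation with a
# quarter-pel motion vector using a 2-tap bilinear interpolation filter.
# The caller must ensure the referenced region, including one extra row
# and column, lies inside `ref`.
def predict_block(ref, x, y, mv_x, mv_y, w, h):
    ix, fx = mv_x >> 2, mv_x & 3          # integer / fractional parts
    iy, fy = mv_y >> 2, mv_y & 3
    pred = [[0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            a = ref[y + iy + j][x + ix + i]
            b = ref[y + iy + j][x + ix + i + 1]
            c = ref[y + iy + j + 1][x + ix + i]
            d = ref[y + iy + j + 1][x + ix + i + 1]
            top = a * (4 - fx) + b * fx   # horizontal interpolation
            bot = c * (4 - fx) + d * fx
            pred[j][i] = (top * (4 - fy) + bot * fy + 8) >> 4  # scale + round
    return pred

ref = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]
print(predict_block(ref, 4, 4, 6, 2, 2, 2))  # mv = (1.5, 0.5) in pel units
```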
The subtractor 125 may generate a residual block, wherein the residual block is a difference between the target block and the prediction block. The residual block may also be referred to as a "residual signal".
The residual signal may be a difference between the original signal and the predicted signal. Alternatively, the residual signal may be a signal generated by transforming or quantizing the difference between the original signal and the prediction signal or a signal generated by transforming and quantizing the difference. The residual block may be a residual signal for a block unit.
The transform unit 130 may generate a transform coefficient by transforming the residual block, and may output the generated transform coefficient. Here, the transform coefficient may be a coefficient value generated by transforming the residual block.
The transformation unit 130 may use one of a plurality of predefined transformation methods when performing the transformation.
The plurality of predefined transform methods may include Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Karhunen-Loeve transform (KLT), and the like.
The transform method for transforming the residual block may be determined according to at least one of the encoding parameters for the target block and/or the neighboring blocks. For example, the transform method may be determined based on at least one of an inter prediction mode for the PU, an intra prediction mode for the PU, a size of the TU, and a shape of the TU. Alternatively, transform information indicating a transform method may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
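A hedged sketch of the first alternative above, i.e., deriving the transform method from coding parameters without signaling: the specific selection rule below is invented for illustration and is not the rule specified by this disclosure.

```python
# Hypothetical sketch: deriving a transform method from coding parameters.
# The rule here (DST for small intra TUs, DCT otherwise) is an assumption
# chosen only to illustrate parameter-based derivation.
def select_transform(pred_mode, tu_width, tu_height):
    if pred_mode == "intra" and tu_width <= 16 and tu_height <= 16:
        return "DST"   # e.g., a DST variant often suits intra residuals
    return "DCT"       # e.g., DCT as the default

print(select_transform("intra", 8, 8))    # DST
print(select_transform("inter", 32, 32))  # DCT
```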
When the transform skip mode is used, the transform unit 130 may omit an operation of transforming the residual block.
By performing quantization on the transform coefficients, quantized transform coefficient levels or quantized levels may be generated. Hereinafter, in the embodiment, each of the quantized transform coefficient level and the quantized level may also be referred to as a "transform coefficient".
The quantization unit 140 may generate quantized transform coefficient levels (i.e., quantized levels or quantized coefficients) by quantizing the transform coefficients according to a quantization parameter. The quantization unit 140 may output the generated quantized transform coefficient levels. In this case, the quantization unit 140 may quantize the transform coefficient using a quantization matrix.
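The following simplified sketch shows scalar quantization driven by a quantization parameter. The step-size model (the step roughly doubling for every +6 in QP, as in HEVC-style codecs) is an assumption for illustration; a quantization matrix would additionally scale the step per coefficient position.

```python
# Sketch (illustrative, not normative): scalar quantization by a QP-derived
# step size. The step model 2^((QP-4)/6) is an assumption for this example.
def quantize(coeffs, qp):
    step = 2 ** ((qp - 4) / 6.0)
    return [int(round(c / step)) for c in coeffs]

def dequantize(levels, qp):
    step = 2 ** ((qp - 4) / 6.0)
    return [lv * step for lv in levels]

levels = quantize([100.0, -37.5, 12.0, 0.8], qp=28)
print(levels)                   # quantized levels
print(dequantize(levels, 28))   # reconstruction shows quantization error
```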
The entropy encoding unit 150 may generate a bitstream by performing probability distribution-based entropy encoding based on the values calculated by the quantization unit 140 and/or the encoding parameter values calculated in the encoding process. The entropy encoding unit 150 may output the generated bitstream.
The entropy encoding unit 150 may perform entropy encoding on information about pixels of an image and information required for decoding the image. For example, information required to decode an image may include syntax elements and the like.
When entropy coding is applied, fewer bits may be allocated to more frequently occurring symbols and more bits may be allocated to less frequently occurring symbols. Since the symbols are represented by this allocation, the size of the bit string for the target symbol to be encoded can be reduced. Accordingly, the compression performance of video encoding can be improved by entropy encoding.
Also, for entropy encoding, the entropy encoding unit 150 may use an encoding method such as exponential-Golomb coding, context-adaptive variable-length coding (CAVLC), or context-adaptive binary arithmetic coding (CABAC). For example, the entropy encoding unit 150 may perform entropy encoding using a variable-length coding (VLC) table. For example, the entropy encoding unit 150 may derive a binarization method for a target symbol. Furthermore, the entropy encoding unit 150 may derive a probability model for a target symbol/bin. The entropy encoding unit 150 may then perform arithmetic coding using the derived binarization method, probability model, and context model.
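Of the methods just listed, 0th-order exponential-Golomb coding is simple enough to sketch. It realizes the bit-allocation principle described above by assigning shorter codewords to smaller (presumed more frequent) values.

```python
# Sketch: 0th-order exponential-Golomb coding of a non-negative value.
# The codeword is a unary prefix of zeros followed by the binary form of v+1.
def exp_golomb_encode(v):
    x = v + 1
    prefix = x.bit_length() - 1           # number of leading zeros
    return "0" * prefix + format(x, "b")

for v in range(5):
    print(v, exp_golomb_encode(v))
# 0 -> 1, 1 -> 010, 2 -> 011, 3 -> 00100, 4 -> 00101
```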
The entropy encoding unit 150 may transform the coefficients of a 2D block form into a 1D vector form through a transform coefficient scanning method in order to encode the quantized transform coefficient levels.
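The sketch below illustrates one such scanning method, an up-right diagonal scan that flattens a 2D block of quantized levels into a 1D list; the scan actually applied is codec-defined, and the block values are made up for illustration.

```python
# Sketch: up-right diagonal scan of a 2D block into a 1D list,
# visiting anti-diagonals from the low-frequency corner outward.
def diagonal_scan(block):
    h, w = len(block), len(block[0])
    order = []
    for s in range(h + w - 1):            # one pass per anti-diagonal
        for y in range(min(s, h - 1), max(-1, s - w), -1):
            order.append(block[y][s - y])
    return order

block = [[9, 4, 1, 0],
         [5, 2, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
print(diagonal_scan(block))
# [9, 5, 4, 1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```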
The encoding parameters may be information required for encoding and/or decoding. The encoding parameter may include information encoded by the encoding apparatus 100 and transmitted from the encoding apparatus 100 to the decoding apparatus, and may also include information that may be derived in an encoding or decoding process. For example, the information sent to the decoding device may include syntax elements.
The encoding parameters may include not only information (or flags or indexes), such as syntax elements, that is encoded by the encoding apparatus and signaled by the encoding apparatus to the decoding apparatus, but also information derived in the encoding or decoding process. In addition, the encoding parameters may include information required to encode or decode the image. For example, the encoding parameters may include at least one of the following, a combination of the following, or statistics thereof: the size of a unit/block, the shape/form of a unit/block, the depth of a unit/block, the partition information of a unit/block, the partition structure of a unit/block, information indicating whether a unit/block is partitioned in a quad-tree structure, information indicating whether a unit/block is partitioned in a binary-tree structure, the partition direction (horizontal or vertical) of a binary-tree structure, the partition form (symmetric or asymmetric) of a binary-tree structure, information indicating whether a unit/block is partitioned in a ternary-tree structure, the partition direction (horizontal or vertical) of a ternary-tree structure, the partition form (symmetric or asymmetric, etc.) of a ternary-tree structure, information indicating whether a unit/block is partitioned in a multi-type tree structure, the combination and direction (horizontal, vertical, etc.) of partitions of a multi-type tree structure, the partition form (symmetric or asymmetric, etc.) of partitions of a multi-type tree structure, the partition tree (binary tree or ternary tree) of a multi-type tree, the prediction type (intra prediction or inter prediction), an intra prediction mode/direction, an intra luma prediction mode/direction, an intra chroma prediction mode/direction, intra partition information, inter partition information, a coding block partition flag, a prediction block partition flag, a transform block partition flag, a reference sample filtering method, a reference sample filter tap, a reference sample filter coefficient, a prediction block filtering method, a prediction block filter tap, a prediction block filter coefficient, a prediction block boundary filtering method, a prediction block boundary filter tap, a prediction block boundary filter coefficient, an inter prediction mode, motion information, a motion vector, a motion vector difference, a reference picture index, a prediction mode, an inter prediction direction, an inter prediction indicator, a prediction list utilization flag, a reference picture list, a reference picture, a POC, a motion vector predictor, a motion vector prediction index, a motion vector prediction candidate, a motion vector candidate list, information indicating whether a merge mode is used, a merge index, a merge candidate list, information indicating whether a skip mode is used, the type of an interpolation filter, the tap of an interpolation filter, the filter coefficient of an interpolation filter, the size of a motion vector, the accuracy of motion vector representation, a transform type, a transform size, information indicating whether a first transform is used, information indicating whether an additional (second) transform is used, first transform selection information (or a first transform index), second transform selection information (or a second transform index), information indicating the presence or absence of a residual signal, a coding block pattern, a coding block flag, a quantization parameter, a residual quantization parameter, a quantization matrix, information about an in-loop filter, information indicating whether an in-loop filter is applied, the coefficient of an in-loop filter, the tap of an in-loop filter, the shape/form of an in-loop filter, information indicating whether a deblocking filter is applied, the coefficient of a deblocking filter, the tap of a deblocking filter, a deblocking filter strength, the shape/form of a deblocking filter, information indicating whether an adaptive sample offset is applied, the value of an adaptive sample offset, the category of an adaptive sample offset, the type of an adaptive sample offset, information indicating whether an adaptive loop filter is applied, the coefficient of an adaptive loop filter, the tap of an adaptive loop filter, the shape/form of an adaptive loop filter, a binarization/inverse-binarization method, a context model, a context model decision method, a context model update method, information indicating whether a normal mode is performed, information indicating whether a bypass mode is performed, a significant coefficient flag, a last significant coefficient flag, a coding flag for a coefficient group, the position of the last significant coefficient, information indicating whether the value of a coefficient is greater than 1, information indicating whether the value of a coefficient is greater than 2, information indicating whether the value of a coefficient is greater than 3, residual coefficient value information, sign information, a reconstructed luma sample, a reconstructed chroma sample, a context bin, a bypass bin, a residual luma sample, a residual chroma sample, a transform coefficient, a luma transform coefficient, a chroma transform coefficient, a quantized level, a luma quantized level, a chroma quantized level, a transform coefficient level scanning method, the size of a motion vector search region on the decoding apparatus side, the shape/form of a motion vector search region on the decoding apparatus side, the number of motion vector searches on the decoding apparatus side, the size of a CTU, a minimum block size, a maximum block depth, a minimum block depth, an image display/output order, slice identification information, a slice type, slice partition information, parallel block group identification information, a parallel block group type, parallel block group partition information, parallel block identification information, a parallel block type, parallel block partition information, a picture type, a bit depth, an input sample bit depth, a reconstructed sample bit depth, a residual sample bit depth, a transform coefficient bit depth, a quantized level bit depth, information on a luminance signal, information on a chrominance signal, the color space of the target block, and the color space of the residual block. In addition, information related to the above-described encoding parameters may also be included in the encoding parameters. Information for calculating and/or deriving the above-described encoding parameters may also be included in the encoding parameters. Information calculated or derived using the above-described encoding parameters may also be included in the encoding parameters.
The prediction scheme may represent one of an intra prediction mode and an inter prediction mode.
The first transform selection information may indicate a first transform applied to the target block.
The second transform selection information may indicate a second transform applied to the target block.
The residual signal may represent the difference between the original signal and the predicted signal. Alternatively, the residual signal may be a signal generated by transforming a difference between the original signal and the prediction signal. Alternatively, the residual signal may be a signal generated by transforming and quantizing the difference between the original signal and the prediction signal. The residual block may be a residual signal for the block.
Here, signaling the information may indicate that the encoding apparatus 100 includes entropy-encoded information generated by performing entropy encoding on the flag or index in the bitstream, and may indicate that the decoding apparatus 200 acquires the information by performing entropy decoding on the entropy-encoded information extracted from the bitstream. Here, the information may include a flag, an index, and the like.
The signal may represent information to be signaled. Hereinafter, information on the image and the block may be referred to as a "signal". In addition, hereinafter, the terms "information" and "signal" may be used to have the same meaning and may be used interchangeably with each other. For example, the specific signal may be a signal representing a specific block. The original signal may be a signal representing the target block. The prediction signal may be a signal representing a prediction block. The residual signal may be a signal representing a residual block.
The bitstream may include information based on a specific syntax. The encoding apparatus 100 may generate a bitstream including information according to a specific syntax. The decoding apparatus 200 may acquire information from the bitstream according to a specific syntax.
Since the encoding apparatus 100 performs encoding via inter prediction, the encoded target image can be used as a reference image for another image to be subsequently processed. Accordingly, the encoding apparatus 100 may reconstruct or decode the encoded target image and store the reconstructed or decoded image as a reference image in the reference picture buffer 190. For decoding, inverse quantization and inverse transformation of the encoded target image may be performed.
The quantized levels may be inverse quantized by the inverse quantization unit 160 and inverse transformed by the inverse transform unit 170. The inverse quantization unit 160 may generate inverse quantized coefficients by performing inverse quantization on the quantized levels. The inverse transform unit 170 may generate inverse quantized and inverse transformed coefficients by performing an inverse transform on the inverse quantized coefficients.
The inverse quantized and inverse transformed coefficients may be added to the prediction block by adder 175. The inverse quantized and inverse transformed coefficients and the prediction block are added, and then a reconstructed block may be generated. Here, the inverse quantized and/or inverse transformed coefficients may represent coefficients on which one or more of inverse quantization and inverse transformation are performed, and may also represent a reconstructed residual block. Here, the reconstructed block may represent a restored block or a decoded block.
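A minimal sketch of this reconstruction step: the reconstructed residual is added to the prediction block sample by sample, with clipping to the valid sample range (an 8-bit bit depth is assumed here for illustration).

```python
# Sketch: reconstructed block = clip(prediction + reconstructed residual).
def reconstruct(pred, residual, bit_depth=8):
    lo, hi = 0, (1 << bit_depth) - 1
    return [[min(hi, max(lo, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]

print(reconstruct([[120, 130], [140, 150]], [[-5, 3], [200, -160]]))
# [[115, 133], [255, 0]] -- out-of-range sums are clipped
```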
The reconstructed block may be filtered by the filter unit 180. Filter unit 180 may apply one or more of a deblocking filter, a Sample Adaptive Offset (SAO) filter, an Adaptive Loop Filter (ALF), and a non-local filter (NLF) to the reconstructed samples, reconstructed blocks, or reconstructed pictures. The filter unit 180 may also be referred to as a "loop filter".
The deblocking filter may remove block distortion occurring at the boundaries between blocks. In order to determine whether to apply the deblocking filter, whether to apply it to the target block may be decided based on the pixels included in a certain number of columns or rows of the block.
When the deblocking filter is applied to the target block, the filter applied may differ according to the required strength of the deblocking filtering. In other words, among different filters, a filter decided in consideration of the strength of the deblocking filtering may be applied to the target block. When the deblocking filter is applied to the target block, either a strong filter or a weak filter may be applied to the target block according to the required deblocking filtering strength.
Further, when vertical filtering and horizontal filtering are performed on the target block, the horizontal filtering and the vertical filtering may be performed in parallel.
The SAO may add the appropriate offset to the pixel values to compensate for the coding error. The SAO may perform a correction on the image to which the deblocking is applied on a pixel basis, wherein the correction uses an offset of a difference between the original image and the image to which the deblocking is applied. In order to perform offset correction for an image, a method for dividing pixels included in the image into a certain number of regions, determining a region to which an offset is to be applied among the divided regions, and applying the offset to the determined region may be used, and a method for applying the offset in consideration of edge information of each pixel may also be used.
ALF may perform filtering based on values obtained by comparing a reconstructed image with an original image. After pixels included in an image have been divided into a predetermined number of groups, a filter to be applied to each group may be determined, and filtering may be performed differently for the respective groups. Information about whether to apply the adaptive loop filter may be signaled for each CU. Such information may be signaled for a luminance signal. The shape and filter coefficients of the ALF to be applied to each block may be different for each block. Alternatively, ALF having a fixed form may be applied to a block regardless of the characteristics of the block.
The non-local filter may perform filtering based on a reconstructed block similar to the target block. A region similar to the target block may be selected from the reconstructed picture, and filtering of the target block may be performed using statistical properties of the selected similar region. Information about whether to apply a non-local filter may be signaled for a Coding Unit (CU). Further, the shape and filter coefficients of the non-local filter to be applied to a block may be different according to the block.
The reconstructed block or the reconstructed image filtered by the filter unit 180 may be stored as a reference picture in the reference picture buffer 190. The reconstructed block filtered by the filter unit 180 may be a portion of a reference picture. In other words, the reference picture may be a reconstructed picture composed of the reconstructed block filtered by the filter unit 180. The stored reference pictures can then be used for inter prediction or motion compensation.
Fig. 2 is a block diagram showing a configuration of an embodiment of a decoding apparatus to which the present disclosure is applied.
The decoding apparatus 200 may be a decoder, a video decoding apparatus, or an image decoding apparatus.
Referring to fig. 2, the decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an intra prediction unit 240, an inter prediction unit 250, a switch 245, an adder 255, a filter unit 260, and a reference picture buffer 270.
The decoding apparatus 200 may receive the bitstream output from the encoding apparatus 100. The decoding apparatus 200 may receive a bitstream stored in a computer-readable storage medium, and may receive a bitstream streamed over a wired and/or wireless transmission medium.
The decoding apparatus 200 may perform decoding on the bitstream in an intra mode and/or an inter mode. Further, the decoding apparatus 200 may generate a reconstructed image or a decoded image via decoding, and may output the reconstructed image or the decoded image.
For example, an operation of switching to an intra mode or an inter mode based on a prediction mode for decoding may be performed by the switch 245. When the prediction mode used for decoding is intra mode, switch 245 may be operated to switch to intra mode. When the prediction mode for decoding is an inter mode, the switch 245 may be operated to switch to the inter mode.
The decoding apparatus 200 may acquire a reconstructed residual block by decoding an input bitstream and may generate a prediction block. When the reconstructed residual block and the prediction block are acquired, the decoding apparatus 200 may generate a reconstructed block that is a target to be decoded by adding the reconstructed residual block to the prediction block.
The entropy decoding unit 210 may generate symbols by performing entropy decoding on the bitstream based on a probability distribution of the bitstream. The generated symbols may comprise symbols in the form of quantized transform coefficient levels (i.e. quantized levels or quantized coefficients). Here, the entropy decoding method may be similar to the entropy encoding method described above. That is, the entropy decoding method may be the inverse process of the entropy encoding method described above.
The entropy decoding unit 210 may change coefficients having a one-dimensional (1D) vector form into a 2D block shape by a transform coefficient scanning method in order to decode quantized transform coefficient levels.
For example, the coefficients of a block may be changed to a 2D block shape by scanning the block coefficients using an upper right diagonal scan. Alternatively, which one of the upper right diagonal scan, the vertical scan, and the horizontal scan is to be used may be determined according to the size of the corresponding block and/or the intra prediction mode.
The quantized coefficients may be inverse quantized by the inverse quantization unit 220. The inverse quantization unit 220 may generate inverse quantized coefficients by performing inverse quantization on the quantized coefficients. Also, the inverse quantized coefficients may be inverse transformed by the inverse transformation unit 230. The inverse transform unit 230 may generate a reconstructed residual block by performing an inverse transform on the inversely quantized coefficients. As a result of inverse quantization and inverse transformation performed on the quantized coefficients, a reconstructed residual block may be generated. Here, when generating the reconstructed residual block, the inverse quantization unit 220 may apply a quantization matrix to the quantized coefficients.
When the intra mode is used, the intra prediction unit 240 may generate a prediction block by performing spatial prediction on a target block, wherein the spatial prediction uses pixel values of previously decoded neighboring blocks adjacent to the target block.
The inter prediction unit 250 may include a motion compensation unit. Alternatively, the inter prediction unit 250 may be designated as a "motion compensation unit".
When the inter mode is used, the motion compensation unit may generate the prediction block by performing motion compensation for the target block, wherein the motion compensation uses the reference image stored in the reference picture buffer 270 and the motion vector.
The motion compensation unit may apply an interpolation filter to a partial region of the reference image when the motion vector has a value other than an integer, and may generate the prediction block using the reference image to which the interpolation filter is applied. To perform motion compensation, the motion compensation unit may determine which one of a skip mode, a merge mode, an Advanced Motion Vector Prediction (AMVP) mode, and a current picture reference mode corresponds to a motion compensation method for a PU included in the CU based on the CU, and may perform motion compensation according to the determined mode.
The reconstructed residual block and the prediction block may be added to each other by the adder 255. The adder 255 may generate a reconstructed block by adding the reconstructed residual block and the predicted block.
The reconstructed block may be filtered by the filter unit 260. The filter unit 260 may apply at least one of a deblocking filter, an SAO filter, an ALF, and an NLF to the reconstructed block or the reconstructed image. The reconstructed image may be a picture including the reconstructed block.
The filter unit may output a reconstructed image.
The reconstructed image and/or reconstructed block filtered by the filter unit 260 may be stored as a reference picture in the reference picture buffer 270. The reconstructed block filtered by the filter unit 260 may be a portion of a reference picture. In other words, the reference picture may be an image composed of the reconstructed block filtered by the filter unit 260. The stored reference pictures can then be used for inter prediction or motion compensation.
Fig. 3 is a diagram schematically showing a partition structure of an image when the image is encoded and decoded.
Fig. 3 may schematically illustrate an example in which a single unit is partitioned into a plurality of sub-units.
In order to partition an image efficiently, a Coding Unit (CU) may be used in encoding and decoding. The term "unit" may be used to collectively specify 1) a block comprising image samples and 2) syntax elements. For example, "partition of a unit" may represent "partition of a block corresponding to the unit".
A CU can be used as a basic unit for image encoding/decoding. A CU can be used as a unit to which one mode selected from an intra mode and an inter mode is applied in image encoding/decoding. In other words, in image encoding/decoding, it may be determined which one of an intra mode and an inter mode is to be applied to each CU.
Also, a CU may be a basic unit that predicts, transforms, quantizes, inversely transforms, inversely quantizes, and encodes/decodes transform coefficients.
Referring to fig. 3, a picture 300 may be sequentially partitioned into units corresponding to maximum coding units (LCUs), and a partition structure may be determined for each LCU. Here, the LCU may be used to have the same meaning as a Coding Tree Unit (CTU).
Partitioning a unit may refer to partitioning the block corresponding to the unit. The block partition information may include depth information about the depth of the unit. The depth information may indicate the number of times the unit is partitioned and/or the degree to which the unit is partitioned. A single unit may be hierarchically partitioned into a plurality of sub-units, with each unit having depth information, based on a tree structure.
Each partitioned sub-unit may have depth information. The depth information may be information indicating a size of the CU. Depth information may be stored for each CU.
Each CU may have depth information. When a CU is partitioned, the depth of the CU generated from the partition may be increased by 1 from the depth of the partitioned CU.
The partition structure may represent the distribution of Coding Units (CUs) in the LCU 310 for efficient encoding of the image. Such a distribution may be determined according to whether a single CU is to be partitioned into multiple CUs. The number of CUs generated by partitioning may be a positive integer of 2 or more, including 2, 3, 4, 8, 16, etc.
According to the number of CUs generated by performing partitioning, the horizontal size and the vertical size of each CU generated by performing partitioning may be smaller than those of the CUs before being partitioned. For example, the horizontal and vertical sizes of each CU generated by partitioning may be half of the horizontal and vertical sizes of the CU before partitioning.
Each partitioned CU may be recursively partitioned into four CUs in the same manner. At least one of a horizontal size and a vertical size of each partitioned CU may be reduced via recursive partitioning compared to at least one of a horizontal size and a vertical size of a CU before being partitioned.
Partitioning of CUs may be performed recursively until a predefined depth or a predefined size.
For example, the depth of a CU may have a value ranging from 0 to 3. The size of a CU may range from 64 × 64 to 8 × 8, depending on the depth of the CU.
For example, the depth of the LCU 310 may be 0 and the depth of the minimum coding unit (SCU) may be a predefined maximum depth. Here, as described above, the LCU may be a CU having a maximum coding unit size, and the SCU may be a CU having a minimum coding unit size.
Partitioning may begin at LCU 310, and the depth of a CU may increase by 1 each time the horizontal and/or vertical dimensions of the CU are reduced by partitioning.
For example, for each depth, a CU that is not partitioned may have a size of 2N × 2N. Further, in the case where CUs are partitioned, CUs of a size of 2N × 2N may be partitioned into four CUs each of a size of N × N. The value of N may be halved each time the depth is increased by 1.
Referring to fig. 3, an LCU having a depth of 0 may have 64 × 64 pixels or 64 × 64 blocks. 0 may be a minimum depth. An SCU of depth 3 may have 8 × 8 pixels or 8 × 8 blocks. 3 may be the maximum depth. Here, a CU having a 64 × 64 block as an LCU may be represented by a depth 0. A CU with 32 x 32 blocks may be represented with depth 1. A CU with 16 x 16 blocks may be represented with depth 2. A CU with 8 x 8 blocks as an SCU may be represented by depth 3.
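The depth/size relation above can be sketched as follows, assuming a 64 × 64 LCU at depth 0; each added depth halves the width and height.

```python
# Sketch: CU size as a function of quadtree depth, for a 64x64 LCU.
def cu_size(lcu_size=64, depth=0):
    return lcu_size >> depth              # 2Nx2N -> NxN per extra depth

for d in range(4):
    print(d, f"{cu_size(64, d)}x{cu_size(64, d)}")
# 0 64x64 (LCU), 1 32x32, 2 16x16, 3 8x8 (SCU)
```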
The information on whether the corresponding CU is partitioned may be represented by partition information of the CU. The partition information may be 1-bit information. All CUs except the SCU may include partition information. For example, the value of the partition information of the CU that is not partitioned may be the first value. The value of the partition information of the partitioned CU may be the second value. When the partition information indicates whether the CU is partitioned, the first value may be "0" and the second value may be "1".
For example, when a single CU is partitioned into four CUs, the horizontal and vertical sizes of each of the four CUs generated by partitioning may be half the horizontal and vertical sizes of the CU before being partitioned. When a CU having a size of 32 × 32 is partitioned into four CUs, the size of each of the partitioned four CUs may be 16 × 16. When a single CU is partitioned into four CUs, the CUs may be considered to have been partitioned in a quadtree structure. In other words, the quadtree partition may be considered to have been applied to the CU.
For example, when a single CU is partitioned into two CUs, the horizontal size or the vertical size of each of the two CUs generated by partitioning may be half the horizontal size or the vertical size of the CU before being partitioned. When a CU having a size of 32 × 32 is vertically partitioned into two CUs, the size of each of the partitioned two CUs may be 16 × 32. When a CU having a size of 32 × 32 is horizontally partitioned into two CUs, the size of each of the partitioned two CUs may be 32 × 16. When a single CU is partitioned into two CUs, the CUs may be considered to have been partitioned in a binary tree structure. In other words, the binary tree partition may be considered to have been applied to the CU.
For example, when a single CU is partitioned (or divided) into three CUs, the horizontal or vertical size of the original CU before being partitioned is divided at a ratio of 1:2:1, thus generating three sub-CUs. For example, when a CU having a size of 16 × 32 is horizontally partitioned into three sub-CUs, the three sub-CUs generated by the partitioning may have sizes of 16 × 8, 16 × 16, and 16 × 8, respectively, from top to bottom. For example, when a CU having a size of 32 × 32 is vertically partitioned into three sub-CUs, the three sub-CUs generated by the partitioning may have sizes of 8 × 32, 16 × 32, and 8 × 32, respectively, from left to right. When a single CU is partitioned into three CUs, the CU may be considered to have been partitioned in a ternary-tree structure. In other words, ternary-tree partitioning may be considered to have been applied to the CU.
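A small sketch of the 1:2:1 ternary split described in this paragraph; the function name and direction labels are illustrative.

```python
# Sketch: sub-CU sizes for a 1:2:1 ternary-tree split in either direction.
def ternary_split(w, h, direction):
    if direction == "horizontal":                   # split the height 1:2:1
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    return [(w // 4, h), (w // 2, h), (w // 4, h)]  # vertical: split the width

print(ternary_split(16, 32, "horizontal"))  # [(16, 8), (16, 16), (16, 8)]
print(ternary_split(32, 32, "vertical"))    # [(8, 32), (16, 32), (8, 32)]
```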
Both quad tree and binary tree partitioning are applied to LCU 310 of fig. 3.
In the encoding apparatus 100, a Coding Tree Unit (CTU) having a size of 64 × 64 may be partitioned into a plurality of smaller CUs by a recursive quadtree structure. A single CU may be partitioned into four CUs having the same size. Each CU may be recursively partitioned and may have a quadtree structure.
By recursive partitioning of CUs, the optimal partitioning method that incurs the smallest rate-distortion cost can be selected.
The Coding Tree Unit (CTU)320 in fig. 3 is an example of a CTU to which a quad tree partition, a binary tree partition, and a ternary tree partition are all applied.
As described above, in order to partition the CTU, at least one of a quadtree partition, a binary tree partition, and a ternary tree partition may be applied to the CTU. Partitions may be applied based on a particular priority.
For example, quadtree partitioning may be preferentially applied to CTUs. CUs that cannot be further partitioned in a quadtree fashion may correspond to leaf nodes of the quadtree. CUs corresponding to leaf nodes of a quadtree may be root nodes of a binary tree and/or a ternary tree. That is, CUs corresponding to leaf nodes of a quadtree may be partitioned in binary or ternary tree form, or may not be further partitioned. In this case, each CU generated by applying binary tree partitioning or ternary tree partitioning to CUs corresponding to leaf nodes of the quadtree is prevented from being partitioned again by the quadtree, thereby efficiently performing partitioning of blocks and/or signaling of block partition information.
The partition of the CU corresponding to each node of the quadtree may be signaled using the four-partition information. The four-partition information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a quadtree form. The four-partition information having a second value (e.g., "0") may indicate that the corresponding CU is not partitioned in a quadtree form. The quad-partition information may be a flag having a specific length (e.g., 1 bit).
There may not be a priority between the binary tree partition and the ternary tree partition. That is, CUs corresponding to leaf nodes of a quadtree may be partitioned in a binary tree form or a ternary tree form. Furthermore, CUs generated by binary tree partitioning or ternary tree partitioning may or may not be further partitioned in binary tree form or ternary tree form.
Partitioning performed when there is no priority between the binary tree partition and the ternary tree partition may be referred to as a "multi-type tree partition". That is, a CU corresponding to a leaf node of a quadtree may be the root node of a multi-type tree. The partition of the CU corresponding to each node of the multi-type tree may be signaled using at least one of information indicating whether partitioning by the multi-type tree is performed, partition direction information, and partition tree information. For the partition of the CU corresponding to each node of the multi-type tree, the information indicating whether partitioning by the multi-type tree is performed, the partition direction information, and the partition tree information may be sequentially signaled.
For example, the information indicating whether a CU is partitioned in a multi-type tree and has a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a multi-type tree form. The information indicating whether the CU is partitioned by the multi-type tree and has a second value (e.g., "0") may indicate that the corresponding CU is not partitioned in the multi-type tree form.
When a CU corresponding to each node of the multi-type tree is partitioned in the multi-type tree form, the corresponding CU may further include partition direction information.
The partition direction information may indicate a partition direction of the multi-type tree partition. The partition direction information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in the vertical direction. The partition direction information having the second value (e.g., "0") may indicate that the corresponding CU is partitioned in the horizontal direction.
When a CU corresponding to each node of the multi-type tree is partitioned in the multi-type tree form, the corresponding CU may further include partition tree information. The partition tree information may indicate a tree that is used for multi-type tree partitioning.
For example, partition tree information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a binary tree form. The partition tree information having the second value (e.g., "0") may indicate that the corresponding CU is partitioned in a ternary tree form.
Here, each of the above-described information indicating whether partitioning by the multi-type tree is performed, the partition tree information, and the partition direction information may be a flag having a specific length (e.g., 1 bit).
At least one of the above-described four partition information, information indicating whether partitioning is performed per the multi-type tree, partition direction information, and partition tree information may be entropy-encoded and/or entropy-decoded. To perform entropy encoding/decoding of such information, information of neighboring CUs adjacent to the target CU may be used.
For example, it may be considered that there is a high probability that the partition form (i.e., partition/non-partition, partition tree, and/or partition direction) of the left-side CU and/or the upper CU and the partition form of the target CU may be similar to each other. Thus, based on the information of neighboring CUs, context information for entropy encoding and/or entropy decoding of the information of the target CU may be derived. Here, the information of the neighboring CU may include at least one of: 1) four partition information of a neighboring CU, 2) information indicating whether the neighboring CU is partitioned by a multi-type tree, 3) partition direction information of the neighboring CU, and 4) partition tree information of the neighboring CU.
In another embodiment of binary tree partitioning and ternary tree partitioning, binary tree partitioning may be performed preferentially. That is, binary tree partitioning may be applied first, and then CUs corresponding to leaf nodes of the binary tree may be set as root nodes of the ternary tree. In this case, quad tree partitioning or binary tree partitioning may not be performed on CUs corresponding to nodes of the ternary tree.
CUs that are not further partitioned by quadtree partitioning, binary tree partitioning, and/or ternary tree partitioning may be units of coding, prediction, and/or transformation. That is, the CU may not be further partitioned for prediction and/or transformation. Accordingly, a partition structure for partitioning a CU into Prediction Units (PUs) and/or Transform Units (TUs), partition information thereof, and the like may not be present in the bitstream.
However, when the size of a CU, which is a unit of partitioning, is larger than the size of the largest transform block, the CU may be recursively partitioned until the size of the CU becomes smaller than or equal to the size of the largest transform block. For example, when the size of a CU is 64 × 64 and the size of the largest transform block is 32 × 32, the CU may be partitioned into four 32 × 32 blocks in order to perform the transform. For example, when the size of a CU is 32 × 64 and the size of the largest transform block is 32 × 32, the CU may be partitioned into two 32 × 32 blocks.
In this case, the information indicating whether a CU is partitioned for transformation may not be separately signaled. Without signaling, it may be determined whether a CU is partitioned via a comparison between the horizontal size (and/or vertical size) of the CU and the horizontal size (and/or vertical size) of the largest transform block. For example, a CU may be vertically halved when the horizontal size of the CU is larger than the horizontal size of the largest transform block. Furthermore, a CU may be horizontally bisected when the vertical size of the CU is greater than the vertical size of the largest transform block.
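A sketch of this implicit (non-signaled) splitting rule, assuming an illustrative maximum transform-block size of 32 × 32: a dimension exceeding the maximum is halved until every resulting block fits.

```python
# Sketch: implicit splitting of a CU for transform purposes. No flag is
# signaled; the split follows from comparing CU size to the max TB size.
def implicit_transform_split(w, h, max_tb=32):
    blocks, done = [(w, h)], []
    while blocks:
        bw, bh = blocks.pop()
        if bw > max_tb:
            blocks += [(bw // 2, bh)] * 2   # vertical halving
        elif bh > max_tb:
            blocks += [(bw, bh // 2)] * 2   # horizontal halving
        else:
            done.append((bw, bh))
    return done

print(implicit_transform_split(64, 64))  # four 32x32 blocks
print(implicit_transform_split(32, 64))  # two 32x32 blocks
```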
The information on the maximum size and/or the minimum size of the CU and the information on the maximum size and/or the minimum size of the transform block may be signaled or determined at a level higher than the level of the CU. For example, the higher level may be a sequence level, a picture level, a parallel block group level, or a slice level. For example, the minimum size of a CU may be set to 4 × 4. For example, the maximum size of the transform block may be set to 64 × 64. For example, the minimum size of the transform block may be set to 4 × 4.
Information about the minimum size of a CU corresponding to a leaf node of the quadtree (i.e., the minimum size of the quadtree) and/or information about the maximum depth of a path from the root node of the multi-type tree to a leaf node (i.e., the maximum depth of the multi-type tree) may be signaled or determined at a level higher than the level of the CU. For example, the higher level may be a sequence level, a picture level, a slice level, a parallel block group level, or a parallel block level. The information about the minimum size of the quadtree and/or the information about the maximum depth of the multi-type tree may be separately signaled or determined at each of the intra slice level and the inter slice level.
Information about the difference between the size of the CTU and the maximum size of the transform block may be signaled or determined at a level higher than the level of the CU. For example, the higher level may be a sequence level, a picture level, a slice level, a parallel block group level, or a parallel block level. Information about the maximum size of the CU corresponding to each node of the binary tree (i.e., the maximum size of the binary tree) may be determined based on the size of the CTU and the difference information. The maximum size of the CU corresponding to each node of the ternary tree (i.e., the maximum size of the ternary tree) may have different values depending on the slice type. For example, the maximum size of the ternary tree at the intra slice level may be 32 × 32. For example, the maximum size of the ternary tree at the inter slice level may be 128 × 128. For example, the minimum size of the CU corresponding to each node of the binary tree (i.e., the minimum size of the binary tree) and/or the minimum size of the CU corresponding to each node of the ternary tree (i.e., the minimum size of the ternary tree) may be set to the minimum size of the CU.
In another example, the maximum size of the binary tree and/or the maximum size of the ternary tree may be signaled or determined at the slice level. Further, a minimum size of the binary tree and/or a minimum size of the ternary tree may be signaled or determined at the slice level.
Based on the various block sizes and depths described above, the four-partition information, information indicating whether partitioning by the multi-type tree is performed, partition tree information, and/or partition direction information may or may not be present in the bitstream.
For example, when the size of the CU is not greater than the minimum size of the quadtree, the CU may not include the four-partition information, and the four-partition information of the CU may be inferred to be a second value.
For example, when the size (horizontal size and vertical size) of a CU corresponding to each node of the multi-type tree is larger than the maximum size (horizontal size and vertical size) of the binary tree and/or the maximum size (horizontal size and vertical size) of the ternary tree, the CU may not be partitioned in the binary tree form and/or the ternary tree form. By this determination, the information indicating whether partitioning is performed per multi-type tree may not be signaled, but may be inferred as a second value.
Alternatively, a CU may not be partitioned in binary tree form and/or ternary tree form when the size (horizontal size and vertical size) of the CU corresponding to each node of the multi-type tree is equal to the minimum size (horizontal size and vertical size) of the binary tree, or when the size (horizontal size and vertical size) of the CU is equal to twice the minimum size (horizontal size and vertical size) of the ternary tree. By this determination, the information indicating whether partitioning per multi-type tree is performed may not be signaled, but may be inferred as the second value. The reason for this is that, if such a CU were partitioned in binary tree form and/or ternary tree form, a CU smaller than the minimum size of the binary tree and/or the minimum size of the ternary tree would be generated.
Alternatively, the binary tree partition or the ternary tree partition may be restricted based on the size of the virtual pipeline data unit (i.e., the size of the pipeline buffer). For example, binary or ternary tree partitioning may be limited when a CU is partitioned into sub-CUs that do not fit the size of the pipeline buffer by binary or ternary tree partitioning. The size of the pipeline buffer may be equal to the maximum size of the transform block (e.g., 64 x 64).
For example, when the size of the pipeline buffer is 64 × 64, the following partitions may be restricted (a sketch of this check follows the list).
Ternary tree partitioning for an N × M CU (where N and/or M is 128)
Horizontal binary tree partitioning for a 128 × N CU (where N ≤ 64)
Vertical binary tree partitioning for an N × 128 CU (where N ≤ 64)
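For illustration, the check below encodes the three restrictions just listed for a 64 × 64 pipeline buffer; the function name and split labels are assumptions made for this sketch.

```python
# Sketch: pipeline-buffer restriction on binary/ternary splits (buf = 64).
def split_allowed(w, h, split, buf=64):
    if split == "ternary":
        return w < 2 * buf and h < 2 * buf      # no TT when W or H is 128
    if split == "binary_horizontal":
        return not (w == 2 * buf and h <= buf)  # no HBT for 128xN, N <= 64
    if split == "binary_vertical":
        return not (h == 2 * buf and w <= buf)  # no VBT for Nx128, N <= 64
    return True

print(split_allowed(128, 64, "ternary"))            # False
print(split_allowed(128, 64, "binary_horizontal"))  # False
print(split_allowed(64, 128, "binary_vertical"))    # False
print(split_allowed(64, 64, "ternary"))             # True
```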
Alternatively, when the depth of the CU corresponding to each node of the multi-type tree is equal to the maximum depth of the multi-type tree, the CU may not be partitioned in binary tree form and/or ternary tree form. By this determination, the information indicating whether partitioning per multi-type tree is performed may not be signaled, but may be inferred as the second value.
Alternatively, the information indicating whether partitioning per multi-type tree is performed may be signaled only when at least one of the vertical binary tree partition, the horizontal binary tree partition, the vertical ternary tree partition, and the horizontal ternary tree partition is possible for a CU corresponding to each node of the multi-type tree. Otherwise, the CU may not be partitioned in binary and/or ternary tree form. By this determination, the information indicating whether partitioning is performed per multi-type tree may not be signaled, but may be inferred as a second value.
Alternatively, for a CU corresponding to each node of the multi-type tree, the partition direction information may be signaled only when both vertical and horizontal binary tree partitions are feasible or only when both vertical and horizontal ternary tree partitions are feasible. Otherwise, the partition direction information may not be signaled, but may be inferred as a value indicating the direction in which the CU can be partitioned.
Alternatively, for a CU corresponding to each node of the multi-type tree, partition tree information may be signaled only when both vertical binary tree partitioning and vertical ternary tree partitioning are feasible, or only when both horizontal binary tree partitioning and horizontal ternary tree partitioning are feasible. Otherwise, partition tree information may not be signaled, but may be inferred as a value indicating a tree applicable to the partitions of the CU.
Fig. 4 is a diagram illustrating a form of a prediction unit that a coding unit can include.
Among CUs partitioned from the LCU, CUs that are no longer partitioned may be divided into one or more Prediction Units (PUs). This division is also referred to as "partitioning".
A PU may be the basic unit for prediction. A PU may be encoded and decoded in any one of skip mode, inter mode, and intra mode. The PUs may be partitioned into various shapes according to various modes. For example, the target block described above with reference to fig. 1 and the target block described above with reference to fig. 2 may both be PUs.
A CU may not be partitioned into PUs. When a CU is not divided into PUs, the size of the CU and the size of the PU may be equal to each other.
In skip mode, there may be no partition in the CU. In skip mode, only the 2N × 2N mode 410, in which the size of the PU and the size of the CU are the same, may be supported without partitioning.
In inter mode, there may be 8 types of partition shapes in a CU. For example, in the inter mode, a 2N × 2N mode 410, a 2N × N mode 415, an N × 2N mode 420, an N × N mode 425, a 2N × nU mode 430, a 2N × nD mode 435, an nL × 2N mode 440, and an nR × 2N mode 445 may be supported.
In intra mode, a 2N × 2N mode 410 and an N × N mode 425 may be supported.
In the 2 nx 2N mode 410, PUs of size 2 nx 2N may be encoded. A PU of size 2N × 2N may represent a PU of the same size as the CU. For example, a PU of size 2N × 2N may have a size 64 × 64, 32 × 32, 16 × 16, or 8 × 8.
In the nxn mode 425, PUs of size nxn may be encoded.
For example, in intra prediction, when the size of a PU is 8 × 8, four partitioned PUs may be encoded. The size of each partitioned PU may be 4 x 4.
When a PU is encoded in intra mode, the PU may be encoded using any one of multiple intra prediction modes. For example, High Efficiency Video Coding (HEVC) provides 35 intra prediction modes, and the PU may be encoded in any one of these 35 intra prediction modes.
Which of the 2N × 2N mode 410 and the N × N mode 425 is to be used to encode the PU may be determined based on the rate-distortion cost.
The encoding apparatus 100 may perform an encoding operation on PUs having a size of 2N × 2N. Here, the encoding operation may be an operation of encoding the PU in each of a plurality of intra prediction modes that can be used by the encoding apparatus 100. Through the encoding operation, the optimal intra prediction mode for a PU of size 2N × 2N may be derived. The optimal intra prediction mode may be an intra prediction mode in which a minimum rate-distortion cost occurs when a PU having a size of 2N × 2N is encoded, among a plurality of intra prediction modes that can be used by the encoding apparatus 100.
Further, the encoding apparatus 100 may sequentially perform an encoding operation on the respective PUs obtained by performing the N × N partitioning. Here, the encoding operation may be an operation of encoding the PU in each of a plurality of intra prediction modes that can be used by the encoding apparatus 100. By the encoding operation, the optimal intra prediction mode for a PU of size N × N may be derived. The optimal intra prediction mode may be an intra prediction mode in which a minimum rate-distortion cost occurs when a PU having a size of N × N is encoded, among a plurality of intra prediction modes that can be used by the encoding apparatus 100.
The encoding apparatus 100 may determine which one of a PU of size 2N × 2N and a PU of size N × N is to be encoded based on a comparison between rate-distortion costs of PUs of size 2N × 2N and rate-distortion costs of PUs of size N × N.
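A hedged sketch of this rate-distortion decision, using the usual cost J = D + λ·R; the distortion, rate, and λ values below are made up for illustration.

```python
# Sketch: pick the candidate (e.g., 2Nx2N vs. NxN partitioning) with the
# lowest rate-distortion cost J = D + lambda * R. Values are illustrative.
def best_by_rd(candidates, lam=10.0):
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

candidates = [
    {"mode": "2Nx2N", "D": 1500.0, "R": 40},   # distortion, rate in bits
    {"mode": "NxN",   "D": 1100.0, "R": 95},
]
print(best_by_rd(candidates)["mode"])  # 2Nx2N: 1900 < 2050
```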
A single CU may be partitioned into one or more PUs, and a PU may be partitioned into multiple PUs.
For example, when a single PU is partitioned into four PUs, the horizontal and vertical dimensions of each of the four PUs generated by the partitioning may be half the horizontal and vertical dimensions of the PU before being partitioned. When a PU having a size of 32 × 32 is partitioned into four PUs, the size of each of the four partitioned PUs may be 16 × 16. When a single PU is partitioned into four PUs, the PUs may be considered to have been partitioned in a quad-tree structure.
For example, when a single PU is partitioned into two PUs, the horizontal or vertical size of each of the two PUs generated by the partitioning may be half the horizontal or vertical size of the PU before being partitioned. When a PU of size 32 x 32 is vertically partitioned into two PUs, the size of each of the two partitioned PUs may be 16 x 32. When a PU having a size of 32 × 32 is horizontally partitioned into two PUs, the size of each of the two partitioned PUs may be 32 × 16. When a single PU is partitioned into two PUs, the PUs may be considered to have been partitioned in a binary tree structure.
Fig. 5 is a diagram illustrating a form of a transform unit that can be included in a coding unit.
A Transform Unit (TU) may be a basic unit used in a CU for processes such as transform, quantization, inverse transform, inverse quantization, entropy coding, and entropy decoding.
The TU may have a square shape or a rectangular shape. The shape of a TU may be determined based on the size and/or shape of the CU.
Among the CUs partitioned from the LCU, a CU that is no longer partitioned into CUs may be divided into one or more TUs. Here, the partition structure of the TUs may be a quad-tree structure. For example, as shown in fig. 5, a single CU 510 may be partitioned one or more times according to the quad-tree structure. Through such partitioning, the single CU 510 may be composed of TUs having various sizes.
A CU may be considered to be recursively divided when a single CU is divided two or more times. By the division, a single CU may be composed of Transform Units (TUs) having various sizes.
Alternatively, a single CU may be divided into one or more TUs based on the number of vertical and/or horizontal lines dividing the CU.
A CU may be divided into symmetric TUs or asymmetric TUs. For the division into the asymmetric TUs, information regarding the size and/or shape of each TU may be signaled from the encoding apparatus 100 to the decoding apparatus 200. Alternatively, the size and/or shape of each TU may be derived from information on the size and/or shape of the CU.
A CU may not be divided into TUs. When a CU is not divided into TUs, the size of the CU and the size of the TU may be equal to each other.
A single CU may be partitioned into one or more TUs, and a TU may be partitioned into multiple TUs.
For example, when a single TU is partitioned into four TUs, the horizontal size and the vertical size of each of the four TUs generated by the partitioning may be half of those of the TU before being partitioned. When a TU having a size of 32 × 32 is partitioned into four TUs, the size of each of the four partitioned TUs may be 16 × 16. When a single TU is partitioned into four TUs, the TUs may be considered to have been partitioned in a quadtree structure.
For example, when a single TU is partitioned into two TUs, the horizontal size or the vertical size of each of the two TUs generated by the partitioning may be half of the horizontal size or the vertical size of the TU before being partitioned. When a TU having a size of 32 × 32 is vertically partitioned into two TUs, the size of each of the two partitioned TUs may be 16 × 32. When a TU having a size of 32 × 32 is horizontally partitioned into two TUs, the size of each of the two partitioned TUs may be 32 × 16. When a single TU is partitioned into two TUs, the TUs may be considered to have been partitioned in a binary tree structure.
A CU may be partitioned in a different manner than shown in fig. 5.
For example, a single CU may be divided into three CUs. The horizontal or vertical sizes of the three CUs generated by the division may be 1/4, 1/2, and 1/4 of the horizontal or vertical size of the original CU before being divided, respectively.
For example, when a CU having a size of 32 × 32 is vertically divided into three CUs, the sizes of the three CUs generated by the division may be 8 × 32, 16 × 32, and 8 × 32, respectively. In this way, when a single CU is divided into three CUs, the CU can be considered to be divided in the form of a ternary tree.
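A minimal sketch of the child-block sizes produced by one partitioning step may look as follows; the function and its string arguments are hypothetical names used only for illustration, following the 1/2 and 1/4-1/2-1/4 ratios described above.

def split_sizes(w: int, h: int, form: str, vertical: bool = True):
    # Child sizes for one quad-tree, binary tree, or ternary tree split.
    if form == "quad":
        return [(w // 2, h // 2)] * 4
    if form == "binary":
        return [(w // 2, h)] * 2 if vertical else [(w, h // 2)] * 2
    if form == "ternary":
        if vertical:
            return [(w // 4, h), (w // 2, h), (w // 4, h)]
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    raise ValueError("unknown partitioning form")

# Example: split_sizes(32, 32, "ternary", vertical=True)
# -> [(8, 32), (16, 32), (8, 32)], matching the example above.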
Any one of the exemplary division forms (i.e., quad-tree division, binary tree division, and ternary tree division) may be applied to the division of a CU, or multiple division schemes may be combined and used together for the division of a CU. Here, the case where multiple division schemes are combined and used together may be referred to as "composite tree-like division".
Fig. 6 illustrates partitioning of a block according to an example.
In the video encoding and/or decoding process, as shown in fig. 6, the target block may be divided. For example, the target block may be a CU.
For the division of the target block, an indicator indicating division information may be signaled from the encoding apparatus 100 to the decoding apparatus 200. The partition information may be information indicating how the target block is partitioned.
The partition information may be one or more of a partition flag (hereinafter, referred to as "split_flag"), a quad-binary flag (hereinafter, referred to as "QB_flag"), a quad-tree flag (hereinafter, referred to as "quadtree_flag"), a binary tree flag (hereinafter, referred to as "binarytree_flag"), and a binary type flag (hereinafter, referred to as "Btype_flag").
The "split _ flag" may be a flag indicating whether the block is divided. For example, a split _ flag value of 1 may indicate that the corresponding block is divided. A split _ flag value of 0 may indicate that the corresponding block is not divided.
"QB _ flag" may be a flag indicating which of the quad tree form and the binary tree form corresponds to the shape in which the block is divided. For example, a QB _ flag value of 0 may indicate that the block is divided in a quad tree form. A QB _ flag value of 1 may indicate that the block is divided in a binary tree. Alternatively, a QB _ flag value of 0 may indicate that the block is divided in a binary tree form. A QB _ flag value of 1 may indicate that the block is divided in a quad tree form.
"quadtree _ flag" may be a flag indicating whether a block is divided in a quad-tree form. For example, a value of quadtree _ flag of 1 may indicate that the block is divided in a quad-tree form. A quadtree _ flag value of 0 may indicate that the block is not divided in a quadtree form.
"binarytree _ flag" may be a flag indicating whether a block is divided in a binary tree form. For example, a binarytree _ flag value of 1 may indicate that the block is divided in a binary tree form. A binarytree _ flag value of 0 may indicate that the block is not divided in a binary tree form.
"Btype _ flag" may be a flag indicating which one of the vertical division and the horizontal division corresponds to the division direction when the block is divided in the binary tree form. For example, a Btype _ flag value of 0 may indicate that the block is divided in the horizontal direction. A Btype _ flag value of 1 may indicate that the block is divided in the vertical direction. Alternatively, a Btype _ flag value of 0 may indicate that the block is divided in the vertical direction. A Btype _ flag value of 1 may indicate that the block is divided in the horizontal direction.
For example, the partition information of the block in fig. 6 may be derived by signaling at least one of quadtree_flag, binarytree_flag, and Btype_flag, as shown in table 1 below.
TABLE 1
[table presented as an image in the original publication]
For example, the partition information of the block in fig. 6 may be derived by signaling at least one of split_flag, QB_flag, and Btype_flag, as shown in table 2 below.
TABLE 2
[table presented as an image in the original publication]
The partitioning method may be limited to only a quad tree or a binary tree depending on the size and/or shape of the block. When this restriction is applied, split_flag may be a flag indicating whether the block is divided in a quad-tree form or a flag indicating whether the block is divided in a binary tree form. The size and shape of the block may be derived from the depth information of the block, and the depth information may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
When the size of the block falls within a specific range, only division in a quad-tree form may be possible. For example, the specific range may be defined by at least one of the maximum block size and the minimum block size that can be divided only in a quad-tree form.
Information indicating the maximum block size and the minimum block size that can be divided only in a quad-tree form may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. Further, this information may be signaled for at least one of units such as a video, a sequence, a picture, a parameter set, a tile group (parallel block set), and a slice.
Alternatively, the maximum block size and/or the minimum block size may be a fixed size predefined by the encoding apparatus 100 and the decoding apparatus 200. For example, when the size of the block is larger than 64 × 64 and smaller than 256 × 256, only division in a quad-tree form may be possible. In this case, split_flag may be a flag indicating whether partitioning in a quad-tree form is performed.
When the size of the block is larger than the maximum size of the transform block, only partitioning in the form of a quadtree is possible. Here, the sub-block generated by the partition may be at least one of a CU and a TU.
In this case, split_flag may be a flag indicating whether partitioning in a quad-tree form is performed.
When the size of the block falls within a specific range, only division in a binary tree form or a ternary tree form may be possible. For example, the specific range may be defined by at least one of the maximum block size and the minimum block size that can be divided only in a binary tree form or a ternary tree form.
Information indicating the maximum block size and/or the minimum block size that can be divided only in a binary tree form or a ternary tree form may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. Further, this information may be signaled for at least one of units such as a sequence, a picture, and a slice.
Alternatively, the maximum block size and/or the minimum block size may be a fixed size predefined by the encoding apparatus 100 and the decoding apparatus 200. For example, when the size of the block is larger than 8 × 8 and smaller than 16 × 16, only division in a binary tree form may be possible. In this case, split_flag may be a flag indicating whether to perform partitioning in a binary tree form or a ternary tree form.
The above description of partitioning in a quadtree form can be equally applied to a binary tree form and/or a ternary tree form.
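These size-range restrictions may be summarized in a sketch such as the one below; the bounds 64/256 and 8/16 are simply the examples quoted above, whereas an actual codec would take them from signaled or predefined values.

def allowed_partition_forms(size: int,
                            qt_only=(64, 256),
                            bt_tt_only=(8, 16)):
    # Within the quad-tree-only range, only quad-tree division is possible;
    # within the binary/ternary-only range, only those two forms are possible.
    if qt_only[0] < size < qt_only[1]:
        return ["quad"]
    if bt_tt_only[0] < size < bt_tt_only[1]:
        return ["binary", "ternary"]
    return ["quad", "binary", "ternary"]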
The partitioning of a block may be limited by previous partitions. For example, when a block is partitioned in a specific binary tree form and a plurality of sub-blocks are generated from the partition, each sub-block may be additionally partitioned only in a specific tree form. Here, the specific tree form may be at least one of a binary tree form, a ternary tree form, and a quaternary tree form.
When the horizontal size or the vertical size of a partitioned block is a size that cannot be divided further, the corresponding indicator may not be signaled.
Fig. 7 is a diagram for explaining an embodiment of an intra prediction process.
The arrow radially extending from the center of the graph in fig. 7 indicates the prediction direction of the directional intra prediction mode. Further, numbers appearing near the arrows indicate examples of mode values assigned to the intra prediction mode or the prediction direction of the intra prediction mode.
In fig. 7, the number "0" may represent a planar mode as a non-directional intra prediction mode. The number "1" may represent a DC mode as a non-directional intra prediction mode.
Intra encoding and/or decoding may be performed using reference samples of blocks neighboring the target block. The neighboring blocks may be reconstructed neighboring blocks. The reference samples may represent neighboring samples.
For example, intra encoding and/or decoding may be performed using values of reference samples included in the reconstructed neighboring blocks or encoding parameters of the reconstructed neighboring blocks.
The encoding apparatus 100 and/or the decoding apparatus 200 may generate a prediction block for the target block by performing intra prediction based on information on samples in the target image. When intra prediction is performed, the encoding apparatus 100 and/or the decoding apparatus 200 may perform directional prediction and/or non-directional prediction based on at least one reconstructed reference sample.
The prediction block may be a block generated as a result of performing intra prediction. The prediction block may correspond to at least one of a CU, a PU, and a TU.
The unit of the prediction block may have a size corresponding to at least one of the CU, the PU, and the TU. The prediction block may have a square shape with a size of 2N × 2N or N × N. The size N × N may include 4 × 4, 8 × 8, 16 × 16, 32 × 32, 64 × 64, etc.
Alternatively, the prediction block may be a square block having a size of 2 × 2, 4 × 4, 8 × 8, 16 × 16, 32 × 32, 64 × 64, or the like, or a rectangular block having a size of 2 × 8, 4 × 8, 2 × 16, 4 × 16, 8 × 16, or the like.
Intra prediction may be performed in consideration of the intra prediction mode of the target block. The number of intra prediction modes that the target block can have may be a predefined fixed value, or may be a value determined differently according to the properties of the prediction block. For example, the properties of the prediction block may include the size of the prediction block, the type of the prediction block, and the like. Furthermore, the properties of the prediction block may indicate the coding parameters used for the prediction block.
For example, the number of intra prediction modes may be fixed to N regardless of the size of the prediction block. Alternatively, the number of intra prediction modes may be, for example, 3, 5, 9, 17, 34, 35, 36, 65, 67, or 95.
The intra prediction mode may be a non-directional mode or a directional mode.
For example, the intra prediction modes may include two non-directional modes and 65 directional modes corresponding to numbers 0 to 66 shown in fig. 7.
For example, in the case of using a specific intra prediction method, the intra prediction modes may include two non-directional modes and 93 directional modes corresponding to numbers -14 to 80 shown in fig. 7.
The two non-directional modes may include a DC mode and a planar mode.
The directional mode may be a prediction mode having a specific direction or a specific angle. The directional mode may also be referred to as an "angular mode".
The intra prediction mode may be represented by at least one of a mode number, a mode value, a mode angle, and a mode direction. In other words, the terms "a (mode) number of an intra prediction mode", "a (mode) value of an intra prediction mode", "a (mode) angle of an intra prediction mode", and "a (mode) direction of an intra prediction mode" may be used to have the same meaning and may be used interchangeably with each other.
The number of intra prediction modes may be M. The value of M may be 1 or greater. In other words, the number of intra prediction modes may be M, where M includes the number of non-directional modes and the number of directional modes.
The number of intra prediction modes may be fixed to M regardless of the size and/or color components of the block. For example, the number of intra prediction modes may be fixed to any one of 35 and 67 regardless of the size of the block.
Alternatively, the number of intra prediction modes may be different according to the shape, size, and/or type of color component of the block.
For example, in fig. 7, the directional prediction modes indicated by dotted lines may be applied only to prediction for non-square blocks.
For example, the larger the size of the block, the larger the number of intra prediction modes. Alternatively, the larger the size of the block, the smaller the number of intra prediction modes. When the size of the block is 4 × 4 or 8 × 8, the number of intra prediction modes may be 67. When the size of the block is 16 × 16, the number of intra prediction modes may be 35. When the size of the block is 32 × 32, the number of intra prediction modes may be 19. When the size of the block is 64 × 64, the number of intra prediction modes may be 7.
For example, the number of intra prediction modes may be different according to whether a color component is a luminance signal or a chrominance signal. Alternatively, the number of intra prediction modes corresponding to the luminance component block may be greater than the number of intra prediction modes corresponding to the chrominance component block.
For example, in the vertical mode with a mode value of 50, prediction may be performed in the vertical direction based on the pixel values of the reference sampling points. For example, in the horizontal mode with the mode value of 18, prediction may be performed in the horizontal direction based on the pixel value of the reference sampling point.
Even in a directional mode other than the above-described modes, the encoding apparatus 100 and the decoding apparatus 200 may perform intra prediction on a target unit using reference samples according to an angle corresponding to the directional mode.
The intra prediction mode located on the right side with respect to the vertical mode may be referred to as a "vertical-right mode". The intra prediction mode located below the horizontal mode may be referred to as a "horizontal-lower mode". For example, in fig. 7, the intra prediction mode having one of the mode values 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, and 66 may be a vertical-right mode. The intra prediction mode having a mode value of one of 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, and 17 may be a horizontal-lower mode.
The non-directional mode may include a DC mode and a planar mode. For example, the value of the DC mode may be 1. The value of the planar mode may be 0.
The directional modes may include angular modes. Among the plurality of intra prediction modes, the remaining modes other than the DC mode and the planar mode may be directional modes.
When the intra prediction mode is the DC mode, the prediction block may be generated based on an average value of pixel values of the plurality of reference pixels. For example, values of pixels of the prediction block may be determined based on an average of pixel values of a plurality of reference pixels.
The number of intra prediction modes and the mode value of each intra prediction mode described above are only exemplary. The number of intra prediction modes described above and the mode values of the respective intra prediction modes may be defined differently according to embodiments, implementations, and/or requirements.
In order to perform intra prediction on the target block, a step of checking whether or not a sample included in the reconstructed neighboring block can be used as a reference sample of the target block may be performed. When there are samples that cannot be used as reference samples of the target block among samples in the neighboring block, a value generated via interpolation and/or duplication using at least one sample value among samples included in the reconstructed neighboring block may replace sample values of samples that cannot be used as reference samples. When a value generated via replication and/or interpolation replaces a sample value of an existing sample, the sample may be used as a reference sample for the target block.
When intra prediction is used, a filter may be applied to at least one of the reference samples and the prediction samples based on at least one of the size of the target block and the intra prediction mode.
The type of the filter to be applied to at least one of the reference samples and the prediction samples may be different according to at least one of an intra prediction mode of the target block, a size of the target block, and a shape of the target block. The type of filter may be classified according to one or more of the length of the filter tap, the value of the filter coefficient, and the filter strength. The length of the filter taps may represent the number of filter taps. Further, the number of filter taps may represent the length of the filter.
When the intra prediction mode is the planar mode and the prediction block of the target block is generated, the value of each prediction target sample may be generated as a weighted sum of the upper reference sample, the left reference sample, the upper-right reference sample, and the lower-left reference sample of the target block, with weights that depend on the position of the prediction target sample in the prediction block.
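One possible realization of this weighted sum is the HEVC-style planar interpolation sketched below; this is an assumption made for illustration, not the normative procedure of this document. top and left are arrays of n + 1 reconstructed reference samples, where top[n] is the upper-right sample and left[n] is the lower-left sample, and n is a power of two.

import numpy as np

def planar_predict(top: np.ndarray, left: np.ndarray, n: int) -> np.ndarray:
    # Each predicted sample is a position-dependent weighted sum of the
    # left, upper, upper-right (top[n]) and lower-left (left[n]) references.
    pred = np.zeros((n, n), dtype=np.int32)
    for y in range(n):
        for x in range(n):
            hor = (n - 1 - x) * int(left[y]) + (x + 1) * int(top[n])
            ver = (n - 1 - y) * int(top[x]) + (y + 1) * int(left[n])
            pred[y, x] = (hor + ver + n) // (2 * n)  # == >> (log2(n) + 1)
    return pred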
When the intra prediction mode is the DC mode, an average value of the reference samples above the target block and the reference samples to the left of the target block may be used in generating the prediction block of the target block. Further, filtering using the value of the reference sampling point may be performed on a specific row or a specific column in the target block. The particular row may be one or more upper rows adjacent to the reference sampling point. The particular column may be one or more left-hand columns adjacent to the reference sample point.
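A corresponding sketch of the DC mode follows; it is again an illustrative assumption, and the row/column filtering mentioned above is omitted.

import numpy as np

def dc_predict(top: np.ndarray, left: np.ndarray, n: int) -> np.ndarray:
    # Fill the prediction block with the rounded average of the n upper
    # and the n left reference samples of the target block.
    dc = (int(top[:n].sum()) + int(left[:n].sum()) + n) // (2 * n)
    return np.full((n, n), dc, dtype=np.int32)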
When the intra prediction mode is a directional mode, the prediction block may be generated using the upper reference sample, the left reference sample, the upper right reference sample, and/or the lower left reference sample of the target block.
To generate the predicted samples described above, real number based interpolation may be performed.
The intra prediction mode of the target block may be predicted from intra prediction modes of neighboring blocks adjacent to the target block, and information for prediction may be entropy-encoded/entropy-decoded.
For example, when the intra prediction modes of the target block and the neighboring block are identical to each other, the intra prediction modes of the target block and the neighboring block may be signaled to be identical using a predefined flag.
For example, an indicator indicating the same intra prediction mode as that of the target block among intra prediction modes of a plurality of neighboring blocks may be signaled.
When the intra prediction modes of the target block and the neighboring block are different from each other, information regarding the intra prediction mode of the target block may be encoded and/or decoded using entropy encoding and/or entropy decoding.
Fig. 8 is a diagram illustrating reference samples used in an intra prediction process.
The reconstructed reference samples for intra prediction of the target block may include lower-left reference samples, left reference samples, an upper-left corner reference sample, upper reference samples, and upper-right reference samples.
For example, the left reference sample point may represent a reconstructed reference pixel adjacent to the left side of the target block. The upper reference sample point may represent a reconstructed reference pixel adjacent to the top of the target block. The upper left reference sample point may represent a reconstructed reference pixel located at the upper left corner of the target block. The lower-left reference sampling point may represent a reference sampling point located below a left side sampling point line composed of the left reference sampling points among sampling points located on the same line as the left side sampling point line. The upper right reference sampling point may represent a reference sampling point located at the right side of an upper sampling point line composed of the upper reference sampling points among sampling points located on the same line as the upper sampling point line.
When the size of the target block is N × N, the numbers of the lower-left reference samples, the left reference samples, the upper reference samples, and the upper-right reference samples may each be N.
By performing intra prediction on the target block, a prediction block may be generated. The process of generating the prediction block may include determining values of pixels in the prediction block. The sizes of the target block and the prediction block may be the same.
The reference samples used for intra prediction of the target block may change according to the intra prediction mode of the target block. The direction of the intra prediction mode may represent a dependency between the reference samples and the pixels of the prediction block. For example, the value of a specified reference sample may be used as the value of one or more specified pixels in the prediction block. In this case, the specified reference sample and the one or more specified pixels in the prediction block may be the sample and pixels located on a straight line along the direction of the intra prediction mode. In other words, the value of the specified reference sample may be copied as the value of a pixel located in the direction opposite to the direction of the intra prediction mode. Alternatively, the value of a pixel in the prediction block may be the value of the reference sample located in the direction of the intra prediction mode with respect to the position of the pixel.
In an example, when the intra prediction mode of the target block is the vertical mode, the upper reference samples may be used for intra prediction. When the intra prediction mode is the vertical mode, the value of a pixel in the prediction block may be the value of the reference sample vertically above the position of the pixel. Therefore, the upper reference samples adjacent to the top of the target block may be used for intra prediction. Furthermore, the values of the pixels in each row of the prediction block may be the same as the values of the upper reference samples.
In an example, when the intra prediction mode of the target block is a horizontal mode, the left reference sample may be used for intra prediction. When the intra prediction mode is a horizontal mode, the value of a pixel in the prediction block may be a value of a reference sample point horizontally located to the left of the position of the pixel. Therefore, the left reference samples adjacent to the left side of the target block may be used for intra prediction. Furthermore, the values of pixels in a column of the prediction block may be the same as the values of pixels of the left reference sample point.
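The vertical and horizontal modes described in the two examples above amount to copying reference samples, as in the following sketch (illustrative function and argument names assumed):

import numpy as np

def directional_copy(top: np.ndarray, left: np.ndarray, n: int, mode: str) -> np.ndarray:
    pred = np.zeros((n, n), dtype=np.int32)
    if mode == "vertical":      # every row repeats the upper reference samples
        for y in range(n):
            pred[y, :] = top[:n]
    elif mode == "horizontal":  # every column repeats the left reference samples
        for x in range(n):
            pred[:, x] = left[:n]
    return pred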
In an example, when the mode value of the intra prediction mode of the target block is 34, at least some of the left reference samples, the upper-left corner reference sample, and at least some of the upper reference samples may be used for intra prediction. When the mode value of the intra prediction mode is 34, the value of a pixel in the prediction block may be the value of the reference sample located diagonally to the upper left of the pixel.
Further, in the case of an intra prediction mode in which the mode value is a value ranging from 52 to 66, at least a portion of the upper-right reference samples may be used for intra prediction.
Further, in the case of an intra prediction mode in which the mode value is a value ranging from 2 to 17, at least a part of the lower left reference sample may be used for intra prediction.
Further, in the case of an intra prediction mode in which the mode value is a value ranging from 19 to 49, the upper left reference sample may be used for intra prediction.
The number of reference samples used to determine the pixel value of one pixel in the prediction block may be 1 or 2 or more.
As described above, the pixel values of the pixels in the prediction block may be determined according to the positions of the pixels and the positions of the reference samples indicated by the direction of the intra prediction mode. When the position of the pixel and the position of the reference sample point indicated by the direction of the intra prediction mode are integer positions, the value of one reference sample point indicated by the integer position may be used to determine the pixel value of the pixel in the prediction block.
When the position of the pixel and the position of the reference sample indicated by the direction of the intra prediction mode are not integer positions, an interpolated reference sample may be generated based on the two reference samples closest to the indicated position. The value of the interpolated reference sample may be used to determine the pixel value of the pixel in the prediction block. In other words, when the position of the pixel in the prediction block and the position of the reference sample indicated by the direction of the intra prediction mode indicate a position between two reference samples, an interpolated value based on the values of the two reference samples may be generated.
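This two-sample interpolation may be sketched as follows; real codecs typically use fixed-point fractional weights (e.g., 1/32-sample precision), so this floating-point version only illustrates the idea.

def interpolate_reference(ref, pos: float) -> int:
    # 'pos' is the (possibly fractional) reference position projected along
    # the prediction direction; blend the two nearest reference samples.
    i = int(pos)
    frac = pos - i
    if frac == 0.0:
        return int(ref[i])
    return int(round((1.0 - frac) * ref[i] + frac * ref[i + 1]))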
The prediction block generated via prediction may be different from the original target block. In other words, there may be a prediction error, which is a difference between the target block and the prediction block, and there may also be a prediction error between pixels of the target block and pixels of the prediction block.
Hereinafter, the terms "difference", "error" and "residual" may be used to have the same meaning and may be used interchangeably with each other.
For example, in the case of directional intra prediction, the longer the distance between the pixels of the predicted block and the reference sample, the larger the prediction error that may occur. Such a prediction error may cause discontinuity between the generated prediction block and the neighboring block.
To reduce the prediction error, a filtering operation for the prediction block may be used. The filtering operation may be configured to adaptively apply a filter to a region in the prediction block that is considered to have a large prediction error. For example, a region considered to have a large prediction error may be a boundary of a prediction block. In addition, regions that are considered to have a large prediction error in a prediction block may be different according to an intra prediction mode, and characteristics of a filter may also be different according to the intra prediction mode.
As shown in fig. 8, for intra prediction of the target block, at least one of reference line 0 to reference line 3 may be used. Each reference line may indicate a reference sample line. A smaller reference line number indicates a reference sample line closer to the target block.
The samples in segment A and segment F may be obtained via padding using the samples in segment B and segment E that are closest to the target block, rather than from reconstructed neighboring blocks.
Index information indicating a reference sample line to be used for intra prediction of a target block may be signaled. The index information may indicate a reference sample line of a plurality of reference sample lines to be used for intra prediction of a target block. For example, the index information may have a value corresponding to any one of 0 to 3.
When the upper boundary of the target block is the boundary of the CTU, only the reference sample line 0 may be available. Thus, in this case, the index information may not be signaled. When an additional reference sample line other than the reference sample line 0 is used, filtering of a prediction block, which will be described later, may not be performed.
In the case of inter-color intra prediction, a prediction block of a target block of a second color component may be generated based on a corresponding reconstructed block of a first color component.
For example, the first color component may be a luminance component and the second color component may be a chrominance component.
To perform inter-color intra prediction, parameters of a linear model between the first color component and the second color component may be derived based on the template.
The template may include reference samples (upper reference samples) above the target block and/or reference samples (left reference samples) to the left of the target block, and may include upper reference samples and/or left reference samples of a reconstructed block of the first color component corresponding to the reference samples.
For example, the following values may be used to derive the parameters of the linear model: 1) a value of a sample point of a first color component having a maximum value among sample points in the template, 2) a value of a sample point of a second color component corresponding to a sample point of the first color component, 3) a value of a sample point of a first color component having a minimum value among sample points in the template, and 4) a value of a sample point of a second color component corresponding to a sample point of the first color component.
When the parameters of the linear model are derived, the prediction block of the target block may be generated by applying the corresponding reconstructed block to the linear model.
According to the image format, subsampling may be performed on samples adjacent to the reconstructed block of the first color component and the corresponding reconstructed block of the first color component. For example, when one sampling point of the second color component corresponds to four sampling points of the first color component, one corresponding sampling point may be calculated by performing sub-sampling on the four sampling points of the first color component. When performing sub-sampling, derivation of parameters of the linear model and inter-color intra prediction may be performed based on the sub-sampled corresponding sampling points.
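The min/max parameter derivation described above may be sketched as follows. Floating-point division is used here for clarity, whereas a real implementation would use fixed-point arithmetic, and the template arrays are assumed to hold co-located (already subsampled) first- and second-color-component samples.

def linear_model_parameters(luma_template, chroma_template):
    # The two luma extremes in the template and their corresponding chroma
    # samples define the line: chroma ~ a * luma + b.
    i_max = max(range(len(luma_template)), key=lambda i: luma_template[i])
    i_min = min(range(len(luma_template)), key=lambda i: luma_template[i])
    denom = luma_template[i_max] - luma_template[i_min]
    a = 0.0 if denom == 0 else (chroma_template[i_max] - chroma_template[i_min]) / denom
    b = chroma_template[i_min] - a * luma_template[i_min]
    return a, b

# Prediction then applies the model to the reconstructed first-component block:
# pred_chroma = a * reconstructed_luma + b (clipped to the valid sample range).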
Information about whether inter-color intra prediction is performed and/or the range of the template may be signaled as part of the intra prediction mode.
The target block may be partitioned into two or four sub-blocks in the horizontal direction and/or the vertical direction.
The sub-blocks generated by the partitioning may be sequentially reconstructed. That is, when intra prediction is performed on each sub-block, a sub-prediction block for the sub-block may be generated. Further, when dequantization (inverse quantization) and/or inverse transformation is performed on each sub-block, a sub-residual block for the corresponding sub-block may be generated. A reconstructed sub-block may be generated by adding the sub-prediction block to the sub-residual block. The reconstructed sub-block may be used as reference samples for intra prediction of the sub-block having the next priority.
A sub-block may be a block that includes a certain number (e.g., 16) or more samples. For example, when the target block is an 8 × 4 block or a 4 × 8 block, the target block may be partitioned into two sub-blocks. Further, when the target block is a 4 × 4 block, the target block cannot be partitioned into sub-blocks. When the target block has another size, the target block may be partitioned into four sub-blocks.
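The sub-block counting rule in the preceding paragraph may be sketched as follows (function name hypothetical):

def num_intra_sub_blocks(w: int, h: int) -> int:
    # Keep at least 16 samples per sub-block: 4x4 is never partitioned,
    # 8x4 and 4x8 are partitioned into two, and larger blocks into four.
    if (w, h) == (4, 4):
        return 1
    if (w, h) in ((8, 4), (4, 8)):
        return 2
    return 4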
Information on whether to perform intra prediction based on the sub-blocks and/or information on a partition direction (horizontal direction or vertical direction) may be signaled.
Such sub-block based intra prediction may be restricted such that it is performed only when the reference sample line 0 is used. When the sub-block-based intra prediction is performed, filtering of a prediction block, which will be described below, may not be performed.
The final prediction block may be generated by performing filtering on the prediction block generated through intra prediction.
The filtering may be performed by applying a specific weight to the filtering target samples, the left reference samples, the upper reference samples, and/or the upper left reference samples, which are targets to be filtered.
The weight for filtering and/or the reference samples (e.g., the range of the reference samples, the location of the reference samples, etc.) may be determined based on at least one of the block size, the intra prediction mode, and the location of the filtering target samples in the prediction block.
For example, the filtering may be performed only in a specific intra prediction mode (e.g., DC mode, planar mode, vertical mode, horizontal mode, diagonal mode, and/or adjacent diagonal mode).
An adjacent diagonal mode may be a mode whose number is obtained by adding k to the number of a diagonal mode, or a mode whose number is obtained by subtracting k from the number of a diagonal mode. In other words, the number of an adjacent diagonal mode may be the sum of, or the difference between, the number of a diagonal mode and k. For example, k may be a positive integer of 8 or less.
The intra prediction mode of the target block may be derived using intra prediction modes of neighboring blocks existing near the target block, and such derived intra prediction modes may be entropy-encoded and/or entropy-decoded.
For example, when the intra prediction mode of the target block is the same as the intra prediction modes of the neighboring blocks, information indicating that the intra prediction mode of the target block is the same as the intra prediction modes of the neighboring blocks may be signaled using the specific flag information.
Also, for example, indicator information of neighboring blocks of which intra prediction modes are the same as the intra prediction mode of the target block among the intra prediction modes of the plurality of neighboring blocks may be signaled.
For example, when the intra prediction mode of the target block is different from the intra prediction modes of the neighboring blocks, information about the intra prediction mode of the target block may be entropy-encoded and/or entropy-decoded based on the intra prediction modes of the neighboring blocks.
Fig. 9 is a diagram for explaining an embodiment of an inter prediction process.
The rectangles shown in fig. 9 may represent images (or pictures). Further, in fig. 9, the arrows may indicate prediction directions. An arrow pointing from a first picture to a second picture indicates that the second picture refers to the first picture. That is, each image may be encoded and/or decoded according to the prediction direction.
Images may be classified into an intra picture (I picture), a uni-predictive picture or predictive-coded picture (P picture), and a bi-predictive picture or bi-predictive-coded picture (B picture) according to the coding type. Each picture may be encoded and/or decoded according to its coding type.
When the target image that is the target to be encoded is an I picture, the target image can be encoded using data contained in the image itself without performing inter prediction with reference to other images. For example, an I picture may be encoded via intra prediction only.
When the target image is a P picture, the target image may be encoded via inter prediction using a reference picture existing in one direction. Here, the one direction may be a forward direction or a backward direction.
When the target image is a B picture, the image may be encoded via inter prediction using reference pictures existing in both directions, or may be encoded via inter prediction using reference pictures existing in one of a forward direction and a backward direction. Here, the two directions may be a forward direction and a backward direction.
P-pictures and B-pictures encoded and/or decoded using reference pictures may be considered images using inter prediction.
Hereinafter, inter prediction in the inter mode according to the embodiment will be described in detail.
Inter prediction or motion compensation may be performed using the reference picture and the motion information.
In the inter mode, the encoding apparatus 100 may perform inter prediction and/or motion compensation on the target block. The decoding apparatus 200 may perform inter prediction and/or motion compensation corresponding to the inter prediction and/or motion compensation performed by the encoding apparatus 100 on the target block.
The motion information of the target block may be separately derived by the encoding apparatus 100 and the decoding apparatus 200 during inter prediction. The motion information may be derived using motion information of reconstructed neighboring blocks, motion information of a col block, and/or motion information of blocks adjacent to the col block.
For example, the encoding apparatus 100 or the decoding apparatus 200 may perform prediction and/or motion compensation by using motion information of a spatial candidate and/or a temporal candidate as motion information of a target block. The target block may represent a PU and/or a PU partition.
The spatial candidate may be a reconstructed block spatially adjacent to the target block.
The temporal candidate may be a reconstructed block corresponding to the target block in a previously reconstructed co-located picture (col picture).
In the inter prediction, the encoding apparatus 100 and the decoding apparatus 200 may improve encoding efficiency and decoding efficiency by using motion information of spatial candidates and/or temporal candidates. The motion information of the spatial candidates may be referred to as "spatial motion information". The motion information of the temporal candidate may be referred to as "temporal motion information".
Here, the motion information of the spatial candidate may be the motion information of the PU including the spatial candidate. The motion information of the temporal candidate may be the motion information of the PU including the temporal candidate. The motion information of a candidate block may be the motion information of the PU that includes the candidate block.
Inter prediction may be performed using a reference picture.
The reference picture may be at least one of a picture preceding the target picture and a picture following the target picture. The reference picture may be an image used for prediction of the target block.
In inter prediction, a region in a reference picture may be specified using a reference picture index (or refIdx) indicating the reference picture, a motion vector to be described later, or the like. Here, the area specified in the reference picture may indicate a reference block.
Inter prediction may select a reference picture, and may also select a reference block corresponding to the target block from the reference picture. Further, inter prediction may generate a prediction block for a target block using the selected reference block.
The motion information may be derived by each of the encoding apparatus 100 and the decoding apparatus 200 during inter prediction.
The spatial candidates may be 1) blocks that exist in the target picture that 2) have been previously reconstructed via encoding and/or decoding and 3) are adjacent to the target block or located at corners of the target block. Here, the "block located at a corner of the target block" may be a block vertically adjacent to an adjacent block horizontally adjacent to the target block, or a block horizontally adjacent to an adjacent block vertically adjacent to the target block. Further, "a block located at a corner of the target block" may have the same meaning as "a block adjacent to the corner of the target block". The meaning of "a block located at a corner of a target block" may be included in the meaning of "a block adjacent to the target block".
For example, the spatial candidate may be a reconstructed block located to the left of the target block, a reconstructed block located above the target block, a reconstructed block located in the lower left corner of the target block, a reconstructed block located in the upper right corner of the target block, or a reconstructed block located in the upper left corner of the target block.
Each of the encoding apparatus 100 and the decoding apparatus 200 can identify a block existing in a position spatially corresponding to a target block in a col picture. The position of the target block in the target picture and the position of the identified block in the col picture may correspond to each other.
Each of the encoding apparatus 100 and the decoding apparatus 200 may determine, as the temporal candidate, a col block existing at a predefined relative position with respect to the identified block. The predefined relative position may be a position existing inside and/or outside the identified block.
For example, the col blocks may include a first col block and a second col block. When the coordinates of the identified block are (xP, yP) and the size of the identified block is denoted by (nPSW, nPSH), the first col block may be the block located at coordinates (xP + nPSW, yP + nPSH). The second col block may be the block located at coordinates (xP + (nPSW >> 1), yP + (nPSH >> 1)). When the first col block is not available, the second col block may be selectively used.
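In code form, the two candidate positions read directly off the coordinates given above:

def col_block_positions(xP: int, yP: int, nPSW: int, nPSH: int):
    # First col block: the bottom-right neighbor position of the identified
    # block; second col block: the center position of the identified block.
    first = (xP + nPSW, yP + nPSH)
    second = (xP + (nPSW >> 1), yP + (nPSH >> 1))
    return first, second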
The motion vector of the target block may be determined based on the motion vector of the col block. Each of the encoding apparatus 100 and the decoding apparatus 200 may scale the motion vector of the col block. The scaled motion vector of the col block can be used as the motion vector of the target block. Further, the motion vector of the motion information of the temporal candidate stored in the list may be a scaled motion vector.
The ratio of the motion vector of the target block to the motion vector of the col block may be the same as the ratio of the first temporal distance to the second temporal distance. The first temporal distance may be the distance between the target picture of the target block and the reference picture of the target block. The second temporal distance may be the distance between the col picture and the reference picture of the col block.
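A sketch of this temporal scaling follows; the rounding and clipping that a real codec performs in fixed-point arithmetic are omitted.

def scale_col_motion_vector(mv_col, td_target: int, td_col: int):
    # Scale the col block's motion vector by the ratio of the two temporal
    # distances so the target block's motion vector keeps the same ratio.
    if td_col == 0:
        return mv_col
    s = td_target / td_col
    return (mv_col[0] * s, mv_col[1] * s)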
The scheme for deriving the motion information may vary according to the inter prediction mode of the target block. For example, as an inter prediction mode applied to inter prediction, there may be an Advanced Motion Vector Predictor (AMVP) mode, a merge mode, a skip mode, a merge mode having a motion vector difference, a sub-block merge mode, a triangle partition mode, an inter-intra combined prediction mode, an affine inter mode, a current picture reference mode, and the like. The merge mode may also be referred to as a "motion merge mode". Each mode will be described in detail below.
1) AMVP mode
When using the AMVP mode, the encoding apparatus 100 may search for similar blocks in a neighboring area of a target block. The encoding apparatus 100 may acquire a prediction block by performing prediction on a target block using motion information of the found similar block. The encoding apparatus 100 may encode a residual block that is a difference between the target block and the prediction block.
1-1) creating a list of predicted motion vector candidates
When the AMVP mode is used as the prediction mode, each of the encoding apparatus 100 and the decoding apparatus 200 may create a list of predicted motion vector candidates using a motion vector of a spatial candidate, a motion vector of a temporal candidate, and a zero vector. The predicted motion vector candidate list may include one or more predicted motion vector candidates. At least one of a motion vector of the spatial candidate, a motion vector of the temporal candidate, and a zero vector may be determined and used as the prediction motion vector candidate.
Hereinafter, the terms "prediction motion vector (candidate)" and "motion vector (candidate)" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "prediction motion vector candidate" and "AMVP candidate" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "predicted motion vector candidate list" and "AMVP candidate list" may be used to have the same meaning and may be used interchangeably with each other.
The spatial candidates may comprise reconstructed spatially neighboring blocks. In other words, the motion vectors of the reconstructed neighboring blocks may be referred to as "spatial prediction motion vector candidates".
The temporal candidates may include a col block and blocks adjacent to the col block. In other words, a motion vector of a col block or a motion vector of a block adjacent to the col block may be referred to as a "temporal prediction motion vector candidate".
The zero vector may be a (0,0) motion vector.
The predicted motion vector candidate may be a motion vector predictor for predicting a motion vector. Further, in the encoding apparatus 100, each predicted motion vector candidate may be an initial search position for a motion vector.
1-2) searching for motion vector using list of predicted motion vector candidates
The encoding apparatus 100 may determine a motion vector to be used for encoding the target block within the search range using the list of predicted motion vector candidates. Further, the encoding apparatus 100 may determine a predicted motion vector candidate to be used as a predicted motion vector of the target block among predicted motion vector candidates existing in the predicted motion vector candidate list.
The motion vector to be used for encoding the target block may be a motion vector that can be encoded at minimum cost.
In addition, the encoding apparatus 100 may determine whether to encode the target block using the AMVP mode.
1-3) Transmission of Interframe prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether the AMVP mode is used, 2) a prediction motion vector index, 3) a Motion Vector Difference (MVD), 4) a reference direction, and 5) a reference picture index.
Hereinafter, the terms "prediction motion vector index" and "AMVP index" may be used to have the same meaning and may be used interchangeably with each other.
In addition, the inter prediction information may include a residual signal.
When the mode information indicates that the AMVP mode is used, the decoding apparatus 200 may acquire a prediction motion vector index, an MVD, a reference direction, and a reference picture index from the bitstream through entropy decoding.
The prediction motion vector index may indicate a prediction motion vector candidate to be used for predicting the target block among prediction motion vector candidates included in the prediction motion vector candidate list.
1-4) inter prediction in AMVP mode using inter prediction information
The decoding apparatus 200 may derive the prediction motion vector candidate using the prediction motion vector candidate list, and may determine motion information of the target block based on the derived prediction motion vector candidate.
The decoding apparatus 200 may determine a motion vector candidate for the target block among the predicted motion vector candidates included in the predicted motion vector candidate list using the predicted motion vector index. The decoding apparatus 200 may select a predicted motion vector candidate indicated by the predicted motion vector index as the predicted motion vector of the target block from among the predicted motion vector candidates included in the predicted motion vector candidate list.
The encoding apparatus 100 may generate an entropy-encoded prediction motion vector index by applying entropy encoding to the prediction motion vector index, and may generate a bitstream including the entropy-encoded prediction motion vector index. The entropy-encoded prediction motion vector index may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract the entropy-encoded prediction motion vector index from the bitstream, and may acquire the prediction motion vector index by applying entropy decoding to the entropy-encoded prediction motion vector index.
The motion vector that is actually to be used for inter prediction of the target block may not match the predicted motion vector. To indicate the difference between the motion vector that will actually be used for inter-predicting the target block and the predicted motion vector, MVD may be used. The encoding apparatus 100 may derive a prediction motion vector similar to a motion vector that will be actually used for inter-predicting the target block in order to use an MVD as small as possible.
The MVD may be the difference between the motion vector of the target block and the predicted motion vector. The encoding apparatus 100 may calculate an MVD and may generate an entropy-encoded MVD by applying entropy encoding to the MVD. The encoding apparatus 100 may generate a bitstream including the entropy-encoded MVDs.
The MVD may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract entropy-encoded MVDs from the bitstream, and may acquire MVDs by applying entropy decoding to the entropy-encoded MVDs.
The decoding apparatus 200 may derive a motion vector of the target block by summing the MVD and the prediction motion vector. In other words, the motion vector of the target block derived by the decoding apparatus 200 may be the sum of the MVD and the motion vector candidate.
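This decoder-side reconstruction can be stated compactly; the list and tuple representations below are illustrative assumptions.

def derive_motion_vector(amvp_candidates, mvp_index: int, mvd):
    # Pick the predictor indicated by the signaled index and add the MVD.
    mvp = amvp_candidates[mvp_index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Example: derive_motion_vector([(4, -2), (0, 0)], 0, (1, 1)) -> (5, -1)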
Also, the encoding apparatus 100 may generate entropy-encoded MVD resolution information by applying entropy encoding to the calculated MVD resolution information, and may generate a bitstream including the entropy-encoded MVD resolution information. The decoding apparatus 200 may extract entropy-encoded MVD resolution information from the bitstream, and may acquire MVD resolution information by applying entropy decoding to the entropy-encoded MVD resolution information. The decoding apparatus 200 may adjust the resolution of the MVD using the MVD resolution information.
In addition, the encoding apparatus 100 may calculate the MVD based on an affine model. The decoding apparatus 200 may derive an affine control motion vector of the target block by the sum of the MVD and the affine control motion vector candidate, and may derive a motion vector of the sub-block using the affine control motion vector.
The reference direction may indicate a list of reference pictures to be used for predicting the target block. For example, the reference direction may indicate one of the reference picture list L0 and the reference picture list L1.
The reference direction indicates only a reference picture list to be used for prediction of the target block, and may not mean that the direction of the reference picture is limited to a forward direction or a backward direction. In other words, each of the reference picture list L0 and the reference picture list L1 may include pictures in the forward direction and/or the backward direction.
The reference direction being unidirectional may mean that a single reference picture list is used. The reference direction being bi-directional may mean that two reference picture lists are used. In other words, the reference direction may indicate one of the following: the case of using only the reference picture list L0, the case of using only the reference picture list L1, and the case of using two reference picture lists.
The reference picture index may indicate a reference picture for the prediction target block among reference pictures existing in the reference picture list. The encoding apparatus 100 may generate an entropy-encoded reference picture index by applying entropy encoding to the reference picture index, and may generate a bitstream including the entropy-encoded reference picture index. The entropy-encoded reference picture index may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract the entropy-encoded reference picture index from the bitstream, and may acquire the reference picture index by applying entropy decoding to the entropy-encoded reference picture index.
When two reference picture lists are used for prediction of a target block, a single reference picture index and a single motion vector may be used for each of the reference picture lists. Further, when two reference picture lists are used for predicting the target block, two prediction blocks may be specified for the target block. For example, an average or a weighted sum of the two prediction blocks for the target block may be used to generate the (final) prediction block of the target block.
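The combination of the two prediction blocks may be sketched as follows (a floating-point illustration; integer rounding and any signaling of the weights are omitted):

import numpy as np

def combine_bi_prediction(pred0: np.ndarray, pred1: np.ndarray,
                          w0: float = 0.5, w1: float = 0.5) -> np.ndarray:
    # Plain average when w0 == w1 == 0.5, otherwise a weighted sum of the
    # prediction blocks obtained from the two reference picture lists.
    return w0 * pred0 + w1 * pred1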
The motion vector of the target block may be derived using the prediction motion vector index, the MVD, the reference direction, and the reference picture index.
The decoding apparatus 200 may generate a prediction block for the target block based on the derived motion vector and the reference picture index. For example, the prediction block may be a reference block indicated by a derived motion vector in a reference picture indicated by a reference picture index.
Since the prediction motion vector index and the MVD are encoded while the motion vector itself of the target block is not encoded, the number of bits transmitted from the encoding apparatus 100 to the decoding apparatus 200 can be reduced and the encoding efficiency can be improved.
For the target block, motion information of the reconstructed neighboring blocks may be used. In a specific inter prediction mode, the encoding apparatus 100 may not separately encode the actual motion information of the target block. Instead of encoding the motion information of the target block, additional information that enables the motion information of the target block to be derived using the motion information of the reconstructed neighboring blocks may be encoded. Since the additional information is encoded, the number of bits transmitted to the decoding apparatus 200 may be reduced and the encoding efficiency may be improved.
For example, a skip mode and/or a merge mode may exist as inter prediction modes in which the motion information of the target block is not directly encoded. Here, each of the encoding apparatus 100 and the decoding apparatus 200 may use an identifier and/or an index that indicates, among the reconstructed neighboring units, the unit whose motion information is to be used as the motion information of the target unit.
2) Merge mode
As a scheme for deriving motion information of a target block, there is merging. The term "merging" may mean merging motion of multiple blocks. "merging" may mean that motion information of one block is also applied to other blocks. In other words, the merge mode may be a mode in which motion information of the target block is derived from motion information of neighboring blocks.
When the merge mode is used, the encoding apparatus 100 may predict the motion information of the target block using motion information of a spatial candidate and/or motion information of a temporal candidate. The spatial candidates may include reconstructed blocks that are spatially adjacent to the target block. The spatially adjacent blocks may include a left neighboring block and an upper neighboring block. The temporal candidates may include col blocks. The terms "spatial candidate" and "spatial merge candidate" may be used to have the same meaning and may be used interchangeably with each other. The terms "temporal candidate" and "temporal merge candidate" may be used to have the same meaning and may be used interchangeably with each other.
The encoding apparatus 100 may acquire a prediction block via prediction. The encoding apparatus 100 may encode a residual block that is a difference between the target block and the prediction block.
2-1) creating a merge candidate list
When the merge mode is used, each of the encoding apparatus 100 and the decoding apparatus 200 may create a merge candidate list using motion information of spatial candidates and/or motion information of temporal candidates. The motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction. The reference direction may be unidirectional or bidirectional. The reference direction may represent an inter prediction indicator.
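For illustration only, the pieces of motion information described above could be modeled as follows; the dataclass representation and the field names are assumptions of this sketch, not a normative structure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionInfo:
    """Motion information as described above: a motion vector per used
    reference picture list, a reference picture index per list, and the
    reference direction (the inter prediction indicator)."""
    mv_l0: Optional[Tuple[int, int]] = None   # motion vector for list L0
    mv_l1: Optional[Tuple[int, int]] = None   # motion vector for list L1
    ref_idx_l0: int = -1                      # reference picture index, L0
    ref_idx_l1: int = -1                      # reference picture index, L1

    @property
    def direction(self) -> str:
        """Unidirectional if a single list is used, bidirectional if both."""
        if self.mv_l0 is not None and self.mv_l1 is not None:
            return "bi"
        return "uni-L0" if self.mv_l0 is not None else "uni-L1"

print(MotionInfo(mv_l0=(2, 0), ref_idx_l0=0).direction)  # uni-L0
```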
The merge candidate list may include merge candidates. The merge candidate may be motion information. In other words, the merge candidate list may be a list storing a plurality of pieces of motion information.
The merge candidate may be motion information of a plurality of temporal candidates and/or spatial candidates. In other words, the merge candidate list may include motion information of temporal candidates and/or spatial candidates, and the like.
Further, the merge candidate list may include a new merge candidate generated by combining merge candidates already existing in the merge candidate list. In other words, the merge candidate list may include new motion information generated by combining a plurality of pieces of motion information previously existing in the merge candidate list.
Further, the merge candidate list may include history-based merge candidates. The history-based merge candidate may be motion information of a block that is encoded and/or decoded before the target block.
Further, the merge candidate list may include a merge candidate based on an average of the two merge candidates.
The merging candidate may be a specific mode of deriving inter prediction information. The merge candidate may be information indicating a specific mode of deriving inter prediction information. Inter prediction information for the target block may be derived from a particular mode indicated by the merge candidate. Further, the particular mode may include a process of deriving a series of inter prediction information. This particular mode may be an inter prediction information derivation mode or a motion information derivation mode.
The inter prediction information of the target block may be derived according to a mode indicated by a merge candidate selected among merge candidates in the merge candidate list by a merge index.
For example, the motion information derivation mode in the merge candidate list may be at least one of the following modes: 1) a motion information derivation mode for sub-block units and 2) an affine motion information derivation mode.
In addition, the merge candidate list may include motion information of a zero vector. The zero vector may also be referred to as a "zero merge candidate".
In other words, the pieces of motion information in the merge candidate list may be at least one of: 1) motion information of a spatial candidate, 2) motion information of a temporal candidate, 3) motion information generated by combining pieces of motion information previously existing in the merge candidate list, and 4) a zero vector.
The motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction. The reference direction may also be referred to as an "inter prediction indicator". The reference direction may be unidirectional or bidirectional. The unidirectional reference direction may indicate L0 prediction or L1 prediction.
The merge candidate list may be created before performing prediction in merge mode.
The number of merge candidates in the merge candidate list may be predefined. Each of the encoding apparatus 100 and the decoding apparatus 200 may add the merge candidates to the merge candidate list according to a predefined scheme and a predefined priority such that the merge candidate list has a predefined number of merge candidates. The merge candidate list of the encoding apparatus 100 and the merge candidate list of the decoding apparatus 200 may be made identical to each other using a predefined scheme and a predefined priority.
Merging may be applied on a CU or PU basis. When the merging is performed on a CU or PU basis, the encoding apparatus 100 may transmit a bitstream including predefined information to the decoding apparatus 200. For example, the predefined information may include 1) information indicating whether to perform merging for each block partition, and 2) information on a block on which merging is to be performed among blocks that are spatial candidates and/or temporal candidates for a target block.
2-2) searching for motion vector using merge candidate list
The encoding apparatus 100 may determine a merge candidate to be used for encoding the target block. For example, the encoding apparatus 100 may perform prediction on the target block using the merge candidate in the merge candidate list, and may generate a residual block for the merge candidate. The encoding apparatus 100 may encode the target block using a merging candidate that generates the minimum cost in the encoding of the prediction and residual blocks.
In addition, the encoding apparatus 100 may determine whether to encode the target block using the merge mode.
2-3) Transmission of inter prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The encoding apparatus 100 may generate entropy-encoded inter prediction information by performing entropy encoding on the inter prediction information, and may transmit a bitstream including the entropy-encoded inter prediction information to the decoding apparatus 200. The entropy-encoded inter prediction information may be signaled by the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract entropy-encoded inter prediction information from a bitstream, and may acquire the inter prediction information by applying entropy decoding to the entropy-encoded inter prediction information.
The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether a merge mode is used, 2) a merge index, and 3) correction information.
Furthermore, the inter prediction information may include a residual signal.
The decoding apparatus 200 may acquire the merge index from the bitstream only when the mode information indicates that the merge mode is used.
The mode information may be a merge flag. The unit of the mode information may be a block. The information on the block may include mode information, and the mode information may indicate whether a merge mode is applied to the block.
The merge index may indicate a merge candidate to be used for prediction of the target block among merge candidates included in the merge candidate list. Alternatively, the merge index may indicate a block to be merged with the target block among neighboring blocks spatially or temporally adjacent to the target block.
The encoding apparatus 100 may select a merging candidate having the highest encoding performance among the merging candidates included in the merging candidate list, and may set a value of the merging index to indicate the selected merging candidate.
The correction information may be information for correcting a motion vector. The encoding apparatus 100 may generate correction information. The decoding apparatus 200 may correct the motion vector of the merge candidate selected by the merge index based on the correction information.
The correction information may include at least one of information indicating whether correction is to be performed, correction direction information, and correction size information. The prediction mode for correcting the motion vector based on the signaled correction information may be referred to as a "merge mode with motion vector difference".
2-4) inter prediction of merge mode using inter prediction information
The decoding apparatus 200 may perform prediction on the target block using the merge candidate indicated by the merge index among the merge candidates included in the merge candidate list.
The motion vector, the reference picture index, and the reference direction of the target block may be specified by the motion vector, the reference picture index, and the reference direction of the merge candidate indicated by the merge index.
3) Skip mode
The skip mode may be a mode in which motion information of a spatial candidate or motion information of a temporal candidate is applied to the target block without change. Also, the skip mode may be a mode that does not use a residual signal. In other words, when the skip mode is used, the reconstructed block may be the same as the predicted block.
The difference between the merge mode and the skip mode is whether a residual signal is sent or used. That is, the skip mode may be similar to the merge mode except that no residual signal is sent or used.
When the skip mode is used, the encoding apparatus 100 may transmit information on a block whose motion information is to be used as motion information of a target block among blocks that are spatial candidates or temporal candidates to the decoding apparatus 200 through a bitstream. The encoding apparatus 100 may generate entropy-encoded information by performing entropy encoding on the information, and may signal the entropy-encoded information to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract entropy-encoded information from a bitstream and may acquire the information by applying entropy decoding to the entropy-encoded information.
Also, when the skip mode is used, the encoding apparatus 100 may not send other syntax information (such as MVD) to the decoding apparatus 200. For example, when the skip mode is used, the encoding apparatus 100 may not signal syntax elements related to at least one of an MVD, a coded block flag, and a transform coefficient level to the decoding apparatus 200.
3-1) creating a merge candidate list
The skip mode may also use a merge candidate list. In other words, the merge candidate list may be used in both the merge mode and the skip mode. In this regard, the merge candidate list may also be referred to as a "skip candidate list" or a "merge/skip candidate list".
Alternatively, the skip mode may use an additional candidate list different from the candidate list of the merge mode. In this case, in the following description, the merge candidate list and the merge candidate may be replaced with the skip candidate list and the skip candidate, respectively.
The merge candidate list may be created before performing prediction in skip mode.
3-2) searching for motion vector using merge candidate list
The encoding apparatus 100 may determine a merging candidate to be used for encoding the target block. For example, the encoding apparatus 100 may perform prediction on the target block using the merge candidate in the merge candidate list. The encoding apparatus 100 may encode the target block using the merge candidate that generates the smallest cost in the prediction.
In addition, the encoding apparatus 100 may determine whether to encode the target block using the skip mode.
3-3) Transmission of inter prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether a skip mode is used and 2) a skip index.
The skip index may be the same as the merge index described above.
When the skip mode is used, the target block may be encoded without using a residual signal. The inter prediction information may not include a residual signal. Alternatively, the bitstream may not include a residual signal.
The decoding apparatus 200 may acquire the skip index from the bitstream only when the mode information indicates that the skip mode is used. As described above, the merge index and the skip index may be identical to each other. The decoding apparatus 200 may acquire the skip index from the bitstream only when the mode information indicates that the merge mode or the skip mode is used.
The skip index may indicate a merge candidate to be used for prediction of the target block among merge candidates included in the merge candidate list.
3-4) inter prediction in skip mode using inter prediction information
The decoding apparatus 200 may perform prediction on the target block using a merge candidate indicated by the skip index among merge candidates included in the merge candidate list.
The motion vector, the reference picture index, and the reference direction of the target block may be specified by the motion vector, the reference picture index, and the reference direction of the merge candidate indicated by the skip index.
4) Current picture reference mode
The current picture reference mode may represent a prediction mode that uses a previously reconstructed region in the target picture to which the target block belongs.
A motion vector specifying a previously reconstructed region may be used. The reference picture index of the target block may be used to determine whether the target block has been encoded in the current picture reference mode.
A flag or index indicating whether the target block is a block encoded in the current picture reference mode may be signaled by the encoding apparatus 100 to the decoding apparatus 200. Alternatively, whether or not the target block is a block encoded in the current picture reference mode may be inferred by the reference picture index of the target block.
When a target block is encoded in the current picture reference mode, the current picture may exist at a fixed position or a specific position in the reference picture list for the target block.
For example, the fixed position may be a position where the value of the reference picture index is 0 or the last position.
When the target picture exists at a specific position in the reference picture list, an additional reference picture index indicating such specific position may be signaled by the encoding apparatus 100 to the decoding apparatus 200.
5) Subblock merging mode
The sub-block merging mode may be a mode in which motion information is derived from sub-blocks of the CU.
When the subblock merging mode is applied, a subblock merging candidate list may be generated using motion information of a co-located subblock (col-sub-block) of the target subblock in a reference picture (i.e., a subblock-based temporal merging candidate) and/or an affine control point motion vector merging candidate.
6) Triangular partition mode
In the triangle partition mode, the target block may be partitioned in a diagonal direction, whereby sub-target blocks are generated. For each sub-target block, motion information of the corresponding sub-target block may be derived, and the derived motion information may be used to derive prediction samples for the sub-target block. The prediction samples of the target block may be derived by a weighted sum of the prediction samples of the sub-target blocks generated via the partitioning.
7) Combining inter-intra prediction modes
The combined inter-intra prediction mode may be a mode in which a predicted sample of the target block is derived using a weighted sum of predicted samples generated via inter prediction and predicted samples generated via intra prediction.
In the above-described mode, the decoding apparatus 200 may autonomously correct the derived motion information. For example, the decoding apparatus 200 may search for motion information having a minimum Sum of Absolute Differences (SAD) in a specific region based on a reference block indicated by the derived motion information, and may derive the found motion information as corrected motion information.
In the above-described mode, the decoding apparatus 200 may compensate for prediction samples derived through inter-prediction using optical flow.
In the AMVP mode, the merge mode, the skip mode, and the like described above, the index information of the list may be used to specify motion information to be used for prediction of the target block among pieces of motion information in the list.
In order to improve encoding efficiency, the encoding apparatus 100 may signal only an index of an element that generates the smallest cost in inter prediction of the target block among elements in the list. The encoding apparatus 100 may encode the index and may signal the encoded index.
Therefore, the encoding apparatus 100 and the decoding apparatus 200 must be able to derive the above-described lists (i.e., the prediction motion vector candidate list and the merge candidate list) from the same data using the same scheme. Here, the same data may include a reconstructed picture and a reconstructed block. Further, in order to specify an element using an index, the order of the elements in the list must be fixed.
Fig. 10 illustrates spatial candidates according to an embodiment.
In fig. 10, the positions of the spatial candidates are shown.
The large block at the center of the figure may represent the target block. The five small blocks may represent the spatial candidates.
The coordinates of the target block may be (xP, yP), and the size of the target block may be represented by (nPSW, nPSH).
Spatial candidate A0 may be a block adjacent to the lower-left corner of the target block. A0 may be the block occupying the pixel located at the coordinates (xP-1, yP + nPSH + 1).
Spatial candidate A1 may be a block adjacent to the left side of the target block. A1 may be the lowermost block among the blocks adjacent to the left side of the target block. Alternatively, A1 may be the block adjacent to the top of A0. A1 may be the block occupying the pixel located at the coordinates (xP-1, yP + nPSH).
Spatial candidate B0 may be a block adjacent to the upper-right corner of the target block. B0 may be the block occupying the pixel located at the coordinates (xP + nPSW + 1, yP-1).
Spatial candidate B1 may be a block adjacent to the top of the target block. B1 may be the rightmost block among the blocks adjacent to the top of the target block. Alternatively, B1 may be the block adjacent to the left of B0. B1 may be the block occupying the pixel located at the coordinates (xP + nPSW, yP-1).
Spatial candidate B2 may be the block adjacent to the upper-left corner of the target block. B2 may be the block occupying the pixel located at the coordinates (xP-1, yP-1).
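The candidate positions listed above can be summarized in a short sketch; the coordinate convention follows the description above, and the function name is illustrative.

```python
def spatial_candidate_pixels(xP, yP, nPSW, nPSH):
    """Pixel positions occupied by the five spatial candidates for a
    target block at (xP, yP) of size (nPSW, nPSH), as listed above."""
    return {
        "A0": (xP - 1, yP + nPSH + 1),   # below the lower-left corner
        "A1": (xP - 1, yP + nPSH),       # lowermost left neighbor
        "B0": (xP + nPSW + 1, yP - 1),   # right of the upper-right corner
        "B1": (xP + nPSW, yP - 1),       # rightmost top neighbor
        "B2": (xP - 1, yP - 1),          # upper-left corner
    }

print(spatial_candidate_pixels(64, 64, 16, 16)["A0"])  # (63, 81)
```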
Determination of availability of spatial and temporal candidates
In order to include the motion information of the spatial candidate or the motion information of the temporal candidate in the list, it is necessary to determine whether the motion information of the spatial candidate or the motion information of the temporal candidate is available.
Hereinafter, the candidate block may include a spatial candidate and a temporal candidate.
For example, the determination may be performed by sequentially applying the following steps 1) to 4).
Step 1) when a PU including a candidate block is located outside the boundary of a picture, the availability of the candidate block may be set to "false". The expression "availability is set to false" may have the same meaning as "set to unavailable".
Step 2) when a PU including a candidate block is located outside the boundary of a slice, the availability of the candidate block may be set to "false". When the target block and the candidate block are located in different slices, the availability of the candidate block may be set to "false".
Step 3) when a PU including a candidate block is located outside the boundary of a tile, the availability of the candidate block may be set to "false". When the target block and the candidate block are located in different tiles, the availability of the candidate block may be set to "false".
Step 4) when the prediction mode of a PU including the candidate block is an intra prediction mode, the availability of the candidate block may be set to "false". The availability of a candidate block may be set to "false" when a PU that includes the candidate block does not use inter prediction.
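A minimal sketch of steps 1) to 4) follows, assuming simple objects that expose position, slice, tile, and prediction-mode attributes; all attribute names are illustrative, not from any actual codec API.

```python
def is_candidate_available(cand_pu, target_block, picture):
    """Apply steps 1) to 4) above to decide whether the motion
    information of a candidate block may be placed in the list."""
    # Step 1: the PU containing the candidate must lie inside the picture.
    if not (0 <= cand_pu.x < picture.width and 0 <= cand_pu.y < picture.height):
        return False
    # Step 2: the candidate and the target block must lie in the same slice.
    if cand_pu.slice_id != target_block.slice_id:
        return False
    # Step 3: the candidate and the target block must lie in the same tile.
    if cand_pu.tile_id != target_block.tile_id:
        return False
    # Step 4: a candidate coded by intra prediction (i.e., one that does
    # not use inter prediction) carries no usable motion information.
    if cand_pu.pred_mode == "intra":
        return False
    return True
```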
Fig. 11 illustrates an order of adding motion information of spatial candidates to a merge list according to an embodiment.
As shown in fig. 11, when the pieces of motion information of the spatial candidates are added to the merge list, the order of A1, B1, B0, A0, and B2 may be used. That is, the pieces of motion information of the available spatial candidates may be added to the merge list in the order of A1, B1, B0, A0, and B2.
Method for deriving merge lists in merge mode and skip mode
As described above, the maximum number of merge candidates in the merge list may be set. The set maximum number may be indicated by "N". The set number may be transmitted from the encoding apparatus 100 to the decoding apparatus 200. The slice header may include N. In other words, the maximum number of merge candidates in the merge list for the target block of a slice may be set by the slice header. For example, the value of N may be 5.
Pieces of motion information (i.e., merging candidates) may be added to the merge list in the order of the following steps 1) to 4).
Step 1) Among the spatial candidates, the available spatial candidates may be added to the merge list. The pieces of motion information of the available spatial candidates may be added to the merge list in the order shown in fig. 11. Here, when the motion information of an available spatial candidate overlaps with other motion information already present in the merge list, the motion information may not be added to the merge list. The operation of checking whether corresponding motion information overlaps with other motion information present in the list may be simply referred to as an "overlap check".
The maximum number of pieces of motion information to be added may be N.
Step 2) When the number of pieces of motion information in the merge list is less than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the merge list. Here, when the motion information of the available temporal candidate overlaps with other motion information already present in the merge list, the motion information may not be added to the merge list.
Step 3) When the number of pieces of motion information in the merge list is less than N and the type of the target slice is "B", combined motion information generated via combined bi-prediction may be added to the merge list.
The target slice may be the slice that includes the target block.
The combined motion information may be a combination of the L0 motion information and the L1 motion information. The L0 motion information may be motion information referring only to the reference picture list L0. The L1 motion information may be motion information referring only to the reference picture list L1.
In the merge list, there may be one or more pieces of L0 motion information. Further, in the merge list, there may be one or more pieces of L1 motion information.
The combined motion information may include one or more pieces of combined motion information. When generating the combined motion information, L0 motion information and L1 motion information, which will be used for the step of generating the combined motion information, among the one or more pieces of L0 motion information and the one or more pieces of L1 motion information, may be previously defined. One or more pieces of combined motion information may be generated in a predefined order via combined bi-prediction using a pair of different pieces of motion information in the merge list. One piece of the pair of different motion information may be L0 motion information, and the other piece of the pair of different motion information may be L1 motion information.
For example, the combined motion information added with the highest priority may be a combination of the L0 motion information having a merge index of 0 and the L1 motion information having a merge index of 1. When the motion information having merge index 0 is not L0 motion information, or when the motion information having merge index 1 is not L1 motion information, the combined motion information may be neither generated nor added. The combined motion information added with the next priority may be a combination of the L0 motion information having a merge index of 1 and the L1 motion information having a merge index of 0. The subsequent detailed combinations may conform to other combinations defined in the video encoding/decoding field.
Here, when the combined motion information overlaps with other motion information already existing in the merge list, the combined motion information may not be added to the merge list.
Step 4) When the number of pieces of motion information in the merge list is less than N, motion information of a zero vector may be added to the merge list.
The zero vector motion information may be motion information in which the motion vector is a zero vector.
The number of pieces of zero vector motion information may be one or more. The reference picture indices of one or more pieces of zero vector motion information may be different from each other. For example, the value of the reference picture index of the first zero vector motion information may be 0. The reference picture index of the second zero vector motion information may have a value of 1.
The number of pieces of zero vector motion information may be the same as the number of reference pictures in the reference picture list.
The reference direction of the zero vector motion information may be bi-directional. Both motion vectors may be zero vectors. The number of pieces of zero vector motion information may be the smaller one of the number of reference pictures in the reference picture list L0 and the number of reference pictures in the reference picture list L1. Alternatively, when the number of reference pictures in the reference picture list L0 and the number of reference pictures in the reference picture list L1 are different from each other, the reference direction, which is unidirectional, may be used for the reference picture index that can be applied to only a single reference picture list.
The encoding apparatus 100 and/or the decoding apparatus 200 may sequentially add zero vector motion information to the merge list while changing the reference picture index.
Zero vector motion information may not be added to the merge list when it overlaps with other motion information already present in the merge list.
The order of the above-described steps 1) to 4) is merely exemplary, and may be changed. Furthermore, some of the above steps may be omitted according to predefined conditions.
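The following sketch assembles a merge list according to steps 1) to 4) above. It is a simplified illustration: the B-slice condition of step 3) and the signaling of N are omitted, candidates are modeled as plain tuples, and the `make_combined` callable that produces combined bi-prediction candidates is an assumption of this sketch.

```python
def build_merge_list(spatial, temporal, make_combined, num_ref_pics, N=5):
    """Assemble a merge list following steps 1) to 4) above.

    Candidates may be any equality-comparable representation of motion
    information, e.g. (motion_vector, ref_pic_index, direction) tuples.
    """
    merge_list = []

    def try_add(mi):
        # Overlap check: motion information already present is skipped,
        # and the list never grows beyond N entries.
        if mi not in merge_list and len(merge_list) < N:
            merge_list.append(mi)

    for mi in spatial:                    # step 1: available spatial candidates
        try_add(mi)
    for mi in temporal:                   # step 2: available temporal candidates
        try_add(mi)
    for mi in make_combined(merge_list):  # step 3: combined bi-prediction
        try_add(mi)
    for ref_idx in range(num_ref_pics):   # step 4: zero-vector candidates,
        try_add(((0, 0), ref_idx, "bi"))  # varying the reference picture index
    return merge_list
```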
Method for deriving a predicted motion vector candidate list in AMVP mode
The maximum number of predicted motion vector candidates in the predicted motion vector candidate list may be predefined. The predefined maximum number may be indicated by N. For example, the predefined maximum number may be 2.
The pieces of motion information (i.e., predicted motion vector candidates) may be added to the predicted motion vector candidate list in the order of step 1) to step 3) below.
Step 1) An available spatial candidate among the spatial candidates may be added to the predicted motion vector candidate list. The spatial candidates may include a first spatial candidate and a second spatial candidate.
The first spatial candidate may be one of A0, A1, scaled A0, and scaled A1. The second spatial candidate may be one of B0, B1, B2, scaled B0, scaled B1, and scaled B2.
The plurality of pieces of motion information of the available spatial candidates may be added to the prediction motion vector candidate list in the order of the first spatial candidate and the second spatial candidate. In this case, when the motion information of the available spatial candidate overlaps with other motion information already existing in the predicted motion vector candidate list, the motion information of the available spatial candidate may not be added to the predicted motion vector candidate list. In other words, when the value of N is 2, if the motion information of the second spatial candidate is the same as the motion information of the first spatial candidate, the motion information of the second spatial candidate may not be added to the predicted motion vector candidate list.
The maximum number of pieces of motion information added may be N.
Step 2) When the number of pieces of motion information in the predicted motion vector candidate list is less than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the predicted motion vector candidate list. In this case, when the motion information of the available temporal candidate overlaps with other motion information already present in the predicted motion vector candidate list, the motion information may not be added to the predicted motion vector candidate list.
Step 3) When the number of pieces of motion information in the predicted motion vector candidate list is less than N, zero vector motion information may be added to the predicted motion vector candidate list.
The zero vector motion information may include one or more pieces of zero vector motion information. The reference picture indices of the one or more pieces of zero vector motion information may be different from each other.
The encoding apparatus 100 and/or the decoding apparatus 200 may sequentially add pieces of zero vector motion information to the predicted motion vector candidate list while changing the reference picture index.
When the zero vector motion information overlaps with other motion information already existing in the predicted motion vector candidate list, the zero vector motion information may not be added to the predicted motion vector candidate list.
The description of zero vector motion information made above in connection with the merge list also applies to the zero vector motion information here. A repeated description thereof will be omitted.
The order of step 1) to step 3) described above is merely exemplary and may be changed. Furthermore, some of the steps may be omitted according to predefined conditions.
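A corresponding sketch for the AMVP list, under assumptions similar to the merge-list sketch above (candidates as (motion_vector, ref_pic_index) tuples, `None` marking an unavailable candidate), follows:

```python
def build_amvp_list(first_spatial, second_spatial, temporal, num_ref_pics, N=2):
    """Assemble a prediction motion vector candidate list following
    steps 1) to 3) above."""
    candidates = []

    def try_add(c):
        # Overlap check: a candidate equal to one already in the list is
        # not added, and the list never exceeds N entries.
        if c is not None and c not in candidates and len(candidates) < N:
            candidates.append(c)

    try_add(first_spatial)    # step 1: first, then second spatial candidate
    try_add(second_spatial)
    try_add(temporal)         # step 2: temporal candidate
    for ref_idx in range(num_ref_pics):   # step 3: zero-vector candidates
        try_add(((0, 0), ref_idx))
    return candidates

print(build_amvp_list(((3, -1), 0), ((3, -1), 0), None, num_ref_pics=2))
# [((3, -1), 0), ((0, 0), 0)]  -- the duplicate second candidate is pruned
```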
Fig. 12 illustrates a transform and quantization process according to an example.
As shown in fig. 12, the quantized level may be generated by performing transform and/or quantization processing on the residual signal.
The residual signal may be generated as a difference between the original block and the prediction block. Here, the prediction block may be a block generated via intra prediction or inter prediction.
The residual signal may be transformed into a signal in the frequency domain through the transform process.
The transform kernels used for the transform may include various DCT kernels, such as a Discrete Cosine Transform (DCT) type 2 (DCT-II) kernel, and various Discrete Sine Transform (DST) kernels.
These transform kernels may perform separable transforms or two-dimensional (2D) inseparable transforms on the residual signal. The separable transform may be a transform indicating that a one-dimensional (1D) transform is performed on the residual signal in each of a horizontal direction and a vertical direction.
The DCT type and the DST type adaptively used for the 1D transform may include DCT-V, DCT-VIII, DST-I, and DST-VII in addition to DCT-II, as shown in each of Table 3 and Table 4 below.
TABLE 3
(The contents of Table 3 are presented as an image in the original publication and are not reproduced here.)
TABLE 4

Transformation set | Transformation candidates
0 | DST-VII, DCT-VIII, DST-I
1 | DST-VII, DST-I, DCT-VIII
2 | DST-VII, DCT-V, DST-I
As shown in tables 3 and 4, when a DCT type or a DST type to be used for transformation is derived, a transformation set may be used. Each transform set may include a plurality of transform candidates. Each transform candidate may be of a DCT type or a DST type.
Table 5 below shows an example of a transform set to be applied to the horizontal direction and a transform set to be applied to the vertical direction according to the intra prediction mode.
TABLE 5

Intra prediction mode     0  1  2  3  4  5  6  7  8  9
Vertical transform set    2  1  0  1  0  1  0  1  0  1
Horizontal transform set  2  1  0  1  0  1  0  1  0  1

Intra prediction mode     10 11 12 13 14 15 16 17 18 19
Vertical transform set    0  1  0  1  0  0  0  0  0  0
Horizontal transform set  0  1  0  1  2  2  2  2  2  2

Intra prediction mode     20 21 22 23 24 25 26 27 28 29
Vertical transform set    0  0  0  1  0  1  0  1  0  1
Horizontal transform set  2  2  2  1  0  1  0  1  0  1

Intra prediction mode     30 31 32 33 34 35 36 37 38 39
Vertical transform set    0  1  0  1  0  1  0  1  0  1
Horizontal transform set  0  1  0  1  0  1  0  1  0  1

Intra prediction mode     40 41 42 43 44 45 46 47 48 49
Vertical transform set    0  1  0  1  0  1  2  2  2  2
Horizontal transform set  0  1  0  1  0  1  0  0  0  0

Intra prediction mode     50 51 52 53 54 55 56 57 58 59
Vertical transform set    2  2  2  2  2  1  0  1  0  1
Horizontal transform set  0  0  0  0  0  1  0  1  0  1

Intra prediction mode     60 61 62 63 64 65 66
Vertical transform set    0  1  0  1  0  1  0
Horizontal transform set  0  1  0  1  0  1  0
Table 5 shows the numbers of the vertical transform sets and the horizontal transform sets to be applied in the vertical direction and the horizontal direction of the residual signal, respectively, according to the intra prediction mode of the target block.
As illustrated in table 5, a transform set to be applied to the horizontal direction and the vertical direction may be predefined according to the intra prediction mode of the target block. The encoding apparatus 100 may perform transform and inverse transform on a residual signal using a transform included in a transform set corresponding to an intra prediction mode of a target block. Also, the decoding apparatus 200 may perform inverse transformation on the residual signal using a transform included in a transform set corresponding to the intra prediction mode of the target block.
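For illustration, the lookup described above can be sketched as follows; only the first ten intra prediction modes of Table 5 are reproduced, and the container and function names are assumptions of this sketch.

```python
# Excerpt of Table 5: transform-set numbers for intra prediction
# modes 0..9. The full table extends to mode 66.
VERTICAL_SET   = {0: 2, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1, 6: 0, 7: 1, 8: 0, 9: 1}
HORIZONTAL_SET = {0: 2, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1, 6: 0, 7: 1, 8: 0, 9: 1}

# Table 4: the transform candidates that make up each transform set.
TRANSFORM_SETS = {
    0: ("DST-VII", "DCT-VIII", "DST-I"),
    1: ("DST-VII", "DST-I", "DCT-VIII"),
    2: ("DST-VII", "DCT-V", "DST-I"),
}

def transforms_for_mode(intra_mode):
    """Return the (vertical, horizontal) transform-candidate tuples that
    Tables 4 and 5 associate with the given intra prediction mode."""
    v = TRANSFORM_SETS[VERTICAL_SET[intra_mode]]
    h = TRANSFORM_SETS[HORIZONTAL_SET[intra_mode]]
    return v, h

print(transforms_for_mode(0))
# (('DST-VII', 'DCT-V', 'DST-I'), ('DST-VII', 'DCT-V', 'DST-I'))
```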
In the transform and inverse transform, as illustrated in table 3, table 4, and table 5, a transform set to be applied to a residual signal may be determined and may not be signaled. The transformation indication information may be signaled from the encoding apparatus 100 to the decoding apparatus 200. The transformation indication information may be information indicating which one of a plurality of transformation candidates included in a transformation set to be applied to the residual signal is used.
For example, when the size of the target block is 64 × 64 or less, transform sets each having three transforms may be configured according to the intra prediction mode. The optimal transformation method may be selected from a total of nine multi-transformation methods resulting from a combination of three transformations in the horizontal direction and three transformations in the vertical direction. By such an optimal transformation method, a residual signal may be encoded and/or decoded, and thus encoding efficiency may be improved.
Here, the information indicating which one of a plurality of transforms belonging to each transform set has been used for at least one of a vertical transform and a horizontal transform may be entropy-encoded and/or entropy-decoded. Here, truncated unary binarization may be used to encode and/or decode such information.
As described above, a method using various transforms may be applied to a residual signal generated via intra prediction or inter prediction.
The transform may include at least one of a first transform and a secondary transform. The transform coefficient may be generated by performing a first transform on the residual signal, and the secondary transform coefficient may be generated by performing a secondary transform on the transform coefficient.
The first transformation may be referred to as a "primary transformation". Further, the first transformation may also be referred to as an "adaptive multi-transformation (AMT) scheme". As described above, the AMT may represent applying different transforms to respective 1D directions (i.e., vertical and horizontal directions).
The secondary transform may be a transform for increasing the energy concentration of transform coefficients generated by the first transform. Similar to the first transform, the secondary transform may be a separable transform or a non-separable transform. Such an inseparable transform may be an inseparable secondary transform (NSST).
The first transformation may be performed using at least one of a predefined plurality of transformation methods. For example, the predefined multiple transform methods may include Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Karhunen-Loeve transform (KLT), and the like.
Further, the first transform may be a transform having various types according to a kernel function defining a Discrete Cosine Transform (DCT) or a Discrete Sine Transform (DST).
For example, the first transform may include transforms such as DCT-2, DCT-5, DCT-8, DST-1, and DST-7, according to the transform kernels presented in Table 6 below. In Table 6 below, various transform types and transform kernels for Multiple Transform Selection (MTS) are illustrated.
MTS may refer to the selection of a combination of one or more DCT and/or DST kernels to transform the residual signal in the horizontal and/or vertical directions.
TABLE 6
(The contents of Table 6, listing the transform types and transform kernels used for MTS, are presented as an image in the original publication and are not reproduced here.)
In Table 6, i and j may be integer values equal to or greater than 0 and less than or equal to N-1.
A secondary transform may be performed on transform coefficients generated by performing the first transform.
As in the first transformation, a set of transformations may also be defined in the secondary transformation. The method for deriving and/or determining the above-described set of transforms may be applied not only to the first transform but also to the secondary transform.
The first transformation and the secondary transformation may be determined for a particular target.
For example, the first transform and the secondary transform may be applied to signal components corresponding to one or more of a luminance (luma) component and a chrominance (chroma) component. Whether to apply the first transform and/or the secondary transform may be determined according to at least one of encoding parameters for the target block and/or the neighboring blocks. For example, whether to apply the first transform and/or the secondary transform may be determined according to the size and/or shape of the target block.
In the encoding apparatus 100 and the decoding apparatus 200, transform information indicating the transform method to be used for the target may be derived by using specified information.
For example, the transformation information may include transformation indices to be used for the primary transformation and/or the secondary transformation. Optionally, the transformation information may indicate that the primary transformation and/or the secondary transformation is not used.
For example, when a target of the primary transform and the secondary transform is a target block, a transform method to be applied to the primary transform and/or the secondary transform, which is indicated by the transform information, may be determined according to at least one of encoding parameters for the target block and/or blocks adjacent to the target block.
Alternatively, transformation information indicating a transformation method for a specific object may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
For example, whether to use the primary transform, the index indicating the primary transform, whether to use the secondary transform, and the index indicating the secondary transform may be derived as transform information by the decoding apparatus 200 for a single CU. Alternatively, for a single CU, transform information indicating the following may be signaled: whether to use a primary transformation, an index indicating a primary transformation, whether to use a secondary transformation, and an index indicating a secondary transformation.
The quantized transform coefficients (i.e., quantized levels) may be generated by performing quantization on a result generated by performing the first transform and/or the secondary transform or performing quantization on the residual signal.
Fig. 13 illustrates a diagonal scan according to an example.
Fig. 14 shows a horizontal scan according to an example.
Fig. 15 shows a vertical scan according to an example.
The quantized transform coefficients may be scanned via at least one of a (top right) diagonal scan, a vertical scan, and a horizontal scan according to at least one of an intra prediction mode, a block size, and a block shape. The block may be a Transform Unit (TU).
Each scan may be initiated at a particular starting point and may be terminated at a particular ending point.
For example, the quantized transform coefficients may be changed into a 1D vector form by scanning the coefficients of the block using the diagonal scan of fig. 13. Alternatively, the horizontal scan of fig. 14 or the vertical scan of fig. 15 may be used according to the size of the block and/or the intra prediction mode, instead of using the diagonal scan.
The vertical scanning may be an operation of scanning the 2D block type coefficients in the column direction. The horizontal scanning may be an operation of scanning the 2D block type coefficients in a row direction.
In other words, which one of the diagonal scan, the vertical scan, and the horizontal scan is to be used may be determined according to the size of the block and/or the intra prediction mode.
As shown in fig. 13, 14, and 15, the quantized transform coefficients may be scanned in a diagonal direction, a horizontal direction, or a vertical direction.
The quantized transform coefficients may be represented by block shapes. Each block may include a plurality of sub-blocks. Each sub-block may be defined according to a minimum block size or a minimum block shape.
In the scanning, a scanning order according to the type or direction of the scanning may be first applied to the subblocks. Also, a scanning order according to a direction of scanning may be applied to the quantized transform coefficients in each subblock.
For example, as shown in fig. 13, 14, and 15, when the size of the target block is 8 × 8, the quantized transform coefficient may be generated by the first transform, the secondary transform, and the quantization of the residual signal of the target block. Thus, one of three types of scanning orders may be applied to four 4 × 4 sub-blocks, and the quantized transform coefficients may also be scanned for each 4 × 4 sub-block according to the scanning order.
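A minimal sketch of this two-level scanning follows, assuming the up-right diagonal scan both for the order of the 4 × 4 sub-blocks and for the coefficients inside each sub-block; real codecs may also process coefficients in reverse scan order, which this sketch omits.

```python
def diagonal_scan_4x4():
    """Up-right diagonal scan order for a 4x4 sub-block: each
    anti-diagonal is walked from its lower-left to its upper-right end."""
    order = []
    for d in range(7):               # anti-diagonals 0..6 of a 4x4 block
        for y in range(3, -1, -1):   # bottom-most position of the diagonal first
            x = d - y
            if 0 <= x < 4:
                order.append((x, y))
    return order

def scan_8x8(block, sub_order):
    """Scan an 8x8 block of quantized transform coefficients into a 1D
    list: the scan order is applied first over the four 4x4 sub-blocks
    and then to the coefficients inside each sub-block."""
    out = []
    for sx, sy in sub_order:                  # sub-block scan
        for x, y in diagonal_scan_4x4():      # coefficient scan
            out.append(block[4 * sy + y][4 * sx + x])
    return out

# The sub-blocks are visited in the same diagonal pattern as coefficients.
sub_order = [(x, y) for (x, y) in diagonal_scan_4x4() if x < 2 and y < 2]
block = [[8 * r + c for c in range(8)] for r in range(8)]
coeffs = scan_8x8(block, sub_order)
print(len(coeffs), coeffs[:3])  # 64 [0, 8, 1]
```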
The encoding apparatus 100 may generate entropy-encoded quantized transform coefficients by performing entropy encoding on the scanned quantized transform coefficients, and may generate a bitstream including the entropy-encoded quantized transform coefficients.
The decoding apparatus 200 may extract entropy-encoded quantized transform coefficients from a bitstream, and may generate the quantized transform coefficients by performing entropy decoding on the entropy-encoded quantized transform coefficients. The quantized transform coefficients may be arranged in the form of 2D blocks via inverse scanning. Here, as a method of the inverse scanning, at least one of the upper right diagonal scanning, the vertical scanning, and the horizontal scanning may be performed.
In the decoding apparatus 200, inverse quantization may be performed on the quantized transform coefficients. The secondary inverse transform may be performed on a result generated by performing inverse quantization according to whether the secondary inverse transform is performed. Further, the first inverse transform may be performed on a result generated by performing the secondary inverse transform according to whether the first inverse transform is to be performed. The reconstructed residual signal may be generated by performing a first inverse transform on a result generated by performing the secondary inverse transform.
For the luminance component reconstructed via intra prediction or inter prediction, inverse mapping of the dynamic range may be performed before loop filtering.
The dynamic range may be divided into 16 equal segments, and mapping functions for the respective segments may be signaled. Such mapping functions may be signaled at the slice level or the tile group level.
An inverse mapping function for performing inverse mapping may be derived based on the mapping function.
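As an illustration, assuming the mapping is described by pivot points of a piecewise-linear function over the 16 equal segments, an inverse mapping function could be derived as follows; the pivot-array representation and the integer rounding are assumptions of this sketch.

```python
def build_inverse_mapping(pivots_in, pivots_out):
    """Given the pivot points of a piecewise-linear mapping function,
    return a function that performs the inverse mapping."""
    def inverse(v):
        # Find the output-side segment containing v, then invert the
        # linear segment: x = x0 + (v - y0) * (x1 - x0) / (y1 - y0).
        for i in range(len(pivots_out) - 1):
            y0, y1 = pivots_out[i], pivots_out[i + 1]
            if y0 <= v <= y1 and y1 > y0:
                x0, x1 = pivots_in[i], pivots_in[i + 1]
                return x0 + (v - y0) * (x1 - x0) // (y1 - y0)
        return v
    return inverse

# 10-bit range split into 16 equal segments; identity mapping here.
pivots = list(range(0, 1025, 64))
inv = build_inverse_mapping(pivots, pivots)
print(inv(512))  # 512
```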
Loop filtering, storage of reference pictures, and motion compensation may be performed in the inverse mapped region.
The prediction block generated via inter prediction may be transformed to a mapping region by mapping using a mapping function, and the transformed prediction block may be used to generate a reconstructed block. However, since the intra prediction is performed in the mapping region, the prediction block generated via the intra prediction may be used to generate a reconstructed block without mapping and/or inverse mapping.
For example, when the target block is a residual block of the chrominance component, the residual block may be transformed to the inverse mapping region by scaling the chrominance component of the mapping region.
Whether scaling is available may be signaled at the slice level or the tile group level.
For example, scaling may be applied only to the case where mapping is available for the luma component and the partitioning of the luma component and the partitioning of the chroma component follow the same tree structure.
Scaling may be performed based on an average of values of samples in a luma prediction block corresponding to a chroma prediction block. Here, when the target block uses inter prediction, the luma prediction block may represent a mapped luma prediction block.
The values required for the scaling may be derived by referring to a look-up table using the index of the segment to which the average of the sample values of the luma prediction block belongs.
The residual block may be transformed to the inverse mapping region by scaling the residual block using the finally derived value. Thereafter, for the block of the chrominance component, reconstruction, intra prediction, inter prediction, loop filtering, and storage of reference pictures may be performed in the inverse mapping region.
For example, information indicating whether mapping and/or inverse mapping of the luminance component and the chrominance component is available may be signaled by the sequence parameter set.
A prediction block of the target block may be generated based on the block vector. The block vector may indicate a displacement between the target block and the reference block. The reference block may be a block in the target image.
In this way, a prediction mode in which a prediction block is generated by referring to a target image may be referred to as an "Intra Block Copy (IBC) mode".
The IBC mode may be applied to a CU having a specific size. For example, the IBC mode may be applied to M × N CUs. Here, M and N may be less than or equal to 64.
The IBC mode may include a skip mode, a merge mode, an AMVP mode, and the like. In the case of the skip mode or the merge mode, the merge candidate list may be configured and the merge index may be signaled, and thus a single merge candidate may be specified among merge candidates existing in the merge candidate list. The block vector of the specified merging candidate may be used as the block vector of the target block.
In the case of AMVP mode, a differential block vector may be signaled. Furthermore, the prediction block vector may be derived from a left neighboring block and an upper neighboring block of the target block. Further, an index indicating which neighboring block is to be used may be signaled.
The prediction block in the IBC mode may be included in the target CTU or the left CTU, and may be limited to a block within the previously reconstructed region. For example, the value of the block vector may be restricted such that the prediction block of the target block is located in a specific region. The specific region may be the region defined by three 64 × 64 blocks that are encoded and/or decoded before the 64 × 64 block including the target block. By limiting the value of the block vector in this manner, memory consumption and device complexity caused by the implementation of the IBC mode can be reduced.
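A simplified sketch of such a block vector restriction follows; the `reconstructed` predicate (into which the CTU-window restriction above could be folded) and the corner-only test are illustrative simplifications, not the normative check.

```python
def ibc_block_vector_valid(x, y, w, h, bvx, bvy, reconstructed):
    """Check that the reference block addressed by block vector
    (bvx, bvy) lies inside the previously reconstructed region of the
    target picture. `reconstructed(px, py)` reports whether the sample
    at (px, py) has already been reconstructed."""
    rx, ry = x + bvx, y + bvy
    # For simplicity only the four corner samples of the reference
    # block are tested; a full check would cover every sample.
    corners = [(rx, ry), (rx + w - 1, ry),
               (rx, ry + h - 1), (rx + w - 1, ry + h - 1)]
    return all(reconstructed(px, py) for px, py in corners)
```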
Fig. 16 is a configuration diagram of an encoding apparatus according to an embodiment.
The encoding apparatus 1600 may correspond to the encoding apparatus 100 described above.
The encoding apparatus 1600 may include a processing unit 1610, a memory 1630, a User Interface (UI) input device 1650, a UI output device 1660, and a storage 1640 that communicate with each other over a bus 1690. The encoding device 1600 may also include a communication unit 1620 connected to the network 1699.
The processing unit 1610 may be a Central Processing Unit (CPU) or semiconductor device for executing processing instructions stored in the memory 1630 or the storage 1640. The processing unit 1610 may be at least one hardware processor.
The processing unit 1610 may generate and process a signal, data, or information input to the encoding apparatus 1600, output from the encoding apparatus 1600, or used in the encoding apparatus 1600, and may perform checking, comparison, determination, or the like related to the signal, data, or information. In other words, in embodiments, the generation and processing of data or information, as well as the examination, comparison, and determination of data or information related thereto, may be performed by processing unit 1610.
The processing unit 1610 may include an inter-prediction unit 110, an intra-prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy-coding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
At least some of the inter prediction unit 110, the intra prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 may be program modules and may communicate with an external device or system. The program modules may be included in the encoding device 1600 in the form of an operating system, application program modules, or other program modules.
The program modules may be physically stored in various types of well-known storage devices. Additionally, at least some of the program modules may also be stored in remote memory storage devices that are capable of communicating with the encoding apparatus 1600.
Program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for performing functions or operations in accordance with the embodiments or for implementing abstract data types in accordance with the embodiments.
The program modules may be implemented using instructions or code executed by at least one processor of the encoding apparatus 1600.
The processing unit 1610 may execute instructions or code in the inter-prediction unit 110, the intra-prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190.
The memory unit may represent the memory 1630 and/or the storage 1640. Each of memory 1630 and storage 1640 may be any of various types of volatile or non-volatile storage media. For example, the memory 1630 may include at least one of Read Only Memory (ROM)1631 and Random Access Memory (RAM) 1632.
The storage unit may store data or information used for the operation of the encoding apparatus 1600. In an embodiment, data or information of the encoding apparatus 1600 may be stored in the storage unit.
For example, the storage unit may store pictures, blocks, lists, motion information, inter prediction information, bitstreams, and the like.
The encoding device 1600 may be implemented in a computer system including a computer-readable storage medium.
The storage medium may store at least one module required for the operation of the encoding apparatus 1600. Memory 1630 may store at least one module and may be configured to cause the at least one module to be executed by processing unit 1610.
Functions related to communication of data or information of the encoding apparatus 1600 may be performed by the communication unit 1620.
For example, the communication unit 1620 may transmit the bitstream to the decoding apparatus 1700 to be described later.
Fig. 17 is a configuration diagram of a decoding apparatus according to an embodiment.
The decoding apparatus 1700 may correspond to the decoding apparatus 200 described above.
The decoding apparatus 1700 may include a processing unit 1710, a memory 1730, a User Interface (UI) input device 1750, a UI output device 1760, and a storage 1740 that communicate with each other through a bus 1790. The decoding apparatus 1700 may further include a communication unit 1720 connected to the network 1799.
The processing unit 1710 may be a Central Processing Unit (CPU) or a semiconductor device for executing processing instructions stored in the memory 1730 or the storage 1740. The processing unit 1710 may be at least one hardware processor.
The processing unit 1710 may generate and process a signal, data, or information input to the decoding apparatus 1700, output from the decoding apparatus 1700, or used in the decoding apparatus 1700, and may perform checking, comparing, determining, or the like, with respect to the signal, data, or information. In other words, in embodiments, the generation and processing of data or information, as well as the checking, comparing, and determining related to the data or information, may be performed by the processing unit 1710.
The processing unit 1710 may include the entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the intra prediction unit 240, the inter prediction unit 250, the switch 245, the adder 255, the filter unit 260, and the reference picture buffer 270.
At least some of the entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the intra prediction unit 240, the inter prediction unit 250, the adder 255, the switch 245, the filter unit 260, and the reference picture buffer 270 of the decoding apparatus 200 may be program modules and may communicate with an external device or system. The program modules may be included in the decoding apparatus 1700 in the form of an operating system, application program modules, or other program modules.
Program modules may be physically stored in various types of well-known memory devices. Furthermore, at least some of the program modules may also be stored in a remote memory storage device that is capable of communicating with the decoding apparatus 1700.
Program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for performing functions or operations in accordance with the embodiments or for implementing abstract data types in accordance with the embodiments.
The program modules may be implemented using instructions or code executed by at least one processor of the decoding apparatus 1700.
Processing unit 1710 may execute instructions or code in entropy decoding unit 210, inverse quantization unit 220, inverse transform unit 230, intra prediction unit 240, inter prediction unit 250, switch 245, adder 255, filter unit 260, and reference picture buffer 270.
The memory unit may represent the memory 1730 and/or the storage 1740. Each of the memory 1730 and the storage 1740 may be any of various types of volatile or non-volatile storage media. For example, the memory 1730 may include at least one of ROM 1731 and RAM 1732.
The storage unit may store data or information for the operation of the decoding apparatus 1700. In an embodiment, data or information of the decoding apparatus 1700 may be stored in a storage unit.
For example, the storage unit may store pictures, blocks, lists, motion information, inter prediction information, bitstreams, and the like.
The decoding apparatus 1700 may be implemented in a computer system including a computer-readable storage medium.
The storage medium may store at least one module required for the operation of the decoding apparatus 1700. The memory 1730 may store at least one module and may be configured to cause the at least one module to be executed by the processing unit 1710.
Functions related to communication of data or information of the decoding apparatus 1700 can be performed by the communication unit 1720.
For example, the communication unit 1720 may receive a bitstream from the encoding apparatus 1600.
Intra Sub-Partitions (ISP)
Fig. 18 illustrates an ISP for partitioning a target block into two sub-blocks, according to an example.
Fig. 19 illustrates an ISP for partitioning a target block into four sub-blocks, according to an example.
Fig. 18 and 19 illustrate an example of execution of an ISP, which is one of the intra prediction methods.
In image compression, as the size of a block increases, the accuracy of prediction for the block may decrease, and the probability of an error occurring in the transform process may increase. Therefore, as the size of a block increases, the compression performance for the block may deteriorate.
Thus, a single block may be partitioned into smaller blocks, and prediction and transformation may be performed on the partitioned blocks. In other words, multiple smaller sub-blocks may be generated by partitioning one parent block, and block processing may be applied to the sub-blocks.
In intra prediction, as shown in fig. 18 and 19, a block may be partitioned into smaller blocks by intra sub-partitions (ISPs), and compression efficiency with respect to image information may be improved by performing prediction, transformation, and the like on smaller partitioned block units.
In the encoding apparatus 100 and the decoding apparatus 200 providing an intra prediction method such as an ISP, an ISP flag and an ISP mode may be additionally signaled.
The ISP flag may indicate whether the ISP is to be used.
The ISP mode may indicate the type of ISP.
For example, the ISP mode may specify a partition direction for the target block. The ISP mode may indicate one of a horizontal mode and a vertical mode. The horizontal mode may be a mode in which horizontal partitioning is applied to the target block. The vertical mode may be a mode in which vertical partitioning is applied to the target block.
Hereinafter, ISP signaling may be the signaling of information related to an ISP. For example, ISP signaling may be the signaling of the ISP flag and the ISP mode.
The information related to the ISP may include the ISP flag and the ISP mode. The information related to the ISP may also include the number of intra sub-partitions (ISPs). The number of ISPs may indicate the number of sub-blocks generated by partitioning the target block. The number of ISPs may be signaled from the encoding apparatus 100, or may be derived by the encoding apparatus 100 and the decoding apparatus 200 in the same manner based on the specific encoding parameters illustrated in the above embodiments.
The encoding parameter may indicate at least one of a width and a height of a block, a maximum/minimum value of the width/height, a sum of the width and the height, a number of pixels belonging to the block, a block shape, a component type, a position/range of a reference pixel, a type (e.g., whether an intra prediction mode is a directional mode or whether an intra prediction mode is a predefined default mode) or an angle of an intra prediction mode, information on whether a transform is skipped, a transform type, and the like. Here, the block may be a target block (i.e., at least one of a prediction block and a transform block) or a block adjacent to the target block.
As shown in fig. 18 and 19, when the ISP is used, the target block may be partitioned into N sub-blocks. Here, N may be an integer of 2 or more.
The target block may have a size of W × H. The width of the target block may be W and its height may be H. Here, the width may be the number of horizontal pixels. The height may be the number of vertical pixels. W may be an integer of 1 or more. H may be an integer of 1 or greater.
As shown in fig. 18, the target block may be vertically halved and may be partitioned into two sub-blocks each having a size of (W/2) × H. Alternatively, the target block may be horizontally bisected and may be partitioned into two sub-blocks, both of which are W × (H/2) in size.
As shown in fig. 19, the target block may be vertically quartered and may be partitioned into four sub-blocks all of which have a size of (W/4) × H. Alternatively, the target block may be horizontally quartered and partitioned into four sub-blocks all having a size of W × (H/4).
The shape of the partition of the target block may be determined or limited according to the size of the target block.
For example, when the size of the target block is 4 × 4, partitioning of the target block into sub-blocks may not be performed.
For example, as shown in fig. 18, when the size of the target block is 4 × 8 or 8 × 4, the target block may be partitioned into two sub-blocks.
For example, as shown in fig. 19, when the size of the target block does not correspond to the size exemplified above (i.e., when the size of the target block is equal to or greater than a predefined size (such as 8 × 8)), the target block may be partitioned into four sub-blocks.
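For illustration only, the size-dependent partitioning rules described above may be sketched in C as follows. The function name, the parameter layout, and the main() example are assumptions of this sketch and are not part of the embodiments or of any standard.

#include <stdio.h>

/* Minimal sketch of the ISP partitioning rules described above: a 4x4 block
 * is not partitioned, 4x8 and 8x4 blocks yield two sub-blocks, and larger
 * blocks yield four. isHorizontal reflects the signaled ISP mode. */
int ispPartition(int w, int h, int isHorizontal, int *subW, int *subH)
{
    if (w == 4 && h == 4) {                    /* no partitioning */
        *subW = w;
        *subH = h;
        return 1;
    }
    int n = ((w == 4 && h == 8) || (w == 8 && h == 4)) ? 2 : 4;
    *subW = isHorizontal ? w : w / n;          /* vertical split narrows the width */
    *subH = isHorizontal ? h / n : h;          /* horizontal split reduces the height */
    return n;
}

int main(void)
{
    int subW, subH;
    int n = ispPartition(16, 8, 1, &subW, &subH);       /* horizontal ISP on a 16x8 block */
    printf("%d sub-blocks of %dx%d\n", n, subW, subH);  /* prints: 4 sub-blocks of 16x2 */
    return 0;
}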
In intra prediction using an ISP, an intra prediction mode may be selected (for a target block) before the target block is partitioned. Accordingly, the same intra prediction mode (determined for the target block) may be commonly applied to the plurality of sub blocks generated from the partition, and the plurality of sub blocks generated from the partition may be encoded/decoded using the same intra prediction mode. Further, the information indicating the intra prediction mode may be signaled only once.
Horizontal partitioning may be an operation that partitions a target block into sub-blocks that are all W × (H/4) or W × (H/2) in size. That is, the partition direction of the horizontal partition may be horizontal. Vertical partitioning may be an operation that partitions a target block into sub-blocks that are all (W/4) × H or (W/2) × H in size. That is, the partition direction of the vertical partition may be vertical.
When the target block is partitioned into one or more sub-blocks by the ISP, encoding/decoding may be performed on each sub-block. The encoding of each sub-block may include at least one of prediction, transformation, quantization, inverse transformation, and reconstruction of the corresponding sub-block. The decoding of each sub-block may include at least one of inverse quantization, inverse transformation, prediction, and reconstruction of the corresponding sub-block. In other words, a subblock may be a unit to which a process such as prediction, transformation, quantization, inverse transformation, and reconstruction is to be applied.
By dividing the unit of encoding/decoding, the accuracy of prediction and the like can be improved, and the performance of compression can be enhanced.
Transform method
In the following description related to the transform, "transform" may be a term applied to the encoding apparatus 100. In the following description related to the transform, the term "transform" may be replaced with the term "inverse transform" related to the decoding apparatus 200.
In the image encoding/decoding method described in the embodiments, when a primary transform and a secondary transform are used, a secondary transform method corresponding to the secondary transform may be determined based on a primary transform method (or type) corresponding to the primary transform. Such a determination may be limited in its ability to improve encoding/decoding efficiency. Here, the "primary transform method" may refer to a kernel for the primary transform. The "secondary transform method" may refer to a kernel for the secondary transform.
In an embodiment described later, a primary transformation method corresponding to a primary transformation may be determined based on a secondary transformation method corresponding to a secondary transformation. In other words, a primary transformation method corresponding to a primary transformation may be associated with a secondary transformation method corresponding to a secondary transformation. Alternatively, the primary transformation method corresponding to the primary transformation may depend on the secondary transformation method corresponding to the secondary transformation.
In an embodiment, when there is a residual signal for a target block, encoding information about the target block may be generated by transform-encoding the residual signal in an encoding process. For example, the coding information may be quantized transform coefficient levels.
The encoding information may be included in a bitstream and may be signaled to the decoding apparatus 200 through the bitstream.
Further, when there is a residual signal for the target block, the decoding apparatus 200 may acquire encoding information through a bitstream in a decoding process. The (reconstructed) residual signal for the target block may be generated by performing inverse transform decoding on the encoded information. For example, the coding information may be quantized transform coefficient levels.
Identification information indicating whether coding information (e.g., quantized transform coefficient levels) exists in the bitstream may be included in the bitstream. The identification information may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream.
The identification information may include one or more of: 1) the coded block flag (for a coding unit: CU), 2) luma coded block flag (for transform unit: TU), 3) chroma red (Cr) coded block flag (for TU), and 4) chroma blue (Cb) coded block flag (for TU).
The "flags" of the coded block flag, luma coded block flag, chroma red coded block flag, and chroma blue coded block flag may be merely exemplary. The term "flag" may be replaced with the term "information".
Hereinafter, the coded block flag may be indicated by cu_cbf. The luma coded block flag may be indicated by tu_cbf_luma. The chroma red coded block flag may be indicated by tu_cbf_cr. The chroma blue coded block flag may be indicated by tu_cbf_cb. The meanings of the flags may be defined as follows.
1) cu_cbf: when the luma component and the chroma component (of the CU) have the same block partition structure, cu_cbf may be information indicating whether there are transform coefficients for the luma component block and transform coefficients for the chroma component blocks. In the description of the identification information, a transform coefficient may be information on a residual block. In the description of the identification information, the term "transform coefficient" may be replaced with "transform coefficient level", "residual signal", "quantized level", "quantized transform coefficient level", and/or "quantized coefficient".
When the luma component and the chroma component (of the CU) have independent block partition structures, cu_cbf may be information indicating whether there are transform coefficients for the luma component block or the chroma component block.
When the value of cu_cbf is a first value (e.g., 0), there may be no transform coefficients of the residual block for the target block (in the bitstream). In other words, the first value of cu_cbf may indicate that there are no transform coefficients for the residual block of the target block. Therefore, when the value of cu_cbf is the first value, the operation of signaling the transform coefficients may be skipped. Here, the target block may be a CU, or may be any block of a plurality of blocks generated by partitioning the CU.
When the value of cu_cbf is a second value (e.g., 1), there may be transform coefficients for the residual block of the target block (in the bitstream). In other words, the second value of cu_cbf may indicate that there are transform coefficients for the residual block of the target block. Therefore, when the value of cu_cbf is the second value, the transform coefficients may be signaled. Here, the target block may be a CU, or may be any block of a plurality of blocks generated by partitioning the CU.
When the luma component and the chroma component (of the CU) have the same block partition structure and there is no residual signal for any of the luma component block and the chroma component blocks, cu_cbf may have the first value. The chroma component blocks may be the Cb component block and the Cr component block.
cu_cbf may have the second value when the luma component and the chroma component (of the CU) have the same block partition structure and there are transform coefficients for at least one of the luma component block and the chroma component blocks.
2) tu_cbf_luma: tu_cbf_luma may indicate whether there are transform coefficients for the luma component block (of the TU).
When the value of tu_cbf_luma is a first value (e.g., 0), there may be no transform coefficients for the residual block of the luma component block (in the bitstream). In other words, the first value of tu_cbf_luma may indicate that there are no transform coefficients for the residual block of the luma component block. Therefore, when the value of tu_cbf_luma is the first value, the operation of signaling the transform coefficients may be skipped. Here, the luma component block may be a TU generated by partitioning a CU.
When the value of tu_cbf_luma is a second value (e.g., 1), there may be transform coefficients for the residual block of the luma component block (in the bitstream). In other words, the second value of tu_cbf_luma may indicate that there are transform coefficients for the residual block of the luma component block. Therefore, when the value of tu_cbf_luma is the second value, the transform coefficients may be signaled. Here, the luma component block may be a TU generated by partitioning a CU.
3) tu_cbf_cr: tu_cbf_cr may indicate whether there are transform coefficients for the Cr component block (of the TU).
When the value of tu_cbf_cr is a first value (e.g., 0), there may be no transform coefficients for the residual block of the Cr component block (in the bitstream). In other words, the first value of tu_cbf_cr may indicate that there are no transform coefficients for the residual block of the Cr component block. Therefore, when the value of tu_cbf_cr is the first value, the operation of signaling the transform coefficients may be skipped. Here, the Cr component block may be a TU generated by partitioning a CU.
When the value of tu_cbf_cr is a second value (e.g., 1), there may be transform coefficients for the residual block of the Cr component block (in the bitstream). In other words, the second value of tu_cbf_cr may indicate that there are transform coefficients for the residual block of the Cr component block. Therefore, when the value of tu_cbf_cr is the second value, the transform coefficients may be signaled. Here, the Cr component block may be a TU generated by partitioning a CU.
4) tu_cbf_cb: tu_cbf_cb may indicate whether there are transform coefficients for the Cb component block (of the TU).
When the value of tu_cbf_cb is a first value (e.g., 0), there may be no transform coefficients for the residual block of the Cb component block (in the bitstream). In other words, the first value of tu_cbf_cb may indicate that there are no transform coefficients for the residual block of the Cb component block. Therefore, when the value of tu_cbf_cb is the first value, the operation of signaling the transform coefficients may be skipped. Here, the Cb component block may be a TU generated by partitioning a CU.
When the value of tu_cbf_cb is a second value (e.g., 1), there may be transform coefficients for the residual block of the Cb component block (in the bitstream). In other words, the second value of tu_cbf_cb may indicate that there are transform coefficients for the residual block of the Cb component block. Therefore, when the value of tu_cbf_cb is the second value, the transform coefficients may be signaled. Here, the Cb component block may be a TU generated by partitioning a CU.
In general, only when the value of cu_cbf is the second value may one or more of tu_cbf_luma, tu_cbf_cr, and tu_cbf_cb be additionally signaled, and the signaled tu_cbf_luma, tu_cbf_cr, and tu_cbf_cb may indicate whether transform coefficients exist for the luma component, the Cr chroma component, and the Cb chroma component, respectively.
When the luma component and the chroma component have independent block partition structures, a case where cu_cbf and tu_cbf_luma carry the same information may occur. In this case, tu_cbf_luma may not be signaled, and may instead be derived from cu_cbf. For example, tu_cbf_luma may be the same as cu_cbf. In other words, the value of cu_cbf may be used as the value of tu_cbf_luma.
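For illustration, the cbf signaling described above may be sketched as decoder-side logic in C. readFlag() is a hypothetical stand-in for entropy decoding of one flag, and the simplified structure (including the handling of the independent-tree case) is an assumption of this sketch, not the syntax of any standard.

/* readFlag() is a hypothetical stand-in for entropy decoding of one flag. */
static int readFlag(void) { return 1; }

typedef struct { int cuCbf, tuCbfLuma, tuCbfCb, tuCbfCr; } CbfFlags;

/* Sketch: cu_cbf gates the per-component tu_cbf flags; with independent
 * luma/chroma partition structures, tu_cbf_luma would duplicate cu_cbf
 * and is derived from it rather than signaled. */
static CbfFlags parseCbf(int samePartitionStructure)
{
    CbfFlags f = {0, 0, 0, 0};
    f.cuCbf = readFlag();                      /* cu_cbf */
    if (f.cuCbf) {
        if (samePartitionStructure) {
            f.tuCbfCb   = readFlag();          /* tu_cbf_cb */
            f.tuCbfCr   = readFlag();          /* tu_cbf_cr */
            f.tuCbfLuma = readFlag();          /* tu_cbf_luma */
        } else {
            f.tuCbfLuma = f.cuCbf;             /* derived, not signaled */
        }
    }
    return f;
}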
Transform coding may include a primary transform and a secondary transform.
The secondary transform may be applied only in the intra prediction mode. In other words, the secondary transform may be performed on a target block (that is a target of the transform) only when intra prediction is used for the target block.
The primary transform may be one of a variety of methods. The primary transform method may be the method used for the primary transform among the plurality of methods.
For example, the primary transform method may be a Discrete Cosine Transform (DCT)-2 method that applies DCT-2 to the target block in both the horizontal and vertical directions of the target block. Alternatively, the primary transform method may be a combination of Discrete Sine Transform (DST)-7 and DCT-8 that applies DST-7 in the horizontal direction of the target block and DCT-8 in the vertical direction of the target block.
The primary transform method applied to the target block may be signaled in the form of an index. The primary transform method index may be an index of the primary transform method.
For example, when the primary transform method is DCT-2, the value of the primary transform method index may be 0. When the primary transformation method is a combination of DST-7 and DCT-8, the value of the primary transformation method index may be a specific integer greater than 0.
For example, when the value of the primary transform method index is 1, DST-7 may be applied in the horizontal direction and the vertical direction of the target block. When the value of the primary transform method index is 2, DST-7 may be applied in the horizontal direction of the target block, and DCT-8 may be applied in the vertical direction of the target block. When the value of the primary transform method index is 3, DCT-8 may be applied in the horizontal direction of the target block, and DST-7 may be applied in the vertical direction of the target block. When the value of the primary transform method index is 4, DCT-8 may be applied in both the horizontal direction and the vertical direction of the target block.
For example, when the value of the primary transform method index is 2, DCT-8 may be applied in the horizontal direction of the target block, and DST-7 may be applied in the vertical direction of the target block. When the value of the primary transform method index is 3, DST-7 may be applied in the horizontal direction of the target block, and DCT-8 may be applied in the vertical direction of the target block.
The primary transform method index may be represented by mts_idx or mts_idx[x0][y0]. x0 and y0 may be coordinates in the target block. mts_idx[x0][y0] may indicate the primary transform method index for the top-leftmost pixel (or the block including the top-leftmost pixel) in the target block. Hereinafter, mts_idx[x0][y0] may be replaced with mts_idx.
For example, a case where mts_idx[x0][y0] has a first value (e.g., 0) may represent that DCT-2 is applied to the target block.
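For illustration, the index-to-kernel mapping of the first example above may be sketched in C as follows; the enum and function names are assumptions of this sketch. The alternative example described above swaps the meanings of index values 2 and 3.

enum Kernel { DCT2, DST7, DCT8 };

/* Sketch of the first example mapping from mts_idx to the horizontal and
 * vertical primary transform kernels. */
void mtsKernels(int mtsIdx, enum Kernel *hor, enum Kernel *ver)
{
    switch (mtsIdx) {
    case 1:  *hor = DST7; *ver = DST7; break;
    case 2:  *hor = DST7; *ver = DCT8; break;
    case 3:  *hor = DCT8; *ver = DST7; break;
    case 4:  *hor = DCT8; *ver = DCT8; break;
    default: *hor = DCT2; *ver = DCT2; break;  /* mts_idx == 0: DCT-2 in both directions */
    }
}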
The secondary transform may be one of a variety of methods. The secondary transform method may be the method used for the secondary transform among the plurality of methods.
The secondary transform may be applied to only a portion of the signal (or block) to which the primary transform is applied.
The secondary transform method applied to the target block may be signaled in the form of an index. The secondary transform method index may be an index of the secondary transform method.
For example, when the secondary transform is not applied to the target block, the value of the secondary transform method index may be 0. When the secondary transform is applied to the target block, the value of the secondary transform method index may be an integer greater than 0.
The secondary transform method index may be represented by lfnst_idx or lfnst_idx[x0][y0]. x0 and y0 may be coordinates in the target block. lfnst_idx[x0][y0] may indicate the secondary transform method index for the top-leftmost pixel (or the block including the top-leftmost pixel) in the target block. Hereinafter, lfnst_idx[x0][y0] may be replaced with lfnst_idx.
A case where lfnst_idx[x0][y0] has a first value (e.g., 0) may indicate that the secondary transform is not applied to the target block.
When lfnst_idx[x0][y0] has a second value (e.g., 1), a first secondary transform method indicated by lfnst_idx[x0][y0] may be used for the secondary transform.
When lfnst_idx[x0][y0] has a third value (e.g., 2), a second secondary transform method indicated by lfnst_idx[x0][y0] may be used for the secondary transform.
The value of lfnst_idx[x0][y0] may be equal to or greater than 0 and less than or equal to N. Here, N may be a positive integer. For example, N may be 2.
The decoding apparatus 200 may perform the secondary inverse transform and the primary inverse transform on the target block.
The decoding apparatus 200 may derive a kernel to be applied to the secondary inverse transform of the target block using the signaled secondary transform method index. The decoding apparatus 200 may perform the secondary inverse transform on the target block by applying the derived kernel. The decoding apparatus 200 may generate a signal subjected to the secondary inverse transform by applying the derived kernel to the transform coefficients.
The decoding apparatus 200 may derive a kernel to be applied to the primary inverse transform of the target block using the signaled primary transform method index. The decoding apparatus 200 may perform the primary inverse transform on the target block by applying the derived kernel. The decoding apparatus 200 may generate a signal subjected to the secondary inverse transform and the primary inverse transform by applying the derived kernel to the signal subjected to the secondary inverse transform. The signal subjected to the secondary inverse transform and the primary inverse transform may be a reconstructed residual block.
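For illustration, the decoder-side ordering described above may be sketched in C as follows. invSecondary() and invPrimary() are placeholders for the kernels derived from the two indices; they are assumptions of this sketch and not functions of any real codec.

/* Placeholders for the kernels derived from lfnst_idx and mts_idx. */
static void invSecondary(int *coeff, int lfnstIdx) { (void)coeff; (void)lfnstIdx; }
static void invPrimary(const int *coeff, int *residual, int mtsIdx)
{
    (void)coeff; (void)residual; (void)mtsIdx;
}

/* Sketch: the secondary inverse transform is applied to the dequantized
 * coefficients first; the primary inverse transform then produces the
 * reconstructed residual block. */
static void inverseTransform(int *coeff, int *residual, int lfnstIdx, int mtsIdx)
{
    if (lfnstIdx != 0)                  /* 0 indicates no secondary transform */
        invSecondary(coeff, lfnstIdx);
    invPrimary(coeff, residual, mtsIdx);
}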
When the secondary transform is applied to the target block, the primary transform method (for both the horizontal and vertical directions of the target block) may always be DCT-2. When the secondary transform is applied to the target block, the primary transform method (for both the horizontal and vertical directions of the target block) may be limited to DCT-2. The secondary transform may be applied under the assumption that energy is concentrated in the upper left region of the block to which the primary transform is applied. When the primary transform method is DCT-2, there may be a tendency that energy will be concentrated in the upper left region of the block to which the primary transform is applied. In other words, DCT-2 may concentrate energy in the upper left region of the block to which the primary transform is applied. Accordingly, DCT-2 is used as the primary transform method when the secondary transform is applied, and thus an advantage can be obtained from the viewpoint of coding efficiency.
The horizontal size or vertical size of the target block to which the primary transform and/or the secondary transform may be applied may be limited. The horizontal dimension may be a horizontal length (i.e., width). The vertical dimension may be a vertical length (i.e., height).
For example, the primary transform may be available when the horizontal size and/or the vertical size of the target block is less than or equal to a certain value. Alternatively, the primary transform may be available when the horizontal size and/or the vertical size of the target block is equal to or greater than a certain value. Alternatively, the primary transform may be available when the horizontal size and/or the vertical size of the target block falls within a certain range.
For example, a secondary transform may be available when the horizontal size and/or vertical size of the target block is less than or equal to a particular value. Alternatively, the secondary transform may be available when the horizontal size and/or the vertical size of the target block is equal to or greater than a certain value. Alternatively, a secondary transform may be available when the horizontal size and/or vertical size of the target block falls within a certain range.
The fact that the primary transform is available for the target block may indicate that the primary transform may be applied to the target block.
The fact that the primary transform is not available for the target block may indicate that the primary transform may not be applied to the target block.
The fact that a secondary transform is available for the target block may indicate that the secondary transform may be applied to the target block. According to circumstances, even in the case where a secondary transform is available for a target block, the secondary transform may not be applied to the target block. For example, even in the case where a secondary transform is available for the target block, if the value of the secondary transform method index is 0, the secondary transform may not be applied to the target block.
The fact that the secondary transform is not available for the target block may indicate that only the primary transform is applied to the target block and that the secondary transform cannot be applied to the target block. For example, when the secondary transform is not available for the target block, the operation of signaling the secondary transform method index may be skipped (since there is no need to determine which secondary transform method is to be applied).
The maximum horizontal size may be the maximum horizontal size of the target block available for the primary transform or the secondary transform. The maximum vertical size may be the maximum vertical size of the target block available for the primary transform or the secondary transform.
A maximum horizontal size and/or a maximum vertical size may be defined for each of the luminance component and the chrominance component. The maximum horizontal size for the luminance component and the maximum horizontal size for the chrominance component may be different from each other. The maximum vertical size for the luminance component and the maximum vertical size for the chrominance component may be different from each other.
Alternatively, a maximum horizontal size and a maximum vertical size may be defined for the luminance component, and the maximum horizontal size and/or the maximum vertical size for the chrominance component may be derived based on the maximum horizontal size and/or the maximum vertical size defined for the luminance component.
For example, the maximum horizontal size or the maximum vertical size may be 64, 32, or 2^N. N may be a specific positive integer.
The maximum horizontal size or the maximum vertical size may be referred to as a maximum Transform Block (TB) size. Further, the maximum dimension may be a maximum horizontal dimension or a maximum vertical dimension.
When the size of the CU is larger than the maximum size of the primary transform, the CU may be implicitly partitioned until the size of the CU becomes smaller than or equal to the maximum size of the primary transform. A block generated by the partitioning of a CU may be referred to as a Transform Unit (TU). Thus, one CU may consist of one or more TUs. A TU may be a block to which a transform method is applied.
The code in Table 7 below shows a procedure for configuring TUs by implicitly partitioning a CU until the resulting block size becomes less than or equal to the maximum size of the primary transform.
[Table 7] (table content provided as an image in the original document)
In Table 7 and subsequent tables, the left part of the uppermost row indicates the name of the program (or function).
In Table 7 and subsequent tables, the left part of each row indicates one row of code. A syntax similar to that of a programming language such as C or Java may be applied to the code.
In Table 7 and subsequent tables, the right part of each row indicates a descriptor. ae(v) indicates a context-adaptive arithmetic entropy-coded syntax element. When the right part of a particular row indicates ae(v), the left part of that row may indicate the name of the signaled syntax element. Each such syntax element may be context-adaptive arithmetic entropy-encoded.
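Because the content of Table 7 is reproduced here only as an image, the implicit TU partitioning it describes may be sketched in C as follows for illustration. processTu(), the function names, and the recursive structure are assumptions of this sketch rather than the literal syntax of Table 7.

static void processTu(int x, int y, int w, int h)
{
    /* placeholder: transform/inverse-transform one TU at (x, y) of size w x h */
    (void)x; (void)y; (void)w; (void)h;
}

/* Sketch: a CU larger than the maximum transform size is halved in the
 * offending direction(s) until every resulting TU fits. */
static void partitionIntoTus(int x, int y, int w, int h, int maxTbSize)
{
    if (w > maxTbSize || h > maxTbSize) {
        int newW = (w > maxTbSize) ? w / 2 : w;
        int newH = (h > maxTbSize) ? h / 2 : h;
        for (int j = y; j < y + h; j += newH)
            for (int i = x; i < x + w; i += newW)
                partitionIntoTus(i, j, newW, newH, maxTbSize);
    } else {
        processTu(x, y, w, h);
    }
}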
The codes in Tables 8, 9, and 10 below may show the information signaled to encode/decode one TU and the conditions for signaling that information.
[Table 8] (table content provided as an image in the original document)
[Table 9] (table content provided as an image in the original document)
[Table 10] (table content provided as an image in the original document)
As shown in Tables 9 and 10, transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] may be selectively signaled.
Whether transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] can be signaled may be determined based on 1) sps_transform_skip_enabled_flag, 2) BdpcmFlag[x0][y0], 3) wC, 4) hC, and 5) cu_sbt_flag.
For example, transform_skip_flag[x0][y0][n] may be signaled when the value of sps_transform_skip_enabled_flag is 1, the value of BdpcmFlag[x0][y0][n] is 0, the value of wC is less than or equal to MaxTsSize, the value of hC is less than or equal to MaxTsSize, and the value of cu_sbt_flag is 0. n may be 0, 1, or 2. MaxTsSize may be the maximum block size for which transform skip is available.
1) sps_transform_skip_enabled_flag may be a flag indicating, in the Sequence Parameter Set (SPS), whether transform skip is available. 2) BdpcmFlag may be a flag indicating whether Block-based Delta Pulse Code Modulation (BDPCM) is to be applied to the CU. BdpcmFlag[x0][y0][0] may be the BdpcmFlag for the luma component block. BdpcmFlag[x0][y0][1] may be the BdpcmFlag for the Cb component block. BdpcmFlag[x0][y0][2] may be the BdpcmFlag for the Cr component block. 3) wC may be the horizontal size of the CU. 4) hC may be the vertical size of the CU. 5) cu_sbt_flag may be a flag indicating whether the sub-block transform is to be used for the CU.
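For illustration, the signaling condition just enumerated may be collected into a single predicate in C; the parameter names mirror the text above, and the function itself is an assumption of this sketch, not standard syntax.

/* Sketch of the condition under which transform_skip_flag[x0][y0][n] is
 * signaled, per the enumeration above. */
int transformSkipFlagPresent(int spsTransformSkipEnabledFlag, int bdpcmFlag,
                             int wC, int hC, int maxTsSize, int cuSbtFlag)
{
    return spsTransformSkipEnabledFlag == 1
        && bdpcmFlag == 0
        && wC <= maxTsSize
        && hC <= maxTsSize
        && cuSbtFlag == 0;
}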
The secondary transform may only be applied to the case where the size of the CU is less than or equal to the maximum TB size. When the size of the CU is larger than the maximum TB size, the secondary transform may not be applied. The secondary transform may be applied only if the horizontal size of the CU is less than or equal to the maximum TB (horizontal) size and the vertical size of the CU is less than or equal to the maximum TB (vertical) size. In the case where the horizontal size of the CU is greater than the maximum TB (horizontal) size or the vertical size of the CU is greater than the maximum TB (vertical) size, the secondary transform may not be applied.
Alternatively, a secondary transform may be available in the case where the size of the CU is less than or equal to the maximum TB size. When the size of a CU is larger than the maximum TB size, a secondary transform may not be available. The secondary transform may be available in the case where the horizontal size of the CU is less than or equal to the maximum TB (horizontal) size and the vertical size of the CU is less than or equal to the maximum TB (vertical) size. In case the horizontal size of the CU is larger than the maximum TB (horizontal) size or the vertical size of the CU is larger than the maximum TB (vertical) size, the secondary transform may not be available.
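For illustration, the availability constraint described above may be written as a predicate over the CU size; this is a minimal sketch assuming a single maximum TB size for both directions.

/* Sketch: the secondary transform is available only when both dimensions
 * of the CU fit within the maximum transform block (TB) size. */
int secondaryTransformAvailable(int cuWidth, int cuHeight, int maxTbSize)
{
    return cuWidth <= maxTbSize && cuHeight <= maxTbSize;
}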
The primary transform method index and the secondary transform method index may be signaled for the CU. The primary transform method and the secondary transform method for the CU may be specified by a primary transform method index and a secondary transform method index signaled for the CU, and the same primary transform method and the same secondary transform method may be applied to all TUs included in the CU.
In this case, when an intra prediction signal is generated by applying the ISP to a target block, the same primary transform method and the same secondary transform method may be applied to the partitioned blocks generated by the ISP. The target block may be a CU. The intra prediction signal may include an ISP flag, an ISP mode, and an intra prediction mode.
This scheme can reduce the amount of information to be signaled and obtain an advantage from the viewpoint of coding efficiency, compared to a scheme in which different primary transform methods and different secondary transform methods are respectively applied to TUs included in a CU.
When the primary transform method index and the secondary transform method index are signaled, the secondary transform method index may be signaled first, and then the primary transform method index may be signaled (in accordance with the order of the inverse transforms performed by the decoding apparatus 200).
When the value of the secondary transform method index is not 0 (i.e., when the secondary transform is applied), the primary transform method may always be DCT-2. Accordingly, when the value of the secondary transform method index is not 0, the primary transform method index may not be signaled, and the value of the primary transform method index may be set to 0, which is a value indicating the primary transform method index of DCT-2.
When the value of the secondary transform method index is 0 (i.e., when the secondary transform is not applied), the primary transform method index may be signaled.
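For illustration, the parsing order just described may be sketched in C as follows; readIndex() is a hypothetical stand-in for entropy decoding of one index and is not an API of any real decoder.

/* readIndex() is a hypothetical stand-in for entropy decoding of one index. */
static int readIndex(void) { return 0; }

/* Sketch: lfnst_idx is parsed first; mts_idx is parsed only when no
 * secondary transform is applied, and is otherwise inferred to be 0
 * (DCT-2 in both directions). */
static void parseTransformIndices(int *lfnstIdx, int *mtsIdx)
{
    *lfnstIdx = readIndex();            /* secondary transform method index */
    if (*lfnstIdx == 0)
        *mtsIdx = readIndex();          /* primary transform method index */
    else
        *mtsIdx = 0;                    /* inferred: DCT-2 */
}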
The codes in Tables 11 to 28 below may indicate the signaling of the primary transform method index and the secondary transform method index according to examples.
[Table 11] (table content provided as an image in the original document)
The residual signal may be encoded using a transform skip method. The transform skip method may be a method of not performing the primary transform on the luma component of a CU or the luma component of a TU included in the CU. When the transform skip method is used, the secondary transform may not be applied. When the transform skip method is used, the secondary transform method index lfnst_idx[x0][y0] may not be signaled, and the secondary transform method index may be derived as a first value (e.g., 0) indicating that the secondary transform is not applied.
The transform skip mode flag transform_skip_flag[x0][y0] may indicate whether the transform skip method is to be applied.
The transform skip mode flag transform_skip_flag[x0][y0][0] for the luma component may indicate whether the transform skip method is to be applied to the luma component.
The transform skip mode flag transform_skip_flag[x0][y0][1] for the Cb component may indicate whether the transform skip method is to be applied to the Cb component.
The transform skip mode flag transform_skip_flag[x0][y0][2] for the Cr component may indicate whether the transform skip method is to be applied to the Cr component.
In association with the transform skip mode flag transform_skip_flag[x0][y0], when at least one of the first to seventeenth cases (described below with reference to the codes in Tables 12 to 28) is satisfied, the secondary transform method index lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 12] (table content provided as an image in the original document)
First case) As described above with respect to the code of Table 12, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 1 has a first value (or 0). When the following code 1 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 1]
(treeType==SINGLE_TREE&&(transform_skip_flag[x0][y0][0]==0||transform_skip_flag[x0][y0][1]==0||transform_skip_flag[x0][y0][2]==0))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) and 2) at least one of transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] has a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
The tree structure may refer to a tree type.
[Table 13] (table content provided as an image in the original document)
Second case) As described above with respect to the code of Table 13, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 2 has a first value (or 0). When the following code 2 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 2]
(treeType==DUAL_TREE_LUMA&&transform_skip_flag[x0][y0][0]==0)
1) When the TREE structure has a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and 2) transform_skip_flag[x0][y0][0] has a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
For example, when the TREE structure has a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA), whether lfnst_idx[x0][y0] is to be signaled may be determined based on whether transform_skip_flag[x0][y0][0] has a particular value (e.g., 0).
[Table 14] (table content provided as an image in the original document)
Third case) As described above with respect to the code of Table 14, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 3 has a first value (or 0). When the following code 3 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 3]
((treeType==SINGLE_TREE||treeType==DUAL_TREE_LUMA)&&transform_skip_flag[x0][y0][0]==0)
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) or a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and 2) transform_skip_flag[x0][y0][0] has a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 15] (table content provided as an image in the original document)
Fourth case) As described above with respect to the code of Table 15, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 4 has a first value (or 0). When the following code 4 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 4]
(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0||transform_skip_flag[x0][y0][2]==0)&&ChromaArrayType!=0)
1) When the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), 2) at least one of transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] has a first value (or 0), and 3) the chroma array type ChromaArrayType does not have the first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 16] (table content provided as an image in the original document)
Fifth case) As described above with respect to the code of Table 16, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 5 has a first value (or 0). When the following code 5 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 5]
((treeType==SINGLE_TREE&&(transform_skip_flag[x0][y0][0]==0||transform_skip_flag[x0][y0][1]==0||transform_skip_flag[x0][y0][2]==0))||(treeType==DUAL_TREE_LUMA&&transform_skip_flag[x0][y0][0]==0)||(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0||transform_skip_flag[x0][y0][2]==0)&&ChromaArrayType!=0))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) and at least one of transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] has a first value (or 0), 2) when the TREE structure has a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and transform_skip_flag[x0][y0][0] has a first value (or 0), or 3) when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), at least one of transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] has a first value (or 0), and the chroma array type ChromaArrayType does not have the first value (e.g., 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 17] (table content provided as an image in the original document)
Sixth case) As described above with respect to the code of Table 17, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 6 has a first value (or 0). When the following code 6 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 6]
((treeType==SINGLE_TREE&&(transform_skip_flag[x0][y0][0]==0||transform_skip_flag[x0][y0][1]==0||transform_skip_flag[x0][y0][2]==0))||((treeType==SINGLE_TREE||treeType==DUAL_TREE_LUMA)&&transform_skip_flag[x0][y0][0]==0)||(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0||transform_skip_flag[x0][y0][2]==0)&&ChromaArrayType!=0))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) and at least one of transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] has a first value (or 0), 2) when the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) or a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and transform_skip_flag[x0][y0][0] has a first value (or 0), or 3) when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), at least one of transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] has a first value (or 0), and the chroma array type ChromaArrayType does not have the first value (e.g., 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 18] (table content provided as an image in the original document)
Seventh case) As described above with respect to the code of Table 18, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 7 has a first value (or 0). When the following code 7 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 7]
(((treeType==SINGLE_TREE||treeType==DUAL_TREE_LUMA)&&transform_skip_flag[x0][y0][0]==0)||(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0||transform_skip_flag[x0][y0][2]==0)&&ChromaArrayType!=0))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) or a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and transform_skip_flag[x0][y0][0] has a first value (or 0), or 2) when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), at least one of transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] has a first value (or 0), and the chroma array type ChromaArrayType does not have the first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 19] (table content provided as an image in the original document)
Eighth case) As described above with respect to the code of Table 19, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 8 has a first value (or 0). When the following code 8 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 8]
(treeType==SINGLE_TREE&&(transform_skip_flag[x0][y0][0]==0&&transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) and 2) transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] all have a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
For example, when the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE), whether lfnst_idx[x0][y0] is to be signaled may be determined based on whether transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] all have a specific value (e.g., 0).
[Table 20] (table content provided as an image in the original document)
Ninth case) As described above with respect to the code of Table 20, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 9 has a first value (or 0). When the following code 9 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 9]
(treeType==SINGLE_TREE&&transform_skip_flag[x0][y0][0]==0)
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) and 2) transform_skip_flag[x0][y0][0] has a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 21] (table content provided as an image in the original document)
Tenth case) As described above with respect to the code of Table 21, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 10 has a first value (or 0). When the following code 10 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 10]
(treeType==DUAL_TREE_LUMA&&transform_skip_flag[x0][y0][0]==0)
1) When the TREE structure has a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and 2) transform_skip_flag[x0][y0][0] has a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 22] (table content provided as an image in the original document)
Eleventh case) As described above with respect to the code of Table 22, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 11 has a first value (or 0). When the following code 11 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 11]
((treeType==SINGLE_TREE||treeType==DUAL_TREE_LUMA)&&transform_skip_flag[x0][y0][0]==0)
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) or a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and 2) transform_skip_flag[x0][y0][0] has a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 23] (table content provided as an image in the original document)
Twelfth case) As described above with respect to the code of Table 23, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 12 has a first value (or 0). When the following code 12 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 12]
(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0))
1) When the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA) and 2) both transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] have a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
For example, when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), whether lfnst_idx[x0][y0] is to be signaled may be determined based on whether both transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] have a particular value (e.g., 0).
[Table 24] (table content provided as an image in the original document)
Thirteenth case) As described above with respect to the code of Table 24, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 13 has a first value (or 0). When the following code 13 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 13]
(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0)&&ChromaArrayType!=0)
1) When the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), 2) both transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] have a first value (or 0), and 3) the chroma array type ChromaArrayType does not have the first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 25] (table content provided as an image in the original document)
Fourteenth case) As described above with respect to the code of Table 25, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 14 has a first value (or 0). When the following code 14 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 14]
((treeType==SINGLE_TREE&&(transform_skip_flag[x0][y0][0]==0&&transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0))||(treeType==DUAL_TREE_LUMA&&transform_skip_flag[x0][y0][0]==0)||(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0)&&ChromaArrayType!=0))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) and transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] all have a first value (or 0), 2) when the TREE structure has a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and transform_skip_flag[x0][y0][0] has a first value (or 0), or 3) when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), both transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] have a first value (or 0), and the chroma array type ChromaArrayType does not have the first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[Table 26] (table content provided as an image in the original document)
Fifteenth case) As described above with respect to the code of Table 26, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 15 has a first value (or 0). When the following code 15 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 15]
((treeType==SINGLE_TREE&&(transform_skip_flag[x0][y0][0]==0&&transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0))||(treeType==DUAL_TREE_LUMA&&transform_skip_flag[x0][y0][0]==0)||(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0)))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) and transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] all have a first value (or 0), 2) when the TREE structure has a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and transform_skip_flag[x0][y0][0] has a first value (or 0), or 3) when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA) and both transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] have a first value (or 0), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
For example, whether lfnst_idx[x0][y0] is to be signaled may be determined based on the tree structure, transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2].
When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE), whether lfnst_idx[x0][y0] is to be signaled may be determined based on whether transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], and transform_skip_flag[x0][y0][2] all have a specific value (e.g., 0).
For example, when the TREE structure has a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA), whether lfnst_idx[x0][y0] is to be signaled may be determined based on whether transform_skip_flag[x0][y0][0] has a particular value (e.g., 0).
For example, when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), whether lfnst_idx[x0][y0] is to be signaled may be determined based on whether both transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] have a particular value (e.g., 0).
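For illustration, the decision of the fifteenth case (code 15) may be restated as structured logic in C. The enum values mirror the tree types named above, and tsFlag[n] mirrors transform_skip_flag[x0][y0][n]; the function itself is an assumption of this sketch, not standard syntax.

enum TreeType { SINGLE_TREE, DUAL_TREE_LUMA, DUAL_TREE_CHROMA };

/* Sketch of code 15: whether lfnst_idx[x0][y0] is signaled, by tree type;
 * tsFlag[n] mirrors transform_skip_flag[x0][y0][n] (0: luma, 1: Cb, 2: Cr). */
int lfnstIdxSignaled(enum TreeType treeType, const int tsFlag[3])
{
    switch (treeType) {
    case SINGLE_TREE:                  /* no component may use transform skip */
        return tsFlag[0] == 0 && tsFlag[1] == 0 && tsFlag[2] == 0;
    case DUAL_TREE_LUMA:               /* only the luma flag is considered */
        return tsFlag[0] == 0;
    case DUAL_TREE_CHROMA:             /* both chroma flags must be 0 */
        return tsFlag[1] == 0 && tsFlag[2] == 0;
    }
    return 0;
}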
[Table 27] (table content provided as an image in the original document)
Sixteenth case) As described above with respect to the code of Table 27, lfnst_idx[x0][y0] may be neither entropy-encoded/entropy-decoded nor signaled when the following code 16 has a first value (or 0). When the following code 16 has a second value (or 1), lfnst_idx[x0][y0] may be entropy-encoded/entropy-decoded and signaled.
[code 16]
(((treeType==SINGLE_TREE||treeType==DUAL_TREE_LUMA)&&transform_skip_flag[x0][y0][0]==0)||(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0)&&ChromaArrayType!=0))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) or a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and transform_skip_flag[x0][y0][0] has the first value (or 0), or 2) when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA), both transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] have the first value (or 0), and the chroma array type ChromaArrayType does not have the first value (or 0), lfnst_idx[x0][y0] may be entropy encoded/entropy decoded and signaled.
[ Table 28]
Seventeenth case) When the following code 17 has a first value (or 0), lfnst_idx[x0][y0] may be neither entropy encoded/entropy decoded nor signaled, as described above with respect to the codes of table 28. When the following code 17 has a second value (or 1), lfnst_idx[x0][y0] may be entropy encoded/entropy decoded and signaled.
[ code 17]
(((treeType==SINGLE_TREE||treeType==DUAL_TREE_LUMA)&&transform_skip_flag[x0][y0][0]==0)||(treeType==DUAL_TREE_CHROMA&&(transform_skip_flag[x0][y0][1]==0&&transform_skip_flag[x0][y0][2]==0)))
1) When the TREE structure has a SINGLE TREE type (i.e., SINGLE_TREE) or a DUAL TREE LUMA type (i.e., DUAL_TREE_LUMA) and transform_skip_flag[x0][y0][0] has the first value (or 0), or 2) when the TREE structure has a DUAL TREE CHROMA type (i.e., DUAL_TREE_CHROMA) and both transform_skip_flag[x0][y0][1] and transform_skip_flag[x0][y0][2] have the first value (or 0), lfnst_idx[x0][y0] may be entropy encoded/entropy decoded and signaled.
In the above cases, the case where the transform skip mode flag has the first value (or 0) may indicate that at least one of the primary transform and the secondary transform is performed. The case where the transform skip mode flag has the second value (or 1) may indicate that at least one of the primary transform and the secondary transform is not performed. Alternatively, the case where the transform skip mode flag has the second value (or 1) may indicate that neither the primary transform nor the secondary transform is performed.
In the above cases, the chroma array type ChromaArrayType may indicate the type of the chroma signal. For example, the case where ChromaArrayType has the first value (or 0) may indicate that a 4:0:0 color format is used, in which there is no chroma signal and only a luma signal is present.
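For illustration only, the chroma array type values may be represented as in the following C sketch. The enum and its member names are assumptions introduced here; only the correspondence between the first value (0) and the 4:0:0 luma-only format is taken from the description above, while the values 1, 2, and 3 follow the common convention for the 4:2:0, 4:2:2, and 4:4:4 color formats.

/* Illustrative sketch of chroma array types; the names are assumptions. */
typedef enum {
    CHROMA_FORMAT_400 = 0, /* first value: no chroma signal, luma only */
    CHROMA_FORMAT_420 = 1,
    CHROMA_FORMAT_422 = 2,
    CHROMA_FORMAT_444 = 3
} ChromaArrayType;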
Based on at least one of the encoding parameters described above in the embodiments (such as the intra prediction mode for the target block, the color component of the target block, the size of the target block, and the shape of the target block), at least one of the following may be determined: 1) a reduced set of secondary transform/inverse transform matrices, 2) a reduced secondary transform/inverse transform matrix, and 3) whether to perform a reduced secondary transform/inverse transform.
Fig. 20 is a flow diagram of an encoding method according to an embodiment.
The encoding apparatus 1600 may use at least one of the above-described embodiments when performing a transform on a target block.
At step 2010, a transform method for the target block may be determined.
Step 2010 may be performed by processing unit 1610 or transformation unit 130. Step 2010 may be part of the operation of transform unit 130 described above with reference to fig. 1.
The target block may be a CU. Alternatively, the target block may be a residual block for the CU.
The transforms may include a primary transform and a secondary transform. The description of the transform, the primary transform, and the secondary transform in the foregoing embodiment may also be applied to the transform, the primary transform, and the secondary transform in the present embodiment.
Whether a primary transform is to be performed on the target block may be determined based on the first encoding parameter for the target block. Alternatively, whether the primary transform is to be performed on the target block may depend on the value of the first encoding parameter for the target block.
For example, when the value of the first encoding parameter for the target block is a first value, the primary transform may not be performed on the target block. When the value of the first encoding parameter for the target block is a second value, the primary transform may be performed on the target block.
For example, the first encoding parameter may include information on a residual block for the target block.
For example, the first encoding parameter may include information about a tree of the target block. The first encoding parameter may include information about the tree type of the target block.
For example, the first encoding parameter may be a tree structure or a tree type.
For example, the first encoding parameters may include information regarding partitioning of the target block.
For example, the first encoding parameter may include information specifying an intra sub-partition (ISP) of the target block. The information specifying the ISP may include an ISP flag and/or an ISP mode.
For example, a target block may be partitioned into a plurality of sub-blocks by an ISP, depending on the type of ISP partition indicated by an ISP flag and/or ISP mode. The same primary transform method may be applied to multiple sub-blocks. Whether a primary transform is to be performed on the plurality of sub-blocks may be determined based on a first encoding parameter for a target block.
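As a hedged illustration of such partitioning, the helper below follows a VVC-style ISP rule in which a block is divided into two or four sub-blocks depending on its size. This specific rule is an assumption borrowed from VVC and is not defined in the text above.

/* Illustrative VVC-style ISP rule (an assumption): 4x8 and 8x4 blocks are
   divided into 2 sub-blocks, and larger blocks into 4, horizontally or
   vertically according to the ISP mode. */
int isp_sub_block_count(int width, int height)
{
    if ((width == 4 && height == 8) || (width == 8 && height == 4))
        return 2;
    return 4;
}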
The primary transformation method corresponding to the primary transformation may be one of a variety of methods.
The primary transform method index may indicate a primary transform method for the target block.
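As a hedged illustration of how a primary transform method index may select one of a variety of methods, the C sketch below maps an index to a pair of horizontal and vertical transform kernels. The kernel names and the five-entry layout are assumptions modeled on a VVC-style multiple transform selection table, not a definition given in this document.

/* Illustrative mapping from a primary transform method index to a
   (horizontal, vertical) kernel pair; the table is an assumption. */
typedef enum { KERNEL_DCT2, KERNEL_DST7, KERNEL_DCT8 } Kernel;

typedef struct { Kernel hor; Kernel ver; } KernelPair;

static const KernelPair primary_transform_methods[5] = {
    { KERNEL_DCT2, KERNEL_DCT2 },  /* index 0 */
    { KERNEL_DST7, KERNEL_DST7 },  /* index 1 */
    { KERNEL_DCT8, KERNEL_DST7 },  /* index 2 */
    { KERNEL_DST7, KERNEL_DCT8 },  /* index 3 */
    { KERNEL_DCT8, KERNEL_DCT8 }   /* index 4 */
};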
Whether a secondary transform is to be performed on the target block may be determined based on the second encoding parameter for the target block. Alternatively, whether the secondary transform is to be performed on the target block may depend on the value of the second encoding parameter for the target block.
For example, when the value of the second encoding parameter for the target block is the first value, the secondary transform may not be performed on the target block. When the value of the second encoding parameter for the target block is a second value, a secondary transform may be performed on the target block.
For example, the second encoding parameter may include information on a residual block for the target block.
For example, the second encoding parameter may include information about a tree of the target block. The second encoding parameter may include information about the tree type of the target block.
For example, the second encoding parameter may be a tree structure or a tree type.
For example, the second encoding parameters may include information regarding the partition of the target block.
For example, the second encoding parameter may include information specifying the ISP of the target block. The information specifying the ISP may include an ISP flag and/or an ISP mode.
For example, a target block may be partitioned into multiple sub-blocks by an ISP based on an ISP flag and/or ISP mode. The same secondary transform method may be applied to the plurality of sub-blocks. Whether a secondary transform is to be performed on the plurality of sub-blocks may be determined based on a second encoding parameter for the target block.
The secondary transformation method corresponding to the secondary transformation may be one of a variety of methods.
The secondary transform method index may indicate a secondary transform method for the target block.
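For illustration only, the role of such an index may be sketched as follows, using the convention applied elsewhere in this document that a first value (0) indicates that the secondary transform is not applied and a nonzero value selects one of the candidate methods. The two-candidate assumption and the function name are illustrative.

/* Illustrative semantics of a secondary transform method index: 0 means
   "no secondary transform", and a nonzero value selects a candidate
   within the set chosen for the block (two candidates assumed). */
int secondary_transform_candidate(int index)
{
    if (index == 0)
        return -1;     /* secondary transform not applied */
    return index - 1;  /* candidate 0 or 1 within the selected set */
}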
At step 2020, a transform may be performed on the target block using a transform method.
Step 2020 may be performed by processing unit 1610 or transformation unit 130. Step 2020 may be part of the operation of transform unit 130 described above with reference to fig. 1.
In step 2030, a bitstream may be generated. The generated bitstream may be stored in the memory 1640, and may be transmitted to the decoding apparatus 1700 through the communication unit 1620.
Step 2030 may be performed by processing unit 1610 or entropy encoding unit 150. Step 2030 may be part of the operation of entropy coding unit 150 described above with reference to fig. 1.
Step 2030 may be performed after steps 2010 and 2020 have been performed, or in parallel with steps 2010 and 2020.
Each of the codes in tables 7 to 28 may indicate a bitstream. The bitstream may include syntax elements of the codes in tables 7 through 28. Alternatively, the bitstream may include a syntax element in one of the codes in tables 7 to 28.
The primary transform method index may be selectively signaled through a bitstream.
For example, when the first encoding parameter has the first value, the bitstream may not include the preliminary transform method index, and the preliminary transform method index may not be signaled. When the first encoding parameter has the second value, the primary transform method index may be included in the bitstream, and the primary transform method index may be signaled.
As illustrated by the codes in tables 7 through 28, whether the primary transformation method index is to be signaled may be determined according to the first conditional statement. The first conditional statement may include a first encoding parameter.
For example, a conditional statement may be an "if" statement, and a condition in the conditional statement may be a code within parentheses of the "if" statement.
For example, when the value of the condition in the first conditional statement is a first value (or 0), the bitstream may not include the primary transform method index, and the primary transform method index may not be signaled.
For example, when the value of the condition in the first conditional statement is a second value (or 1), the bitstream may include a primary transformation method index, and the primary transformation method index may be signaled.
For example, when the value of the condition in the first conditional statement is a first value (or 0), the bitstream may include a primary transformation method index, and the primary transformation method index may be signaled.
For example, when the value of the condition in the first conditional statement is a second value (or 1), the bitstream may not include the primary transform method index, and the primary transform method index may not be signaled.
Whether the primary transform method index is to be signaled may be determined independently of the particular encoding parameters. In other words, it may be determined whether the primary transform method index is to be signaled regardless of the value of the particular encoding parameter. Alternatively, the specific encoding parameter may be excluded when determining whether the primary transform method index is to be signaled.
For example, in the aforementioned codes in tables 7 to 28, whether the primary transformation method index is to be signaled may be determined according to the value of the condition in the first conditional statement. The specific encoding parameter may be an encoding parameter not included in the condition in the first condition statement.
Alternatively, the specific encoding parameter may be a parameter not included in a condition in a conditional statement used to (directly) determine whether the primary transform method index is to be signaled. In other words, in the case where the conditional statement is set such that the primary transform method index is not signaled when the value of the condition in the conditional statement is a first value (or 0) and the primary transform method index is signaled when the value of the condition is a second value (or 1), if the specific encoding parameter is not included in the condition, whether the primary transform method index is to be signaled may be considered independent of the specific encoding parameter.
The first conditional statement may be composed of a plurality of "IF" statements. The condition in the first conditional statement may include a condition in a plurality of "IF" statements. The conditions in the plurality of "IF" statements may be considered as a single condition linked by a logical AND operation.
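The equivalence stated above may be sketched as follows; the variable and function names are illustrative.

#include <stdbool.h>

/* Nested "IF" statements act as a single condition linked by a logical
   AND operation, so the two helpers below make the same decision. */
static bool signaled_nested(bool cond_a, bool cond_b)
{
    if (cond_a) {
        if (cond_b) {
            return true;  /* the index would be signaled */
        }
    }
    return false;
}

static bool signaled_combined(bool cond_a, bool cond_b)
{
    return cond_a && cond_b;  /* the same decision as a single condition */
}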
The primary transform method index may have a first value unless the primary transform method index is included in the bitstream. The first value may indicate that the primary transformation is not applied. Alternatively, the primary transform method index may be derived as a first value indicating that the primary transform is not applied.
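A minimal sketch of this parse-or-infer rule follows; parse_index() stands for a hypothetical entropy-decoding helper, and the same rule applies to the secondary transform method index described below.

#include <stdbool.h>

extern int parse_index(void);  /* hypothetical entropy-decoding helper */

/* When the index is absent from the bitstream, it defaults to the first
   value (0), indicating that the corresponding transform is not applied. */
int read_transform_method_index(bool present_in_bitstream)
{
    if (present_in_bitstream)
        return parse_index();
    return 0;  /* first value: transform not applied */
}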
The secondary transform method index may be selectively signaled by a bitstream.
For example, when the second encoding parameter has the first value, the bitstream may not include the secondary transform method index, and the secondary transform method index may not be signaled. When the second encoding parameter has the second value, a secondary transform method index may be included in the bitstream, and the secondary transform method index may be signaled.
As illustrated by the codes in tables 7 to 28, whether the secondary transformation method index is to be signaled may be determined according to the second conditional statement. The second conditional statement may include a second encoding parameter.
In an example, the condition in the second conditional statement may be one of a plurality of codes corresponding to code 1 through code 17, a combination of the plurality of codes, or some of the plurality of codes.
In other words, whether the secondary transformation method index is to be signaled may be determined based on 1) one of codes corresponding to codes 1 to 17, 2) a combination of the plurality of codes, or 3) some of the plurality of codes.
For example, when the value of the condition in the second conditional statement is the first value (or 0), the bitstream may not include the secondary transform method index, and the secondary transform method index may not be signaled.
For example, when the value of the condition in the second conditional statement is a second value (or 1), the bitstream may include a secondary transform method index, and the secondary transform method index may be signaled.
For example, when the value of the condition in the second conditional statement is a first value (or 0), the bitstream may include a secondary transform method index, and the secondary transform method index may be signaled.
For example, when the value of the condition in the second conditional statement is a second value (or 1), the bitstream may not include the secondary transform method index, and the secondary transform method index may not be signaled.
Whether the secondary transform method index is to be signaled may be determined independently of the particular encoding parameters. In other words, it may be determined whether the secondary transform method index is to be signaled regardless of the value of the particular encoding parameter. Alternatively, the specific encoding parameter may be excluded when determining whether the secondary transform method index is to be signaled.
For example, the specific encoding parameter may be transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], or transform_skip_flag[x0][y0][2].
For example, in the aforementioned codes in tables 7 to 28, whether the secondary transformation method index is to be signaled may be determined according to the value of the condition in the second conditional statement. The specific encoding parameter may be an encoding parameter that is not included in the condition in the second conditional statement.
Alternatively, the specific encoding parameter may be an encoding parameter that is not included in one of the codes corresponding to the codes 1 to 17.
Alternatively, the specific encoding parameter may be a parameter not included in a condition in a conditional statement used to (directly) determine whether the secondary transform method index is to be signaled. In other words, in the case where the conditional statement is set such that the secondary transform method index is not signaled when the value of the condition in the conditional statement is a first value (or 0) and the secondary transform method index is signaled when the value of the condition is a second value (or 1), if the specific encoding parameter is not included in the condition, whether the secondary transform method index is to be signaled may be considered independent of the specific encoding parameter.
The second conditional statement may be composed of a plurality of "IF" statements. The condition in the second conditional statement may include a condition in a plurality of "IF" statements. The conditions in the plurality of "IF" statements may be considered as a single condition linked by a logical AND operation.
The secondary transform method index may have a first value unless the secondary transform method index is included in the bitstream. The first value may indicate that the secondary transform is not applied. Alternatively, the secondary transform method index may be derived as a first value indicating that the secondary transform is not applied.
Fig. 21 is a flowchart of a decoding method according to an embodiment.
The decoding apparatus 1700 may use at least one of the above-described embodiments in performing the inverse transform on the target block. Further, the encoding apparatus 1600 may use at least one of the above-described embodiments when performing inverse transform on the target block.
In step 2110, a bitstream may be obtained. A computer-readable storage medium for decoding an image may include a bitstream.
The bitstream may include encoding information regarding the target block, and decoding may be performed on the target block using the encoding information.
Step 2110 may be performed by processing unit 1710, communication unit 1720, or entropy decoding unit 210. Step 2110 may be part of the operation of the entropy decoding unit 210 described above with reference to fig. 2.
The processing unit 1710 may read a bitstream from the memory 1740. The communication unit 1720 may receive a bitstream from the encoding apparatus 1600.
Step 2110 may be performed before step 2120 and step 2130 are performed, or in parallel with step 2120 and step 2130.
Each of the codes in tables 7 to 28 may indicate a bitstream. The bitstream may include syntax elements of the codes in tables 7 to 28. Alternatively, the bitstream may include a syntax element in one of the codes in tables 7 to 28.
At step 2120, an inverse transform method for the target block may be determined.
Step 2120 may be performed by processing unit 1710 or inverse transform unit 230. Alternatively, step 2120 may be performed by processing unit 1610 or inverse transformation unit 170. Step 2120 may be part of the operation of the inverse transform unit 230 described above with reference to fig. 2. Step 2120 may be part of the operation of inverse transform unit 170 described above with reference to fig. 1.
The target block may be a CU. Alternatively, the target block may be a residual block for the CU.
The inverse transform may include a primary inverse transform and a secondary inverse transform. The description of the inverse transform, the primary inverse transform, and the secondary inverse transform in the above embodiment may also be applied to the inverse transform, the primary inverse transform, and the secondary inverse transform in the present embodiment.
The description of the transformation in the above embodiment can also be applied to the inverse transformation in the present embodiment. Here, the input and output of the transform and the inverse transform may be opposite to each other. The input to the transform in the above-described embodiment may correspond to the output from the inverse transform (output reconstructed from the inverse transform) in the present embodiment. The output from the transform in the above-described embodiment may correspond to the input to the inverse transform in the present embodiment.
The description of the primary transform in the above embodiment may also be applied to the primary inverse transform in the present embodiment. Here, the inputs and outputs of the primary transform and the primary inverse transform may be opposite to each other. The input to the primary transform in the above-described embodiment may correspond to the output from the primary inverse transform (the output reconstructed from the inverse transform) in the present embodiment. The output from the primary transform in the above embodiment may correspond to the input to the primary inverse transform in the present embodiment.
During the process of encoding the target block, a primary transform may be applied to the target block, after which a secondary transform may be applied to the target block. According to the present application, during the process of decoding the target block, a secondary inverse transform may be applied to the target block, after which a primary inverse transform may be applied to the target block.
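The order of operations described above may be sketched as follows; the function names are illustrative stubs rather than APIs defined in this document.

extern void inverse_secondary_transform(int *coeffs, int count);
extern void inverse_primary_transform(int *coeffs, int count);

/* Decoding-side sketch: the secondary inverse transform is applied first,
   followed by the primary inverse transform (the reverse of encoding). */
void inverse_transform_block(int *coeffs, int count,
                             int secondary_used, int primary_used)
{
    if (secondary_used)
        inverse_secondary_transform(coeffs, count);
    if (primary_used)
        inverse_primary_transform(coeffs, count);
}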
Whether a secondary inverse transform is to be performed on the target block may be determined based on the second encoding parameter for the target block. Alternatively, whether the secondary inverse transform is to be performed on the target block may depend on the value of the second encoding parameter for the target block.
For example, when the second encoding parameter for the target block has the first value, the secondary inverse transform may not be performed on the target block. When the second encoding parameter for the target block has the second value, a secondary inverse transform may be performed on the target block.
For example, the second encoding parameter may include information on a residual block for the target block.
For example, the second encoding parameter may include information about a tree of the target block. The second encoding parameter may include information about the tree type of the target block.
For example, the second encoding parameter may be a tree structure or a tree type.
For example, the second encoding parameters may include information regarding partitioning of the target block.
For example, the second encoding parameter may include information specifying the ISP of the target block. The information specifying the ISP may include an ISP flag and/or an ISP mode.
For example, a target block may be partitioned into multiple sub-blocks by an ISP based on an ISP flag and/or ISP mode. The same secondary inverse transform method may be applied to a plurality of sub-blocks. Whether a secondary inverse transform is to be performed on the plurality of sub-blocks may be determined based on the second encoding parameters for the target block.
The secondary inverse transformation method corresponding to the secondary inverse transformation may be one of a variety of methods.
The secondary inverse transform method index may indicate a secondary inverse transform method for the target block.
Whether a primary inverse transform is to be performed on the target block may be determined based on the first encoding parameter for the target block. Alternatively, whether the primary inverse transform is to be performed on the target block may depend on the value of the first encoding parameter for the target block.
For example, when the first encoding parameter for the target block has a first value, the primary inverse transform may not be performed on the target block. When the first encoding parameter for the target block has a second value, the primary inverse transform may be performed on the target block.
For example, the first encoding parameter may include information on a residual block for the target block.
For example, the first encoding parameter may include information about a tree of the target block. The first encoding parameter may include information about the tree type of the target block.
For example, the first encoding parameter may be a tree structure or a tree type.
For example, the first encoding parameters may include information regarding partitioning of the target block.
For example, the first encoding parameter may include information specifying an ISP of the target block. The information specifying the ISP may include an ISP flag and/or an ISP mode.
For example, a target block may be partitioned into a plurality of sub-blocks by an ISP according to the type of ISP partition indicated by an ISP flag and/or an ISP mode. The same primary inverse transform method may be applied to the plurality of sub-blocks. Whether the primary inverse transform is to be performed on the plurality of sub-blocks may be determined based on the first encoding parameter for the target block.
The primary inverse transformation method corresponding to the primary inverse transformation may be one of a variety of methods.
The primary inverse transform method index may indicate a primary inverse transform method for the target block.
In step 2130, an inverse transform may be performed on the target block using an inverse transform method.
Step 2130 may be performed by processing unit 1710 or inverse transform unit 230. Alternatively, step 2130 may be performed by processing unit 1610 or inverse transforming unit 170. Step 2130 may be part of the operation of the inverse transform unit 230 described above with reference to fig. 2. Step 2130 may be part of the operation of the inverse transform unit 170 described above with reference to fig. 1.
The primary transform method index may be selectively signaled through a bitstream.
For example, when the first encoding parameter has the first value, the bitstream may not include the primary transform method index, and the primary transform method index may not be signaled. When the first encoding parameter has the second value, the primary transform method index may be included in the bitstream, and the primary transform method index may be signaled.
As illustrated by the codes in tables 7 through 28, whether the primary transformation method index is to be signaled may be determined according to the first conditional statement. The first conditional statement may include a first encoding parameter.
For example, a conditional statement may be an "if" statement, and a condition in the conditional statement may be a code within parentheses of the "if" statement.
For example, when the value of the condition in the first conditional statement is a first value (or 0), the bitstream may not include the primary transform method index, and the primary transform method index may not be signaled.
For example, when the value of the condition in the first conditional statement is a second value (or 1), the bitstream may include a primary transformation method index, and the primary transformation method index may be signaled.
For example, when the value of the condition in the first conditional statement is a first value (or 0), the bitstream may include a primary transformation method index, and the primary transformation method index may be signaled.
For example, when the value of the condition in the first conditional statement is a second value (or 1), the bitstream may not include the primary transform method index, and the primary transform method index may not be signaled.
Whether the primary transform method index is to be signaled may be determined independently of the particular encoding parameters. In other words, it may be determined whether the primary transform method index is to be signaled regardless of the value of the particular encoding parameter. Alternatively, the specific encoding parameter may be excluded when determining whether the primary transform method index is to be signaled.
For example, in the codes in the aforementioned tables 7 to 28, whether the primary transformation method index is to be signaled may be determined according to the value of the condition in the first conditional statement. The specific encoding parameter may be an encoding parameter that is not included in the condition in the first conditional statement.
Alternatively, the specific encoding parameter may be a parameter not included in a condition in a conditional statement used to (directly) determine whether the primary transform method index is to be signaled. In other words, in the case where the conditional statement is set such that the primary transform method index is not signaled when the value of the condition in the conditional statement is a first value (or 0) and the primary transform method index is signaled when the value of the condition is a second value (or 1), if the specific encoding parameter is not included in the condition, whether the primary transform method index is to be signaled may be considered independent of the specific encoding parameter.
The first conditional statement may be composed of a plurality of "IF" statements. The condition in the first conditional statement may include a condition in a plurality of "IF" statements. The conditions in the plurality of "IF" statements may be viewed as a single condition linked by a logical AND operation.
The primary transform method index may have a first value unless the primary transform method index is included in the bitstream. The first value may indicate that the primary transformation is not applied. Alternatively, the primary transform method index may be derived as a first value indicating that the primary transform is not applied.
The secondary transform method index may be selectively signaled by a bitstream.
For example, when the second encoding parameter has the first value, the bitstream may not include the secondary transform method index, and the secondary transform method index may not be signaled. When the second encoding parameter has the second value, a secondary transform method index may be included in the bitstream, and the secondary transform method index may be signaled.
As illustrated by the codes in tables 7 to 28, whether the secondary transformation method index is to be signaled may be determined according to the second conditional statement. The second conditional statement may include a second encoding parameter.
In an example, the condition in the second conditional statement may be one of a plurality of codes corresponding to code 1 to code 17, a combination of the plurality of codes, or some of the plurality of codes.
In other words, whether the secondary transformation method index is to be signaled may be determined based on 1) one code of a plurality of codes corresponding to the codes 1 to 17, 2) a combination of the plurality of codes, or 3) some of the plurality of codes.
For example, when the value of the condition in the second conditional statement is the first value (or 0), the bitstream may not include the secondary transform method index, and the secondary transform method index may not be signaled.
For example, when the value of the condition in the second conditional statement is a second value (or 1), the bitstream may include a secondary transform method index, and the secondary transform method index may be signaled.
For example, when the value of the condition in the second conditional statement is a first value (or 0), the bitstream may include a secondary transform method index, and the secondary transform method index may be signaled.
For example, when the value of the condition in the second conditional statement is a second value (or 1), the bitstream may not include the secondary transform method index, and the secondary transform method index may not be signaled.
Whether the secondary transform method index is to be signaled may be determined independently of the particular encoding parameters. In other words, it may be determined whether the secondary transform method index is to be signaled regardless of the value of the particular encoding parameter. Alternatively, the specific encoding parameter may be excluded when determining whether the secondary transform method index is to be signaled.
For example, the specific encoding parameter may be transform_skip_flag[x0][y0][0], transform_skip_flag[x0][y0][1], or transform_skip_flag[x0][y0][2].
For example, in the aforementioned codes in tables 7 to 28, whether the secondary transformation method index is to be signaled may be determined according to the value of the condition in the second conditional statement. The specific coding parameter may be a coding parameter that is not included in the condition in the second conditional statement.
Alternatively, the specific encoding parameter may represent an encoding parameter that is not included in one of the codes corresponding to the codes 1 to 17.
Alternatively, the specific encoding parameter may be a parameter not included in a condition in a conditional statement used to (directly) determine whether the secondary transform method index is to be signaled. In other words, in the case where the conditional statement is set such that the secondary transform method index is not signaled when the value of the condition in the conditional statement is a first value (or 0) and the secondary transform method index is signaled when the value of the condition is a second value (or 1), if the specific encoding parameter is not included in the condition, whether the secondary transform method index is to be signaled may be considered independent of the specific encoding parameter.
The second conditional statement may be composed of a plurality of "IF" statements. The condition in the second conditional statement may include a condition in a plurality of "IF" statements. The conditions in the plurality of "IF" statements may be considered as a single condition linked by a logical AND operation.
The secondary transform method index may have a first value unless the secondary transform method index is included in the bitstream. The first value may indicate that the secondary transformation is not applied. Alternatively, the secondary transform method index may be derived as a first value indicating that the secondary transform is not applied.
The above embodiments may be performed by the encoding apparatus 1600 and the decoding apparatus 1700 using methods identical and/or corresponding to each other. Furthermore, for encoding and/or decoding of images, combinations of one or more of the above embodiments may be used.
In the encoding apparatus 1600 and the decoding apparatus 1700, the order in which the embodiments are applied may be different from each other. Alternatively, the order in which the embodiments are applied may be (at least partially) the same in the encoding apparatus 1600 and the decoding apparatus 1700.
The embodiment may be performed on each of a luminance signal and a chrominance signal. The embodiments may be equally performed on a luminance signal and a chrominance signal.
The form of the block to which embodiments of the present disclosure are applied may have a square or non-square shape.
The embodiments of the present disclosure may be applied according to the size of at least one of a target block, an encoding block, a prediction block, a transform block, a current block, an encoding unit, a prediction unit, a transform unit, a unit, and a current unit. Here, the size may be defined as a minimum size and/or a maximum size for which the embodiments are applied, or may be defined as a fixed size to which the embodiments are applied. Further, a first embodiment may be applied at a first size, and a second embodiment may be applied at a second size. That is, the embodiments may be applied in combination according to the size. Further, the embodiments of the present disclosure may be applied only to the case where the size is equal to or greater than the minimum size and less than or equal to the maximum size. That is, the embodiments may be applied only to the case where the block size falls within a specific range.
Further, the embodiments of the present disclosure may be applied only to the case where a condition that a size is equal to or larger than a minimum size and a condition that a size is smaller than or equal to a maximum size are satisfied, where each of the minimum size and the maximum size may be a size of one of the blocks described in the above embodiments and the units described in the above embodiments. That is, a block that is a target of the minimum size may be different from a block that is a target of the maximum size. For example, embodiments of the present disclosure may be applied only to the case where the size of the target block is equal to or greater than the minimum size of the block and less than or equal to the maximum size of the block.
For example, the embodiment may be applied only to the case where the size of the target block is equal to or greater than 8 × 8. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 16 × 16. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 32 × 32. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 64 × 64. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 128 × 128. For example, the embodiment can be applied only to the case where the size of the target block is 4 × 4. For example, the embodiments may be applied only to the case where the size of the target block is less than or equal to 8 × 8. For example, the embodiments may be applied only to the case where the size of the target block is less than or equal to 16 × 16. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 8 × 8 and smaller than or equal to 16 × 16. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 16 × 16 and smaller than or equal to 64 × 64.
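For illustration only, such a size-range check may be sketched as follows. Applying the bound to both dimensions is an illustrative choice, since the text above leaves the exact notion of "size" open.

#include <stdbool.h>

/* An embodiment applies only when the block size lies within a range,
   e.g., 8x8 <= size <= 16x16; both dimensions are checked here. */
static bool embodiment_applies(int width, int height,
                               int min_dim, int max_dim)
{
    return width >= min_dim && width <= max_dim &&
           height >= min_dim && height <= max_dim;
}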
The syntax elements of the reduced secondary transform/inverse transform may be entropy-encoded by the encoding apparatus 100 and entropy-decoded by the decoding apparatus 200. For at least one of these syntax elements, one or more of the following binarization, inverse binarization, entropy encoding, and/or entropy decoding methods may be used.
- Signed 0-order Exp-Golomb binarization/inverse binarization method (se(v))
- Signed k-order Exp-Golomb binarization/inverse binarization method (sek(v))
- 0-order Exp-Golomb binarization/inverse binarization method for unsigned positive integers (ue(v)); a sketch of this method is given after this list
- k-order Exp-Golomb binarization/inverse binarization method for unsigned positive integers (uek(v))
- Fixed-length binarization/inverse binarization method (f(n))
- Truncated Rice binarization/inverse binarization method or truncated unary binarization/inverse binarization method (tu(v))
- Truncated binary binarization/inverse binarization method (tb(v))
- Context-adaptive arithmetic encoding/decoding method (ae(v))
- Byte-by-byte bit string (b(8))
- Signed integer binarization/inverse binarization method (i(n))
- Unsigned positive integer binarization/inverse binarization method (u(n)); u(n) may be a fixed-length binarization/inverse binarization method
- Unary binarization/inverse binarization method
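As an illustration of the ue(v) method in the list above, the following C sketch writes an unsigned value in 0-order Exp-Golomb form; write_bit() is a hypothetical bit-writer, not an API defined in this document.

extern void write_bit(int bit);  /* hypothetical bitstream writer */

/* 0-order Exp-Golomb, ue(v): value v is coded as n zero bits followed by
   the (n + 1)-bit binary form of (v + 1), where n = floor(log2(v + 1)). */
void write_ue(unsigned int v)
{
    unsigned int code = v + 1;
    int n = 0;
    while ((code >> n) > 1)  /* n = number of bits after the leading 1 bit */
        n++;
    for (int i = 0; i < n; i++)
        write_bit(0);        /* prefix of n zeros */
    for (int i = n; i >= 0; i--)
        write_bit((code >> i) & 1);  /* (n + 1)-bit suffix, MSB first */
}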
Embodiments of the present disclosure may be applied according to temporal layers. To identify the temporal layer to which an embodiment is applicable, a separate identifier may be signaled, and an embodiment may be applied to a temporal layer specified by the corresponding identifier. Here, the identifier may be defined as the lowest (bottom) layer and/or the highest (top) layer to which the embodiment is applicable, and may be defined to indicate a specific layer to which the embodiment is applied. In addition, fixed time layers for application embodiments may also be defined.
For example, the embodiment can be applied only to a case where the temporal layer of the target image is the lowermost layer. For example, the embodiments may be applied only to the case where the temporal layer identifier of the target image is equal to or greater than 1. For example, the embodiment may be applied only to a case where the temporal layer of the target image is the highest layer.
A slice type or a tile group type to which the embodiments of the present disclosure are applied may be defined, and the embodiments of the present disclosure may be applied according to the corresponding slice type or tile group type.
In the above-described embodiments, when a specific process is applied to a specific target on the condition that a specific condition is satisfied or a specific determination is made, and it has been described that whether the specific condition is satisfied or the specific determination is made is decided based on a specific encoding parameter, the specific encoding parameter may be replaced with an additional encoding parameter. In other words, the encoding parameter affecting the specific condition or the specific determination may be considered merely exemplary, and it is understood that a combination of one or more additional encoding parameters may be used in place of, or in addition to, the specific encoding parameter.
In the above-described embodiments, although the methods have been described based on flowcharts as a series of steps or units, the present disclosure is not limited to the order of the steps, and some steps may be performed in an order different from that described or simultaneously with other steps. Furthermore, those skilled in the art will understand that the steps shown in the flowcharts are not exclusive, that other steps may be included, and that one or more steps in the flowcharts may be deleted without departing from the scope of the present disclosure.
The above-described embodiments include examples of various aspects. Although not all possible combinations for indicating the various aspects may be described, a person skilled in the art will appreciate that other combinations are possible than those explicitly described. Accordingly, it is to be understood that the present disclosure includes other alternatives, modifications, and variations which fall within the scope of the appended claims.
The above-described embodiments according to the present disclosure may be implemented as programs that can be executed by various computer devices, and may be recorded on a computer-readable storage medium. Computer readable storage media may include program instructions, data files, and data structures, alone or in combination. The program instructions recorded on the storage medium may be specially designed and configured for the present disclosure, or may be known or available to those having ordinary skill in the computer software art.
Computer-readable storage media may include information used in embodiments of the present disclosure. For example, a computer-readable storage medium may include a bitstream, and the bitstream may include information described above in embodiments of the present disclosure.
The computer-readable storage medium may include a non-transitory computer-readable medium.
Examples of the computer-readable storage medium include magnetic media (such as hard disks, floppy disks, and magnetic tape), optical media (such as compact disc (CD)-ROMs and digital versatile discs (DVDs)), magneto-optical media (such as floptical disks), and hardware devices specially configured to store and execute program instructions (such as ROM, RAM, and flash memory). Examples of program instructions include both machine code, such as that created by a compiler, and high-level language code that may be executed by the computer using an interpreter. The hardware devices may be configured to operate as one or more software modules in order to perform the operations of the present disclosure, and vice versa.
As described above, although the present disclosure has been described based on specific details (such as detailed components and a limited number of embodiments and drawings), which are provided only for easy understanding of the entire disclosure, the present disclosure is not limited to these embodiments, and those skilled in the art will practice various changes and modifications according to the above description.
Therefore, it is to be understood that the spirit of the present embodiments is not limited to the above-described embodiments, and the appended claims and their equivalents and modifications fall within the scope of the present disclosure.

Claims (20)

1. A decoding method, comprising:
determining an inverse transform method for a target block; and
performing an inverse transform on the target block using the inverse transform method.
2. The decoding method of claim 1, wherein:
the inverse transform includes a secondary inverse transform and a primary inverse transform,
a secondary inverse transform method corresponding to the secondary inverse transform is determined based on the encoding parameters for the target block, and
The encoding parameters include information about a tree of the target block.
3. The decoding method of claim 2, wherein the encoding parameter is a tree type.
4. The decoding method of claim 2, wherein:
the secondary inverse transform method is one of a variety of methods,
the secondary inverse transform method index indicates the secondary inverse transform method, an
When the encoding parameter has a specific value, the secondary inverse transform method index is included in a bitstream.
5. The decoding method of claim 4, wherein when the secondary inverse transform method index is not included in the bitstream, the secondary inverse transform method index is derived as a first value indicating that a secondary inverse transform is not applied.
6. The decoding method of claim 1, wherein:
the target block is partitioned into a plurality of sub-blocks by intra sub-partitioning,
the inverse transform includes a secondary inverse transform and a primary inverse transform, and
the same secondary inverse transform method and the same primary inverse transform method are applied to the plurality of sub-blocks.
7. The decoding method of claim 6, wherein:
whether the secondary inverse transform is to be performed on the plurality of sub-blocks is determined based on the encoding parameters for the target block, and
The encoding parameters include information about a tree of the target block.
8. An encoding method, comprising:
determining a transform method for a target block; and
performing a transform on the target block using the transform method.
9. The encoding method of claim 8, wherein:
the transform includes a primary transform and a secondary transform,
a secondary transform method corresponding to the secondary transform depends on the encoding parameters for the target block, and
The encoding parameters include information about a tree of the target block.
10. The encoding method of claim 9, wherein the encoding parameter is a tree type.
11. The encoding method of claim 9, wherein:
the secondary transform method is one of a plurality of methods,
a secondary transform method index indicates the secondary transform method, and
when the encoding parameter has a specific value, the secondary transform method index is included in a bitstream.
12. The encoding method of claim 8, wherein:
the target block is partitioned into a plurality of sub-blocks by intra sub-partitioning,
the transform comprises a primary transform and a secondary transform, and
The same primary transform method and the same secondary transform method are applied to the plurality of sub-blocks.
13. The encoding method of claim 12, wherein whether the secondary transform is to be performed on the plurality of sub-blocks depends on the encoding parameters for the target block, and
the encoding parameters include information about a tree of the target block.
14. A storage medium storing a bitstream generated by the encoding method of claim 8.
15. A computer-readable storage medium storing a bitstream for decoding an image, wherein:
the bitstream includes encoding information regarding a target block,
decoding of the target block is performed using the encoding information,
an inverse transform method for the target block is determined, and
an inverse transform is performed on the target block using the inverse transform method.
16. The computer-readable storage medium of claim 15, wherein:
the inverse transform includes a secondary inverse transform and a primary inverse transform,
a secondary inverse transform method corresponding to the secondary inverse transform is determined based on the encoding parameters for the target block, and
The encoding parameters include information about a tree of the target block.
17. The computer-readable storage medium of claim 16, wherein the encoding parameter is a tree type.
18. The computer-readable storage medium of claim 16, wherein:
the secondary inverse transform method is one of a plurality of methods,
a secondary inverse transform method index indicates the secondary inverse transform method, and
when the encoding parameter has a specific value, the secondary inverse transform method index is included in a bitstream.
19. The computer-readable storage medium of claim 18, wherein when the secondary inverse transform method index is not included in the bitstream, the secondary inverse transform method index is derived as a first value indicating that a secondary inverse transform is not applied.
20. The computer-readable storage medium of claim 15, wherein:
the target block is partitioned into a plurality of sub-blocks by intra sub-partitioning;
the inverse transform comprises a secondary inverse transform and a primary inverse transform; and
the same secondary inverse transform method and the same primary inverse transform method are applied to the plurality of sub-blocks.