WO2020032049A1 - Moving image decoding device and moving image encoding device - Google Patents

Moving image decoding device and moving image encoding device

Info

Publication number
WO2020032049A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
picture
restricted
prediction
unit
Prior art date
Application number
PCT/JP2019/030950
Other languages
English (en)
Japanese (ja)
Inventor
Eiichi SASAKI
Masanobu YASUGI
Takeshi CHUJO
Tomoko AONO
Tomohiro IKAI
Original Assignee
Sharp Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2018147658A (related publication JP2021180342A)
Priority claimed from JP2018148470A (related publication JP2021180343A)
Priority claimed from JP2018148471A (related publication JP2021180344A)
Application filed by Sharp Corporation
Priority to US17/264,869 (US12003775B2)
Priority to EP19847133.6A (EP3836541A4)
Priority to CN201980052552.8A (CN112534810A)
Publication of WO2020032049A1

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/167: Position within a video image, e.g. region of interest [ROI]
    • H04N19/174: the coding unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176: the coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186: the coding unit being a colour or a chrominance component
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/82: Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
    • H04N19/86: using pre-processing or post-processing, involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/55: Motion estimation with spatial constraints, e.g. at image or region borders
    • H04N19/865: involving reduction of coding artifacts, with detection of the former encoding block subdivision in decompressed video

Definitions

  • One aspect of an embodiment of the present invention relates to a predicted image generation device, a moving image decoding device, and a moving image encoding device.
  • In order to transmit or record moving images efficiently, a moving image encoding device that generates encoded data by encoding a moving image, and a moving image decoding device that generates a decoded image by decoding that encoded data, are used.
  • Specific moving image coding methods include, for example, the methods proposed in H.264/AVC and HEVC (High Efficiency Video Coding).
  • In such a moving image coding method, a picture constituting a moving image is managed by a hierarchical structure consisting of slices obtained by dividing the picture, coding tree units (CTU: Coding Tree Unit) obtained by dividing a slice, coding units (CU: Coding Unit) obtained by dividing a coding tree unit, and transform units (TU: Transform Unit) obtained by dividing a coding unit, and is encoded/decoded for each CU.
  • In such a moving image coding method, a predicted image is usually generated based on a locally decoded image obtained by encoding/decoding an input image, and a prediction error (sometimes called a "difference image" or a "residual image") obtained by subtracting the predicted image from the input image (original image) is encoded. Methods of generating predicted images include inter-picture prediction (inter prediction) and intra-picture prediction (intra prediction).
  • Non-Patent Literature 1 can be cited as a recent technique for encoding and decoding moving images.
  • FIG. 1 is a diagram showing an example of the generated code amount for each picture when a certain moving image is encoded; the horizontal axis indicates time (consecutive pictures), and the vertical axis indicates the generated code amount of each picture.
  • In order to avoid the large generated code amount of an intra picture (I picture), there is a method called intra refresh, in which only a part of one picture is intra-predicted and the other areas are inter-predicted (the inter-predicted areas may also contain intra-predicted regions).
  • In view of the above, an object of the present invention is to provide a mechanism for realizing video encoding and decoding that reduces delay while suppressing the loss of encoding efficiency relative to I pictures, without impairing the random access functionality.
  • In order to solve the above problem, a video decoding device according to one aspect of the present invention divides a picture into a restricted region and a non-restricted region. For blocks included in the restricted region, it performs prediction using intra prediction that refers only to pixels in the restricted region of the picture, or inter prediction that refers to the restricted reference region of a reference picture of the picture. For blocks included in the non-restricted region, it performs prediction using intra prediction that refers to decoded pixels in the picture, or inter prediction that refers to a reference picture. After decoding the picture, the restricted region of the picture is set as the restricted reference region.
  • According to the above configuration, it is possible, as a refresh mechanism for a moving image, to generate encoded data that is randomly accessible and has a small variation in code amount, while suppressing the decrease in encoding efficiency. As a result, encoding at a substantially constant bit rate can be realized throughout the encoded data, and the delay that would otherwise occur at each I picture can be avoided.
  • FIG. 1 is a diagram illustrating an example of the generated code amount for each picture when a certain moving image is encoded.
  • FIG. 2 is a diagram showing the hierarchical structure of the data of an encoded stream.
  • FIG. 3 is a diagram showing an example of CTU division.
  • FIG. 4 is a conceptual diagram illustrating an example of a reference picture and a reference picture list.
  • FIG. 5 is a schematic diagram showing the types (mode numbers) of intra prediction modes.
  • FIG. 6 is a diagram explaining the restricted area and the unrestricted area of the present invention.
  • FIG. 7 is a diagram illustrating the range that a target block according to the present invention can refer to.
  • FIG. 8 is a diagram explaining the setting of a restricted reference area.
  • FIG. 9 is a diagram illustrating the stepwise refresh area according to the present invention.
  • FIG. 10 is a diagram illustrating a new refresh area, a restricted area, and an unrestricted area according to the present invention.
  • FIG. 11 is a diagram illustrating a new refresh area according to the present invention.
  • FIG. 12 is a diagram illustrating the relationship between a new refresh area and a CU.
  • FIG. 13 is a diagram illustrating the types of the new refresh area according to the present invention.
  • FIG. 14 is another diagram illustrating a new refresh area according to the present invention.
  • FIG. 15 is a diagram illustrating the syntax required for stepwise refresh.
  • FIG. 16 is a diagram illustrating the relationship between a restricted area and an overlap area.
  • A schematic diagram illustrating the configuration of the moving image decoding device.
  • A block diagram illustrating the configuration of the video encoding device.
  • A schematic diagram showing the configuration of the inter prediction parameter decoding unit.
  • A schematic diagram illustrating the configurations of the merge prediction parameter derivation unit and the AMVP prediction parameter derivation unit.
  • A schematic diagram illustrating the configuration of the intra prediction parameter decoding unit.
  • A schematic diagram illustrating the configuration of the inter prediction image generation unit.
  • A diagram explaining boundary padding and boundary motion vector clipping.
  • A diagram illustrating the reference area used for intra prediction.
  • A diagram showing the configuration of the intra prediction image generation unit.
  • A diagram illustrating the reference pixels of the reference pixel filter.
  • A diagram showing the relationship between the restricted area and the reference pixels of the reference pixel filter.
  • A schematic diagram illustrating the configuration of the inter prediction parameter encoding unit.
  • A schematic diagram illustrating the configuration of the intra prediction parameter encoding unit.
  • A diagram illustrating the configurations of a transmission device equipped with the video encoding device according to the present embodiment and a reception device equipped with the video decoding device: (a) shows the transmission device, and (b) shows the reception device.
  • A diagram illustrating the configurations of a recording device equipped with the moving image encoding device according to the present embodiment and a playback device equipped with the moving image decoding device: (a) shows the recording device, and (b) shows the playback device.
  • FIG. 32 is a schematic diagram illustrating the configuration of an image transmission system according to an embodiment.
  • FIG. 33 is a diagram illustrating the deblocking filter.
  • A diagram illustrating the relationship between a restricted area and the deblocking filter.
  • FIG. 35 is a diagram illustrating the relationship between a restricted area and the deblocking filter area.
  • A flowchart illustrating the operation of the deblocking filter.
  • FIG. 32 is a schematic diagram showing the configuration of the image transmission system 1 according to the present embodiment.
  • the image transmission system 1 is a system that transmits an encoded stream obtained by encoding an encoding target image, decodes the transmitted encoded stream, and displays an image.
  • The image transmission system 1 includes a moving image encoding device (image encoding device) 11, a network 21, a moving image decoding device (image decoding device) 31, and a moving image display device (image display device) 41.
  • the image T is input to the video encoding device 11.
  • the network 21 transmits the coded stream Te generated by the video encoding device 11 to the video decoding device 31.
  • The network 21 is the Internet, a wide area network (WAN: Wide Area Network), a local area network (LAN: Local Area Network), or a combination thereof.
  • The network 21 is not limited to a bidirectional communication network and may be a unidirectional communication network that transmits broadcast waves such as terrestrial digital broadcasting and satellite broadcasting. The network 21 may also be replaced by a storage medium on which the encoded stream Te is recorded, such as a DVD (Digital Versatile Disc) or a BD (Blu-ray Disc).
  • the video decoding device 31 decodes each of the encoded streams Te transmitted by the network 21, and generates one or a plurality of decoded images Td.
  • the video display device 41 displays all or a part of one or a plurality of decoded images Td generated by the video decoding device 31.
  • The moving image display device 41 includes a display device such as a liquid crystal display or an organic EL (Electro-Luminescence) display. Display forms include stationary, mobile, and HMD.
  • x ? y : z is a ternary operator that takes the value y when x is true (non-zero) and z when x is false (0).
  • Abs(a) is a function that returns the absolute value of a.
  • Int(a) is a function that returns the integer value of a.
  • Floor(a) is a function that returns the largest integer less than or equal to a.
  • Ceil(a) is a function that returns the smallest integer greater than or equal to a.
  • a / d represents the division of a by d with the fractional part truncated. These operators, together with the Clip3 clipping operator used later in this description, are sketched in code below.
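  • As a reference, the operators above can be written out as plain functions. The following C++ rendering is illustrative only (Clip3 follows the usual definition of clipping a value into [lo, hi]); it is not text from the specification.

    #include <cmath>

    // Illustrative C++ versions of the operators defined above.
    int Abs(int a)        { return a < 0 ? -a : a; }
    int Int(double a)     { return static_cast<int>(a); }        // integer value of a
    int Floor(double a)   { return (int)std::floor(a); }         // largest integer <= a
    int Ceil(double a)    { return (int)std::ceil(a); }          // smallest integer >= a
    int Div(int a, int d) { return a / d; }                      // fraction truncated

    // Clip3: clip v into the range [lo, hi]; used by the motion
    // compensation formulas later in this description.
    int Clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }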
  • FIG. 2 is a diagram showing a hierarchical structure of data in the encoded stream Te.
  • the coded stream Te illustratively includes a sequence and a plurality of pictures constituting the sequence.
  • (a) to (f) of FIG. 2 respectively show an encoded video sequence defining the sequence SEQ, an encoded picture defining a picture PICT, an encoded slice defining a slice S, encoded slice data defining slice data, a coding tree unit CTU included in the encoded slice data, and a coding unit CU included in the coding tree unit.
  • (Encoded video sequence) In the encoded video sequence, a set of data referred to by the video decoding device 31 to decode the sequence SEQ to be processed is defined. As shown in FIG. 2(a), the sequence SEQ includes a video parameter set VPS (Video Parameter Set), a sequence parameter set SPS (Sequence Parameter Set), a picture parameter set PPS (Picture Parameter Set), pictures PICT, and supplemental enhancement information SEI (Supplemental Enhancement Information).
  • In the video parameter set VPS, for a moving image composed of a plurality of layers, a set of coding parameters common to a plurality of moving images, and sets of coding parameters related to the plurality of layers and the individual layers included in the moving image, are defined.
  • the sequence parameter set SPS defines a set of encoding parameters that the video decoding device 31 refers to for decoding the target sequence. For example, the width and height of a picture are defined. Note that a plurality of SPSs may exist. In that case, one of the plurality of SPSs is selected from the PPS.
  • the picture parameter set PPS defines a set of encoding parameters referred to by the video decoding device 31 to decode each picture in the target sequence. For example, a reference value (pic_init_qp_minus26) of a quantization width used for decoding a picture and a flag (weighted_pred_flag) indicating application of weighted prediction are included. Note that a plurality of PPSs may exist. In that case, any one of the plurality of PPSs is selected from each picture in the target sequence.
  • the picture PICT includes slice 0 to slice NS-1 (NS is the total number of slices included in the picture PICT).
  • (Coded slice) In the coded slice, a set of data referred to by the video decoding device 31 to decode the slice S to be processed is defined.
  • the slice includes a slice header and slice data as shown in FIG. 2 (c).
  • the slice header includes a group of encoding parameters referred to by the video decoding device 31 in order to determine a decoding method for the target slice.
  • the slice type designation information (slice_type) that designates a slice type is an example of an encoding parameter included in a slice header.
  • The slice types that can be specified by the slice type specification information include (1) an I slice using only intra prediction at the time of encoding, (2) a P slice using unidirectional prediction or intra prediction at the time of encoding, and (3) a B slice using unidirectional prediction, bidirectional prediction, or intra prediction at the time of encoding.
  • the inter prediction is not limited to uni-prediction and bi-prediction, and a prediction image may be generated using more reference pictures.
  • When slices are called P and B slices, they indicate slices including blocks for which inter prediction can be used.
  • the slice header may include a reference (pic_parameter_set_id) to the picture parameter set PPS.
  • the slice data includes a CTU as shown in FIG. 2 (d).
  • The CTU is a block of fixed size (for example, 64x64) constituting a slice, and may also be called a largest coding unit (LCU: Largest Coding Unit).
  • a set of data referred to by the video decoding device 31 for decoding the processing target CTU is defined.
  • The CTU is divided into coding units CU, the basic units of the coding process, by recursive quad tree division (QT: Quad Tree), binary tree division (BT: Binary Tree), or ternary tree division (TT: Ternary Tree). BT division and TT division are collectively called multi tree division (MT: Multi Tree division).
  • a tree-structured node obtained by recursive quad-tree division is called a coding node.
  • Intermediate nodes of the quadtree, the binary tree, and the ternary tree are coding nodes, and the CTU itself is defined as the highest coding node.
  • The CT includes, as CT information, a QT split flag (cu_split_flag) indicating whether to perform QT division, an MT split flag (split_mt_flag) indicating the presence or absence of MT division, an MT split direction (split_mt_dir) indicating the division direction of the MT division, and an MT split type (split_mt_type) indicating the division type of the MT division. cu_split_flag, split_mt_flag, split_mt_dir, and split_mt_type are transmitted for each coding node.
  • When cu_split_flag is 1, the coding node is divided into four coding nodes (FIG. 3(b)). When cu_split_flag is 0 and split_mt_flag is 0, the coding node is not divided and holds one CU as a node (FIG. 3(a)).
  • CU is a terminal node of the coding node, and is not further divided.
  • the CU is a basic unit of the encoding process.
  • When split_mt_flag is 1, the coding node is MT-divided as follows. When split_mt_type is 0, the coding node is horizontally divided into two coding nodes if split_mt_dir is 1 (FIG. 3(d)), and vertically divided into two coding nodes if split_mt_dir is 0 (FIG. 3(c)). When split_mt_type is 1, the coding node is horizontally divided into three coding nodes if split_mt_dir is 1 (FIG. 3(f)), and vertically divided into three coding nodes if split_mt_dir is 0 (FIG. 3(e)). These rules are summarized in the sketch below.
  • When the CTU size is 64x64 pixels, the CU size can be any of 64x64, 64x32, 32x64, 32x32, 64x16, 16x64, 32x16, 16x32, 16x16, 64x8, 8x64, 32x8, 8x32, 16x8, 8x16, 8x8, 64x4, 4x64, 32x4, 4x32, 16x4, 4x16, 8x4, 4x8, and 4x4 pixels.
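  • The following C++ sketch restates the split-flag semantics above; the enum and function names are illustrative assumptions, not terms from the specification.

    // Illustrative mapping of the CT split flags to a split decision.
    enum class Split { None, Quad, BtHor, BtVer, TtHor, TtVer };

    Split decodeSplit(int cu_split_flag, int split_mt_flag,
                      int split_mt_dir, int split_mt_type) {
        if (cu_split_flag == 1) return Split::Quad;   // four coding nodes, FIG. 3(b)
        if (split_mt_flag == 0) return Split::None;   // one CU, FIG. 3(a)
        if (split_mt_type == 0)                       // binary division
            return split_mt_dir == 1 ? Split::BtHor   // FIG. 3(d)
                                     : Split::BtVer;  // FIG. 3(c)
        return split_mt_dir == 1 ? Split::TtHor       // ternary division, FIG. 3(f)
                                 : Split::TtVer;      // FIG. 3(e)
    }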
  • a set of data referred to by the video decoding device 31 to decode the CU to be processed is defined.
  • the CU includes a CU header CUH, a prediction parameter, a conversion parameter, a quantized transform coefficient, and the like.
  • the prediction mode and the like are defined in the CU header.
  • the prediction process is performed in units of CUs, or in units of sub-CUs obtained by further dividing the CU.
  • the sizes of the CU and the sub-CU are equal, there is one sub-CU in the CU.
  • If the CU size is larger than the sub-CU size, the CU is split into sub-CUs. For example, if the CU is 8x8 and the sub-CU is 4x4, the CU is divided into four sub-CUs, two horizontally and two vertically.
  • Intra prediction is prediction within the same picture, and inter prediction is prediction processing performed between mutually different pictures (for example, between display times or between layer images).
  • the quantized transform coefficients may be entropy-coded in subblock units such as 4 ⁇ 4.
  • the prediction image is derived from prediction parameters associated with the block.
  • the prediction parameters include intra prediction and inter prediction prediction parameters.
  • the inter prediction parameter includes a prediction list use flag predFlagL0, predFlagL1, a reference picture index refIdxL0, refIdxL1, and a motion vector mvL0, mvL1.
  • the prediction list use flags predFlagL0 and predFlagL1 are flags indicating whether reference picture lists called L0 list and L1 list are used, respectively. When the value is 1, the corresponding reference picture list is used.
  • In this specification, when a flag indicating whether something is XX is described, a value other than 0 (for example, 1) is treated as XX and 0 as not XX; in logical negation, logical product, and the like, 1 is treated as true and 0 as false (the same applies hereinafter). However, other values can be used as the true and false values in an actual device or method.
  • inter prediction parameters include, for example, merge flag merge_flag, merge index merge_idx, inter prediction identifier inter_pred_idc, reference picture index refIdxLX, prediction vector index mvp_LX_idx, and difference vector mvdLX.
  • the reference picture list is a list including reference pictures stored in the reference picture memory 306.
  • FIG. 4 is a conceptual diagram showing an example of a reference picture and a reference picture list in a picture structure for low delay.
  • In FIG. 4(a), a rectangle is a picture, an arrow is a picture reference relationship, and the horizontal axis is time; I, P, and B in the rectangles denote an intra picture, a uni-prediction picture, and a bi-prediction picture, respectively, and the numbers in the rectangles indicate the decoding order. The decoding order of the pictures is I0, P1/B1, P2/B2, P3/B3, P4/B4, and the display order is the same.
  • FIG. 4(b) shows an example of the reference picture list of picture B3 (the target picture).
  • the reference picture list is a list representing reference picture candidates, and one picture (slice) may have one or more reference picture lists.
  • the target picture B3 has two reference picture lists, an L0 list RefPicList0 and an L1 list RefPicList1.
  • For a P picture, the reference picture list is only the L0 list.
  • LX is a notation used when L0 prediction and L1 prediction are not distinguished; hereinafter, replacing LX with L0 or L1 distinguishes parameters for the L0 list from parameters for the L1 list.
  • the prediction parameter decoding (encoding) method includes a merge prediction (merge) mode and an AMVP (Adaptive Motion Vector Prediction) mode.
  • a merge flag merge_flag is a flag for identifying these.
  • the merge prediction mode is a mode in which a prediction list use flag predFlagLX (or an inter prediction identifier inter_pred_idc), a reference picture index refIdxLX, and a motion vector mvLX are not included in encoded data but are derived from prediction parameters of already processed neighboring blocks.
  • the merge index merge_idx is an index indicating which prediction parameter is used as the prediction parameter of the target block among the prediction parameter candidates (merge candidates) derived from the processed block.
  • the AMVP mode is a mode in which an inter prediction identifier inter_pred_idc, a reference picture index refIdxLX, and a motion vector mvLX are included in encoded data.
  • the motion vector mvLX is encoded as a prediction vector index mvp_LX_idx for identifying the prediction vector mvpLX and a difference vector mvdLX.
  • the inter prediction identifier inter_pred_idc is a value indicating the type and number of reference pictures, and takes one of PRED_L0, PRED_L1, and PRED_BI.
  • PRED_L0 and PRED_L1 indicate uni-prediction using one reference picture managed by the L0 list and the L1 list, respectively.
  • PRED_BI indicates bi-prediction BiPred using two reference pictures managed by the L0 list and the L1 list.
  • the motion vector mvLX indicates a shift amount between blocks on two different pictures.
  • the prediction vector and the difference vector related to the motion vector mvLX are called a prediction vector mvpLX and a difference vector mvdLX, respectively.
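  • In other words, in AMVP mode the decoder reconstructs the motion vector by adding the decoded difference vector to the selected prediction vector. A short C++ sketch (the vector type is assumed for illustration):

    // AMVP reconstruction as described above: mvLX = mvpLX + mvdLX.
    struct Mv { int x, y; };
    Mv reconstructMv(const Mv& mvpLX, const Mv& mvdLX) {
        return { mvpLX.x + mvdLX.x, mvpLX.y + mvdLX.y };
    }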
  • the intra prediction parameters include a luminance prediction mode IntraPredModeY and a color difference prediction mode IntraPredModeC.
  • FIG. 5 is a schematic diagram showing types (mode numbers) of intra prediction modes. As shown in the figure, there are, for example, 67 types (0 to 66) of intra prediction modes. For example, Planar prediction (0), DC prediction (1), Angular prediction (2-66). Further, for the color difference, an LM mode (67 to 72) may be added.
  • Syntax elements for deriving the intra prediction parameters include, for example, prev_intra_luma_pred_flag, mpm_idx, rem_selected_mode_flag, rem_selected_mode, and rem_non_selected_mode.
  • (MPM) prev_intra_luma_pred_flag is a flag indicating whether the luminance prediction mode IntraPredModeY of the target block matches the MPM (Most Probable Mode).
  • MPM is a prediction mode included in the MPM candidate list mpmCandList [].
  • the MPM candidate list is a list that stores candidates estimated to have a high probability of being applied to the target block from the intra prediction mode of a neighboring block and a predetermined intra prediction mode.
  • When prev_intra_luma_pred_flag is 1, the luminance prediction mode IntraPredModeY of the target block is derived using the MPM candidate list and the index mpm_idx: IntraPredModeY = mpmCandList[mpm_idx].
  • (REM) When prev_intra_luma_pred_flag is 0, the intra prediction mode is selected from the remaining modes RemIntraPredMode, obtained by excluding the intra prediction modes included in the MPM candidate list from the full set of intra prediction modes.
  • An intra prediction mode that can be selected as RemIntraPredMode is called “non-MPM” or “REM”.
  • The flag rem_selected_mode_flag specifies whether the intra prediction mode is selected by referring to rem_selected_mode or by referring to rem_non_selected_mode.
  • RemIntraPredMode is derived using rem_selected_mode or rem_non_selected_mode.
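  • The derivation just described can be sketched as follows; the candidate list length and the mapping of the REM remainder onto non-MPM modes are illustrative assumptions, not the normative derivation.

    #include <algorithm>
    #include <array>

    // Hedged sketch of luminance intra mode derivation (list length assumed).
    int deriveIntraPredModeY(bool prev_intra_luma_pred_flag, int mpm_idx,
                             int remIntraPredMode,
                             const std::array<int, 6>& mpmCandList) {
        if (prev_intra_luma_pred_flag)
            return mpmCandList[mpm_idx];   // IntraPredModeY = mpmCandList[mpm_idx]
        // REM case: map the remainder to a mode outside the MPM list by
        // skipping the MPM modes in ascending order (illustrative only).
        std::array<int, 6> sorted = mpmCandList;
        std::sort(sorted.begin(), sorted.end());
        int mode = remIntraPredMode;
        for (int m : sorted)
            if (mode >= m) ++mode;
        return mode;
    }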
  • FIG. 6 is a diagram for explaining the region A and the region B of the present invention.
  • Areas A and B are set in the picture.
  • Area A can be predicted only from area A; for references outside the area, processing such as padding is performed.
  • Area B can be predicted from the entire picture, including area A.
  • the prediction processing refers to intra prediction, inter prediction, loop filter processing, and the like.
  • In area A, since the encoding and decoding processes are closed within area A, area A alone can be decoded.
  • the area A is referred to as a restricted area (first area, control area, clean area, refreshed area, area A).
  • an area other than the restricted area is also referred to as a non-restricted area (second area, non-control area, dirty area, unrefreshed area, area B, outside the restricted area).
  • An area that is encoded/decoded using only intra prediction is a restricted area. A region that is encoded/decoded with reference to an already encoded/decoded restricted region in the same picture is also a restricted region, and a region that is encoded/decoded with reference to a restricted region in a reference picture is also a restricted region. That is, a restricted area is an area that is encoded and decoded with reference only to restricted areas.
  • The upper left position of the restricted area is denoted (xRA_st, yRA_st), the lower right position (xRA_en, yRA_en), and the size (wRA, hRA). Since the position and the size have the following relationship, one may be derived from the other:
  xRA_en = xRA_st + wRA - 1
  yRA_en = yRA_st + hRA - 1
  • The upper left position of the restricted reference area at time j is (xRA_st[j], yRA_st[j]), the lower right position is (xRA_en[j], yRA_en[j]), and the size is (wRA[j], hRA[j]).
  • Similarly, the upper left position of the restricted reference area of the reference picture Ref is (xRA_st[Ref], yRA_st[Ref]), the lower right position is (xRA_en[Ref], yRA_en[Ref]), and the size is (wRA[Ref], hRA[Ref]).
  • Whether the upper left coordinate (xPb, yPb) of the target block Pb is in the restricted area may be checked with a determination formula, denoted IsRA(xPb, yPb) below. When the width and height of the target block are bW and bH, a determination formula covering the entire block may be used instead; see the sketch below.
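  • The determination formulas themselves are not preserved in this text; the following is a minimal reconstruction that follows directly from the coordinate definitions above, and should be read as an assumption rather than the original expressions.

    // Hedged reconstruction of the restricted-area determination formula.
    bool IsRA(int x, int y,
              int xRA_st, int yRA_st, int xRA_en, int yRA_en) {
        return xRA_st <= x && x <= xRA_en &&
               yRA_st <= y && y <= yRA_en;
    }

    // Block-level variant: the whole block (xPb, yPb)..(xPb+bW-1, yPb+bH-1)
    // lies inside the restricted area.
    bool IsRA_Block(int xPb, int yPb, int bW, int bH,
                    int xRA_st, int yRA_st, int xRA_en, int yRA_en) {
        return IsRA(xPb, yPb, xRA_st, yRA_st, xRA_en, yRA_en) &&
               IsRA(xPb + bW - 1, yPb + bH - 1,
                    xRA_st, yRA_st, xRA_en, yRA_en);
    }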
  • the moving picture coding device and the moving picture decoding device in this specification perform the following operations.
  • FIG. 7 is a diagram showing a range in which a restricted region can be referred to in intra prediction, inter prediction, and a loop filter according to the present invention.
  • FIG. 7A shows a range in which the target block included in the restricted area can be referred to.
  • The already encoded/decoded region included in the restricted area of the same picture as the target block (the target picture) is the range that the target block can refer to in intra prediction, inter prediction, and the loop filter. In addition, the restricted area of a reference picture (the restricted reference area) is a range that the target block can refer to in inter prediction and the loop filter.
  • FIG. 7B shows a range in which the target block included in the unrestricted area can be referred to.
  • The already encoded/decoded region of the target picture is the range that the target block can refer to in intra prediction and inter prediction. Furthermore, all regions of the reference pictures can be referred to in inter prediction.
  • A target block included in the restricted area performs intra prediction referring only to the pixels of the restricted area in the target picture, or inter prediction referring to the restricted reference area of a reference picture. Similarly, the coding parameters of a target block included in the restricted area (for example, the intra prediction direction, motion vector, and reference picture index) are derived by referring to the coding parameters of the restricted area in the target picture or to the coding parameters of the restricted reference area of a reference picture. In the restricted area, loop filter processing is likewise performed referring only to the pixels of the restricted area in the target picture.
  • prediction parameters (intra prediction mode, motion vector) of a target block may be derived using prediction parameters of an adjacent region.
  • the following processing may be performed.
  • In intra prediction and inter prediction, if the target block is in the restricted area (IsRA(xPb, yPb) is true) and the reference position (xNbX, yNbX) of a block adjacent to the target block is in the unrestricted area (IsRA(xNbX, yNbX) is false), the value of the adjacent block is not used for deriving the prediction parameter. Conversely, if the target block is in the restricted area (IsRA(xPb, yPb) is true) and the reference position (xNbX, yNbX) of the adjacent block is in the restricted area (IsRA(xNbX, yNbX) is true), the prediction parameter at the position (xNbX, yNbX) is used for deriving the prediction parameter.
  • The same may be applied to loop filter processing. The restricted area determination may be incorporated into the overall availability determination in the same way as the out-of-screen determination and the parallel-processing-unit determination (slice boundaries, tile boundaries).
  • That is, availableNbX = 1 is set (the reference position is available) when the reference position (xNbX, yNbX) is within the screen and within the same parallel processing unit as the target block, and in addition either the target block is in the unrestricted area or the reference position is in the restricted area (IsRA(xNbX, yNbX) is true). Only when availableNbX is 1 is the prediction parameter at the reference position used for deriving the prediction parameter of the target block and for the loop filter.
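  • A compact sketch of this availability rule follows; since the original condition is garbled in this text, the exact combination of clauses is a reconstruction and should be treated as an assumption.

    // Hedged sketch of the neighbour availability rule (availableNbX).
    // isOnScreen / isSameParallelUnit stand in for the usual picture-
    // boundary and slice/tile-boundary checks.
    bool isAvailableNb(int xPb, int yPb, int xNbX, int yNbX,
                       bool isOnScreen, bool isSameParallelUnit,
                       int xRA_st, int yRA_st, int xRA_en, int yRA_en) {
        if (!isOnScreen || !isSameParallelUnit)
            return false;                          // availableNbX = 0
        bool targetInRA = IsRA(xPb, yPb, xRA_st, yRA_st, xRA_en, yRA_en);
        bool nbInRA     = IsRA(xNbX, yNbX, xRA_st, yRA_st, xRA_en, yRA_en);
        // A block in the restricted area must not reference the unrestricted area.
        return !targetInRA || nbInRA;              // availableNbX = 1 when true
    }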
  • The motion compensation unit determines whether a reference pixel is inside the restricted reference area using a determination formula; either the corner-coordinate form or the position-and-size form of the restricted reference area may be used.
  • In addition, the motion compensation unit may clip the reference pixel to a position inside the restricted area using the following equations:
  xRef = Clip3(xRA_st[j], xRA_en[j], xRef)
  yRef = Clip3(yRA_st[j], yRA_en[j], yRef)
  • Alternatively, the following derivation formulas may be used (see the sketch below):
  xRef = Clip3(xRA_st[j], xRA_st[j] + wRA[j] - 1, xRef)
  yRef = Clip3(yRA_st[j], yRA_st[j] + hRA[j] - 1, yRef)
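  • Using the Clip3 helper sketched earlier, the first pair of formulas transcribes directly to code:

    // Clip a reference sample position (xRef, yRef) into the restricted
    // reference area of reference picture j, as in the formulas above.
    void clipToRestrictedRefArea(int& xRef, int& yRef, int j,
                                 const int xRA_st[], const int yRA_st[],
                                 const int xRA_en[], const int yRA_en[]) {
        xRef = Clip3(xRA_st[j], xRA_en[j], xRef);
        yRef = Clip3(yRA_st[j], yRA_en[j], yRef);
    }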
  • The position of the restricted area is transmitted from the moving picture encoding device to the moving picture decoding device based on the stepwise refresh information described later. Alternatively, rather than deriving the position and size of the restricted area from time (e.g., POC), they may be set for the reference picture Ref in the reference memory after decoding of the target picture or at the start of decoding of the target picture. In this case, the position and size of the restricted area can be derived by specifying the reference picture Ref.
  • (Setting/updating the restricted reference area) The restricted area of a picture is set as the restricted reference area when that picture is used as a reference picture. In other words, when a picture is stored in the reference memory, its restricted reference area may be set.
  • FIG. 8 is a diagram illustrating updating of the restricted reference area of the reference picture in the reference memory.
  • The restricted reference area of the reference picture j in the reference memory is represented by the upper left coordinates (xRA_st[j], yRA_st[j]), the lower right coordinates (xRA_en[j], yRA_en[j]), and the size (wRA[j], hRA[j]); the restricted area of the target picture at time i is likewise represented by the upper left coordinates (xRA_st[i], yRA_st[i]), the lower right coordinates (xRA_en[i], yRA_en[i]), and the size (wRA[i], hRA[i]).
  • the restricted reference area when storing the reference picture i in the reference memory may be as follows.
  • Restricted reference area of reference picture j: upper left coordinates (xRA_st[j], yRA_st[j]), lower right coordinates (xRA_en[j], yRA_en[j]), size (wRA[j], hRA[j])
  • Restricted reference area of reference picture i: upper left coordinates (xRA_st[i], yRA_st[i]), lower right coordinates (xRA_en[i], yRA_en[i]), size (wRA[i], hRA[i])
  • Alternatively, the restricted reference area of a reference picture may be made smaller than the restricted area at the time the picture was decoded, for example as follows.
  • Restricted reference area of reference picture j: upper left coordinates (xRA_st[j], yRA_st[j]), lower right coordinates (xRA_en[j], yRA_en[j]), size (wRA[j], hRA[j])
  • Restricted reference area of reference picture i: upper left coordinates (xRA_st[i], yRA_st[i]), lower right coordinates (xRA_en[i] - wOVLP, yRA_en[i]), size (wRA[i] - wOVLP, hRA[i])
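  • A sketch of this update step, assuming a simple per-picture record (the struct layout is illustrative):

    // Illustrative record of a restricted (reference) area.
    struct RestrictedArea {
        int xSt, ySt;   // upper left
        int xEn, yEn;   // lower right
        int w, h;       // size
    };

    // When a picture is stored in the reference memory, its restricted area
    // becomes its restricted reference area; optionally shrink the right
    // edge by the overlap width wOVLP, as in the second variant above.
    RestrictedArea setRestrictedRefArea(const RestrictedArea& ra, int wOVLP) {
        RestrictedArea ref = ra;
        ref.xEn -= wOVLP;
        ref.w   -= wOVLP;
        return ref;
    }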
  • In a stepwise decoding refresh picture (Sequentially Decoder Refresh picture, SDR picture), all areas that can be referenced from the restricted area in the reference pictures (restricted reference areas) are cleared so that they cannot be referenced.
  • a flag indicating whether to set a restricted reference area may be set to 0, or the restricted reference area may be eliminated by the following equation.
  • The restricted area in a reference picture is also called a restricted reference area (first reference area, control reference area, clean reference area, refreshed reference area, reference area A). An area other than the restricted reference area is also referred to as a non-restricted reference area (second reference area, non-control reference area, dirty reference area, unrefreshed reference area, reference area B).
  • the outline of the refresh (stepwise refresh, stepwise decode refresh) of the present invention will be described.
  • The stepwise refresh of the present invention is a technology that transmits a restricted area containing intra prediction and, in subsequent pictures, expands the restricted area step by step to the entire screen, thereby gradually refreshing the whole picture. In this specification, the area newly added to the restricted area in a given picture is referred to as a new refresh area.
  • FIG. 9 is a diagram for explaining refresh (stepwise refresh) of the present invention.
  • In stepwise refresh, a restricted area (a new refresh area FRA) consisting of intra prediction is inserted in the picture serving as the refresh start point. The picture serving as the start point of the refresh is called an SDR picture (Sequentially Decoder Refresh picture; a random access point).
  • FIG. 9 (a) shows the restricted area at each time t.
  • First, in the SDR picture, a restricted area consisting entirely of intra prediction is decoded. At this time, all restricted reference areas of the reference pictures are cleared to make them unreferenceable.
  • FIG. 9B shows a restricted reference area on a reference picture.
  • the restricted reference area is indicated by a horizontal line area.
  • a configuration in which SDR pictures are periodically inserted is also possible. In this specification, an SDR picture is inserted for each SDR cycle SDRperiod.
  • FIG. 10 is a view clearly showing an area (new refresh area) to be newly added to the target picture in the restricted area.
  • the entire picture can be refreshed by adding a new refresh area FRA step by step and setting a new restricted area including the added FRA.
  • All or part of the new refresh area FRA may be an area consisting only of intra prediction (Intra Refresh Area, IRA).
  • the restricted area must not refer to a picture before the first picture in the SDR cycle.
  • FIG. 11(a) shows an example in which one picture is divided into numIR areas (refresh unit areas); in this example, numIR = 5.
  • FIG. 11(b) is a diagram showing an example in which the new refresh area FRA (the hatched area in the figure) added to the restricted area shifts from left to right; at each time step, the FRA shifts to the right.
  • The moving picture decoding device decodes the intra-predicted restricted area (FRA) in the SDR picture and generates an appropriate image by error concealment in the areas other than the restricted area. In the pictures following the SDR picture as well, it decodes the intra-predicted restricted area and the restricted area referring to the restricted reference areas of the reference pictures, and generates an appropriate image by error concealment in the other areas. By repeating this operation until the new restricted area (for example, the FRA) has moved from one end of the picture across the entire picture (over numIR pictures), the entire picture can be decoded correctly.
  • FIG. 12 is a diagram showing an FRA region.
  • FIG. 12 (a) shows a case of shifting from left to right.
  • In this case, the FRA is a rectangle with the upper left coordinate (xIR, yIR), width wIR, and height hPict. The width wIR of the FRA is an integer multiple of the minimum CU width minCU: wIR = minCU * a, where a is a positive integer.
  • FIG. 12 (b) shows a case of shifting from top to bottom.
  • In this case, the FRA is a rectangle with the upper left coordinate (xIR, yIR), height hIR, and width wPict. The height hIR of the FRA is an integer multiple of the minimum CU size minCU: hIR = minCU * a, where a is a positive integer. That is, the granularity of the width or height of the FRA is at least the minimum CU width or height minCU, and is set to an integer multiple of minCU.
  • FIG. 13 is a diagram illustrating the refresh direction.
  • In FIG. 13(a), the FRA is inserted in the vertical direction and shifted from left to right.
  • The upper left coordinates (xIR[t], yIR[t]) and lower right coordinates (xRA_en[t], yRA_en[t]) of the new restricted area of the picture at time t can be expressed by (Equation IR-1). (Equation IR-1) may also be expressed using the width wOVLP of the pixels derived with reference to pixel values or coding parameters outside the restricted area. Similarly, the upper left coordinates of the restricted area of the picture at time t can be expressed by (Equation IR-2), and (Equation IR-2) may also be expressed using the corresponding height hOVLP. A hedged reconstruction of these positions is sketched below.
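  • Equations IR-1 and IR-2 are not preserved in this text. For the left-to-right vertical refresh of FIG. 13(a), one plausible reconstruction, consistent with the overlap definition given later for seq_refresh_overlap, is the following sketch; the exact expressions are assumptions.

    // Hedged reconstruction of the FRA and restricted-area positions for
    // the left-to-right vertical refresh of FIG. 13(a).  t is the picture
    // index within the SDR cycle.
    struct Rect { int x0, y0, x1, y1; };

    Rect newRefreshArea(int t, int wIR, int wOVLP, int hPict) {
        int xIR = (wIR - wOVLP) * t;   // each FRA overlaps the previous one by wOVLP
        return { xIR, 0, xIR + wIR - 1, hPict - 1 };
    }

    Rect restrictedArea(int t, int wIR, int wOVLP, int hPict) {
        return { 0, 0, (wIR - wOVLP) * t + wIR - 1, hPict - 1 };
    }

    // Restricted reference area when the picture at time t is later used as
    // a reference: the overlap width is subtracted from the right edge.
    Rect restrictedRefArea(int t, int wIR, int wOVLP, int hPict) {
        Rect ra = restrictedArea(t, wIR, wOVLP, hPict);
        ra.x1 -= wOVLP;
        return ra;
    }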
  • FIG. 13D shows an example in which the FRA is inserted in the horizontal direction and shifted from the bottom to the top.
  • FIG. 14 is an example in which the FRA is composed of a plurality of regions.
  • In the preceding figures, a region k is a single rectangle on a picture, but region k may consist of a plurality of rectangles, as shown in FIG. 14.
  • the restricted area includes a plurality of areas of the picture.
  • The video decoding device 31 includes an entropy decoding unit 301, a parameter decoding unit (prediction image decoding device) 302, a loop filter 305, a reference picture memory 306, a prediction parameter memory 307, a prediction image generation unit (prediction image generation device) 308, an inverse quantization/inverse transform unit 311, and an addition unit 312. There is also a configuration in which the moving image decoding device 31 does not include the loop filter 305, in accordance with the moving image encoding device 11 described later.
  • The parameter decoding unit 302 includes a restricted region control unit 320, and the restricted region control unit 320 in turn includes a header decoding unit 3020, a CT information decoding unit 3021, and a CU decoding unit 3022 (prediction mode decoding unit), none of which are shown.
  • the CU decoding unit 3022 further includes a TU decoding unit 3024.
  • the header decoding unit 3020 decodes parameter set information such as VPS, SPS, and PPS from the encoded data.
  • the header decoding unit 3020 decodes a slice header (slice information) from the encoded data.
  • the CT information decoding unit 3021 decodes a CT from the encoded data.
  • the CU decoding unit 3022 decodes the CU from the encoded data.
  • the TU decoding unit 3024 decodes the QP update information (quantization correction value) and the quantization prediction error (residual_coding) from the encoded data.
  • the parameter decoding unit 302 includes an inter prediction parameter decoding unit 303 and an intra prediction parameter decoding unit 304 (not shown).
  • the prediction image generation unit 308 includes an inter prediction image generation unit 309 and an intra prediction image generation unit 310.
  • In the following, an example using CTUs and CUs as the processing units is described, but the processing is not limited to this example and may be performed in sub-CU units. Alternatively, the CTU, CU, and TU may be read as blocks and the sub-CU as a subblock, and the processing may be performed on a block or subblock basis.
  • the entropy decoding unit 301 performs entropy decoding on the encoded stream Te input from the outside, and separates and decodes individual codes (syntax elements).
  • Entropy coding includes a method of variable-length coding syntax elements using a context (probability model) adaptively selected according to the type of the syntax element and the surrounding conditions, and a method of variable-length coding syntax elements using a predetermined table or calculation formula.
  • a representative of the former is CABAC (Context Adaptive Binary Arithmetic Coding).
  • the separated codes include prediction information for generating a predicted image, prediction errors for generating a difference image, and the like.
  • Entropy decoding section 301 outputs a part of the separated code to parameter decoding section 302.
  • the part of the separated code is, for example, a prediction mode predMode, a merge flag merge_flag, a merge index merge_idx, an inter prediction identifier inter_pred_idc, a reference picture index refIdxLX, a prediction vector index mvp_LX_idx, and a difference vector mvdLX.
  • Control of which code is to be decoded is performed based on an instruction from the parameter decoding unit 302.
  • Entropy decoding section 301 outputs the quantized transform coefficient to inverse quantization / inverse transform section 311.
  • FIG. 15 is a diagram illustrating an example of the syntax notified to realize the gradual refresh.
  • FIG. 15(a) shows the syntax (stepwise refresh information) signalled in the sequence parameter set (SPS).
  • seq_refresh_enable_flag is a flag indicating whether stepwise refresh is used in the subsequent pictures.
  • The parameter decoding unit 302 decodes the stepwise refresh information. The moving picture decoding device decodes the moving picture using stepwise refresh when seq_refresh_enable_flag is 1, and does not use stepwise refresh when it is 0. When seq_refresh_enable_flag is 1, the parameter decoding unit 302 further decodes seq_refresh_mode, seq_refresh_direction, and seq_refresh_period.
  • seq_refresh_mode is a flag indicating the type of stepwise refresh shown in FIG. 13. When seq_refresh_mode is 0, the video decoding device inserts the new refresh area (FRA) into the picture in the vertical direction, as shown in FIGS. 13(a) and (b); when it is 1, the FRA is inserted into the picture in the horizontal direction, as shown in FIGS. 13(c) and (d).
  • seq_refresh_direction is a flag indicating the shift direction of the stepwise refresh shown in FIG. 13. If seq_refresh_direction is 0, the video decoding device shifts the FRA from left to right or from top to bottom of the picture, as shown in FIGS. 13(a) and (c); if it is 1, the FRA is shifted from right to left or from bottom to top, as shown in FIGS. 13(b) and (d).
  • seq_refresh_period is the SDR cycle SDRperiod and indicates the number of pictures between random access points.
  • FIG. 15(b) shows the syntax of the stepwise refresh information signalled in the picture parameter set (PPS). seq_refresh_position specifies the position of the FRA: when seq_refresh_mode is 0 (vertical rectangular restricted area), the x coordinate xIR is set to seq_refresh_position and the y coordinate yIR to 0; when seq_refresh_mode is 1 (horizontal rectangular restricted area), the x coordinate xIR is set to 0 and the y coordinate yIR to seq_refresh_position.
  • seq_refresh_size is the size of the FRA; when seq_refresh_mode is 0 it is set to the FRA width wIR, and when it is 1 it is set to the FRA height hIR.
  • seq_refresh_overlap is the size of the area in which the FRA of the previous picture in decoding order and the FRA of the current picture overlap; it specifies the overlap width wOVLP when seq_refresh_mode is 0, and the overlap height hOVLP when seq_refresh_mode is 1. Note that seq_refresh_overlap may be 0.
  • The new refresh area (FRA) added to the restricted area in the target picture at time t may overlap the FRA of the picture at time t-1; the width wOVLP denotes this overlapping area.
  • In this case, the end point of the restricted reference area when the picture at time t is later used as a reference picture may be derived by subtracting the overlap area, for example as xRA_en[t] - wOVLP when seq_refresh_mode is 0. The restricted area from which the overlap area is subtracted may be referred to as a filter restriction area (or a post-filter restriction area, an overlap restriction area, or a reference restriction area). The derivation when seq_refresh_mode is 1 is analogous, using the overlap height hOVLP.
  • the moving picture decoding apparatus can acquire the information about the stepwise refresh by decoding the syntax related to the stepwise refresh using the SPS or the PPS. Therefore, the moving picture decoding apparatus can appropriately decode the encoded data by using the stepwise refresh in the decoding of the moving picture at the time of random access.
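• A minimal C sketch of how the FRA rectangle and the filter restriction area might be derived from the PPS fields above. Since the derivation formula itself is not reproduced in this extraction, subtracting the overlap from the trailing edge of the restricted area is an assumption made here for illustration only.

    /* Sketch: derive the FRA rectangle and the filter (post-filter)
     * restriction area for one picture. wPict/hPict are the picture
     * dimensions; subtracting the overlap from the trailing edge is an
     * illustrative assumption. */
    typedef struct { int x, y, w, h; } Rect;

    static void derive_filter_restriction_area(int seq_refresh_mode,
                                               int seq_refresh_position,
                                               int seq_refresh_size,
                                               int seq_refresh_overlap,
                                               int wPict, int hPict,
                                               Rect *fra, Rect *filt)
    {
        if (seq_refresh_mode == 0) {        /* vertical rectangle */
            fra->x = seq_refresh_position;
            fra->y = 0;
            fra->w = seq_refresh_size;      /* wIR */
            fra->h = hPict;
            *filt = *fra;
            filt->w -= seq_refresh_overlap; /* remove overlap width wOVLP */
        } else {                            /* horizontal rectangle */
            fra->x = 0;
            fra->y = seq_refresh_position;
            fra->w = wPict;
            fra->h = seq_refresh_size;      /* hIR */
            *filt = *fra;
            filt->h -= seq_refresh_overlap; /* remove overlap height hOVLP */
        }
    }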
  • the loop filter 305 is a filter provided in the encoding loop, which removes block distortion and ringing distortion and improves image quality.
  • the loop filter 305 applies filters such as a deblocking filter 3051, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the decoded image of the CU generated by the adding unit 312.
• The deblocking filter 3051 determines that there is block distortion when the difference between the pixel values of pixels adjacent to each other across a block (CTU/CU/TU) boundary is within a predetermined range. It then smooths the image near the block boundary by performing deblocking processing on the block boundary in the decoded image before the deblocking filter.
  • the deblocking filter 3051 makes a filter on / off determination for each block. For example, if (Expression DB-1) is satisfied, the target block is filtered.
• FIG. 33(a) is an example in which block P and block Q are in contact in the vertical direction, and FIG. 33(b) is an example in which block P and block Q are in contact in the horizontal direction.
  • P2k, P1k, and P0k are pixels included in the block P among the blocks P and Q that are in contact with the boundary, and Q0k, Q1k, and Q2k are pixels included in the block Q.
  • k indicates the number of the pixel in the block boundary direction.
• β is a predetermined threshold.
  • the deblocking filter 3051 derives the filter strength with respect to the boundary between the block P and the block Q by (Formula DB-2). If all of (Expression DB-2) are satisfied, the deblocking filter 3051 sets the filter strength to "strong", and if not, sets the filter strength to "weak".
• These thresholds are values depending on the quantization parameter.
  • the deblocking filter 3051 refers to three pixels from the boundary indicated by the horizontal line in FIG. 33 (c) in order to determine the on / off of the filter and the strength of the filter in the two blocks bordering the boundary.
• When the area of block P that is filtered with reference to block Q is in the random access (restricted) area, decoding may refer to undetermined pixel values in the unrestricted area, and therefore decoding cannot be performed normally.
• This application provides methods, described below, for generating encoded data that can be randomly accessed and decoded normally.
  • FIG. 35 is a diagram illustrating a relationship between the restriction area and the deblocking filter.
  • the deblocking filter 3051 turns off the filter in a block adjacent to the boundary between the restricted area and the non-restricted area.
  • the deblocking filter 3051 determines whether or not the blocks P and Q sandwiching a block boundary (CU boundary, PU boundary, TU boundary) intersect a restricted area boundary.
• The upper-left x coordinates xCUP and xCUQ of blocks P and Q can be expressed by (Formula DB-4).
• xCUP = xRA_st + wRA - wCU (Formula DB-4)
• xCUQ = xRA_st + wRA
  • the deblocking filter 3051 may use the following determination formula using the coordinates (xP, yP) of the pixels of the block P and the coordinates (xQ, yQ) of the pixels of the block Q.
  • the deblocking filter 3051 may turn off the filter for the blocks P and Q in which the x coordinate at the upper left of the block satisfies (Expression DB-4).
  • FIG. 36 (a) is a flowchart showing the flow of filter processing of the deblocking filter 3051 of the present embodiment.
  • the deblocking filter 3051 checks, based on the upper left coordinates of the target blocks P and Q, whether or not these blocks are adjacent to each other across the restricted area boundary.
• The deblocking filter 3051 turns off the deblocking filter at the boundary of the restricted area, so that the pixel values in the restricted area can be derived without using the pixel values of the non-restricted area. Therefore, by turning off the deblocking filter only at the boundary of the restricted area, low-delay decoding can be realized while realizing normal decoding in random access.
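• A minimal sketch of the boundary check of (Formula DB-4) in C: the filter is disabled for the block pair whose vertical edge coincides with the right edge of the restricted area. Only vertical edges are treated here, which is a simplification.

    #include <stdbool.h>

    /* True if the vertical edge between blocks P and Q (upper-left x
     * coordinates xP and xQ, CU width wCU) straddles the boundary of a
     * restricted area starting at xRA_st with width wRA, i.e. the blocks
     * satisfy (Formula DB-4); the deblocking filter is then turned off. */
    static bool deblock_off_at_restricted_boundary(int xP, int xQ,
                                                   int xRA_st, int wRA,
                                                   int wCU)
    {
        int xCUP = xRA_st + wRA - wCU; /* last CU column inside the area */
        int xCUQ = xRA_st + wRA;       /* first CU column outside it */
        return xP == xCUP && xQ == xCUQ;
    }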
• The deblocking filter 3051 turns off the filter only for the block P in the restricted area among the blocks adjacent to each other across the boundary between the restricted area and the non-restricted area. That is, the filter is turned off when the x coordinate at the upper left of block P satisfies xCUP of (Formula DB-4).
  • FIG. 36 (b) is a flowchart showing the flow of the filter processing of this embodiment.
  • Steps S2202 and S2206 are the same processing as in FIG. 36 (a), and a description thereof will be omitted.
• The deblocking filter 3051 performs the on/off determination of the filter for block Q based on (Formula DB-1); when the filter is on, it derives the filter strength using (Formula DB-2) and filters only the pixels Q0k, Q1k, and Q2k of block Q using (Formula DB-3).
• In this way, at the boundary of the restricted area, the deblocking filter 3051 turns off the deblocking filter only for the blocks in the restricted area that are necessary to realize the random access function, and turns on the deblocking filter for the blocks not required for the random access function.
• This makes it possible to derive the pixel values of the area referred to as the restricted area in subsequent encoding and decoding processes without using the pixel values of the unrestricted area. Therefore, by turning off the deblocking filter only at the boundary of the restricted area, low-delay decoding can be realized while realizing normal decoding in random access.
  • the deblocking filter 3051 forcibly turns off the filter in the filter processing between the restricted area (refresh area, first area) and the non-restricted area (unrefreshed area, second area).
• In normal deblocking filter processing, the deblocking filter 3051 refers to the information of block Q in deriving the on/off of the filter, in deriving the filter strength, and in the filtering itself.
• That is, both blocks sandwiching the restricted area boundary refer to each other in the filter processing and correct their pixel values; therefore, if the information of block Q cannot be obtained from block P, the decoding process cannot be performed normally.
  • the overlap width wOVLP may be the CU width wCU, but the pixels corrected by filtering are part of the block. Therefore, the width wOVLP of the overlap region may be derived from the number of pixels numDFPEL corrected by filtering and the minimum value minC of the CU width as shown in (Equation DB-6).
• xRA_st[k+1] = xRA_st[k] + wRA - wOVLP (Formula DB-7)
• yRA_st[k+1] = yRA_st[k]
• Similarly, when the new refresh area (for example, the intra refresh area) is inserted horizontally, the height hOVLP of the overlap area and the upper left coordinate (xRA_st[k+1], yRA_st[k+1]) of the restricted reference area at time t = k+1 may be derived.
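• The recursion of (Formula DB-7), written out in C; starting from xRA_st[0] = 0 for a left-to-right refresh is an assumption for illustration.

    /* Per (Formula DB-7), the restricted area advances by wRA - wOVLP per
     * picture; the y coordinate is unchanged. Starting from xRA_st[0] = 0
     * (left-to-right refresh) is an assumption. */
    static int next_xRA_st(int xRA_st_k, int wRA, int wOVLP)
    {
        return xRA_st_k + wRA - wOVLP;
    }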
• The stepwise refresh information is decoded, and it is determined whether the target block is in the restricted area.
• If it is, prediction parameters for intra prediction and inter prediction are derived using only the prediction parameters of the restricted area. Further, intra prediction is performed using only the pixels in the restricted area, and inter prediction is performed using only the pixels in the restricted reference area of the reference image.
• The filtering process may be performed by applying the restriction of the restricted area only in the range of the restricted area excluding the part corresponding to the overlap area.
• Alternatively, a CU of the minimum CU size, of the CU size including the number of pixels required for the filter, or of the CU size specified by the syntax, located on the non-restricted area (unrefreshed area) side or on the restricted area side of the boundary of the restricted area (refresh area), is encoded in the same manner as the restricted area.
• In this case, the overlap width wOVLP is 8 from (Formula DB-6).
  • the overlap width wOVLP is 4.
  • the deblocking filter 3051 of the first modification switches the type of filter (long filter, short filter) depending on whether or not the block is in contact with the boundary of the restricted area. Whether or not the block is in contact with the restricted area boundary is determined by (Equation DF-9).
  • the upper left coordinate of the restricted area is (xRA_st [k], yRA_st [k]), the width of the restricted area is wRA, the upper left coordinate of the block is (xPb, yPb), and the width of the block is wPb.
  • the overlap width can be reduced, and a decrease in coding efficiency can be suppressed.
  • it is desirable that the number of taps of the short filter is equal to or smaller than the minimum size of the CU.
  • the inter prediction parameter decoding unit 303 decodes the inter prediction parameters based on the code input from the entropy decoding unit 301, with reference to the prediction parameters stored in the prediction parameter memory 307. Further, the inter prediction parameter decoding unit 303 outputs the decoded inter prediction parameter to the prediction image generation unit 308, and stores it in the prediction parameter memory 307.
  • FIG. 19A is a schematic diagram illustrating a configuration of the inter prediction parameter decoding unit 303 according to the present embodiment.
  • the inter prediction parameter decoding unit 303 includes a parameter decoding control unit 3031, an AMVP prediction parameter deriving unit 3032, an adding unit 3038, a merge prediction parameter deriving unit 3036, and a sub-block prediction parameter deriving unit 3037.
  • the AMVP prediction parameter derivation unit 3032, the merge prediction parameter derivation unit 3036, and the sub-block prediction parameter derivation unit 3037 may be collectively referred to as a motion vector derivation unit (motion vector derivation device).
• The parameter decoding control unit 3031 instructs the entropy decoding unit 301 to decode syntax elements related to inter prediction, and extracts syntax elements included in the encoded data, for example, the merge flag merge_flag, the merge index merge_idx, the inter prediction identifier inter_pred_idc, the reference picture index refIdxLX, the prediction vector index mvp_LX_idx, and the difference vector mvdLX.
  • the parameter decoding control unit 3031 extracts a merge flag merge_flag.
• When it is stated that the parameter decoding control unit 3031 extracts a certain syntax element, this means that it instructs the entropy decoding unit 301 to decode that syntax element and reads the corresponding syntax element from the encoded data.
  • the parameter decoding control unit 3031 extracts the AMVP prediction parameter from the encoded data using the entropy decoding unit 301.
  • the AMVP prediction parameters include, for example, an inter prediction identifier inter_pred_idc, a reference picture index refIdxLX, a prediction vector index mvp_LX_idx, and a difference vector mvdLX.
  • the AMVP prediction parameter deriving unit 3032 derives a prediction vector mvpLX from a prediction vector index mvp_LX_idx.
  • Parameter decoding control section 3031 outputs difference vector mvdLX to adding section 3038.
  • the adding unit 3038 adds the prediction vector mvpLX and the difference vector mvdLX to derive a motion vector.
  • the parameter decoding control unit 3031 extracts a merge index merge_idx as a prediction parameter related to merge prediction.
  • Parameter decoding control section 3031 outputs the extracted merge index merge_idx to merge prediction parameter deriving section 3036.
  • FIG. 20 (a) is a schematic diagram showing the configuration of the merge prediction parameter deriving unit 3036 according to the present embodiment.
  • the merge prediction parameter derivation unit 3036 includes a merge candidate derivation unit 30361, a merge candidate selection unit 30362, and a merge candidate storage unit 30363.
  • the merge candidate storage unit 30363 stores the merge candidates input from the merge candidate derivation unit 30361.
  • the merge candidate includes a prediction list use flag predFlagLX, a motion vector mvLX, and a reference picture index refIdxLX. An index is assigned to the merge candidate stored in the merge candidate storage unit 30363 according to a predetermined rule.
  • the merge candidate deriving unit 30361 derives a merge candidate using the motion vector of the decoded adjacent block and the reference picture index refIdxLX as they are.
  • the merge candidate deriving unit 30361 may apply affine prediction to a space merge candidate derivation process, a temporal merge candidate derivation process, a combined merge candidate derivation process, and a zero merge candidate derivation process described below.
• The merge candidate deriving unit 30361 reads out the prediction parameters (prediction list use flag predFlagLX, motion vector mvLX, and reference picture index refIdxLX) stored in the prediction parameter memory 307 according to a predetermined rule and sets them as merge candidates.
• The merge candidates are, for example, the prediction parameters of neighboring blocks within a predetermined range from the target block (for example, blocks in contact with the left end L, the lower left end BL, the upper left end AL, the upper end A, and the upper right end AR of the target block, respectively).
  • Each merge candidate is called L, BL, AL, A, AR.
• When the target block is in the restricted area (IsRA(xPb, yPb) is true) and the reference position (xNbX, yNbX) of an adjacent block of the target block is in the non-restricted area (IsRA(xNbX, yNbX) is false), the merge candidate deriving unit 30361 does not use the value of the adjacent block for deriving the merge candidate.
• When the target block is in the restricted area (IsRA(xPb, yPb) is true), the reference position (xNbX, yNbX) of a block adjacent to the target block is in the restricted area (IsRA(xNbX, yNbX) is true), and that block is coded by inter prediction, the merge candidate deriving unit 30361 uses the position (xNbX, yNbX) for deriving a merge candidate.
• For example, when the target block is in the restricted area (IsRA(xPb, yPb) is true) and the upper right adjacent position B0 of the target block is also in the restricted area, B0 is treated as available (availableB0 is set to 1).
• When the target block is in the restricted area (IsRA(xPb, yPb) is true) and the left adjacent positions A0 (xPb-1, yPb+hPb), A1 (xPb-1, yPb+hPb-1), and B2 (xPb-1, yPb-1) of the target block are in the restricted area, the values of the left adjacent positions A0, A1, and B2 are used to derive the merge candidates.
• The merge candidate deriving unit 30361 reads, from the prediction parameter memory 307, the prediction parameters of the block C in the reference image that includes the lower right position CBR of the target block or the coordinates of its center, stores them in the merge candidate list mergeCandList[], and adds the motion vector of the block C to the candidates.
  • the method of specifying the reference image may be, for example, the reference picture index refIdxLX specified in the slice header, or may be specified using the smallest reference picture index refIdxLX of the adjacent block.
  • the merging candidate deriving unit 30361 may derive the position (xColCtr, yColCtr) of the block C and the position (xColCBr, yColCBr) of the block CBR using (formula MRG-1).
  • (xPb, yPb) is the upper left coordinate of the target block
  • (wPb, hPb) is the width and height of the target block.
  • the merge candidate derived by the merge candidate derivation unit 30361 is stored in the merge candidate storage unit 30363.
• The order of storage in the merge candidate list mergeCandList[] is, for example, the spatial merge candidates followed by the temporal merge candidate, that is, {L, A, AR, BL, AL, COL, COMB0, ...}. Note that reference blocks that cannot be used (for example, blocks coded by intra prediction) are not stored in the merge candidate list.
• The merge candidate selection unit 30362 selects, from among the merge candidates included in the merge candidate list stored in the merge candidate storage unit 30363, the merge candidate mergeCandList[merge_idx] to which the merge index merge_idx input from the parameter decoding control unit 3031 is assigned, as the inter prediction parameter of the target block.
  • the merge candidate selection unit 30362 stores the selected merge candidate in the prediction parameter memory 307 and outputs the selected merge candidate to the prediction image generation unit 308.
  • FIG. 20 (b) is a schematic diagram showing a configuration of the AMVP prediction parameter deriving unit 3032 according to the present embodiment.
  • the AMVP prediction parameter deriving unit 3032 includes a vector candidate deriving unit 3033, a vector candidate selecting unit 3034, and a vector candidate storage unit 3035.
• The vector candidate derivation unit 3033 derives prediction vector candidates from the motion vectors mvLX of decoded adjacent blocks stored in the prediction parameter memory 307 according to a predetermined rule, and stores them in the prediction vector candidate list mvpListLX[] of the vector candidate storage unit 3035.
  • the vector candidate selection unit 3034 selects the motion vector mvpListLX [mvp_LX_idx] indicated by the prediction vector index mvp_LX_idx from the prediction vector candidates of the prediction vector candidate list mvpListLX [] as the prediction vector mvpLX.
  • the vector candidate selection unit 3034 outputs the selected prediction vector mvpLX to the addition unit 3038.
  • the prediction vector candidate may be derived by scaling a motion vector of a decoded adjacent block in a predetermined range from the target block by a method described later.
• The adjacent blocks include blocks spatially adjacent to the target block (for example, the left block and the upper block) and regions temporally adjacent to the target block (for example, regions obtained from the prediction parameters of a block that includes the same position as the target block but has a different display time).
  • Addition unit 3038 calculates motion vector mvLX by adding prediction vector mvpLX input from AMVP prediction parameter derivation unit 3032 and difference vector mvdLX input from parameter decoding control unit 3031.
  • the addition unit 3038 outputs the calculated motion vector mvLX to the prediction image generation unit 308 and the prediction parameter memory 307.
• Mv: reference motion vector
• PicMv: picture including the block with Mv
• PicMvRef: reference picture of Mv
• sMv: scaled motion vector
• CurPic: picture including the block with sMv
• CurPicRef: reference picture referenced by sMv
  • sMv may be derived by a derivation function MvScale (Mv, PicMv, PicMvRef, CurPic, CurPicRef) shown in (Expression INTERP-1).
  • DiffPicOrderCnt (Pic1, Pic2) is a function that returns the difference between the time information (for example, POC) of Pic1 and Pic2.
• Alternatively, the scaling function MvScale(Mv, PicMv, PicMvRef, CurPic, CurPicRef) may be the following equation.
• MvScale(Mv, PicMv, PicMvRef, CurPic, CurPicRef) = Mv * DiffPicOrderCnt(CurPic, CurPicRef) / DiffPicOrderCnt(PicMv, PicMvRef)
• That is, Mv may be scaled according to the ratio of the difference between the time information of CurPic and CurPicRef and the difference between the time information of PicMv and PicMvRef.
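• The ratio form of the scaling function written out in C; the POCs stand in for the pictures themselves, and plain integer division is a simplification of the fixed-point derivation of (Expression INTERP-1).

    /* Motion vector scaling by the ratio of POC distances. td is the
     * temporal distance of the source vector, tb that of the target;
     * td must be non-zero. Integer division simplifies the fixed-point
     * form of (Expression INTERP-1). */
    typedef struct { int x, y; } Mv;

    static int diff_poc(int poc1, int poc2) { return poc1 - poc2; }

    static Mv mv_scale(Mv mv, int pocPicMv, int pocPicMvRef,
                       int pocCurPic, int pocCurPicRef)
    {
        int td = diff_poc(pocPicMv, pocPicMvRef);
        int tb = diff_poc(pocCurPic, pocCurPicRef);
        Mv sMv = { mv.x * tb / td, mv.y * tb / td };
        return sMv;
    }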
  • the intra prediction parameter decoding unit 304 decodes an intra prediction parameter, for example, an intra prediction mode IntraPredMode, with reference to the prediction parameter stored in the prediction parameter memory 307 based on the code input from the entropy decoding unit 301.
  • the intra prediction parameter decoding unit 304 outputs the decoded intra prediction parameters to the prediction image generation unit 308, and stores the decoded intra prediction parameters in the prediction parameter memory 307.
  • the intra prediction parameter decoding unit 304 may derive different intra prediction modes for luminance and chrominance.
  • FIG. 21 is a schematic diagram showing a configuration of the intra prediction parameter decoding unit 304 of the parameter decoding unit 302.
  • the intra prediction parameter decoding unit 304 includes a parameter decoding control unit 3041, a luminance intra prediction parameter decoding unit 3042, and a chrominance intra prediction parameter decoding unit 3043.
  • the parameter decoding control unit 3041 instructs the entropy decoding unit 301 to decode a syntax element, and receives the syntax element from the entropy decoding unit 301. If prev_intra_luma_pred_flag is 1, the parameter decoding control unit 3041 outputs mpm_idx to the MPM parameter decoding unit 30422 in the luminance intra prediction parameter decoding unit 3042. When prev_intra_luma_pred_flag is 0, parameter decoding control section 3041 outputs rem_selected_mode_flag, rem_selected_mode, and rem_non_selected_mode to non-MPM parameter decoding section 30423 of luminance intra prediction parameter decoding section 3042. Also, the parameter decoding control unit 3041 outputs the syntax element of the intra prediction parameter of the chrominance to the chrominance intra prediction parameter decoding unit 3043.
  • the luminance intra prediction parameter decoding unit 3042 includes an MPM candidate list deriving unit 30421, an MPM parameter decoding unit 30422, and a non-MPM parameter decoding unit 30423 (decoding unit, deriving unit).
  • the MPM parameter decoding unit 30422 derives the luminance prediction mode IntraPredModeY with reference to the MPM candidate list mpmCandList [] and mpm_idx derived by the MPM candidate list derivation unit 30421, and outputs the luminance prediction mode IntraPredModeY.
  • the non-MPM parameter decoding unit 30423 derives RemIntraPredMode from the MPM candidate list mpmCandList [] and rem_selected_mode_flag, rem_selected_mode, and rem_non_selected_mode, and outputs the luminance prediction mode IntraPredModeY to the intra prediction image generation unit 310.
  • the chrominance intra prediction parameter decoding unit 3043 derives the chrominance prediction mode IntraPredModeC from the syntax element of the chrominance intra prediction parameter, and outputs the same to the intra prediction image generation unit 310.
• Otherwise, pixels in the unrestricted reference area may be referred to in the reference pixel filter or the predicted image generation processing described below, so constraints are required.
  • the reference picture memory 306 stores the decoded image of the CU generated by the adding unit 312 at a position predetermined for each of the target picture and the target CU.
  • the prediction parameter memory 307 stores the prediction parameter at a predetermined position for each CTU or CU to be decoded. Specifically, the prediction parameter memory 307 stores the parameters decoded by the parameter decoding unit 302, the prediction mode predMode separated by the entropy decoding unit 301, and the like.
  • the prediction mode predMode, prediction parameters, and the like are input to the prediction image generation unit 308. Further, the predicted image generation unit 308 reads a reference picture from the reference picture memory 306. The prediction image generation unit 308 generates a prediction image of a block or a sub-block in the prediction mode (intra prediction, inter prediction) indicated by the prediction mode predMode, using the prediction parameters and the read reference picture (reference picture block).
  • the reference picture block is a set of pixels on the reference picture (referred to as a block because it is usually rectangular), and is a region referred to for generating a predicted image.
• The inter prediction image generation unit 309 generates a prediction image of a block or sub-block by inter prediction, using the inter prediction parameters input from the inter prediction parameter decoding unit 303 and the reference picture.
  • FIG. 22 is a schematic diagram showing a configuration of the inter prediction image generation unit 309 included in the prediction image generation unit 308 according to the present embodiment.
  • the inter prediction image generation unit 309 includes a motion compensation unit (prediction image generation device) 3091 and a weight prediction unit 3094.
• Based on the inter prediction parameters (prediction list use flag predFlagLX, reference picture index refIdxLX, motion vector mvLX) input from the inter prediction parameter decoding unit 303, the motion compensation unit 3091 (interpolated image generation unit) generates an interpolated image (motion-compensated image) by reading the block at the position shifted by the motion vector mvLX from the position of the target block in the reference picture RefLX specified by the reference picture index refIdxLX.
• When the motion vector does not have integer precision, a filter called a motion compensation filter for generating pixels at fractional positions is applied to generate the motion-compensated image.
  • the motion compensation unit 3091 derives an integer position (xInt, yInt) and a phase (xFrac, yFrac) corresponding to the coordinates (x, y) in the predicted block by the following formula.
  • the motion compensation unit 3091 derives a temporary image temp [] [] by performing horizontal interpolation processing on the reference picture refImg using an interpolation filter.
  • shift1 is a normalization parameter for adjusting the value range
• offset1 = 1 << (shift1 - 1).
  • the motion compensation unit 3091 derives an interpolated image Pred [] [] from the temporary image temp [] [] by performing vertical interpolation processing.
  • shift2 is a normalization parameter for adjusting the value range
• offset2 = 1 << (shift2 - 1).
• Pred[x][y] = (Σ mcFilter[yFrac][k] * temp[x][y + k - NTAP/2 + 1] + offset2) >> shift2, where the sum Σ is taken over the filter taps k.
• In bi-prediction, the above Pred[][] is derived for each of the L0 list and the L1 list (referred to as interpolation images PredL0[][] and PredL1[][]), and the interpolation image Pred[][] is generated from the interpolation images PredL0[][] and PredL1[][].
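• A sketch of the two-stage separable interpolation in C: a horizontal pass producing temp[][] followed by the vertical pass of the Pred[][] formula above. The coefficient table mcFilter and the padded source region are supplied by the caller; clipping to the pixel bit depth is omitted.

    #include <stdint.h>

    #define NTAP 8 /* number of filter taps (assumed) */

    /* ref points at the integer sample position of the block inside a
     * frame with stride refStride; the caller guarantees NTAP/2 pixels of
     * margin around the block. temp must hold w * (h + NTAP - 1) samples.
     * offset1/offset2 follow the text: offsetN = 1 << (shiftN - 1). */
    static void mc_interp(const int16_t mcFilter[][NTAP],
                          const uint8_t *ref, int refStride,
                          int xFrac, int yFrac, int w, int h,
                          int shift1, int shift2,
                          int16_t *temp, int16_t *pred)
    {
        const int offset1 = 1 << (shift1 - 1);
        const int offset2 = 1 << (shift2 - 1);

        /* Horizontal pass over h + NTAP - 1 rows. */
        for (int y = -NTAP / 2 + 1; y < h + NTAP / 2; y++)
            for (int x = 0; x < w; x++) {
                int sum = 0;
                for (int k = 0; k < NTAP; k++)
                    sum += mcFilter[xFrac][k] *
                           ref[y * refStride + x + k - NTAP / 2 + 1];
                temp[(y + NTAP / 2 - 1) * w + x] =
                    (int16_t)((sum + offset1) >> shift1);
            }

        /* Vertical pass: Pred[x][y] from temp[][] as in the formula. */
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int sum = 0;
                for (int k = 0; k < NTAP; k++)
                    sum += mcFilter[yFrac][k] * temp[(y + k) * w + x];
                pred[y * w + x] = (int16_t)((sum + offset2) >> shift2);
            }
    }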
  • the weight prediction unit 3094 generates a prediction image of the block by multiplying the motion compensation image PredLX by a weight coefficient.
• When one of predFlagL0 and predFlagL1 is 1 (uni-prediction) and weight prediction is not used, the processing of the following equations, which adjusts the motion-compensated image PredLX (LX is L0 or L1) to the pixel bit depth bitDepth, is performed.
• Pred[x][y] = Clip3(0, (1 << bitDepth) - 1, (PredLX[x][y] + offset1) >> shift1)
• shift1 = 14 - bitDepth
• offset1 = 1 << (shift1 - 1)
• When both predFlagL0 and predFlagL1 are 1 (bi-prediction) and weight prediction is not used, the processing of the following equations is performed.
• Pred[x][y] = Clip3(0, (1 << bitDepth) - 1, (PredL0[x][y] + PredL1[x][y] + offset2) >> shift2)
• shift2 = 15 - bitDepth
• offset2 = 1 << (shift2 - 1)
• Further, when uni-prediction is performed with weight prediction, the weight prediction unit 3094 derives a weight prediction coefficient w0 and an offset o0 from the encoded data, and performs the processing of the following equation.
• Pred[x][y] = Clip3(0, (1 << bitDepth) - 1, ((PredLX[x][y] * w0 + 2^(log2WD - 1)) >> log2WD) + o0)
  • log2WD is a variable indicating a predetermined shift amount.
• When bi-prediction is performed with weight prediction, the weight prediction unit 3094 derives weight prediction coefficients w0, w1, o0, and o1 from the encoded data, and performs the processing of the following equation.
• Pred[x][y] = Clip3(0, (1 << bitDepth) - 1, (PredL0[x][y] * w0 + PredL1[x][y] * w1 + ((o0 + o1 + 1) << log2WD)) >> (log2WD + 1))
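• The default and weighted bi-prediction combinations above, condensed into C helpers; Clip3 and the shift/offset definitions follow the text.

    static int clip3(int lo, int hi, int v)
    {
        return v < lo ? lo : v > hi ? hi : v;
    }

    /* Default bi-prediction: shift2 = 15 - bitDepth,
     * offset2 = 1 << (shift2 - 1). */
    static int bipred_default(int predL0, int predL1, int bitDepth)
    {
        int shift2 = 15 - bitDepth;
        int offset2 = 1 << (shift2 - 1);
        return clip3(0, (1 << bitDepth) - 1,
                     (predL0 + predL1 + offset2) >> shift2);
    }

    /* Weighted bi-prediction with coefficients w0, w1 and offsets o0, o1. */
    static int bipred_weighted(int predL0, int predL1, int w0, int w1,
                               int o0, int o1, int log2WD, int bitDepth)
    {
        return clip3(0, (1 << bitDepth) - 1,
                     (predL0 * w0 + predL1 * w1 +
                      ((o0 + o1 + 1) << log2WD)) >> (log2WD + 1));
    }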
  • the inter prediction image generation unit 309 outputs the generated prediction image of the block to the addition unit 312.
• The boundary padding is realized by using, as the pixel value of the reference pixel position (xIntL + i, yIntL + j), the pixel value refImg[xRef + i][yRef + j] of the following position (xRef + i, yRef + j). That is, it is realized by clipping the reference position of the pixel used for generating the interpolated image at the position of the boundary pixel of the restricted reference area of the reference picture (for example, at the left end, upper end, lower end, or right end).
  • the motion compensation unit 3091 performs the following boundary padding processing. That is, when the X coordinate xIntL + i of the reference position is smaller than the left end of the restricted reference area, the pixel is replaced with the position xRA_st of the left end pixel of the restricted reference area. Similarly, when the X coordinate xIntL + i of the reference position is larger than the right end of the restricted reference area, the X coordinate is replaced with the position xRA_en of the right end pixel of the restricted reference area.
  • the Y coordinate yIntL + j is smaller than the upper end of the restricted reference area, it is replaced with the upper pixel position yRA_st, and if the Y coordinate yIntL + j is larger than the lower end of the restricted reference area, it is replaced with the lower pixel position yRA_en.
• xRef + i = Clip3(xRA_st, xRA_en, xIntL + i) (Expression INTERP-2)
• yRef + j = Clip3(yRA_st, yRA_en, yIntL + j)
  • the motion compensation unit 3091 derives the coordinates of the reference pixel using (Equation INTERP-3).
• xRef + i = Clip3(0, wRA[k] - 1, xIntL + i) (Expression INTERP-3)
• yRef + j = Clip3(0, hPict - 1, yIntL + j)
  • hPict is the height of the picture.
• The coordinates of the reference pixel may also be derived using (Expression INTERP-3'), where (xPb, yPb) is the upper left coordinate of the target block relative to the upper left coordinate of the picture and (mvLX[0], mvLX[1]) is the motion vector.
  • MVBIT indicates that the accuracy of the motion vector is 1 / MVBIT pel.
• xRef + i = Clip3(wRA[k], wPict - 1, xIntL + i) (Expression INTERP-2')
• yRef + j = Clip3(0, hPict - 1, yIntL + j)
• xRA_st = wRA[k]
• xRA_en = wPict - 1
• xRef + i = Clip3(0, wPict - 1, xIntL + i) (Expression INTERP-4)
• yRef + j = Clip3(0, yRA_en, yIntL + j)
  • wPict is the width of the picture.
• As described above, padding is performed using the boundary pixels of the restricted reference area, so that even when a motion vector in the inter prediction of the restricted area points outside the restricted reference area of the reference picture (into the unrestricted reference area), the reference pixels are replaced using pixel values in the restricted reference area, and normal decoding processing after the random access point can be realized even in inter prediction.
• In other words, in the inter prediction of the restricted area, the motion compensation unit 3091 replaces pixels in the non-restricted reference area with pixels in the restricted reference area (padding) in the image generation processing when a motion vector points outside the restricted reference area.
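• The reference-position clamping of (Expression INTERP-2) as a small C helper; the restricted reference area is given by its edge pixel coordinates.

    /* Boundary padding per (Expression INTERP-2): the reference position
     * used for interpolation is clamped to the boundary pixels of the
     * restricted reference area [xRA_st, xRA_en] x [yRA_st, yRA_en]. */
    static int clip3(int lo, int hi, int v)
    {
        return v < lo ? lo : v > hi ? hi : v;
    }

    static void pad_ref_pos(int xIntL, int yIntL, int i, int j,
                            int xRA_st, int xRA_en, int yRA_st, int yRA_en,
                            int *xRef_i, int *yRef_j)
    {
        *xRef_i = clip3(xRA_st, xRA_en, xIntL + i);
        *yRef_j = clip3(yRA_st, yRA_en, yIntL + j);
    }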
• Motion vector restriction of the restricted reference area: another method for generating a predicted image that refers only to pixels in the restricted reference area when referring to a reference picture is boundary motion vector restriction.
  • the motion vector is limited (clipped) so that the position (xIntL + i, yIntL + j) of the reference pixel falls within the restricted reference area.
• Let (xPb, yPb) be the upper left coordinates of the target block at time t = k, and let the position of the restricted reference area of the reference picture at time j be represented by (xRA_st[j], yRA_st[j]).
• The motion vector mvLX of the block is input, and the restricted motion vector mvLX is output.
• The limitation of the motion vector mvLX is performed by a clipping function ClipRA() built from Clip3(), for example as in (Expression INTERP-6) below.
• mvLX[0] = Clip3((NTAP2 - 1 - xPb) << log2(MVBIT), (wRA[k] - NTAP2 - xPb - wPb) << log2(MVBIT), mvLX[0]) (Expression INTERP-6)
• mvLX[1] = Clip3((NTAP2 - 1 - yPb) << log2(MVBIT), (hPict - NTAP2 + 1 - yPb - hPb) << log2(MVBIT), mvLX[1])
• NTAP is the number of taps of the filter used for generating the interpolation image, and NTAP2 = NTAP / 2.
• mvLX[0] = Clip3((NTAP2 - 1 - xPb) << log2(MVBIT), (wPict - NTAP2 + 1 - xPb - wPb + 1) << log2(MVBIT), mvLX[0]) (Expression INTERP-8)
• mvLX[1] = Clip3((NTAP2 - 1 - yPb) << log2(MVBIT), (hRA[k] - NTAP2 + 1 - yPb - hPb + 1) << log2(MVBIT), mvLX[1])
• hRA[k] = (hIR - hOVLP) * (k + 1)
• By limiting the motion vector in this way, the motion vector always points within the restricted reference area of the reference picture. Therefore, even in the inter prediction of the restricted area, normal decoding processing after the random access point can be realized.
• That is, when performing inter prediction, the inter prediction parameter decoding unit 303 and the motion compensation unit 3091 clip a vector pointing into the unrestricted reference area to within the boundary of the restricted reference area.
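• (Expression INTERP-6) in C form for a left-to-right refresh: the motion vector, in 1/MVBIT-pel units, is clipped so that the NTAP-tap interpolation never reads outside the restricted reference area of width wRA[k].

    static int clip3(int lo, int hi, int v)
    {
        return v < lo ? lo : v > hi ? hi : v;
    }

    /* Clip mv (1/MVBIT-pel units; log2MVBIT = log2(MVBIT)) per
     * (Expression INTERP-6) so that NTAP-tap interpolation for the block
     * at (xPb, yPb) of size wPb x hPb stays inside the restricted
     * reference area of width wRA_k; hPict is the picture height. */
    static void clip_mv_to_restricted_area(int mv[2],
                                           int xPb, int yPb,
                                           int wPb, int hPb,
                                           int wRA_k, int hPict,
                                           int NTAP, int log2MVBIT)
    {
        int NTAP2 = NTAP / 2;
        mv[0] = clip3((NTAP2 - 1 - xPb) << log2MVBIT,
                      (wRA_k - NTAP2 - xPb - wPb) << log2MVBIT, mv[0]);
        mv[1] = clip3((NTAP2 - 1 - yPb) << log2MVBIT,
                      (hPict - NTAP2 + 1 - yPb - hPb) << log2MVBIT, mv[1]);
    }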
  • the restricted area (restricted reference area) that can be referred to in the inter prediction of the restricted area in the succeeding picture has a range that varies depending on the reference picture (time), as indicated by the horizontal line in FIG. Therefore, when estimating a motion vector of a target block from a motion vector of a peripheral block (hereinafter, an adjacent block) of the target block as in merge prediction, additional processing is required.
  • FIG. 23 (b) is a diagram showing an example of the present embodiment.
• Let mvLX_AL be the motion vector of an adjacent block (for example, the upper left block AL) of the target block.
• When the motion vector mvLX_AL is applied as a merge candidate to the motion vector of the target block, it can point outside the restricted reference area.
  • the merge prediction parameter deriving unit 3036 of the present embodiment derives a merge candidate such that a merge candidate pointing outside the restricted reference area is not included in the merge candidate list.
• For a derived merge candidate (for example, merge candidate N), when the reference picture index and the motion vector point within the restricted reference area, that is, when IsRA(Pb, mvLX_N) is true, the merge prediction parameter deriving unit 3036 sets availableN of the merge candidate N to 1; when they point outside the restricted reference area, it sets availableN to 0. Then, only merge candidates with availableN equal to 1 may be set in the merge candidate list.
• Alternatively, the merge prediction parameter deriving unit 3036 may refer to the generated merge candidate list in order and exclude from the merge candidate list any merge candidate whose reference picture index and motion vector point outside the restricted reference area. In this way, only usable merge candidates whose reference picture index and motion vector satisfy the restriction are set.
• Alternatively, the merge candidate deriving unit 30361 of the present embodiment may clip the motion vector mvLX_N of the merge candidate N to a position within the restricted area using the following equations.
• mvLX[0] = ClipRA(mvLX_N[0])
• mvLX[1] = ClipRA(mvLX_N[1])
• Alternatively, whether the reference picture index and the motion vector give a usable merge candidate is determined using IsRA(Pb, scaledMV) of (Expression INTERP-11).
• scaledMV = MvScale(mvLX_AL, k, j, k, j') (Expression INTERP-11)
  • k is the time of the target picture
  • j is the time of the reference picture of the adjacent block.
  • scaledMV is the motion vector of the adjacent block after scaling, and is derived by (Expression INTERP-1).
  • j ′ is the time (POC) of the reference picture indicated by the corrected reference picture index.
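• A sketch of the availability test: the neighbour's motion vector is scaled to the corrected reference picture, and the merge candidate is kept only if the scaled vector still points inside the restricted reference area. The rectangle test is_inside_rra() is a hypothetical stand-in for the IsRA predicate of the text, and mv_scale() is the scaling sketch shown earlier.

    #include <stdbool.h>

    typedef struct { int x, y; } Mv;
    typedef struct { int x0, y0, x1, y1; } Area; /* restricted ref. area */

    /* Hypothetical stand-in for IsRA(Pb, mv): true if the block at
     * (xPb, yPb) of size wPb x hPb displaced by mv (integer-pel here for
     * brevity) stays inside the restricted reference area. */
    static bool is_inside_rra(const Area *a, int xPb, int yPb,
                              int wPb, int hPb, Mv mv)
    {
        return xPb + mv.x >= a->x0 && yPb + mv.y >= a->y0 &&
               xPb + wPb + mv.x <= a->x1 && yPb + hPb + mv.y <= a->y1;
    }

    /* From the earlier scaling sketch. */
    extern Mv mv_scale(Mv mv, int pocPicMv, int pocPicMvRef,
                       int pocCurPic, int pocCurPicRef);

    /* Keep merge candidate N only if its scaled MV points inside the
     * restricted reference area (cf. (Expression INTERP-11)). */
    static bool merge_candidate_available(const Area *rra,
                                          int xPb, int yPb, int wPb, int hPb,
                                          Mv mvN, int pocK, int pocJ,
                                          int pocJprime)
    {
        Mv scaled = mv_scale(mvN, pocK, pocJ, pocK, pocJprime);
        return is_inside_rra(rra, xPb, yPb, wPb, hPb, scaled);
    }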
  • the intra prediction image generation unit 310 performs intra prediction using the intra prediction parameters input from the intra prediction parameter decoding unit 304 and the reference pixels read from the reference picture memory 306.
  • the intra-prediction image generation unit 310 reads, from the reference picture memory 306, an adjacent block in the target picture within a predetermined range from the target block.
  • the predetermined range is an adjacent block on the left, upper left, upper, and upper right of the target block, and a reference area differs depending on the intra prediction mode.
  • the intra-prediction image generation unit 310 generates a prediction image of the target block with reference to the read decoded pixel value and the prediction mode indicated by the intra prediction mode IntraPredMode.
  • the intra prediction image generation unit 310 outputs the generated prediction image to the addition unit 312.
  • a decoded peripheral area adjacent to the target block is set as a reference area. Then, a predicted image is generated by extrapolating pixels on the reference area in a specific direction.
• For example, the reference region may be set as an L-shaped region including the left and upper sides (or, further, the upper left, upper right, and lower left) of the target block (for example, the region indicated by the hatched circle pixels in FIG. 24(a)).
  • the intra prediction image generation unit 310 includes a target block setting unit 3101, an unfiltered reference image setting unit 3102 (first reference image setting unit), a filtered reference image setting unit 3103 (second reference image setting unit), and a prediction unit 3104 and a predicted image correction unit 3105 (predicted image correction unit, filter switching unit, weight coefficient changing unit).
• Based on each reference pixel on the reference area (the unfiltered reference image), the filtered reference image generated by applying the reference pixel filter (first filter), and the intra prediction mode, the prediction unit 3104 generates a tentative predicted image (the predicted image before correction) of the target block and outputs it to the predicted image correction unit 3105.
  • the predicted image correction unit 3105 corrects the temporary predicted image according to the intra prediction mode, generates a predicted image (corrected predicted image), and outputs the generated predicted image.
  • the target block setting unit 3101 sets a target CU as a target block, and outputs information on the target block (target block information).
  • the target block information includes at least an index indicating the size, position, luminance or color difference of the target block.
  • the unfiltered reference image setting unit 3102 sets a region adjacent to the target block as a reference region based on the size and position of the target block. Next, each decoded pixel value at a corresponding position in the reference picture memory 306 is set to each pixel value (unfiltered reference image, boundary pixel) in the reference area.
• The unfiltered reference image consists of the line r[x][-1] of decoded pixels adjacent to the upper side of the target block and the column r[-1][y] of decoded pixels adjacent to the left side of the target block shown in FIG. 24.
  • the unfiltered reference image r [x] [y] is set by (formula INTRAP-1) using the decoded pixel value u [] [] of the current picture expressed based on the upper left coordinates of the current picture.
  • (xPb, yPb) indicates the upper left coordinate of the target block
  • maxBS indicates the larger value of the width wPb or the height hPb of the target block.
• When a decoded pixel cannot be referred to, a default value (for example, 1 << (bitDepth - 1), where bitDepth is the pixel bit depth) is set as the unfiltered reference image.
  • a referable decoded pixel value existing near the corresponding decoded pixel may be set as an unfiltered reference image.
  • "y -1 ..
• The filtered reference image setting unit 3103 applies a reference pixel filter (first filter) to the unfiltered reference image according to the intra prediction mode, and derives the filtered reference image s[x][y] at each position (x, y) on the reference area.
• Specifically, a low-pass filter is applied to the unfiltered reference image at and around the position (x, y) to derive the filtered reference image (FIG. 24(b)). Note that a low-pass filter need not be applied in all intra prediction modes; it may be applied only in some intra prediction modes.
• A case is described in which, at the boundary between the restricted area and the non-restricted area, the unfiltered reference image setting unit 3102 does not use the method of (pixel value setting 1 of the reference area) but uses the normal method shown in (Expression INTRAP-1).
• In this case, as shown in the figure, the filtered reference image setting unit 3103 does not filter reference pixels whose filtering would refer to pixels in the unrestricted area. That is, the pixels in the unrestricted area are set to "not available" as reference pixels. In this case, the reference pixel filter need not be applied to the pixels in the unrestricted area, or to any of the reference area above the target block.
• When the reference pixel filter is a linear filter that refers to the pixel value of r[maxBS*2-1][-1], at least the reference area above the target block is not filtered.
• In other cases, the reference area that is not filtered is the reference area on the left side of the target block.
• When the reference pixel filter is a linear filter that refers to the pixel value of r[-1][maxBS*2-1], at least the reference area on the left side of the target block is not filtered.
• The filter applied by the filtered reference pixel setting unit 3103 to the unfiltered reference image in the reference area is referred to as the "reference pixel filter (first filter)", whereas the filter, described later, that corrects the tentative predicted image is called the "boundary filter (second filter)".
• The intra prediction unit 3104 generates a tentative predicted image (tentative predicted pixel values, the predicted image before correction) of the target block based on the intra prediction mode, the unfiltered reference image, and the filtered reference pixel values, and outputs it to the predicted image correction unit 3105.
• The prediction unit 3104 includes a Planar prediction unit 31041, a DC prediction unit 31042, an Angular prediction unit 31043, and an LM prediction unit 31044.
  • the prediction unit 3104 selects a specific prediction unit according to the intra prediction mode, and inputs an unfiltered reference image and a filtered reference image.
  • the relationship between the intra prediction mode and the corresponding prediction unit is as follows.
• Planar prediction … Planar prediction unit 31041
• DC prediction … DC prediction unit 31042
• Angular prediction … Angular prediction unit 31043
• LM prediction … LM prediction unit 31044
• In this case, the Planar prediction mode must not be used unless a constraint described later is provided.
• Similarly, intra prediction modes larger than the vertical prediction mode (50) shown in FIG. 5 (IntraPredModeY > 50) must not be used. This is because these modes refer to pixels included in the unrestricted area.
  • the Planar prediction unit 31041 linearly adds a plurality of filtered reference images according to the distance between the target pixel position and the reference pixel position to generate a temporary predicted image, and outputs the temporary predicted image to the predicted image correction unit 3105.
• The pixel value q[x][y] of the tentative predicted image is derived by (Expression INTRAP-4) using the filtered reference pixel values s[x][y] and the width wPb and height hPb of the target block.
• q[x][y] = ((wPb - 1 - x) * s[-1][y] + (x + 1) * s[wPb-1][-1] + (hPb - 1 - y) * s[x][-1] + (y + 1) * s[-1][hPb] + maxBS) >> (log2(maxBS) + 1) (Expression INTRAP-5)
• Further, the pixel value q[x][y] of the tentative predicted image may be calculated using s[-1][hPb-1] instead of s[-1][hPb].
• In this way, the intra prediction unit 3104 generates a predicted image by Planar prediction by substituting these pixel values.
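• The variant Planar formula of (Expression INTRAP-5) in C, using s[wPb-1][-1] and s[-1][hPb] as in the text. Because C arrays cannot take an index of -1, the top row and left column of the filtered reference image are passed as separate arrays.

    /* Planar prediction per (Expression INTRAP-5). sTop[x] = s[x][-1] for
     * x in [0, wPb-1]; sLeft[y] = s[-1][y] for y in [0, hPb]. maxBS is
     * the larger of wPb and hPb, assumed to be a power of two, and
     * log2MaxBS = log2(maxBS). */
    static void planar_predict(const int *sTop, const int *sLeft,
                               int wPb, int hPb, int maxBS, int log2MaxBS,
                               int *q /* row-major wPb x hPb */)
    {
        for (int y = 0; y < hPb; y++)
            for (int x = 0; x < wPb; x++)
                q[y * wPb + x] =
                    ((wPb - 1 - x) * sLeft[y] +
                     (x + 1) * sTop[wPb - 1] +
                     (hPb - 1 - y) * sTop[x] +
                     (y + 1) * sLeft[hPb] + maxBS) >> (log2MaxBS + 1);
    }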
• The DC prediction unit 31042 derives a DC prediction value equal to the average value of the filtered reference image s[x][y], and outputs a tentative predicted image q[x][y] having the DC prediction value as its pixel values.
  • the Angular prediction unit 31043 generates a temporary predicted image q [x] [y] using the filtered reference image s [x] [y] in the prediction direction (reference direction) indicated by the intra prediction mode, and Output to 3105.
• When the target block is in the restricted area (IsRA(xPb, yPb) is true) and the reference position (xNbX, yNbX) of a block adjacent to the target block is in the unrestricted area (IsRA(xNbX, yNbX) is false), the intra prediction unit 3104 does not use the value of the neighboring block for deriving the MPM candidate.
• When the target block is in the restricted area (IsRA(xPb, yPb) is true), the reference position (xNbX, yNbX) of a block adjacent to the target block is in the restricted area (IsRA(xNbX, yNbX) is true), and that block is coded by intra prediction, the position (xNbX, yNbX) is used for the MPM candidate candIntraPredModeX.
• For example, when the target block is in the restricted area (IsRA(xPb, yPb) is true) and the right adjacent position (xPb + wPb, yPb - 1) of the target block is in the restricted area (IsRA(xPb + wPb, yPb - 1) is true), the intra prediction unit 3104 uses the value of the right adjacent position to derive the MPM candidates.
• Likewise, when the target block is in the restricted area (IsRA(xPb, yPb) is true) and the left neighboring position (xPb - 1, yPb) of the target block is in the restricted area (IsRA(xPb - 1, yPb) is true), the value of the left neighboring position is used to derive the MPM candidates.
• Consider the case where the reference pixels adjacent to the upper right of the target block are in the non-restricted area (the width of the target block is wPb).
• When the target block is in the restricted area (IsRA(xPb, yPb) is true), the upper-right region of the target block is in the non-restricted area (IsRA(xPb + wPb, yPb - 1) is false), and a value greater than IntraPredModeVer is derived for an MPM candidate, the intra prediction unit 3104 determines that the MPM candidate is not available and does not insert it into the MPM candidate list. Also, in this case, if the derived intra prediction mode is larger than IntraPredModeVer, the intra prediction parameter decoding unit 304 may clip it to IntraPredModeVer.
• IntraPredMode = Min(IntraPredMode, IntraPredModeVer)
• IntraPredMode = Max(IntraPredMode, IntraPredModeVer)
  • the restricted area is shifted from top to bottom.
• If the upper region of the target block is in the unrestricted area (IsRA(xPb, yPb - 1) is false) and a value larger than IntraPredModeHor is derived for an MPM candidate, the intra prediction unit 3104 determines that the MPM candidate is not available and does not insert it into the MPM candidate list. In this case, if the derived intra prediction mode is larger than IntraPredModeHor, the intra prediction parameter decoding unit 304 may clip it to IntraPredModeHor.
• IntraPredMode = Min(IntraPredMode, IntraPredModeHor)
• IntraPredMode = Max(IntraPredMode, IntraPredModeHor)
• In this way, by restricting the intra prediction mode, Angular prediction can be used even when random access is realized using a new restricted area.
• Otherwise, the intra prediction unit 3104 excludes Angular prediction modes that use that area.
• Alternatively, the intra prediction unit 3104 treats that area as not available.
• The LM prediction unit 31044 predicts chrominance pixel values based on luminance pixel values. Specifically, it generates a prediction image of the chrominance image (Cb, Cr) using a linear model based on the decoded luminance image.
• The LM prediction includes cross-component linear model (CCLM) prediction and multi-model CCLM (MMLM) prediction.
  • the CCLM prediction is a prediction method using one linear model for predicting a color difference from luminance for one block.
  • the MMLM prediction is a prediction method that uses two or more linear models for predicting a color difference from luminance for one block.
• The predicted image correction unit 3105 corrects the tentative predicted image output from the prediction unit 3104 according to the intra prediction mode. Specifically, for each pixel of the tentative predicted image, the predicted image correction unit 3105 derives the corrected predicted image Pred by weighted addition (weighted averaging) of the unfiltered reference image and the tentative predicted image according to the distance between the reference region and the target predicted pixel. In some intra prediction modes, the output of the prediction unit 3104 may be used as the predicted image as-is, without correction of the tentative predicted image by the predicted image correction unit 3105.
  • the inverse quantization / inverse transform unit 311 inversely quantizes the quantized transform coefficient input from the entropy decoding unit 301 to obtain a transform coefficient.
• The quantized transform coefficients are coefficients obtained, in the encoding process, by applying a frequency transform such as the DCT (Discrete Cosine Transform), the DST (Discrete Sine Transform), or the KLT (Karhunen-Loeve Transform) to the prediction error and quantizing the result.
  • the inverse quantization / inverse transform unit 311 performs an inverse frequency transform such as an inverse DCT, an inverse DST, an inverse KLT on the obtained transform coefficient, and calculates a prediction error.
  • the inverse quantization / inverse transforming unit 311 outputs the prediction error to the adding unit 312.
  • the addition unit 312 adds the prediction image of the block input from the prediction image generation unit 308 and the prediction error input from the inverse quantization / inverse transformation unit 311 for each pixel to generate a decoded image of the block.
  • the addition unit 312 stores the decoded image of the block in the reference picture memory 306, and outputs the decoded image to the loop filter 305.
  • FIG. 18 is a block diagram showing a configuration of the video encoding device 11 according to the present embodiment.
• The moving picture coding apparatus 11 includes a prediction image generation unit 101, a subtraction unit 102, a transform/quantization unit 103, an inverse quantization/inverse transform unit 105, an addition unit 106, a loop filter 107, a prediction parameter memory (prediction parameter storage unit, frame memory) 108, a reference picture memory (reference image storage unit, frame memory) 109, a coding parameter determination unit 110, a parameter coding unit 111, and an entropy coding unit 104.
  • the predicted image generation unit 101 generates a predicted image for each CU which is an area obtained by dividing each picture of the image T.
  • the operation of the predicted image generation unit 101 is the same as that of the predicted image generation unit 308 described above, and a description thereof will be omitted.
  • the subtraction unit 102 generates a prediction error by subtracting the pixel value of the predicted image of the block input from the predicted image generation unit 101 from the pixel value of the image T. Subtraction unit 102 outputs the prediction error to transform / quantization unit 103.
  • Transform / quantization section 103 calculates a transform coefficient by frequency transformation with respect to the prediction error input from subtraction section 102, and derives a quantized transform coefficient by quantization. Transform / quantization section 103 outputs the quantized transform coefficient to entropy encoding section 104 and inverse quantization / inverse transform section 105.
  • the inverse quantization / inverse transformation unit 105 is the same as the inverse quantization / inverse transformation unit 311 (FIG. 17) in the video decoding device 31, and the description is omitted.
  • the calculated prediction error is output to addition section 106.
  • the parameter encoding unit 111 includes a restricted area control unit 120, an inter prediction parameter encoding unit 112 (not shown), and an intra prediction parameter encoding unit 113.
• The restricted area control unit 120 includes a header encoding unit 1110, a CT information encoding unit 1111, a CU encoding unit 1112 (prediction mode encoding unit), and the inter prediction parameter encoding unit 112 and intra prediction parameter encoding unit 113 (not shown).
  • the CU encoding unit 1112 further includes a TU encoding unit 1114.
  • the parameter encoding unit 111 performs an encoding process on parameters such as header information, division information, prediction information, and quantized transform coefficients.
• The CT information encoding unit 1111 encodes the QT and MT (BT, TT) division information and the like.
  • CU encoding section 1112 encodes CU information, prediction information, TU division flag split_transform_flag, CU residual flag cbf_cb, cbf_cr, cbf_luma, and the like.
  • TU encoding section 1114 encodes QP update information (quantization correction value) and quantization prediction error (residual_coding) when the TU includes a prediction error.
  • the entropy encoding unit 104 converts the syntax element supplied from the source into binary data, generates encoded data by an entropy encoding method such as CABAC, and outputs the encoded data.
  • the source of the syntax element is a CT information encoding unit 1111 and a CU encoding unit 1112.
• The syntax elements are the inter prediction parameters (prediction mode predMode, merge flag merge_flag, merge index merge_idx, inter prediction identifier inter_pred_idc, reference picture index refIdxLX, prediction vector index mvp_LX_idx, and difference vector mvdLX), the intra prediction parameters (prev_intra_luma_pred_flag, mpm_idx, rem_selected_mode_flag, rem_selected_mode, and rem_non_selected_mode), the quantized transform coefficients, and the like.
  • the entropy coding unit 104 generates and outputs a coded stream Te by performing entropy coding on the division information, the prediction parameters, the quantized transform coefficients, and the like.
  • the inter prediction parameter coding unit 112 derives the inter prediction parameters based on the prediction parameters input from the coding parameter determination unit 110.
  • Inter prediction parameter coding section 112 includes a configuration that is partially the same as the configuration in which inter prediction parameter decoding section 303 derives the inter prediction parameters.
  • the configuration of the inter prediction parameter coding unit 112 will be described. As shown in FIG. 28 (a), it is configured to include a parameter coding control unit 1121, an AMVP prediction parameter derivation unit 1122, a subtraction unit 1123, a sub-block prediction parameter derivation unit 1125, and the like.
  • the parameter coding control unit 1121 includes a merge index deriving unit 11211 and a vector candidate index deriving unit 11212.
  • the merge index deriving unit 11211, the vector candidate index deriving unit 11212, the AMVP prediction parameter deriving unit 1122, and the sub-block prediction parameter deriving unit 1125 may be collectively referred to as a motion vector deriving unit (motion vector deriving device).
• The inter prediction parameter encoding unit 112 outputs the motion vector (mvLX, subMvLX), the reference picture index refIdxLX, the inter prediction identifier inter_pred_idc, or information indicating these, to the predicted image generation unit 101. Further, the inter prediction parameter encoding unit 112 outputs the merge flag merge_flag, the merge index merge_idx, the inter prediction identifier inter_pred_idc, the reference picture index refIdxLX, the prediction vector index mvp_lX_idx, the difference vector mvdLX, and the sub-block prediction mode flag subPbMotionFlag to the entropy encoding unit 104.
  • the merge index deriving unit 11211 compares the motion vector and reference picture index input from the coding parameter determination unit 110 with the motion vectors and reference picture indices of the merge candidate blocks read from the prediction parameter memory 108, derives the merge index merge_idx, and outputs it to the entropy coding unit 104.
  • a merge candidate is a reference block within a predetermined range from the target CU (for example, a block adjacent to the left edge, lower-left corner, upper-left corner, upper edge, or upper-right corner of the target block) for which the encoding process has been completed.
  • the vector candidate index deriving unit 11212 derives a predicted vector index mvp_lX_idx.
  • the AMVP prediction parameter deriving unit 1122 derives a prediction vector mvpLX based on the motion vector mvLX.
  • AMVP prediction parameter derivation section 1122 outputs prediction vector mvpLX to subtraction section 1123. Note that the reference picture index refIdxLX and the prediction vector index mvp_lX_idx are output to the entropy coding unit 104.
  • the subtraction unit 1123 generates a difference vector mvdLX by subtracting the prediction vector mvpLX output from the AMVP prediction parameter derivation unit 1122 from the motion vector mvLX input from the coding parameter determination unit 110.
  • the difference vector mvdLX is output to entropy coding section 104.
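  • As an illustration of the subtraction described above, a minimal sketch follows (Python is used for all sketches in this section; function names and the tuple layout are illustrative assumptions, not the patent's notation):

```python
# Minimal sketch of the subtraction unit 1123: the difference vector mvdLX is
# obtained by subtracting the prediction vector mvpLX from the motion vector
# mvLX, component by component. The (horizontal, vertical) tuple layout is an
# assumption made for illustration.
def derive_mvd(mv_lx, mvp_lx):
    return (mv_lx[0] - mvp_lx[0], mv_lx[1] - mvp_lx[1])

# mvLX = (5, -3), mvpLX = (4, -1)  ->  mvdLX = (1, -2)
print(derive_mvd((5, -3), (4, -1)))
```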
  • the intra prediction parameter coding unit 113 derives a coding format (for example, mpm_idx, rem_intra_luma_pred_mode, etc.) from the intra prediction mode IntraPredMode input from the coding parameter determination unit 110.
  • the intra prediction parameter coding unit 113 includes a part of the same configuration as the configuration in which the intra prediction parameter decoding unit 304 derives the intra prediction parameters.
  • FIG. 29 is a schematic diagram showing a configuration of the intra prediction parameter encoding unit 113 of the parameter encoding unit 111.
  • the intra prediction parameter coding unit 113 includes a parameter coding control unit 1131, a luminance intra prediction parameter derivation unit 1132, and a chrominance intra prediction parameter derivation unit 1133.
  • the parameter coding control unit 1131 receives the luminance prediction mode IntraPredModeY and the chrominance prediction mode IntraPredModeC from the coding parameter determination unit 110.
  • the parameter coding control unit 1131 determines prev_intra_luma_pred_flag with reference to the MPM candidate list mpmCandList[] derived by the MPM candidate list deriving unit 30421. It then outputs prev_intra_luma_pred_flag and the luminance prediction mode IntraPredModeY to the luminance intra prediction parameter deriving unit 1132, and outputs the chrominance prediction mode IntraPredModeC to the chrominance intra prediction parameter deriving unit 1133.
  • the luminance intra prediction parameter deriving unit 1132 includes an MPM candidate list deriving unit 30421 (candidate list deriving unit), an MPM parameter deriving unit 11322, and a non-MPM parameter deriving unit 11323 (encoding unit, deriving unit).
  • the MPM candidate list deriving unit 30421 derives the MPM candidate list mpmCandList [] with reference to the intra prediction mode of the adjacent block stored in the prediction parameter memory.
  • when prev_intra_luma_pred_flag is 1, the MPM parameter deriving unit 11322 derives mpm_idx from the luminance prediction mode IntraPredModeY and the MPM candidate list mpmCandList[], and outputs it to the entropy encoding unit 104.
  • otherwise, the non-MPM parameter deriving section 11323 derives RemIntraPredMode from the luminance prediction mode IntraPredModeY and the MPM candidate list mpmCandList[], and outputs rem_selected_mode or rem_non_selected_mode to the entropy encoding section 104; a sketch of this MPM/non-MPM split follows below.
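  • A hedged sketch of the MPM/non-MPM split is given below. The candidate list contents, the 67-mode numbering, and the collapsing of rem_selected_mode / rem_non_selected_mode into a single remainder index are all simplifying assumptions for illustration:

```python
# If the luma intra mode is in the MPM candidate list, mpm_idx (its position
# in the list) is signalled; otherwise a remainder index over the non-MPM
# modes is derived. The selected/non-selected split of the remainder modes is
# omitted for brevity.
def derive_luma_mode_syntax(intra_pred_mode_y, mpm_cand_list):
    if intra_pred_mode_y in mpm_cand_list:
        return {"prev_intra_luma_pred_flag": 1,
                "mpm_idx": mpm_cand_list.index(intra_pred_mode_y)}
    non_mpm = [m for m in range(67) if m not in mpm_cand_list]  # 67 modes assumed
    return {"prev_intra_luma_pred_flag": 0,
            "rem_intra_pred_mode": non_mpm.index(intra_pred_mode_y)}

print(derive_luma_mode_syntax(50, [0, 1, 50, 18, 46, 54]))
# -> {'prev_intra_luma_pred_flag': 1, 'mpm_idx': 2}
```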
  • the chrominance intra prediction parameter deriving unit 1133 derives not_dm_chroma_flag, not_lm_chroma_flag, and chroma_intra_mode_idx from the luminance prediction mode IntraPredModeY and the chrominance prediction mode IntraPredModeC, and outputs them.
  • the addition unit 106 generates a decoded image by adding the pixel value of the prediction image of the block input from the prediction image generation unit 101 and the prediction error input from the inverse quantization / inverse conversion unit 105 for each pixel.
  • the adding unit 106 stores the generated decoded image in the reference picture memory 109.
  • the loop filter 107 applies a deblocking filter, SAO, and ALF to the decoded image generated by the adding unit 106.
  • SAO: Sample Adaptive Offset
  • ALF: Adaptive Loop Filter
  • the loop filter 107 does not necessarily need to include the above three types of filters, and may have a configuration including only a deblocking filter, for example.
  • the prediction parameter memory 108 stores the prediction parameters generated by the coding parameter determination unit 110 at a position predetermined for each of the target picture and the CU.
  • the reference picture memory 109 stores the decoded image generated by the loop filter 107 at a predetermined position for each target picture and CU.
  • the coding parameter determination unit 110 selects one set from a plurality of sets of coding parameters.
  • the coding parameter is the above-described QT, BT, or TT division information, a prediction parameter, or a parameter to be coded that is generated in association with them.
  • the predicted image generation unit 101 generates a predicted image using these encoding parameters.
  • the coding parameter determination unit 110 calculates an RD cost value indicating a magnitude of an information amount and a coding error for each of the plurality of sets.
  • the RD cost value is, for example, the sum of the code amount and the value obtained by multiplying the square error by a coefficient λ.
  • the code amount is the information amount of the coded stream Te obtained by entropy coding the quantization error and the coding parameter.
  • the square error is the sum of squares of the prediction error calculated by the subtraction unit 102.
  • the coefficient λ is a preset real number greater than zero.
  • the coding parameter determination unit 110 selects the set of coding parameters that minimizes the calculated cost value; the entropy coding unit 104 then outputs the selected set as the coded stream Te. A sketch of this selection follows below.
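  • The selection can be pictured with the following sketch, assuming cost = R + λ·D as described above (R: code amount in bits, D: sum of squared prediction errors); the candidate values are made up for the example:

```python
# RD-cost selection over a set of candidate coding parameters.
def rd_cost(rate_bits, sq_error, lam):
    return rate_bits + lam * sq_error

candidates = [
    {"name": "QT split", "rate": 1200, "sq_error": 40000},
    {"name": "BT split", "rate": 1500, "sq_error": 31000},
    {"name": "no split", "rate": 900,  "sq_error": 52000},
]
lam = 0.05  # lambda > 0, chosen in advance (illustrative value)

best = min(candidates, key=lambda c: rd_cost(c["rate"], c["sq_error"], lam))
print(best["name"])  # the parameter set with the minimum RD cost is kept
```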
  • the coding parameter determination unit 110 stores the determined coding parameter in the prediction parameter memory 108.
  • the parameter encoding unit 111 may be realized by a computer.
  • a program for realizing this control function may be recorded on a computer-readable recording medium, and the program recorded on this recording medium may be read and executed by a computer system.
  • the “computer system” is a computer system built in either the moving picture encoding device 11 or the moving picture decoding device 31, and includes an OS and hardware such as peripheral devices.
  • the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, and a CD-ROM, and a storage device such as a hard disk built in a computer system.
  • the "computer-readable recording medium” is a medium that dynamically holds the program for a short time, such as a communication line for transmitting the program through a network such as the Internet or a communication line such as a telephone line,
  • a program holding a program for a certain period of time such as a volatile memory in a computer system serving as a server or a client, may be included.
  • the above-mentioned program may be for realizing a part of the above-mentioned functions, or may be for realizing the above-mentioned functions in combination with a program already recorded in a computer system.
  • the video decoding device divides a picture into a restricted region and a non-restricted region. For a block included in the restricted region, prediction is performed using intra prediction that refers only to pixels in the restricted region of the picture, or inter prediction that refers to a restricted reference region of a reference picture of the picture. For a block included in the non-restricted region, prediction is performed using intra prediction that refers to decoded pixels in the picture, or inter prediction that refers to a reference picture. After the picture is decoded, the restricted region of the picture is set as the restricted reference region. A sketch of the resulting reference constraint follows below.
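  • The constraint can be sketched as follows; the rectangle layout (x0, y0, x1, y1) with exclusive right/bottom edges and integer-pel motion are assumptions for illustration:

```python
# A block in the restricted region may use inter prediction only if the whole
# area it references lies inside the restricted reference area of the
# reference picture.
def inside(inner, outer):
    return (inner[0] >= outer[0] and inner[1] >= outer[1] and
            inner[2] <= outer[2] and inner[3] <= outer[3])

def inter_reference_allowed(block, mv, restricted_ref_area):
    x0, y0, x1, y1 = block
    ref = (x0 + mv[0], y0 + mv[1], x1 + mv[0], y1 + mv[1])
    return inside(ref, restricted_ref_area)

# 16x16 block at (64, 0), MV (-8, 4), restricted reference area 128x64 -> True
print(inter_reference_allowed((64, 0, 80, 16), (-8, 4), (0, 0, 128, 64)))
```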
  • a video decoding device divides a picture into a new refresh region, a restricted region, and a non-restricted region. For blocks included in the new refresh region, prediction is performed using intra prediction that refers only to pixels in the restricted region. For blocks included in the restricted region, prediction is performed using intra prediction that refers only to pixels in the new refresh region or the restricted region, or inter prediction that refers to the restricted reference region of a reference picture. For blocks included in the non-restricted region, prediction is performed using intra prediction that refers to decoded pixels in the picture, or inter prediction that refers to a reference picture of the picture. After the picture is decoded, the new refresh region is added to the restricted region and the result is set as the restricted reference region.
  • a video decoding device divides a picture into a restricted region and a non-restricted region. For blocks included in a new refresh region within the restricted region, prediction is performed using intra prediction that refers only to pixels in the restricted region. For the other blocks included in the restricted region, prediction is performed using intra prediction that refers only to pixels in the new refresh region or the restricted region, or inter prediction that refers to the restricted reference region of a reference picture. For blocks included in the non-restricted region, prediction is performed using intra prediction that refers to decoded pixels in the picture, or inter prediction that refers to a reference picture of the picture. After the picture is decoded, the restricted region is set as the restricted reference region.
  • the new refresh region is characterized in that the regions decoded at time t and time t-1 overlap.
  • a video encoding device divides a picture into a restricted region and a non-restricted region. For a block included in the restricted region, prediction is performed using intra prediction that refers only to pixels in the restricted region of the picture, or inter prediction that refers to a restricted reference region of a reference picture of the picture. For a block included in the non-restricted region, prediction is performed using intra prediction that refers to decoded pixels in the picture, or inter prediction that refers to a reference picture of the picture. After the picture is coded, the restricted region of the picture is set as the restricted reference region.
  • a video decoding device includes a deblocking filter, and in the filtering of two blocks sandwiching the boundary between the restricted region and the non-restricted region, the deblocking filter turns the filtering off.
  • a video decoding device includes a deblocking filter, and in the filtering of two blocks sandwiching the boundary between the restricted region and the non-restricted region, the deblocking filter turns off the filtering of the block included in the restricted region.
  • the width of the overlapping region is derived from the number of pixels corrected by the filtering process and the minimum block width; one plausible reading is sketched below.
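  • The following sketch shows one plausible reading of this derivation (an assumption, not a normative formula from the patent): the overlap must cover the samples the loop filter corrects, rounded up to the block grid:

```python
# Round the span of filter-corrected pixels up to a multiple of the minimum
# block width to obtain the overlap width.
def overlap_width(num_filtered_pixels, min_block_width):
    return -(-num_filtered_pixels // min_block_width) * min_block_width

print(overlap_width(4, 8))  # deblocking corrects 4 pixels, 8-pel blocks -> 8
```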
  • a video decoding device includes a predicted image generation unit, and at the boundary between the restricted region and the non-restricted region, the predicted image generation unit sets the pixels of the non-restricted region as "unusable as reference pixels" and sets pixel values of the restricted region in the reference area of the target block, as sketched below.
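  • A sketch of this substitution follows; the one-dimensional reference-sample layout and the nearest-usable-sample padding rule are assumptions modelled on the usual unavailable-sample handling of intra prediction:

```python
# Reference samples lying in the non-restricted area are flagged unusable and
# take the value of the nearest usable (restricted-area) sample.
def substitute_reference_pixels(ref_pixels, usable_flags):
    # assumes at least one usable (restricted-area) sample exists
    first_usable = next(v for v, ok in zip(ref_pixels, usable_flags) if ok)
    out, last = [], first_usable
    for v, ok in zip(ref_pixels, usable_flags):
        if ok:
            last = v
        out.append(last)
    return out

print(substitute_reference_pixels([10, 12, 99, 98], [True, True, False, False]))
# -> [10, 12, 12, 12]; the non-restricted samples 99, 98 are replaced
```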
  • a video decoding device includes a predicted image generation unit, and at the boundary between the restricted region and the non-restricted region, the predicted image generation unit turns off the reference pixel filter for pixels that would refer to a pixel in the non-restricted region.
  • a video decoding device includes a predicted image generation unit, and when, for a target block in the restricted region, the reference region in the prediction direction indicated by an intra prediction mode includes the non-restricted region, the predicted image generation unit excludes that intra prediction mode.
  • a video decoding device includes a predicted image generation unit, and when, for a target block in the restricted region, the reference region in the prediction direction indicated by the intra prediction mode includes the non-restricted region, the predicted image generation unit sets the reference pixels as "unusable".
  • a video decoding device includes a predicted image generation unit, and when the pixels referred to in Planar prediction for a target block in the restricted region include the non-restricted region, the predicted image generation unit replaces the reference pixels in the non-restricted region with pixels in the restricted region.
  • a video decoding device includes a predicted image generation unit, and when the pixels referred to in Planar prediction for a target block in the restricted region include the non-restricted region, the predicted image generation unit excludes the Planar mode.
  • a video decoding device includes a motion compensation unit, and when a pixel of the picture referred to by a target block in the restricted region lies outside the restricted reference area, the motion compensation unit performs a padding process that replaces that pixel value with a pixel value of the restricted reference area, as sketched below.
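  • A minimal sketch of such padding, assuming edge padding by clamping the sample coordinates into the restricted reference area (coordinate conventions are illustrative):

```python
# Samples requested outside the restricted reference area are replaced by the
# nearest sample inside it (coordinate clamping).
def fetch_padded(ref_picture, x, y, area):
    x0, y0, x1, y1 = area  # exclusive right/bottom edges
    cx = min(max(x, x0), x1 - 1)
    cy = min(max(y, y0), y1 - 1)
    return ref_picture[cy][cx]

pic = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(fetch_padded(pic, 5, 1, (0, 0, 2, 3)))  # x clamped from 5 to 1 -> 5
```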
  • the video decoding device includes a motion compensation unit, and when a pixel of the picture referred to by the target block in the restricted region is in the non-restricted reference area, the motion compensation unit restricts (clips) the vector pointing to it so that it stays within the restricted reference boundary, as sketched below.
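  • The clipping can be sketched as follows, assuming integer-pel vectors, (x0, y0, x1, y1) rectangles with exclusive right/bottom edges, and a block no larger than the restricted reference area:

```python
# Clip the motion vector so the referenced block stays inside the restricted
# reference area.
def clip_mv(block, mv, area):
    x0, y0, x1, y1 = block
    ax0, ay0, ax1, ay1 = area
    mvx = min(max(mv[0], ax0 - x0), ax1 - x1)  # keep [x0+mvx, x1+mvx] inside
    mvy = min(max(mv[1], ay0 - y0), ay1 - y1)  # keep [y0+mvy, y1+mvy] inside
    return (mvx, mvy)

# block 16..32 x 0..16, MV (120, 0), area 128x64 -> MV clipped to (96, 0)
print(clip_mv((16, 0, 32, 16), (120, 0), (0, 0, 128, 64)))
```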
  • a video decoding device includes a motion compensation unit, and the motion compensation unit selects from the merge candidate list those adjacent blocks having a combination of reference picture index and motion vector such that the area referred to by the target block in the restricted region is included in the restricted reference area, and sets them as merge candidates; a sketch of this screening follows below.
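  • A sketch of the screening follows; the candidate record layout and the per-reference-picture map of restricted reference areas are assumptions for illustration:

```python
# Keep only merge candidates whose (refIdxLX, mvLX) combination keeps the
# referenced area of the target block inside the restricted reference area.
def screen_merge_candidates(block, candidates, restricted_areas):
    kept = []
    x0, y0, x1, y1 = block
    for c in candidates:
        ax0, ay0, ax1, ay1 = restricted_areas[c["ref_idx"]]
        mvx, mvy = c["mv"]
        if (x0 + mvx >= ax0 and y0 + mvy >= ay0 and
                x1 + mvx <= ax1 and y1 + mvy <= ay1):
            kept.append(c)
    return kept

cands = [{"ref_idx": 0, "mv": (4, 0)}, {"ref_idx": 0, "mv": (200, 0)}]
print(screen_merge_candidates((16, 0, 32, 16), cands, {0: (0, 0, 128, 64)}))
# -> only the first candidate survives the screening
```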
  • the video decoding device includes a motion compensation unit, and when, for a target block in the restricted region, the reference region given by a merge candidate's reference picture index and motion vector is not within the restricted reference area, the motion compensation unit corrects the reference picture index and scales the motion vector so that the reference region of the target block is included in the restricted reference area.
  • a part or all of the moving image encoding device 11 and the moving image decoding device 31 in the above-described embodiment may be realized as an integrated circuit such as an LSI (Large Scale Integration).
  • Each functional block of the video encoding device 11 and the video decoding device 31 may be individually implemented as a processor, or a part or all thereof may be integrated and implemented as a processor.
  • the method of circuit integration is not limited to an LSI, and may be realized by a dedicated circuit or a general-purpose processor. Further, in the case where a technology for forming an integrated circuit that replaces the LSI appears due to the progress of the semiconductor technology, an integrated circuit based on the technology may be used.
  • the above-described moving image encoding device 11 and moving image decoding device 31 can be used by being mounted on various devices that transmit, receive, record, and reproduce moving images.
  • the moving image may be a natural moving image captured by a camera or the like, or may be an artificial moving image (including CG and GUI) generated by a computer or the like.
  • FIG. 30 (a) is a block diagram showing a configuration of a transmission device PROD_A equipped with the video encoding device 11.
  • the transmitting device PROD_A includes an encoding unit PROD_A1 that obtains encoded data by encoding a moving image, a modulation unit PROD_A2 that obtains a modulated signal by modulating a carrier wave with the encoded data obtained by the encoding unit PROD_A1, and a transmitting unit PROD_A3 that transmits the modulated signal obtained by the modulation unit PROD_A2.
  • the above-described video encoding device 11 is used as the encoding unit PROD_A1.
  • as supply sources of the moving image to be input to the encoding unit PROD_A1, the transmitting device PROD_A may further include a camera PROD_A4 that captures moving images, a recording medium PROD_A5 on which moving images are recorded, an input terminal PROD_A6 for externally inputting moving images, and an image processing unit PROD_A7 that generates or processes images.
  • FIG. 30 (a) illustrates a configuration in which the transmitting device PROD_A includes all of them, but a portion thereof may be omitted.
  • the recording medium PROD_A5 may record an unencoded moving image, or may record a moving image encoded by a recording encoding scheme different from the transmission encoding scheme. In the latter case, a decoding unit (not shown) that decodes the encoded data read from the recording medium PROD_A5 in accordance with the recording encoding scheme may be interposed between the recording medium PROD_A5 and the encoding unit PROD_A1.
  • FIG. 30 (b) is a block diagram illustrating a configuration of a receiving device PROD_B including the video decoding device 31.
  • the receiving device PROD_B includes a receiving unit PROD_B1 that receives a modulated signal, a demodulation unit PROD_B2 that obtains encoded data by demodulating the modulated signal received by the receiving unit PROD_B1, and a decoding unit PROD_B3 that obtains a moving image by decoding the encoded data obtained by the demodulation unit PROD_B2.
  • the above-described video decoding device 31 is used as the decoding unit PROD_B3.
  • as supply destinations of the moving image output by the decoding unit PROD_B3, the receiving device PROD_B may further include a display PROD_B4 that displays moving images, a recording medium PROD_B5 for recording moving images, and an output terminal PROD_B6 for outputting moving images to the outside.
  • FIG. 30(b) illustrates a configuration in which the receiving device PROD_B includes all of these, but some of them may be omitted.
  • the recording medium PROD_B5 may record an unencoded moving image, or may record a moving image encoded by a recording encoding scheme different from the transmission encoding scheme. In the latter case, an encoding unit (not shown) that encodes the moving image obtained from the decoding unit PROD_B3 in accordance with the recording encoding scheme may be interposed between the decoding unit PROD_B3 and the recording medium PROD_B5.
  • the transmission medium for transmitting the modulated signal may be wireless or wired.
  • the transmission mode for transmitting the modulated signal may be broadcast (here, a transmission mode in which the destination is not specified in advance) or communication (here, a transmission mode in which the destination is specified in advance). That is, transmission of the modulated signal may be realized by any of wireless broadcasting, wired broadcasting, wireless communication, and wired communication.
  • a broadcasting station (broadcasting facility or the like) / receiving station (television receiver or the like) of terrestrial digital broadcasting is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by wireless broadcasting.
  • a broadcasting station (broadcasting facility or the like) / receiving station (television receiver or the like) of cable television broadcasting is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by cable broadcasting.
  • a server (workstation or the like) / client (television receiver, personal computer, smartphone, or the like) of a VOD (Video On Demand) service or a video sharing service using the Internet is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by communication (normally, either a wireless or wired transmission medium is used in a LAN, and a wired transmission medium is used in a WAN).
  • the personal computer includes a desktop PC, a laptop PC, and a tablet PC.
  • the smartphone includes a multifunctional mobile phone terminal.
  • the client of the video sharing service has a function of decoding encoded data downloaded from the server and displaying it on a display, and a function of encoding a moving image captured by a camera and uploading it to the server. That is, the client of the video sharing service functions as both the transmitting device PROD_A and the receiving device PROD_B.
  • FIG. 31A is a block diagram illustrating a configuration of a recording device PROD_C in which the above-described video encoding device 11 is mounted.
  • a recording device PROD_C includes an encoding unit PROD_C1 that obtains encoded data by encoding a moving image, and a writing unit PROD_C2 that writes the encoded data obtained by the encoding unit PROD_C1 on a recording medium PROD_M.
  • the video encoding device 11 described above is used as the encoding unit PROD_C1.
  • the recording medium PROD_M may be (1) a type built into the recording device PROD_C, such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), (2) a type connected to the recording device PROD_C, such as an SD memory card or a USB (Universal Serial Bus) flash memory, or (3) a type loaded into a drive device (not shown) built into the recording device PROD_C, such as a DVD (Digital Versatile Disc: registered trademark) or a BD (Blu-ray Disc: registered trademark).
  • as supply sources of the moving image to be input to the encoding unit PROD_C1, the recording device PROD_C may further include a camera PROD_C3 that captures moving images, an input terminal PROD_C4 for externally inputting moving images, a receiving unit PROD_C5 for receiving moving images, and an image processing unit PROD_C6 that generates or processes images. In the figure, a configuration in which the recording device PROD_C includes all of these is illustrated, but some of them may be omitted.
  • the receiving unit PROD_C5 may receive an unencoded moving image, or may receive encoded data encoded by a transmission encoding scheme different from the recording encoding scheme. In the latter case, a transmission decoding unit (not shown) that decodes the encoded data encoded by the transmission encoding scheme may be interposed between the receiving unit PROD_C5 and the encoding unit PROD_C1.
  • Examples of such a recording device PROD_C include a DVD recorder, a BD recorder, and an HDD (Hard Disk Drive) recorder (in these cases, the input terminal PROD_C4 or the receiving unit PROD_C5 is the main supply source of moving images). A camcorder (in this case, the camera PROD_C3 is the main supply source of moving images), a personal computer (in this case, the receiving unit PROD_C5 or the image processing unit PROD_C6 is the main supply source of moving images), and a smartphone (in this case, the camera PROD_C3 or the receiving unit PROD_C5 is the main supply source of moving images) are also examples of such a recording device PROD_C.
  • FIG. 31 (b) is a block diagram illustrating a configuration of a playback device PROD_D including the above-described video decoding device 31.
  • the playback device PROD_D includes a reading unit PROD_D1 that reads encoded data written on the recording medium PROD_M, and a decoding unit PROD_D2 that obtains a moving image by decoding the encoded data read by the reading unit PROD_D1.
  • the above-described video decoding device 31 is used as the decoding unit PROD_D2.
  • the recording medium PROD_M may be (1) a type built into the playback device PROD_D, such as an HDD or an SSD, (2) a type connected to the playback device PROD_D, such as an SD memory card or a USB flash memory, or (3) a type loaded into a drive device (not shown) built into the playback device PROD_D, such as a DVD or a BD.
  • as supply destinations of the moving image output by the decoding unit PROD_D2, the playback device PROD_D may further include a display PROD_D3 that displays moving images, an output terminal PROD_D4 for outputting moving images to the outside, and a transmitting unit PROD_D5 that transmits moving images. In the figure, a configuration in which the playback device PROD_D includes all of these is illustrated, but some of them may be omitted.
  • the transmitting unit PROD_D5 may transmit an unencoded moving image, or may transmit encoded data encoded by a transmission encoding scheme different from the recording encoding scheme. In the latter case, an encoding unit (not shown) that encodes the moving image using the transmission encoding scheme may be interposed between the decoding unit PROD_D2 and the transmitting unit PROD_D5.
  • Examples of such a playback device PROD_D include a DVD player, a BD player, and an HDD player (in these cases, the output terminal PROD_D4 to which a television receiver or the like is connected is the main supply destination of moving images). A television receiver (in this case, the display PROD_D3 is the main supply destination of moving images), digital signage (also referred to as an electronic signboard or an electronic bulletin board; in this case, the display PROD_D3 or the transmitting unit PROD_D5 is the main supply destination of moving images), a desktop PC (in this case, the output terminal PROD_D4 or the transmitting unit PROD_D5 is the main supply destination of moving images), a laptop or tablet PC (in this case, the display PROD_D3 or the transmitting unit PROD_D5 is the main supply destination of moving images), and a smartphone (in this case, the display PROD_D3 or the transmitting unit PROD_D5 is the main supply destination of moving images) are also examples of such a playback device PROD_D.
  • Each block of the video decoding device 31 and the video encoding device 11 described above may be realized in hardware by a logic circuit formed on an integrated circuit (IC chip), or may be realized in software using a CPU (Central Processing Unit).
  • in the latter case, each of the above devices includes a CPU that executes the instructions of a program realizing each function, a ROM (Read Only Memory) storing the program, a RAM (Random Access Memory) into which the program is loaded, and a storage device (recording medium) such as a memory that stores the program and various data.
  • An object of an embodiment of the present invention can also be achieved by supplying to each of the above devices a recording medium on which the program code (executable program, intermediate code program, source program) of the control program of each device, which is software realizing the above-described functions, is recorded in a computer-readable manner, and by having a computer (or a CPU or an MPU) read and execute the program code recorded on the recording medium.
  • Examples of the recording medium include tapes such as magnetic tapes and cassette tapes; disks including magnetic disks such as floppy (registered trademark) disks / hard disks and optical discs such as CD-ROM (Compact Disc Read-Only Memory) / MO disc (Magneto-Optical disc) / MD (Mini Disc) / DVD (Digital Versatile Disc: registered trademark) / CD-R (CD Recordable) / Blu-ray Disc (Blu-ray (registered trademark) Disc); cards such as IC cards (including memory cards) / optical cards; semiconductor memories such as mask ROM / EPROM (Erasable Programmable Read-Only Memory) / EEPROM (Electrically Erasable and Programmable Read-Only Memory: registered trademark) / flash ROM; or logic circuits such as a PLD (Programmable Logic Device) and an FPGA (Field Programmable Gate Array).
  • each of the above devices may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
  • This communication network is not particularly limited as long as it can transmit a program code.
  • as the communication network, for example, the Internet, an intranet, an extranet, a LAN (Local Area Network), an ISDN (Integrated Services Digital Network), a VAN (Value-Added Network), a CATV (Community Antenna Television / Cable Television) communication network, a virtual private network (Virtual Private Network), a telephone line network, a mobile communication network, a satellite communication network, and the like can be used.
  • the transmission medium constituting this communication network may be any medium that can transmit the program code, and is not limited to a specific configuration or type.
  • wireless media such as infrared rays of IrDA (Infrared Data Association) or remote control, Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (registered trademark) (Digital Living Network Alliance), mobile phone networks, satellite links, and terrestrial digital broadcast networks can also be used.
  • the embodiment of the present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
  • the embodiment of the present invention can be suitably applied to a moving image decoding device that decodes encoded data obtained by encoding image data, and to a moving image encoding device that generates such encoded data. It can also be suitably applied to the data structure of the encoded data generated by the moving image encoding device and referred to by the moving image decoding device.
  • Reference signs: 31 Image decoding device; 301 Entropy decoder; 302 Parameter decoding unit; 3020 Header decoding unit; 303 Inter prediction parameter decoding unit; 304 Intra prediction parameter decoding unit; 308 Prediction image generator; 309 Inter prediction image generator; 310 Intra prediction image generator; 311 Inverse quantization / inverse transform unit; 312 Adder; 320 Restricted area control unit; 11 Image coding device; 101 Predictive image generator; 102 Subtraction unit; 103 Transform / quantizer; 104 Entropy encoder; 105 Inverse quantization / inverse transform unit; 107 Loop filter; 110 Coding parameter determination unit; 111 Parameter encoder; 112 Inter prediction parameter coding unit; 113 Intra prediction parameter coding unit; 120 Restricted area control unit; 1110 Header encoder; 1111 CT information encoding unit; 1112 CU encoder (prediction mode encoder); 1114 TU encoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention addresses the following problem: in a scheme in which only part of a single picture is subjected to intra prediction and the other regions are subjected to inter prediction, while it is possible to avoid generating a large amount of code for a specific picture and the resulting delay, the coding efficiency is low compared with an I picture in which the entire picture is intra-coded, since only part of the picture is coded by intra prediction. A picture is divided into a restricted region and a non-restricted region. For a block in the restricted region, prediction is performed using intra prediction that references only pixels in the restricted region, or inter prediction that references a restricted reference region of a reference picture. For a block in the non-restricted region, prediction is performed using intra prediction that references pixels already decoded in the picture, or inter prediction that references the reference picture. Once the picture has been decoded, the restricted region is set as the restricted reference region.
PCT/JP2019/030950 2018-08-06 2019-08-06 Appareil de décodage d'image animée et appareil de codage d'image animée WO2020032049A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/264,869 US12003775B2 (en) 2018-08-06 2019-08-06 Video decoding apparatus and video coding apparatus
EP19847133.6A EP3836541A4 (fr) 2018-08-06 2019-08-06 Appareil de décodage d'image animée et appareil de codage d'image animée
CN201980052552.8A CN112534810A (zh) 2018-08-06 2019-08-06 运动图像解码装置以及运动图像编码装置

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2018-147658 2018-08-06
JP2018147658A JP2021180342A (ja) 2018-08-06 2018-08-06 予測画像生成装置、動画像復号装置、および動画像符号化装置
JP2018148470A JP2021180343A (ja) 2018-08-07 2018-08-07 予測画像生成装置、動画像復号装置、および動画像符号化装置
JP2018-148471 2018-08-07
JP2018148471A JP2021180344A (ja) 2018-08-07 2018-08-07 予測画像生成装置、動画像復号装置、および動画像符号化装置
JP2018-148470 2018-08-07

Publications (1)

Publication Number Publication Date
WO2020032049A1 true WO2020032049A1 (fr) 2020-02-13

Family

ID=69413519

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/030950 WO2020032049A1 (fr) 2018-08-06 2019-08-06 Appareil de décodage d'image animée et appareil de codage d'image animée

Country Status (4)

Country Link
US (1) US12003775B2 (fr)
EP (1) EP3836541A4 (fr)
CN (1) CN112534810A (fr)
WO (1) WO2020032049A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112970252A (zh) * 2020-07-24 2021-06-15 深圳市大疆创新科技有限公司 视频编解码的方法和装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112534810A (zh) * 2018-08-06 2021-03-19 夏普株式会社 运动图像解码装置以及运动图像编码装置
US20230147701A1 (en) * 2020-04-02 2023-05-11 Sharp Kabushiki Kaisha Video decoding apparatus and video decoding method


Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI114679B (fi) 2002-04-29 2004-11-30 Nokia Corp Satunnaisaloituspisteet videokoodauksessa
JP5353532B2 (ja) * 2009-07-29 2013-11-27 ソニー株式会社 画像処理装置及び画像処理方法
BR112014032182A2 (pt) * 2012-07-02 2017-06-27 Panasonic Ip Corp America método de decodificação de imagem, método de codificação de imagem, aparelho de decodificação de imagem, aparelho de codificação de imagem, e aparelho de codificação e decodificação de imagem
JP5995622B2 (ja) * 2012-09-19 2016-09-21 株式会社メガチップス 動画像符号化装置、動画像符号化方法およびプログラム
US9491457B2 (en) * 2012-09-28 2016-11-08 Qualcomm Incorporated Signaling of regions of interest and gradual decoding refresh in video coding
US9398293B2 (en) * 2013-01-07 2016-07-19 Qualcomm Incorporated Gradual decoding refresh with temporal scalability support in video coding
US20140294072A1 (en) * 2013-03-27 2014-10-02 Magnum Semiconductor, Inc. Apparatuses and methods for staggered-field intra-refresh
JP2015106747A (ja) * 2013-11-28 2015-06-08 富士通株式会社 動画像符号化装置、動画像符号化方法及び動画像符号化用コンピュータプログラム
CN112534810A (zh) * 2018-08-06 2021-03-19 夏普株式会社 运动图像解码装置以及运动图像编码装置
CN113557722A (zh) * 2019-03-11 2021-10-26 华为技术有限公司 视频译码中的逐步解码刷新
CN113796081A (zh) * 2019-05-06 2021-12-14 华为技术有限公司 视频译码中的恢复点指示
CN114731419A (zh) * 2019-09-05 2022-07-08 寰发股份有限公司 视频编解码中于画面和次画面边界的适应性环内滤波方法和装置
CN114830662B (zh) * 2019-12-27 2023-04-14 阿里巴巴(中国)有限公司 用于对图像执行逐步解码刷新处理的方法和***

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012008054A1 (fr) * 2010-07-15 2012-01-19 富士通株式会社 Appareil de décodage d'image animée, procédé de décodage d'image animée, appareil de codage d'image animée et procédé de codage d'image animée
JP2013165340A (ja) * 2012-02-09 2013-08-22 Sony Corp 画像処理装置と画像処理方法
WO2014002385A1 (fr) * 2012-06-25 2014-01-03 日本電気株式会社 Dispositif, procédé et programme de codage/décodage vidéo
US9100636B2 (en) * 2012-09-07 2015-08-04 Intel Corporation Motion and quality adaptive rolling intra refresh

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Algorithm Description of Joint Exploration Test Model 7", JVET-G1001, JOINT VIDEO EXPLORATION TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 19 August 2017 (2017-08-19)
"Improved Cyclic Intra Refresh", JVET-K0212, JOINT VIDEO EXPLORATION TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 10 July 2018 (2018-07-10)
KIMIHIKO KAZUI: "AHG14: Study of methods for progressive intra refresh", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, JVET-L0079_RL, 12TH MEETING, no. JVET L0079, October 2018 (2018-10-01), Macao, CN, pages 1 - 9, XP030195175 *
See also references of EP3836541A4


Also Published As

Publication number Publication date
EP3836541A1 (fr) 2021-06-16
EP3836541A4 (fr) 2022-06-22
CN112534810A (zh) 2021-03-19
US12003775B2 (en) 2024-06-04
US20210227262A1 (en) 2021-07-22

Similar Documents

Publication Publication Date Title
CN109792535B (zh) 预测图像生成装置、运动图像解码装置以及运动图像编码装置
WO2019054300A1 (fr) Dispositif de codage d'image et dispositif de décodage d'image
WO2021111962A1 (fr) Dispositif de décodage vidéo
WO2020184487A1 (fr) Dispositif de décodage d'image dynamique
WO2020137920A1 (fr) Dispositif de génération d'images de prédiction, dispositif de décodage d'images animées, dispositif de codage d'images animées et procédé de génération d'images de prédiction
WO2020045248A1 (fr) Dispositif de décodage vidéo et dispositif de codage vidéo
JP2020053924A (ja) 動画像符号化装置、動画像復号装置
WO2020032049A1 (fr) Appareil de décodage d'image animée et appareil de codage d'image animée
JPWO2020241858A5 (fr)
JP2021027429A (ja) 動画像符号化装置、動画像復号装置
JP2020096279A (ja) 予測画像生成装置、動画像復号装置および動画像符号化装置
JP2020145650A (ja) 画像復号装置および画像符号化装置
JP2020108012A (ja) 画像復号装置および画像符号化装置
WO2021200658A1 (fr) Dispositif de décodage d'image dynamique et procédé de décodage d'image dynamique
JP2022156140A (ja) 動画像符号化装置、復号装置
JP2022087865A (ja) 画像復号装置及び画像符号化装置
JP2021106309A (ja) 動画像復号装置および動画像符号化装置
JP2020170901A (ja) 予測画像生成装置、動画像復号装置および動画像符号化装置
JP2020088577A (ja) 予測画像生成装置、動画像復号装置、および動画像符号化装置
WO2019065537A1 (fr) Dispositif de filtre de compensation de mouvement, dispositif de décodage d'image et dispositif de codage d'image animée
JP7378968B2 (ja) 予測画像生成装置、動画像復号装置および動画像符号化装置
CN112956206B (zh) 发送逐渐刷新的方法
JP7425568B2 (ja) 動画像復号装置、動画像符号化装置、動画像復号方法および動画像符号化方法
WO2021235448A1 (fr) Dispositif de codage vidéo et dispositif de décodage vidéo
JP2021180344A (ja) 予測画像生成装置、動画像復号装置、および動画像符号化装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19847133

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019847133

Country of ref document: EP

Effective date: 20210309

NENP Non-entry into the national phase

Ref country code: JP