WO2014045651A1 - Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method - Google Patents


Info

Publication number
WO2014045651A1
WO2014045651A1 (PCT/JP2013/066616)
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
signal
reference sample
target block
block
Prior art date
Application number
PCT/JP2013/066616
Other languages
French (fr)
Japanese (ja)
Inventor
Yoshinori SUZUKI (鈴木 芳典)
Choong Seng BOON (ブン チュンセン)
Thiow Keng TAN (タン ティオ ケン)
Original Assignee
NTT DOCOMO, INC. (株式会社エヌ・ティ・ティ・ドコモ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to EP23162588.0A priority Critical patent/EP4221222A1/en
Priority to SG11201502234VA priority patent/SG11201502234VA/en
Priority to MX2016010755A priority patent/MX351764B/en
Priority to PL17152385T priority patent/PL3179722T3/en
Priority to MX2015003512A priority patent/MX341412B/en
Priority to EP17152385.5A priority patent/EP3179722B1/en
Priority to KR1020157010543A priority patent/KR101662655B1/en
Priority to KR1020167019567A priority patent/KR101764235B1/en
Priority to AU2013319537A priority patent/AU2013319537B2/en
Priority to CN201380041882.XA priority patent/CN104604238B/en
Priority to EP19218030.5A priority patent/EP3654650B1/en
Priority to PL13839473T priority patent/PL2899982T3/en
Priority to IN3265DEN2015 priority patent/IN2015DN03265A/en
Priority to BR122016013292-7A priority patent/BR122016013292B1/en
Priority to KR1020177032938A priority patent/KR101869830B1/en
Priority to ES13839473.9T priority patent/ES2637502T3/en
Priority to KR1020177018972A priority patent/KR101799846B1/en
Priority to BR122016013354-0A priority patent/BR122016013354A2/en
Priority to CA2885802A priority patent/CA2885802C/en
Application filed by NTT DOCOMO, INC. filed Critical NTT DOCOMO, INC.
Priority to RU2015115487/08A priority patent/RU2602978C1/en
Priority to KR1020167026526A priority patent/KR101755363B1/en
Priority to EP13839473.9A priority patent/EP2899982B1/en
Priority to KR1020187016861A priority patent/KR101964171B1/en
Priority to BR112015006109-5A priority patent/BR112015006109B1/en
Publication of WO2014045651A1 publication Critical patent/WO2014045651A1/en
Priority to PH12015500622A priority patent/PH12015500622A1/en
Priority to US14/665,545 priority patent/US9736494B2/en
Priority to AU2016202132A priority patent/AU2016202132B2/en
Priority to US15/445,533 priority patent/US10123042B2/en
Priority to US15/445,552 priority patent/US10110918B2/en
Priority to AU2017248485A priority patent/AU2017248485B2/en
Priority to US16/152,025 priority patent/US10382783B2/en
Priority to US16/152,002 priority patent/US10477241B2/en
Priority to US16/152,009 priority patent/US10477242B2/en
Priority to AU2019210532A priority patent/AU2019210532B2/en
Priority to AU2019210530A priority patent/AU2019210530B2/en
Priority to AU2019210520A priority patent/AU2019210520B2/en
Priority to AU2019210516A priority patent/AU2019210516B2/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/521Processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/573Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • The present invention relates to a video predictive encoding apparatus and method and a video predictive decoding apparatus and method, and particularly relates to a filter process applied to reference samples used in intra-screen predictive coding.
  • Compressive encoding technology is used to efficiently transmit and store moving image data.
  • MPEG-1 to MPEG-4 and ITU-T H.261 to H.264 are widely used.
  • An image to be encoded is divided into a plurality of blocks, and encoding/decoding is performed block by block.
  • In intra-screen predictive coding, a prediction signal is generated using an adjacent previously reproduced image signal (reconstructed compressed image data) within the same screen as the target block, and the difference signal obtained by subtracting the prediction signal from the signal of the target block is encoded.
  • In inter-screen predictive coding, a previously reproduced image signal in a screen different from that of the target block is referenced, motion compensation is performed to generate a prediction signal, and the difference signal obtained by subtracting the prediction signal from the signal of the target block is encoded.
  • For a block to be encoded, a prediction signal is generated by searching an already reproduced screen for a signal similar to the block's pixel signal. Then, a motion vector, which is the amount of spatial displacement between the target block and the region formed by the searched signal, and a residual signal between the pixel signal of the target block and the prediction signal are encoded. This method of searching for a motion vector for each block is called block matching.
  • FIG. 10 is a schematic diagram for explaining the block matching process.
  • a procedure for generating a prediction signal will be described using the target block 702 on the encoding target screen 701 as an example.
  • the reference screen 703 has already been reproduced, and the area 704 is an area in the same position as the target block 702.
  • A search range 705 surrounding the region 704 is set, and within this search range a region 706 whose pixel signal has the minimum sum of absolute differences with respect to the pixel signal of the target block 702 is detected.
  • the signal in the area 706 becomes a prediction signal, and the amount of displacement from the area 704 to the area 706 is detected as a motion vector 707.
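The block matching procedure just described (set a search range around the co-located region, find the candidate region whose sum of absolute differences against the target block is minimum, take the displacement as the motion vector) can be sketched as follows. This is an illustrative Python sketch with made-up block and search-range sizes, not the patent's normative procedure:

```python
import numpy as np

def block_matching(target, ref, top, left, search=4):
    """Exhaustive block matching: find the motion vector that minimises
    the sum of absolute differences (SAD) within a +/-search window
    around the co-located position (top, left)."""
    n = target.shape[0]
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue  # candidate region falls outside the reference screen
            sad = np.abs(ref[y:y+n, x:x+n].astype(int) - target.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# toy example: the reference contains the target shifted by (1, 2)
ref = np.arange(16 * 16, dtype=np.uint8).reshape(16, 16)
target = ref[5:9, 6:10].copy()
mv, sad = block_matching(target, ref, top=4, left=4)
```

The exhaustive search shown here is the simplest possible strategy; practical encoders restrict or reorder the search for speed, but the SAD-minimisation criterion is the same.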
  • H.264 provides a plurality of prediction types with different block sizes for encoding motion vectors, in order to cope with changes in the local features of an image.
  • The prediction types of H.264 are described in Patent Document 2, for example.
  • FIG. 11 is a schematic diagram for explaining the intra-screen prediction method used in ITU-T H.264.
  • The target block 802 is the block to be encoded, and the pixel group (reference sample group) 801 composed of pixels A to M adjacent to the boundary of the target block 802 is an adjacent region whose image signal has already been reproduced in past processing.
  • In one mode, the pixel group (reference sample group) 801 of adjacent pixels immediately above the target block 802 is extended downward to generate a prediction signal.
  • In another mode, the already reproduced pixels (I to L) to the left of the target block 804 are extended to the right to generate a prediction signal.
  • a specific method for generating a prediction signal is described in Patent Document 1, for example.
  • The difference between each of the nine prediction signals generated by the methods shown in FIGS. 11A to 11I and the pixel signal of the target block is computed, and the prediction signal (intra prediction sample) with the smallest difference is selected as the optimum one.
  • The above contents are described in Patent Document 1 below.
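Two of the nine prediction modes described above can be sketched as follows; the function shape and the 4 × 4 block size are illustrative assumptions (a real encoder evaluates all nine modes), while the mode-selection rule (smallest absolute-difference sum) follows the text above:

```python
import numpy as np

def intra_predict(block, above, left):
    """Illustrative sketch of two of the nine H.264 intra modes:
    vertical (the row of already-reproduced pixels above the block is
    extended downward) and horizontal (the column of pixels to the left
    is extended to the right).  The mode with the smaller sum of
    absolute differences against the target block is selected."""
    vertical = np.tile(above, (block.shape[0], 1))            # extend downward
    horizontal = np.tile(left[:, None], (1, block.shape[1]))  # extend rightward
    candidates = {"vertical": vertical, "horizontal": horizontal}
    mode = min(candidates,
               key=lambda m: np.abs(candidates[m].astype(int) - block).sum())
    return mode, candidates[mode]

# a block whose rows repeat the samples above it is predicted exactly
above = np.array([10, 20, 30, 40])
left = np.array([7, 7, 7, 7])
block = np.tile(above, (4, 1))
mode, pred = intra_predict(block, above, left)
```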
  • In Non-Patent Document 1, a low-pass filter is applied to the reference samples before the prediction signal is generated, in order to suppress distortion in the reference samples. Specifically, extrapolation prediction is performed after applying a 3-tap filter with weighting factors 1:2:1 (the "121 filter") to the reference samples. This process is called intra smoothing.
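The 1:2:1 intra-smoothing filter can be sketched as follows; leaving the two end samples unfiltered and the +2 rounding offset are implementation assumptions, not details given in the text:

```python
import numpy as np

def smooth_121(samples):
    """Intra smoothing: apply the 3-tap 1:2:1 low-pass filter to a 1-D
    line of reference samples.  The two end samples are left unfiltered
    (an assumption of this sketch); +2 makes the integer division round
    to nearest."""
    s = samples.astype(int)
    out = s.copy()
    out[1:-1] = (s[:-2] + 2 * s[1:-1] + s[2:] + 2) // 4
    return out

# a step between two flat runs, as left by coarse quantisation
line = np.array([8, 8, 8, 16, 16, 16])
smoothed = smooth_121(line)
```

Note that the step is only softened in its immediate neighbourhood; the wider flat runs keep their levels, which is exactly the limitation the description later attributes to the filter's short tap length.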
  • The intra-screen prediction of Non-Patent Document 1 is explained with reference to FIGS. 7 and 8.
  • FIG. 7 shows an example of block division: the five blocks 220, 230, 240, 250, and 260 adjacent to the target block 210, which has a block size of N × N samples, have already been reproduced.
  • FIG. 8 shows the processing flow of intra prediction.
  • In step 320, the prediction signal generator performs a smoothing process on the reference samples with the 121 filter.
  • In step 330, the prediction signal generator estimates the signal in the target block by extrapolation (along the direction of intra-screen prediction) and generates a prediction signal (intra prediction sample).
  • FIG. 9 shows an example of a flat-region signal in which neighboring pixels have similar values.
  • When the original pixel values (original sample values) 410 are encoded with coarse quantization, the reproduced values (reproduced sample values) 420 within each block become constant, and step-like distortion appears at the block boundary 430.
  • This distortion is known as block noise and is usually removed by applying a block-noise-removal filter to the reproduced image.
  • However, since the reference samples used for intra prediction are taken from the signal before the block-noise-removal filtering, the block noise remaining in the reference samples at the block boundary propagates to the prediction signal (intra prediction sample) of the target block. Block noise that has propagated into the prediction signal cannot be removed by the block-noise-removal processing applied to the reproduced signal, so it is passed on unchanged to the reference sample group of the next target block.
  • In Non-Patent Document 1, 34 different extrapolation directions are prepared for the extrapolation method of intra prediction (directions of intra-screen prediction), so block noise propagates while changing direction. As a result, multiple pseudo contours appear in the reproduced signal of flat areas of the image. In particular, when the noise propagates into a large block, the pseudo contour crosses the large block and is highly visible.
  • The 121 filter described in the background art removes noise in the reference samples, but because its number of taps is small it cannot remove step-like noise such as that shown in FIG. 9.
  • An object of the present invention is therefore to suppress artificial noise such as the pseudo contours described above.
  • A video predictive encoding apparatus includes: block dividing means for dividing an input image into a plurality of blocks; prediction signal generating means for generating an intra-screen prediction signal for a target block to be encoded among the divided blocks, using already reproduced reference samples adjacent to the target block; residual signal generating means for generating a residual signal between the prediction signal of the target block and the pixel signal of the target block; residual signal compressing means for compressing the residual signal generated by the residual signal generating means; residual signal restoring means for generating a reproduction residual signal by restoring the compressed data of the residual signal; encoding means for encoding the compressed data of the residual signal; and block storing means for restoring the pixel signal of the target block by adding the prediction signal and the reproduction residual signal and storing the restored pixel signal of the target block for use as reference samples. The prediction signal generating means obtains reference samples from already reproduced blocks around the target block stored in the block storing means, selects two or more key reference samples from the reference samples, interpolates between the key reference samples to generate interpolated reference samples, determines a direction of intra-screen prediction, and extrapolates the interpolated reference samples along the determined direction to generate the intra-screen prediction; the encoding means includes information on the direction of intra-screen prediction in the compressed data and encodes it.
  • In the video predictive encoding apparatus, the prediction signal generating means may switch between the reference sample interpolation process and a reference sample smoothing process based on a comparison between the key reference samples and a predetermined threshold value.
  • The key reference samples may be reference samples located at the ends of the reference sample group, and the interpolation process may be a bilinear interpolation applied to the reference samples between the key reference samples.
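The key-reference-sample processing described above can be sketched as follows. In this sketch the two samples at the ends of the reference line serve as key reference samples, and the switch between bilinear interpolation and 1:2:1 smoothing is driven by a flatness test; the particular test and threshold value used here are illustrative assumptions, not the patent's exact rule:

```python
import numpy as np

def process_reference_samples(ref, threshold=8):
    """Sketch of the proposed reference-sample processing.  The samples
    at the two ends of the reference line are the key reference samples.
    If the line looks flat (here: the middle sample deviates little from
    the straight line between the key samples), the interior samples are
    replaced by bilinear interpolation between the key samples, removing
    any block-noise step; otherwise the ordinary 121 smoothing applies."""
    s = ref.astype(int)
    n = len(s)
    key0, key1 = s[0], s[-1]                    # key reference samples
    if abs(key0 + key1 - 2 * s[n // 2]) < threshold:
        t = np.arange(n) / (n - 1)              # bilinear interpolation
        return np.round(key0 + t * (key1 - key0)).astype(int)
    out = s.copy()                              # fallback: 121 smoothing
    out[1:-1] = (s[:-2] + 2 * s[1:-1] + s[2:] + 2) // 4
    return out

# a stepped flat region (block-noise-like) becomes a smooth ramp,
# eliminating the step that the short 121 filter could only soften
stepped = np.array([8, 8, 8, 8, 12, 12, 12, 12])
ramp = process_reference_samples(stepped)
```

Because the interpolated line passes exactly through the key samples at both ends, the step at the interior block boundary disappears entirely instead of being locally smoothed, which is how the invention suppresses the pseudo contours.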
  • A video predictive decoding apparatus includes: decoding means for decoding, from compressed data that has been divided into a plurality of blocks and encoded, information on the direction of intra-screen prediction used in the intra-screen prediction of a target block to be decoded and the compressed data of a residual signal; prediction signal generating means for generating an intra-screen prediction signal using the information on the direction of intra-screen prediction and already reproduced reference samples adjacent to the target block; residual signal restoring means for restoring a reproduction residual signal of the target block from the compressed data of the residual signal; and block storing means for restoring the pixel signal of the target block by adding the prediction signal and the reproduction residual signal and storing the restored pixel signal of the target block for use as reference samples. The prediction signal generating means obtains reference samples from already reproduced blocks around the target block stored in the block storing means, selects two or more key reference samples from the reference samples, interpolates between the key reference samples to generate interpolated reference samples, and extrapolates the interpolated reference samples along the decoded direction of intra-screen prediction to generate the intra-screen prediction signal.
  • In the video predictive decoding apparatus, the prediction signal generating means may switch between the reference sample interpolation process and a reference sample smoothing process based on a comparison between the key reference samples and a predetermined threshold value.
  • The key reference samples may be reference samples located at the ends of the reference sample group, and the interpolation process may be a bilinear interpolation applied to the reference samples between the key reference samples.
  • The present invention can also be regarded as inventions relating to a video predictive encoding method, a video predictive decoding method, a video predictive encoding program, and a video predictive decoding program, which can be described as follows.
  • A video predictive encoding method is executed by a video predictive encoding device and comprises: a block dividing step of dividing an input image into a plurality of blocks; a prediction signal generating step of generating an intra-screen prediction signal for a target block to be encoded among the divided blocks, using already reproduced reference samples adjacent to the target block; a residual signal generating step of generating a residual signal between the prediction signal of the target block and the pixel signal of the target block; a residual signal compressing step of compressing the residual signal generated in the residual signal generating step; a residual signal restoring step of generating a reproduction residual signal by restoring the compressed data of the residual signal; an encoding step of encoding the compressed data of the residual signal; and a block storing step of restoring the pixel signal of the target block by adding the prediction signal and the reproduction residual signal and storing the restored pixel signal of the target block for use as reference samples. In the prediction signal generating step, reference samples are obtained from already reproduced blocks around the stored target block, two or more key reference samples are selected from the reference samples, the key reference samples are interpolated to generate interpolated reference samples, a direction of intra-screen prediction is determined, and the interpolated reference samples are extrapolated along the determined direction to generate the intra-screen prediction; in the encoding step, information on the direction of intra-screen prediction is included in the compressed data and encoded.
  • A video predictive decoding method is executed by a video predictive decoding device and comprises: a decoding step of decoding, from compressed data that has been divided into a plurality of blocks and encoded, information on the direction of intra-screen prediction used in the intra-screen prediction of a target block to be decoded and the compressed data of a residual signal; a prediction signal generating step of generating an intra-screen prediction signal using the information on the direction of intra-screen prediction and already reproduced reference samples adjacent to the target block; a residual signal restoring step of restoring a reproduction residual signal of the target block from the compressed data of the residual signal; and a block storing step of restoring the pixel signal of the target block by adding the prediction signal and the reproduction residual signal and storing the restored pixel signal of the target block for use as reference samples. In the prediction signal generating step, reference samples are obtained from already reproduced blocks around the stored target block, two or more key reference samples are selected from the reference samples, and the key reference samples are interpolated to generate interpolated reference samples.
  • A video predictive encoding program causes a computer to function as: block dividing means for dividing an input image into a plurality of blocks; prediction signal generating means for generating an intra-screen prediction signal for a target block to be encoded among the divided blocks, using already reproduced reference samples adjacent to the target block; residual signal generating means for generating a residual signal between the prediction signal of the target block and the pixel signal of the target block; residual signal compressing means for compressing the residual signal generated by the residual signal generating means; residual signal restoring means for generating a reproduction residual signal by restoring the compressed data of the residual signal; encoding means for encoding the compressed data of the residual signal; and block storing means for restoring the pixel signal of the target block by adding the prediction signal and the reproduction residual signal and storing the restored pixel signal of the target block for use as reference samples. The prediction signal generating means obtains reference samples from already reproduced blocks around the target block stored in the block storing means, selects two or more key reference samples from the reference samples, interpolates between the key reference samples to generate interpolated reference samples, determines a direction of intra-screen prediction, and extrapolates the interpolated reference samples along the determined direction to generate the intra-screen prediction; the encoding means includes information on the direction of intra-screen prediction in the compressed data and encodes it.
  • A video predictive decoding program causes a computer to function as: decoding means for decoding, from compressed data that has been divided into a plurality of blocks and encoded, information on the direction of intra-screen prediction used in the intra-screen prediction of a target block to be decoded and the compressed data of a residual signal; prediction signal generating means for generating an intra-screen prediction signal using the information on the direction of intra-screen prediction and already reproduced reference samples adjacent to the target block; residual signal restoring means for restoring a reproduction residual signal of the target block from the compressed data of the residual signal; and block storing means for restoring the pixel signal of the target block by adding the prediction signal and the reproduction residual signal and storing the restored pixel signal of the target block for use as reference samples. The prediction signal generating means obtains reference samples from already reproduced blocks around the stored target block, selects two or more key reference samples from the reference samples, and interpolates between the key reference samples to generate interpolated reference samples.
  • Embodiments of the present invention will be described with reference to FIGS. 1 to 7 and FIGS. 13 to 17.
  • FIG. 1 is a block diagram showing a video predictive coding apparatus 100 according to an embodiment of the present invention.
  • the moving image predictive encoding device 100 includes an input terminal 101, a block divider 102, a prediction signal generator 103, a frame memory 104, a subtractor 105, a transformer 106, a quantizer 107, an inverse quantizer 108, an inverse transformer 109, an adder 110, an entropy encoder 111, an output terminal 112, a block memory 113, and a loop filter 114.
  • the subtractor 105, the transformer 106, and the quantizer 107 correspond to “encoding means” recited in the claims.
  • the inverse quantizer 108, the inverse transformer 109, and the adder 110 correspond to “decoding means” recited in the claims.
  • the frame memory 104 corresponds to “image storage means”
  • the block memory 113 corresponds to “block storage means”.
  • a moving image signal composed of a plurality of images is input to the input terminal 101.
  • An image to be encoded is divided into a plurality of regions by the block divider 102.
  • the block size is not limited to the example illustrated in the figure.
  • a plurality of block sizes and shapes may be mixed on one screen.
  • the coding order of blocks is described in Non-Patent Document 1, for example.
  • a prediction signal is generated for a region to be encoded (hereinafter referred to as a target block).
  • two types of prediction methods, inter-screen prediction and intra-screen prediction are used.
  • the prediction signal generation processing in the prediction signal generator 103 will be described later with reference to FIG.
  • the subtractor 105 subtracts the prediction signal (via the line L103) from the signal of the target block (via the line L102) to generate a residual signal.
  • the residual signal is subjected to discrete cosine transform by a transformer 106, and each transform coefficient is quantized by a quantizer 107.
  • the entropy encoder 111 encodes the quantized transform coefficient and sends it from the output terminal 112 together with prediction information necessary for generating a prediction signal.
  • the signal of the compressed target block is inversely processed and restored. That is, the quantized transform coefficient is inversely quantized by the inverse quantizer 108 and then inverse discrete cosine transformed by the inverse transformer 109 to restore the residual signal.
  • in the adder 110, the restored residual signal and the prediction signal sent via the line L103 are added to reproduce the signal of the target block.
  • the reproduced block signal is stored in the block memory 113 for intra-screen prediction.
  • a reproduced image constituted by the reproduced signal is stored in the frame memory 104 after block noise generated in the reproduced image is removed by the loop filter unit 114.
  • in step S302, prediction information necessary for inter-screen prediction is generated. Specifically, using a reproduced image that has been encoded and restored in the past as a reference image, a motion vector giving the prediction signal with the smallest error with respect to the target block, and the corresponding reference screen, are searched from this reference image. At this time, the target block is input via the line L102 and the reference image via the line L104. A plurality of images encoded and restored in the past are used as reference images. For details, see the conventional technique of H.264 or the method shown in Non-Patent Document 1.
  • in step S303, prediction information necessary for intra-screen prediction is generated.
  • a prediction signal is generated for a plurality of in-screen prediction directions using already reproduced pixel values spatially adjacent to the target block.
  • the prediction direction (intra prediction mode) that gives the prediction signal with the smallest error with respect to the target block is determined.
  • the prediction signal generator 103 obtains already reproduced pixel signals within the same screen as reference samples from the block memory 113 via the line L113, and generates an intra-screen prediction signal by extrapolating these signals.
  • the prediction method to be applied to the target block is selected from inter-screen prediction and intra-screen prediction. For example, the prediction method that gives a prediction value with the smaller error for the target block is selected. Alternatively, the encoding process may actually be performed for both prediction methods, and the one with the smaller evaluation value, calculated from the relationship between the generated code amount and the sum of the absolute values of the encoding error image, may be selected.
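The mode selection described above can be sketched as follows. The text does not specify how code amount and error are combined, so the linear weighting `lmbda` and the function name are illustrative assumptions, not part of this disclosure.

```python
def select_prediction_method(sad_inter, bits_inter, sad_intra, bits_intra, lmbda=1.0):
    """Pick the prediction method with the smaller evaluation value.

    The evaluation value combines the generated code amount (bits) with the
    sum of absolute values of the encoding error (SAD), as described above.
    The linear weighting by `lmbda` is an assumption for illustration.
    """
    cost_inter = sad_inter + lmbda * bits_inter
    cost_intra = sad_intra + lmbda * bits_intra
    return "inter" if cost_inter <= cost_intra else "intra"
```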
  • the selection information of the selected prediction method is sent to the entropy encoder 111 via the line L112 as information necessary for generating a prediction signal, and is transmitted from the output terminal 112 after being encoded (step S305).
  • when the prediction method selected in step S306 is inter-screen prediction, a prediction signal is generated in step S307 based on the motion information (motion vector and reference screen information), and the generated inter-screen prediction signal is output to the subtractor 105 via the line L103.
  • in step S308, the motion information is sent to the entropy encoder 111 via the line L112 as information necessary for generating a prediction signal, and is transmitted from the output terminal 112.
  • when the prediction method selected in step S306 is intra-screen prediction, a prediction signal is generated in step S309 based on the intra prediction mode, and the generated intra-screen prediction signal is output to the subtractor 105 via the line L103.
  • in step S310, the intra prediction mode is sent to the entropy encoder 111 via the line L112 as information necessary for generating a prediction signal, and is transmitted from the output terminal 112.
  • the encoding method used for the entropy encoder 111 may be arithmetic encoding or variable length encoding.
  • FIG. 2 is a block diagram of the video predictive decoding device 200 according to the embodiment of the present invention.
  • the moving picture predictive decoding apparatus 200 includes an input terminal 201, a data analyzer 202, an inverse quantizer 203, an inverse transformer 204, an adder 205, a prediction signal generator 208, a frame memory 207, an output terminal 206, a loop filter 209, and a block memory 215.
  • the inverse quantizer 203 and the inverse transformer 204 correspond to “decoding means” recited in the claims. Decoding means other than these may be used. Further, the inverse transformer 204 may be omitted.
  • the frame memory 207 corresponds to “image storage means”, and the block memory 215 corresponds to “block storage means”.
  • the compressed data compressed and encoded by the method described above is input from the input terminal 201.
  • the compressed data includes a residual signal encoded by predicting a target block obtained by dividing an image into a plurality of blocks and information necessary for generating a prediction signal.
  • the block size is not limited. A plurality of block sizes and shapes may be mixed on one screen.
  • the decoding order of blocks is described in Non-Patent Document 1, for example.
  • Information necessary for generating the prediction signal includes prediction method selection information and motion information (in the case of inter-screen prediction) or intra prediction mode (in the case of intra-screen prediction).
  • the data analyzer 202 decodes the residual signal of the target block, information necessary for generating the prediction signal, and the quantization parameter from the compressed data.
  • the decoded residual signal of the target block is inversely quantized by the inverse quantizer 203 based on the quantization parameter (via the line L202). Further, the inverse quantized residual signal is subjected to inverse discrete cosine transform by the inverse transformer 204, and as a result, the residual signal is restored.
  • information necessary for generating a prediction signal is sent to the prediction signal generator 208 via the line L206.
  • the prediction signal generator 208 generates a prediction signal for the target block based on information necessary for generating the prediction signal. The prediction signal generation processing in the prediction signal generator 208 will be described later with reference to FIG.
  • the generated prediction signal is sent to the adder 205 via the line L208 and added to the restored residual signal to reproduce the signal of the target block, which is output to the loop filter 209 via the line L205 and stored in the block memory 215 for use in intra-screen prediction.
  • the loop filter unit 209 removes block noise from the reproduction signal input via the line L205, and the reproduction image from which the block noise has been removed is stored in the frame memory 207 as a reproduction image used for decoding and reproduction of the subsequent image.
  • in step S402, the prediction method decoded by the data analyzer 202 is acquired.
  • when the decoded prediction method is inter-screen prediction (step S403), the motion information (motion vector and reference screen information) decoded by the data analyzer 202 is acquired (step S404), and a prediction signal is generated based on the motion information with reference to the frame memory 207.
  • when the decoded prediction method is intra-screen prediction (step S403), the intra prediction mode decoded by the data analyzer 202 is acquired (step S406), the block memory 215 is accessed, the already reproduced pixel signals adjacent to the target block are acquired as reference samples, and a prediction signal is generated based on the intra prediction mode (step S407).
  • the generated prediction signal is output to the adder 205 via L208.
  • the decoding method used for the data analyzer 202 may be arithmetic decoding or variable length decoding.
  • the intra prediction method according to the embodiment of the present invention will be described with reference to FIG. 3 and FIG. 4. That is, the details of step S309 in FIG. 13 and step S407 in FIG. 14 are described: a method of estimating prediction samples by extrapolation based on the direction of intra-screen prediction, using the reference samples acquired from the block memory 113 in FIG. 1 or the block memory 215 in FIG. 2.
  • in the present invention, a bilinear interpolation process is applied to the reference sample group used for intra prediction of a block in which a pseudo contour would otherwise occur. By making the change in the signal of the reference sample group gradual, the appearance of the step noise generated at the block boundaries within the reference sample group is suppressed.
  • a bilinear interpolation process applied to the reference sample group will be described with reference to FIG.
  • ref'[0] = ref[0] (1)
  • ref'[2N] = ref[2N] (3)
  • ref'[4N] = ref[4N] (5)
  • Expressions (2) and (4) may be modified as Expressions (2) ′ and (4) ′, respectively.
  • reference samples between BL and AL are generated by bilinear interpolation using the key reference samples BL and AL, and reference samples between AL and AR are generated by bilinear interpolation using the key reference samples AL and AR.
  • the level of the reference sample value after the interpolation processing adjacent to the target block changes gently. As a result, propagation of block noise to the prediction signal can be suppressed.
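A sketch of the interpolation described above, using the key reference samples BL = ref[0], AL = ref[2N] and AR = ref[4N]. Expressions (2) and (4) are not reproduced in this text, so the integer rounding used below is an assumption; only the endpoints fixed by Expressions (1), (3) and (5) are taken from the description.

```python
def bilinear_interpolate_refs(ref):
    """Replace the reference samples between the key reference samples
    BL = ref[0], AL = ref[2N] and AR = ref[4N] by bilinear interpolation.

    The exact integer weighting of Expressions (2) and (4) is not shown in
    this text; plain rounded linear interpolation is used as an assumption.
    """
    n4 = len(ref) - 1          # 4N
    n2 = n4 // 2               # 2N
    bl, al, ar = ref[0], ref[n2], ref[n4]
    out = list(ref)
    for i in range(1, n2):     # samples strictly between BL and AL
        out[i] = ((n2 - i) * bl + i * al + n2 // 2) // n2
    for i in range(n2 + 1, n4):  # samples strictly between AL and AR
        j = i - n2
        out[i] = ((n2 - j) * al + j * ar + n2 // 2) // n2
    return out
```

The key reference samples themselves are left untouched, matching Expressions (1), (3) and (5), so the level of the interpolated reference samples changes gently between them.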
  • the determination is performed using three key reference samples, two reference samples at the block boundary, and two threshold values.
  • the threshold is used for determination for determining whether or not to apply bilinear interpolation. Bilinear interpolation is applied to reference samples that meet the criteria.
  • Interpolate_Above and Interpolate_Left in the following two expressions are Boolean values. If the expression on the right side is satisfied, the value becomes true (1) and bilinear interpolation is applied; otherwise, smoothing is applied.
  • Interpolate_Left = abs(BL + AL - 2*ref[N]) < THRESHOLD_LEFT (6)
  • Interpolate_Above = abs(AL + AR - 2*ref[3N]) < THRESHOLD_ABOVE (7)
  • if the BL, AL and ref[N] values are arranged on a straight line, the value of BL + AL - 2*ref[N] is 0.
  • the value of AL + AR ⁇ 2 * ref [3N] is also 0.
  • the above two expressions compare, against the threshold values, the magnitude of the deviation of ref[N] from the straight line connecting BL and AL, and the magnitude of the deviation of ref[3N] from the straight line connecting AL and AR, respectively. If a calculated deviation is smaller than the corresponding threshold THRESHOLD_LEFT or THRESHOLD_ABOVE, the corresponding Boolean value (Interpolate_Left or Interpolate_Above) is true, and bilinear interpolation is applied to the reference samples. In Expressions (6) and (7), abs(x) calculates the absolute value of x.
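Expressions (6) and (7) can be written directly in code. The array layout ref[0..4N] with BL = ref[0], AL = ref[2N] and AR = ref[4N] follows the description above; the function name is an assumption.

```python
def interpolation_flags(ref, thr_left, thr_above):
    """Evaluate Expressions (6) and (7): measure how far ref[N] deviates
    from the straight line BL-AL, and ref[3N] from the line AL-AR."""
    n4 = len(ref) - 1          # 4N
    n = n4 // 4                # N
    bl, al, ar = ref[0], ref[2 * n], ref[4 * n]
    interpolate_left = abs(bl + al - 2 * ref[n]) < thr_left        # Expression (6)
    interpolate_above = abs(al + ar - 2 * ref[3 * n]) < thr_above  # Expression (7)
    return interpolate_left, interpolate_above
```

A perfectly linear reference sample group gives a deviation of 0 on both sides, so both flags become true for any positive threshold.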
  • the two threshold values may be fixed values set in advance, or may be encoded in frame units or slice units in which a plurality of blocks are combined and restored by a decoder. Alternatively, encoding may be performed in block units and restored by a decoder.
  • the data analyzer 202 decodes the two threshold values, outputs them to the prediction signal generator 208, and uses them to generate an intra-screen prediction signal described in detail in FIGS. 3 and 4 below.
  • FIG. 3 shows a flowchart of the process of estimating intra prediction samples by extrapolation (in the direction of intra-screen prediction).
  • if an adjacent block has not yet been reproduced for reasons such as the coding order and all of the 4N + 1 reference samples cannot be acquired, the nonexistent samples are generated by padding (copying the value of a nearby existing sample), so that 4N + 1 reference samples are prepared.
  • the details of the padding process are described in Non-Patent Document 1.
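The padding step can be sketched as follows. The actual procedure is specified in Non-Patent Document 1; the single-pass copy scheme below is only an illustrative assumption (missing samples are marked `None`, and at least one existing sample is assumed).

```python
def pad_reference_samples(ref):
    """Fill nonexistent reference samples (None) by copying the value of a
    nearby existing sample, so that all 4N + 1 samples become available."""
    out = list(ref)
    # find the first available sample and back-fill the leading gap
    first = next(i for i, v in enumerate(out) if v is not None)
    for i in range(first):
        out[i] = out[first]
    # forward-fill every remaining gap from its left neighbour
    for i in range(first + 1, len(out)):
        if out[i] is None:
            out[i] = out[i - 1]
    return out
```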
  • two Boolean values Interpolate_Above and Interpolate_Left are calculated based on equations (6) and (7).
  • the prediction signal generator determines whether the target block satisfies the criterion for applying bilinear interpolation. Specifically, it is determined whether the size of the target block is larger than a predetermined value M, and whether the calculated Interpolate_Above and Interpolate_Left are both true.
  • the reason the block size is used as a criterion is that the pseudo contour at issue usually occurs with large block sizes. Setting a large value for M has the effect of suppressing unnecessary changes to the reference samples.
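The complete application criterion described above, combining the block size test with the two Boolean values, might look like the following. The value M = 16 is an illustrative assumption; the text only says that M is predetermined.

```python
def use_bilinear_interpolation(block_size, interpolate_above, interpolate_left, m=16):
    """Bilinear interpolation is applied only when the target block is larger
    than a predetermined size M and both Interpolate_Above and
    Interpolate_Left are true; otherwise smoothing is used instead."""
    return block_size > m and interpolate_above and interpolate_left
```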
  • FIG. 4 illustrates FIG. 3 in more detail.
  • if an adjacent block has not yet been reproduced for reasons such as the coding order and all of the 4N + 1 reference samples cannot be acquired, the nonexistent samples are generated by padding (copying the value of a nearby existing sample), so that 4N + 1 reference samples are prepared. The details of the padding process are described in Non-Patent Document 1.
  • in step 680, two Boolean values, Interpolate_Above and Interpolate_Left, are calculated based on Expressions (6) and (7).
  • ref'[0] = ref[0] (10)
  • in step 640, it is determined whether or not the criterion for applying bilinear interpolation to the upper reference samples, shown in Expression (7), is satisfied.
  • ref'[2N] = ref[2N] (12)
  • ref'[4N] = ref[4N] (14)
  • prediction samples are estimated by extrapolation in the direction of intra-screen prediction. At this time, the reference samples at positions close to the projected prediction line, after interpolation processing or smoothing, are used.
  • a moving picture predictive coding program for causing a computer to function as the moving picture predictive coding apparatus 100 is stored in a recording medium and can be provided.
  • a moving picture predictive decoding program for causing a computer to function as the moving picture predictive decoding apparatus 200 can be provided by being stored in a recording medium.
  • examples of the recording medium include a USB memory, a flexible disk, a CD-ROM, a DVD, a ROM, or a semiconductor memory.
  • the moving picture predictive encoding program P100 includes a block division module P101, a prediction signal generation module P102, a residual signal generation module P103, a residual signal compression module P104, a residual signal restoration module P105, An encoding module P106 and a block storage module P107 are provided.
  • the moving picture predictive decoding program P200 includes a decoding module P201, a prediction signal generation module P202, a residual signal restoration module P203, and a block storage module P204.
  • the moving picture predictive encoding program P100 or the moving picture predictive decoding program P200 configured as described above is stored in the recording medium 10 shown in FIGS. 5 and 6 described later, and is executed by a computer described later.
  • FIG. 5 is a diagram showing a hardware configuration of the computer 30 for executing the program recorded on the recording medium
  • FIG. 6 is an overview diagram of the computer 30 for executing the program stored in the recording medium.
  • the computer 30 here includes a wide range of DVD players, set-top boxes, mobile phones, and the like that have a CPU and perform information processing and control by software.
  • the computer 30 includes a reading device 12 such as a flexible disk drive device, a CD-ROM drive device, a DVD drive device, a working memory (RAM) 14 in which an operating system is resident, and a recording medium 10.
  • the computer 30 can access the moving image predictive encoding program stored in the recording medium 10 from the reading device 12, and the moving image predictive encoding program performs the above operation. It becomes possible to operate as the moving image predictive encoding device 100.
  • the computer 30 can access the moving image predictive decoding program stored in the recording medium 10 from the reading device 12, and the moving image predictive decoding program performs the above operation. It becomes possible to operate as the moving picture predictive decoding apparatus 200.
  • (A) Criteria for applying bilinear interpolation are not limited to the methods described in the above embodiments. For example, the determination result of interpolation application may always be true, and steps 520, 620, 625, and 640 may be omitted. In this case, an interpolation process is always applied instead of the smoothing process using the 121 filter.
  • the intra prediction mode may be added to the criteria. For example, since the pseudo contour generated at a block boundary is reduced by the block noise removal process, the determination result for applying the interpolation process may always be false when the prediction direction of the extrapolation process is vertical or horizontal.
  • Block size may be removed from the criteria.
  • the relative relationship between the block size of the target block and the adjacent block may be used as a criterion.
  • when the block size of the block 260 adjacent to the left of the target block 210 is larger than that of the target block 210, no block noise occurs around ref[N]; in this case, the criterion for applying interpolation may be set to false regardless of the result of Expression (6) or (7).
  • when the adjacent blocks 230, 240, and 250 above the target block 210 are smaller than the target block 210, the application of interpolation is determined based on the result of Expression (6) or (7).
  • the relative relationship between the block size of the target block and the adjacent block may be used as a determination criterion together with the block size of the target block.
  • the thresholds (THRESHOLD_ABOVE and THRESHOLD_LEFT) in Expressions (6) and (7) may be determined individually for different block sizes, block shapes (differences between the vertical and horizontal sizes of a block) and different intra prediction modes, encoded, and restored by the decoder.
  • THRESHOLD_ABOVE and THRESHOLD_LEFT may be set to the same value, and only one of them may be encoded and restored by a decoder.
  • the threshold value restored by the data analyzer 202 in FIG. 2 is input to the prediction signal generator 208.
  • the prediction signal generator 208 calculates Interpolate_Above and Interpolate_Left values based on the input threshold (step 560 in FIG. 3 or step 680 in FIG. 4).
  • the determination result may be encoded by including it in the bitstream and restored by a decoder.
  • the prediction signal generator 103 of FIG. 1 obtains the Interpolate_Above and Interpolate_Left values (0 or 1) based on the size of the target block and the results of the equations (6) and (7).
  • the obtained values are encoded for each block, or for each block group in which a plurality of blocks are collected, as prediction information necessary for predictive encoding. That is, they are sent to the entropy encoder 111 via the line L112, encoded, and sent from the output terminal 112. Note that when obtaining the Interpolate_Above and Interpolate_Left values (0 or 1), the above-described relative relationship between the block sizes of the target block and the adjacent blocks, the size of the target block, and the intra prediction mode may be used.
  • the data analyzer 202 in FIG. 2 decodes the values of Interpolate_Above and Interpolate_Left in units of blocks or a group of blocks in which a plurality of blocks are collected, and inputs them to the prediction signal generator 208.
  • the two values may be individually encoded and decoded, or may be encoded and decoded as a set of two values.
  • in step S406, the values of Interpolate_Above and Interpolate_Left decoded together with the intra prediction mode are acquired.
  • the prediction signal generator (103 or 208; reference numbers hereinafter omitted) acquires the reference samples ref[x] (x = 0 to 4N) from the block memory (113 or 215; reference numbers hereinafter omitted).
  • if a neighboring block has not yet been reproduced for reasons such as the coding order and all of the 4N + 1 reference samples cannot be obtained, the nonexistent samples are generated by padding (copying the value of a nearby existing sample), so that 4N + 1 reference samples are prepared.
  • the details of the padding process are described in Non-Patent Document 1.
  • in step 790, the Interpolate_Above and Interpolate_Left values are acquired.
  • the prediction signal generator determines whether either of Interpolate_Above and Interpolate_Left is 1. If either value is 1, the process proceeds to step 725; otherwise, the process proceeds to step 760.
  • in step 760, intra smoothing by the 121 filter is applied to the reference sample group according to Expressions (8) and (9).
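A sketch of the 121-filter smoothing, assuming the same rounding form as Expression (16) shown later in this document. Expressions (8) and (9) themselves are not reproduced in this text, so the treatment of the two end samples (left unchanged here) is an assumption.

```python
def smooth_121(ref):
    """Intra smoothing by the 121 filter (cf. Expression (16)):
    ref'[x] = (ref[x-1] + 2*ref[x] + ref[x+1] + 2) // 4
    applied to the interior samples; the end samples are kept as-is."""
    out = list(ref)
    for x in range(1, len(ref) - 1):
        out[x] = (ref[x - 1] + 2 * ref[x] + ref[x + 1] + 2) // 4
    return out
```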
  • (B) Interpolation process: in the above description, bilinear interpolation is used for the interpolation process, but any other interpolation process may be used as long as the noise at block boundaries can be removed. For example, all reference samples may be replaced with the average value of the key reference samples.
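The average-value alternative mentioned above might look like this. Whether the key reference samples themselves are also replaced is not specified, so replacing the whole array is an assumption.

```python
def average_interpolate_refs(ref):
    """Alternative interpolation: replace every reference sample with the
    average of the key reference samples BL = ref[0], AL = ref[2N],
    AR = ref[4N] (integer average assumed)."""
    n4 = len(ref) - 1
    avg = (ref[0] + ref[n4 // 2] + ref[n4]) // 3
    return [avg] * len(ref)
```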
  • the interpolation processing method may be switched depending on the block size or the intra prediction type, and the applied interpolation processing method may be included in the bitstream for encoding and decoding.
  • (C) Process flow of intra prediction of reference sample
  • the process flow for estimating an intra prediction sample by extrapolation is not limited to the procedure in FIG.
  • steps 625, 630 and 635 may be reversed in order with steps 640, 650 and 655.
  • since the processing results of Expressions (1), (3), and (5) are the same as those of Expressions (10), (12), and (14), the corresponding processing may be performed together either immediately before step 625 (between steps 620 and 625) or immediately after steps 650 and 655 (between step 650 or 655 and step 670).
  • the determination criterion in step 620 may be only the block size.
  • Expression (12) is replaced with Expressions (15) and (16)
  • ref'[2N] = (ref[2N-1] + 2*ref[2N] + ref[2N+1] + 2) / 4 otherwise (16)
  • ref'[2N] indicates the value of the smoothed reference sample.
  • the target block is a square block.
  • the interpolation processing of the present invention applied to the reference samples can be similarly applied to a non-square block.
  • an example in which the block size of the target block 290 is N × 2N is shown in the figure. In this case, the number of reference samples ref[x] is 3N + 1.
  • the key reference samples are the three at the ends and the center of the reference sample group, but the number and positions are not limited to these. For example, the number and positions may be changed according to the size of the target block and the relative relationship between the target block and the adjacent blocks, and the number and positions of the key reference samples may be included in the bitstream for encoding and decoding.
  • with the default being the three key reference samples at the ends and the center of the reference sample group, instruction information indicating whether to use the default or other key reference samples may be encoded and decoded. In the data analyzer 202 of FIG. 2, the key reference samples are updated accordingly.
  • for example, ref[N + N/2] and ref[2N + N/2] in FIG. 7 may be added as key reference samples, or they may be used instead of ref[2N]. Also, ref[N/2] and ref[3N + N/2] may be used instead of ref[0] and ref[4N], and the 121 filter may be applied to ref[1] to ref[N/2 - 1] and ref[3N + N/2] to ref[4N - 1].
  • DESCRIPTION OF SYMBOLS: 100: moving image predictive encoding device; 101: input terminal; 102: block divider; 103: prediction signal generator; 104: frame memory; 105: subtractor; 106: transformer; 107: quantizer; 108: inverse quantizer; 109: inverse transformer; 110: adder; 111: entropy encoder; 112: output terminal; 113: block memory; 114: loop filter; 200: moving picture predictive decoding device; 201: input terminal; 202: data analyzer; 203: inverse quantizer; 204: inverse transformer; 205: adder; 206: output terminal; 207: frame memory; 208: prediction signal generator; 209: loop filter; 215: block memory.


Abstract

This decoding device is provided with a decoding means which decodes compressed data of a residue signal and direction information of intra-screen prediction of a target block, a prediction signal generation means which generates an intra-screen prediction signal from said direction information and an already-reproduced reference sample of an adjacent block, a residue signal restoring means which restores a reproduced residue signal of the target block, and a block storage means which restores and stores the pixel signal of the target block, wherein the prediction signal generation means acquires a reference sample from an already-reproduced block in the periphery of the stored target block, selects two or more key reference samples, interpolates between the key reference samples in order to generate an interpolated reference sample, and generates the intra-screen prediction by extrapolating the interpolated reference sample on the basis of the direction of the intra-screen prediction.

Description

Moving picture predictive coding apparatus, moving picture predictive coding method, moving picture predictive decoding apparatus, and moving picture predictive decoding method
The present invention relates to a moving picture predictive coding apparatus and method, and to a moving picture predictive decoding apparatus and method, and particularly relates to filter processing applied to the reference samples used for intra-screen predictive coding.
Compression encoding techniques are used to transmit and store moving image data efficiently. For moving images, the MPEG-1 to MPEG-4 and H.261 to H.264 systems are widely used.
In these encoding systems, an image to be encoded is divided into a plurality of blocks before the encoding and decoding processes are performed. In intra-screen predictive coding, a prediction signal is generated using an adjacent, already reproduced image signal (obtained by restoring compressed image data) within the same screen as the target block, and the difference signal obtained by subtracting the prediction signal from the signal of the target block is encoded. In inter-screen predictive coding, motion compensation is performed with reference to an adjacent, already reproduced image signal in a screen different from that of the target block to generate a prediction signal, and the difference signal obtained by subtracting the prediction signal from the signal of the target block is encoded.
In ordinary inter-screen prediction (inter prediction) coding, the prediction signal for a block to be encoded is generated by searching already reproduced screens for a signal similar to its pixel signal. A motion vector, which is the amount of spatial displacement between the target block and the region formed by the searched signal, and the residual signal between the pixel signal of the target block and the prediction signal are then encoded. This technique of searching for a motion vector for each block is called block matching.
FIG. 10 is a schematic diagram for explaining the block matching process. Here, the procedure for generating a prediction signal is described taking as an example the target block 702 on the screen 701 to be encoded. The reference screen 703 has already been reproduced, and the region 704 is the region spatially co-located with the target block 702. In block matching, a search range 705 surrounding the region 704 is set, and the region 706 whose sum of absolute differences from the pixel signal of the target block 702 is smallest is detected from the pixel signals in this search range. The signal of this region 706 becomes the prediction signal, and the amount of displacement from the region 704 to the region 706 is detected as the motion vector 707. A method is also often used in which a plurality of reference screens 703 are prepared, the reference screen on which block matching is performed is selected for each target block, and reference screen selection information is detected. In H.264, a plurality of prediction types with different block sizes for encoding motion vectors are provided in order to cope with local changes in image features. The prediction types of H.264 are described, for example, in Patent Document 2.
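The block matching procedure described above can be sketched as follows. The nested-list frame representation, the function name, and the exhaustive search order are illustrative assumptions; only the SAD criterion and the motion-vector definition come from the description.

```python
def block_matching(target, ref_frame, top, left, search):
    """Search, within a range around the co-located position (top, left),
    for the region of the reference frame whose sum of absolute differences
    (SAD) with the target block is smallest; the displacement to that
    region is the motion vector."""
    n = len(target)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > len(ref_frame) or x + n > len(ref_frame[0]):
                continue  # candidate region falls outside the frame
            sad = sum(abs(target[i][j] - ref_frame[y + i][x + j])
                      for i in range(n) for j in range(n))
            if best is None or sad < best[0]:
                best = (sad, (dy, dx))
    return best[1], best[0]   # motion vector, minimum SAD
```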
 H.264 intra-picture prediction (intra prediction) encoding adopts a method of generating a prediction signal by extrapolating, in a predetermined direction, already-reproduced pixel values adjacent to the block to be encoded. FIG. 11 is a schematic diagram illustrating the intra prediction method used in ITU H.264. In FIG. 11(A), target block 802 is the block to be encoded, and the pixel group (reference sample group) 801, consisting of pixels A to M adjacent to the boundary of target block 802, is the adjacent region: an image signal already reproduced in past processing.
 In this case, the prediction signal is generated by extending downward the pixel group (reference sample group) 801, the adjacent pixels directly above target block 802. In FIG. 11(B), the already-reproduced pixels (I to L) to the left of target block 804 are extended rightward to generate the prediction signal. A specific method for generating the prediction signal is described in Patent Document 1, for example. The difference between each of the nine prediction signals generated by the methods shown in FIGS. 11(A) to 11(I) and the pixel signal of the target block is computed, and the one with the smallest difference is taken as the optimum prediction signal. As described above, a prediction signal (intra prediction samples) can be generated by extrapolating pixels. The above is described in Patent Document 1 below.
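The vertical and horizontal extrapolation modes of FIGS. 11(A) and 11(B) can be sketched as follows. This is a simplified illustration, not the normative H.264 process; `top` and `left` stand in for the already-reproduced pixels A to D and I to L.

```python
import numpy as np

def intra_vertical(top, n):
    """Mode of FIG. 11(A): extend the row of pixels above the block downward,
    so every row of the NxN prediction equals the top reference row."""
    return np.tile(np.asarray(top)[:n], (n, 1))

def intra_horizontal(left, n):
    """Mode of FIG. 11(B): extend the column of pixels to the left rightward,
    so every column of the NxN prediction equals the left reference column."""
    return np.tile(np.asarray(left)[:n, None], (1, n))
```

An encoder would generate each candidate prediction this way, compare it against the pixel signal of the target block, and keep the mode with the smallest difference.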
 In the intra prediction described in Non-Patent Document 1, in addition to the nine types above, 25 further prediction signal generation methods with different reference sample extension directions (34 types in total) are provided.
 In Non-Patent Document 1, to suppress distortion occurring in the reference samples, a low-pass filter is applied to the reference samples before the prediction signal is generated. Specifically, a 121 filter with weighting coefficients 1:2:1 is applied to the reference samples before extrapolation prediction is performed. This process is called intra smoothing.
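The 1:2:1 intra smoothing filter can be sketched as follows. This is a minimal sketch: the two end samples are simply copied unchanged, whereas Non-Patent Document 1 defines the exact boundary handling.

```python
def smooth_121(ref):
    """Apply the 1:2:1 low-pass filter to a list of reference samples:
    out[i] = (ref[i-1] + 2*ref[i] + ref[i+1] + 2) >> 2,
    with the two end samples copied unchanged in this simplified sketch."""
    out = list(ref)
    for i in range(1, len(ref) - 1):
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2
    return out
```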
 The intra prediction of Non-Patent Document 1 is described with reference to FIGS. 7 and 8. FIG. 7 shows an example of block division. Five blocks 220, 230, 240, 250, and 260 adjacent to target block 210, whose block size is N×N samples, have already been reproduced. For intra prediction of target block 210, the reference samples denoted ref[x] (x = 0 to 4N) are used. FIG. 8 shows the processing flow of intra prediction. First, in step 310, the prediction signal generator that performs intra prediction acquires the reference samples ref[x] (x = 0 to 4N) from the memory storing the reproduced pixels. At this point, adjacent blocks may not yet have been reproduced, for reasons such as the coding order, so that not all 4N+1 reference samples ref[x] can be acquired. In that case, the missing samples are substituted by a padding process (copying nearby existing sample values) so that 4N+1 reference samples are prepared. Details of the padding process are described in Non-Patent Document 1. Next, in step 320, the prediction signal generator smooths the reference samples with the 121 filter. Finally, in step 330, the prediction signal generator estimates the signal within the target block by extrapolation (in the intra prediction direction) and generates the prediction signal (intra prediction samples).
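The padding of step 310 can be sketched as follows: unavailable samples are filled by copying the nearest available sample value. This is an illustration of the idea only; Non-Patent Document 1 specifies the exact scan order, and `None` here is a hypothetical marker for an unavailable sample.

```python
def pad_reference_samples(ref):
    """Replace None (unavailable) entries in ref[0..4N] with the nearest
    available sample value, using one forward and one backward pass."""
    out = list(ref)
    # forward pass: propagate the most recently seen available value
    last = None
    for i, v in enumerate(out):
        if v is None:
            out[i] = last
        else:
            last = v
    # backward pass: fill any leading gap from the first available value
    last = None
    for i in range(len(out) - 1, -1, -1):
        if out[i] is None:
            out[i] = last
        else:
            last = out[i]
    return out
```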
[Patent Document 1] US Pat. No. 6,765,964  [Patent Document 2] US Pat. No. 7,003,035
 FIG. 9 shows an example of a signal in a flat region with similar pixel values. When the original pixel values (original sample values) 410 are encoded with coarse quantization, the reproduced values (reproduced sample values) 420 within the block become a constant value, and step-like distortion arises at block boundary 430. This distortion is known as block noise, and is normally removed by applying a block-noise-removal filter to the reproduced image. However, since the reference samples used for intra prediction are taken from the signal before this block-noise-removal filtering, the block noise remaining in the reference samples at the block boundary propagates, through intra prediction, into the prediction signal (intra prediction samples) of the target block. Block noise that has propagated into the prediction signal cannot be removed by the block-noise-removal processing applied to the reproduced signal, and therefore propagates unchanged into the reference sample group of the next target block.
 In Non-Patent Document 1, 34 different extrapolation directions are provided for the intra prediction extrapolation method (intra prediction directions), so the block noise propagates while changing direction. As a result, multiple pseudo contours arise in the reproduced signal of flat regions of the image. In particular, when the noise propagates into a block of large size, the pseudo contour crosses the large block, and the visual impact is significant.
 The 121 filter described in the background art is effective at removing noise within the reference samples, but because of its small tap count it cannot remove step-like noise such as that shown in FIG. 9.
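This limitation can be illustrated numerically: applying a 1:2:1 filter to a step-shaped reference sample sequence only rounds the corner of the step, while bilinear interpolation between the two end samples replaces the step with a uniform ramp. The values and the simplified filters below are illustrative, not taken from the patent or the standard.

```python
def smooth_121(ref):
    # simplified 1:2:1 smoothing; end samples copied unchanged
    out = list(ref)
    for i in range(1, len(ref) - 1):
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2
    return out

def bilinear_between_ends(ref):
    # replace all interior samples by linear interpolation between the ends
    n = len(ref) - 1
    return [ref[0] + ((ref[-1] - ref[0]) * i + n // 2) // n for i in range(n + 1)]

step = [100, 100, 100, 100, 116, 116, 116, 116]
print(smooth_121(step))             # the 100/116 step largely survives
print(bilinear_between_ends(step))  # a gentle ramp from 100 to 116
```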
 Accordingly, an object of the present invention is to suppress artificial noise such as the above-described pseudo contours.
 A video predictive encoding device according to one aspect of the present invention comprises: block division means for dividing an input image into a plurality of blocks; prediction signal generation means for generating, using already-reproduced reference samples adjacent to a target block to be encoded among the blocks divided by the block division means, an intra prediction signal having a high correlation with the target block; residual signal generation means for generating a residual signal between the prediction signal of the target block and the pixel signal of the target block; residual signal compression means for compressing the residual signal generated by the residual signal generation means; residual signal restoration means for generating a reproduced residual signal by restoring the compressed data of the residual signal; encoding means for encoding the compressed data of the residual signal; and block storage means for restoring the pixel signal of the target block by adding the prediction signal and the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples; wherein the prediction signal generation means acquires reference samples from already-reproduced blocks around the target block stored in the block storage means, selects two or more key reference samples from the reference samples, interpolates between the key reference samples to generate interpolated reference samples, determines an intra prediction direction, and extrapolates the interpolated reference samples based on the determined intra prediction direction to generate the intra prediction signal; and the encoding means encodes information on the intra prediction direction, including it in the compressed data.
 In the above video predictive encoding device, the prediction signal generation means may adaptively switch between the interpolation process on the reference samples and a smoothing process on the reference samples, based on a comparison of the key reference samples with a predetermined threshold.
 Further, in the above video predictive encoding device, the key reference samples may be reference samples located at the ends of the reference sample group, and the interpolation process may be bilinear interpolation applied to the reference samples between the key reference samples.
 A video predictive decoding device according to one aspect of the present invention comprises: decoding means for decoding, from compressed data that has been divided into a plurality of blocks and encoded, information on the intra prediction direction used for intra prediction of a target block to be decoded and compressed data of a residual signal; prediction signal generation means for generating an intra prediction signal using the information on the intra prediction direction and already-reproduced reference samples adjacent to the target block; residual signal restoration means for restoring a reproduced residual signal of the target block from the compressed data of the residual signal; and block storage means for restoring the pixel signal of the target block by adding the prediction signal and the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples; wherein the prediction signal generation means acquires reference samples from already-reproduced blocks around the target block stored in the block storage means, selects two or more key reference samples from the reference samples, interpolates between the key reference samples to generate interpolated reference samples, and extrapolates the interpolated reference samples based on the intra prediction direction to generate the intra prediction signal.
 In the above video predictive decoding device, the prediction signal generation means may adaptively switch between the interpolation process on the reference samples and a smoothing process on the reference samples, based on a comparison of the key reference samples with a predetermined threshold.
 Further, in the above video predictive decoding device, the key reference samples may be reference samples located at the ends of the reference sample group, and the interpolation process may be bilinear interpolation applied to the reference samples between the key reference samples.
 The present invention can also be understood as an invention relating to a video predictive encoding method, an invention relating to a video predictive decoding method, an invention relating to a video predictive encoding program, and an invention relating to a video predictive decoding program, and can be described as follows.
 A video predictive encoding method according to one aspect of the present invention is a video predictive encoding method executed by a video predictive encoding device, comprising: a block division step of dividing an input image into a plurality of blocks; a prediction signal generation step of generating, using already-reproduced reference samples adjacent to a target block to be encoded among the blocks divided by the block division step, an intra prediction signal having a high correlation with the target block; a residual signal generation step of generating a residual signal between the prediction signal of the target block and the pixel signal of the target block; a residual signal compression step of compressing the residual signal generated by the residual signal generation step; a residual signal restoration step of generating a reproduced residual signal by restoring the compressed data of the residual signal; an encoding step of encoding the compressed data of the residual signal; and a block storage step of restoring the pixel signal of the target block by adding the prediction signal and the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples; wherein, in the prediction signal generation step, reference samples are acquired from stored already-reproduced blocks around the target block, two or more key reference samples are selected from the reference samples, interpolation is performed between the key reference samples to generate interpolated reference samples, an intra prediction direction is determined, and the interpolated reference samples are extrapolated based on the determined intra prediction direction to generate the intra prediction signal; and, in the encoding step, information on the intra prediction direction is included in the compressed data and encoded.
 A video predictive decoding method according to one aspect of the present invention is a video predictive decoding method executed by a video predictive decoding device, comprising: a decoding step of decoding, from compressed data that has been divided into a plurality of blocks and encoded, information on the intra prediction direction used for intra prediction of a target block to be decoded and compressed data of a residual signal; a prediction signal generation step of generating an intra prediction signal using the information on the intra prediction direction and already-reproduced reference samples adjacent to the target block; a residual signal restoration step of restoring a reproduced residual signal of the target block from the compressed data of the residual signal; and a block storage step of restoring the pixel signal of the target block by adding the prediction signal and the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples; wherein, in the prediction signal generation step, reference samples are acquired from stored already-reproduced blocks around the target block, two or more key reference samples are selected from the reference samples, interpolation is performed between the key reference samples to generate interpolated reference samples, and the interpolated reference samples are extrapolated based on the intra prediction direction to generate the intra prediction signal.
 A video predictive encoding program according to one aspect of the present invention causes a computer to function as: block division means for dividing an input image into a plurality of blocks; prediction signal generation means for generating, using already-reproduced reference samples adjacent to a target block to be encoded among the blocks divided by the block division means, an intra prediction signal having a high correlation with the target block; residual signal generation means for generating a residual signal between the prediction signal of the target block and the pixel signal of the target block; residual signal compression means for compressing the residual signal generated by the residual signal generation means; residual signal restoration means for generating a reproduced residual signal by restoring the compressed data of the residual signal; encoding means for encoding the compressed data of the residual signal; and block storage means for restoring the pixel signal of the target block by adding the prediction signal and the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples; wherein the prediction signal generation means acquires reference samples from already-reproduced blocks around the target block stored in the block storage means, selects two or more key reference samples from the reference samples, interpolates between the key reference samples to generate interpolated reference samples, determines an intra prediction direction, and extrapolates the interpolated reference samples based on the determined intra prediction direction to generate the intra prediction signal; and the encoding means encodes information on the intra prediction direction, including it in the compressed data.
 A video predictive decoding program according to one aspect of the present invention causes a computer to function as: decoding means for decoding, from compressed data that has been divided into a plurality of blocks and encoded, information on the intra prediction direction used for intra prediction of a target block to be decoded and compressed data of a residual signal; prediction signal generation means for generating an intra prediction signal using the information on the intra prediction direction and already-reproduced reference samples adjacent to the target block; residual signal restoration means for restoring a reproduced residual signal of the target block from the compressed data of the residual signal; and block storage means for restoring the pixel signal of the target block by adding the prediction signal and the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples; wherein the prediction signal generation means acquires reference samples from already-reproduced blocks around the target block stored in the block storage means, selects two or more key reference samples from the reference samples, interpolates between the key reference samples to generate interpolated reference samples, and extrapolates the interpolated reference samples based on the intra prediction direction to generate the intra prediction signal.
 According to the filtering of the reference samples by bilinear interpolation of the present invention, the signal within the reference sample group is varied gradually using the samples at both ends of the reference sample group, so that artificial noise such as pseudo contours can be suppressed.
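A sketch of the claimed reference sample processing: the key reference samples are taken as the two ends of the reference sample group, the samples between them are replaced by bilinear interpolation, and a comparison against a threshold switches adaptively between interpolation and ordinary 1:2:1 smoothing. The specific switching criterion shown (how far the actual samples deviate from the interpolated ramp) is a hypothetical illustration offered under that assumption, not the normative rule of the patent.

```python
def bilinear_interpolate(ref):
    """Replace ref[1..n-1] with values bilinearly interpolated between
    the key reference samples ref[0] and ref[-1] (the ends of the group)."""
    n = len(ref) - 1
    return [ref[0] + ((ref[-1] - ref[0]) * i + n // 2) // n for i in range(n + 1)]

def smooth_121(ref):
    # simplified 1:2:1 smoothing; end samples copied unchanged
    out = list(ref)
    for i in range(1, len(ref) - 1):
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2
    return out

def process_reference_samples(ref, thresh):
    """Adaptive switch (illustrative criterion): if the group is nearly
    linear between its key end samples, wash out step noise with bilinear
    interpolation; otherwise keep the ordinary 1:2:1 smoothing so that
    genuine texture in the reference samples is preserved."""
    interp = bilinear_interpolate(ref)
    if max(abs(a - b) for a, b in zip(ref, interp)) < thresh:
        return interp
    return smooth_121(ref)
```

On a nearly flat group the interpolation wins and the boundary step is replaced by a gentle ramp; on a textured group the ordinary smoothing is kept.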
FIG. 1 is a block diagram showing a video predictive encoding device according to an embodiment of the present invention.
FIG. 2 is a block diagram showing a video predictive decoding device according to an embodiment of the present invention.
FIG. 3 is a flowchart showing an intra prediction method according to an embodiment of the present invention.
FIG. 4 is a flowchart showing another example of the intra prediction method according to an embodiment of the present invention.
FIG. 5 is a diagram showing the hardware configuration of a computer for executing a program recorded on a recording medium.
FIG. 6 is an overview of a computer for executing a program stored in a recording medium.
FIG. 7 is a diagram illustrating an example of reference samples used for intra prediction.
FIG. 8 is a flowchart showing an intra prediction method in the prior art.
FIG. 9 is a diagram illustrating the relationship between the original signal and the reproduced signal in a flat region.
FIG. 10 is a schematic diagram for explaining motion estimation processing in inter prediction.
FIG. 11 is a schematic diagram for explaining intra prediction by extrapolation of reference samples.
FIG. 12 is a diagram illustrating another example of reference samples used for intra prediction.
FIG. 13 is a flowchart explaining the processing of the prediction signal generator 103 of FIG. 1.
FIG. 14 is a flowchart explaining the processing of the prediction signal generator 208 of FIG. 2.
FIG. 15 is a flowchart showing a second further example of the intra prediction method according to an embodiment of the present invention.
FIG. 16 is a block diagram showing the configuration of a video predictive encoding program.
FIG. 17 is a block diagram showing the configuration of a video predictive decoding program.
 Embodiments of the present invention are described below with reference to FIGS. 1 to 7 and FIGS. 13 to 17.
 FIG. 1 is a block diagram showing a video predictive encoding device 100 according to an embodiment of the present invention. As shown in FIG. 1, the video predictive encoding device 100 comprises an input terminal 101, a block divider 102, a prediction signal generator 103, a frame memory 104, a subtractor 105, a transformer 106, a quantizer 107, an inverse quantizer 108, an inverse transformer 109, an adder 110, an entropy encoder 111, an output terminal 112, a block memory 113, and a loop filter 114. The subtractor 105, the transformer 106, and the quantizer 107 correspond to the "encoding means" recited in the claims. The inverse quantizer 108, the inverse transformer 109, and the adder 110 correspond to the "decoding means" recited in the claims. The frame memory 104 corresponds to the "image storage means", and the block memory 113 to the "block storage means".
 The operation of the video predictive encoding device 100 configured as above is described below. A video signal consisting of a plurality of images is input to the input terminal 101. The image to be encoded is divided into a plurality of regions by the block divider 102. In the embodiment according to the present invention, the block size is not limited, as illustrated in FIG. 7; a plurality of block sizes and shapes may coexist in one picture. The coding order of the blocks is described in Non-Patent Document 1, for example. Next, a prediction signal is generated for the region to be encoded (hereinafter called the target block). In the embodiment according to the present invention, two types of prediction methods are used: inter prediction and intra prediction. The prediction signal generation processing in the prediction signal generator 103 is described later with reference to FIG. 13.
 The subtractor 105 subtracts the prediction signal (via line L103) from the signal of the target block (via line L102) to generate a residual signal. This residual signal is discrete-cosine-transformed by the transformer 106, and each transform coefficient is quantized by the quantizer 107. The entropy encoder 111 encodes the quantized transform coefficients and sends them out from the output terminal 112, together with the prediction information needed to generate the prediction signal.
 To perform intra prediction or inter prediction for subsequent target blocks, the compressed signal of the target block is inverse-processed and restored. That is, the quantized transform coefficients are inverse-quantized by the inverse quantizer 108 and then inverse-discrete-cosine-transformed by the inverse transformer 109 to restore the residual signal. The adder 110 adds the restored residual signal to the prediction signal sent via line L103, reproducing the signal of the target block. The reproduced block signal is stored in the block memory 113 for intra prediction. The reproduced image formed by the reproduced signals is stored in the frame memory 104 after the loop filter 114 removes block noise arising within the reproduced image.
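The local decoding loop described above (subtract the prediction, quantize, inverse-quantize, add the prediction back) can be sketched as follows. A plain scalar quantizer stands in for the DCT-plus-quantization pair of the transformer 106 and quantizer 107; the parameters are illustrative only.

```python
import numpy as np

def encode_block(block, pred, qstep):
    """Mirror the subtractor 105 / quantizer 107 / inverse quantizer 108 /
    adder 110 path with a toy scalar quantizer in place of DCT + quantization.
    Returns the quantized values and the locally reconstructed block that
    would be stored for predicting subsequent blocks."""
    residual = block.astype(np.int64) - pred.astype(np.int64)  # subtractor 105
    q = np.round(residual / qstep).astype(np.int64)            # quantizer 107 (toy)
    recon_residual = q * qstep                                 # inverse quantizer 108
    recon = pred + recon_residual                              # adder 110
    return q, recon
```

Note that the reconstruction uses the quantized residual, not the original one, so the encoder's stored reference samples match what a decoder will see.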
 The prediction signal processing flow in the prediction signal generator 103 is described with reference to FIG. 13. First, in step S302, the prediction information needed for inter prediction is generated. Specifically, using reproduced images that were encoded and then restored in the past as reference images, the motion vector and reference picture that give the prediction signal with the smallest error with respect to the target block are searched for. Here, the target block is input via line L102 and the reference images via L104. A plurality of images encoded and restored in the past are used as reference images. The details are the same as in the conventional art, namely H.264 or the method described in Non-Patent Document 1.
 In step S303, the prediction information needed for intra prediction is generated. As shown in FIG. 7, prediction signals are generated for a plurality of intra prediction directions using already-reproduced pixel values spatially adjacent to the target block. Then, the prediction direction (intra prediction mode) that gives the prediction signal with the smallest error with respect to the target block is determined. Here, the prediction signal generator 103 acquires, as reference samples, already-reproduced pixel signals within the same picture from the block memory 113 via line L113, and generates the intra prediction signal by extrapolating these signals.
 Next, in step S304, the prediction method to be applied to the target block is selected from inter-picture prediction and intra-picture prediction, for example by selecting the method that gives the prediction with the smaller error for the target block. Alternatively, both prediction methods may actually be carried through the encoding process, and the one with the smaller evaluation value, calculated from the generated code amount and the sum of absolute values of the encoding error image, may be selected. The selection information for the chosen prediction method is sent, as information necessary for generating the prediction signal, to the entropy encoder 111 via line L112, encoded, and output from the output terminal 112 (step S305).
 When the prediction method selected in step S306 is inter-picture prediction, a prediction signal is generated in step S307 based on the motion information (motion vector and reference picture information), and the generated inter-picture prediction signal is output to the subtractor 105 via line L103. In step S308, the motion information is sent, as information necessary for generating the prediction signal, to the entropy encoder 111 via line L112, encoded, and output from the output terminal 112.
 When the prediction method selected in step S306 is intra-picture prediction, a prediction signal is generated in step S309 based on the intra prediction mode, and the generated intra-picture prediction signal is output to the subtractor 105 via line L103. In step S310, the intra prediction mode is sent, as information necessary for generating the prediction signal, to the entropy encoder 111 via line L112, encoded, and output from the output terminal 112.
 The encoding method used in the entropy encoder 111 may be arithmetic coding or variable-length coding.
 FIG. 2 is a block diagram of a video predictive decoding device 200 according to an embodiment of the present invention. As shown in FIG. 2, the video predictive decoding device 200 includes an input terminal 201, a data analyzer 202, an inverse quantizer 203, an inverse transformer 204, an adder 205, a prediction signal generator 208, a frame memory 207, an output terminal 206, a loop filter 209, and a block memory 215. The inverse quantizer 203 and the inverse transformer 204 correspond to the "decoding means" recited in the claims; decoding means other than these may be used, and the inverse transformer 204 may be omitted. The frame memory 207 corresponds to the "image storage means", and the block memory 215 to the "block storage means".
 The operation of the video predictive decoding device 200 configured as described above is now described. Compressed data produced by the encoding method described above is input from the input terminal 201. This compressed data contains a residual signal obtained by predicting and encoding a target block resulting from dividing an image into a plurality of blocks, together with information necessary for generating the prediction signal. As illustrated in FIG. 7, the block size is not limited; multiple block sizes and shapes may coexist within one picture. The decoding order of the blocks is described, for example, in Non-Patent Document 1. The information necessary for generating the prediction signal includes prediction method selection information and either motion information (for inter-picture prediction) or an intra prediction mode (for intra-picture prediction).
 The data analyzer 202 decodes, from the compressed data, the residual signal of the target block, the information necessary for generating the prediction signal, and the quantization parameter. The decoded residual signal of the target block is inversely quantized by the inverse quantizer 203 based on the quantization parameter (via line L202), and the inversely quantized residual signal is then subjected to an inverse discrete cosine transform by the inverse transformer 204, whereby the residual signal is restored. Next, the information necessary for generating the prediction signal is sent to the prediction signal generator 208 via line L206. The prediction signal generator 208 generates the prediction signal for the target block based on this information; the prediction signal generation process in the prediction signal generator 208 is described later with reference to FIG. 14. The generated prediction signal is sent to the adder 205 via line L208 and added to the restored residual signal to reproduce the target block signal, which is output to the loop filter 209 via line L205 and, at the same time, stored in the block memory 215 for use in intra-picture prediction of subsequent blocks. The loop filter 209 removes block noise from the reproduced signal input via line L205, and the reproduced image from which the block noise has been removed is stored in the frame memory 207 as a reproduced image to be used for decoding and reproducing subsequent images.
 The prediction signal processing flow in the prediction signal generator 208 is described with reference to FIG. 14. First, in step S402, the prediction method decoded by the data analyzer 202 is acquired.
 When the decoded prediction method is inter-picture prediction (step S403), the motion information (motion vector and reference picture information) decoded by the data analyzer 202 is acquired (step S404), the frame memory 207 is accessed based on the motion information, a reference signal is obtained from among a plurality of reference pictures, and a prediction signal is generated (step S405).
 When the decoded prediction method is intra-picture prediction (step S403), the intra prediction mode decoded by the data analyzer 202 is acquired (step S406), the block memory 215 is accessed, previously reproduced pixel signals adjacent to the target block are obtained as reference samples, and a prediction signal is generated based on the intra prediction mode (step S407). The generated prediction signal is output to the adder 205 via line L208.
 The decoding method used in the data analyzer 202 may be arithmetic decoding or variable-length decoding.
 Next, the intra-picture prediction method according to the embodiment of the present invention is described with reference to FIGS. 3 and 7. That is, the details of step S309 in FIG. 13 and step S407 in FIG. 14 are described: a method of estimating the intra prediction samples of the target block by extrapolation based on the intra prediction mode, using reference samples obtained from the block memory 113 in FIG. 1 or the block memory 215 in FIG. 2.
 In the present invention, in order to suppress noise such as the pseudo contours described under the problem to be solved by the invention, bilinear interpolation is applied to the reference sample group used for intra-picture prediction for blocks that cause pseudo contours. By making the signal of the reference sample group change gradually, the appearance of step-like noise at the block boundaries within the reference sample group is suppressed.
 The bilinear interpolation applied to the reference sample group is described with reference to FIG. 7. When the block size of the target block 210 is N×N samples, a group 270 of 4N+1 reference samples (ref[x], x = 0 to 4N) is formed from the previously reproduced signals belonging to the five surrounding previously reproduced blocks 220, 230, 240, 250, and 260. In this embodiment, three of these are defined as key reference samples for the bilinear interpolation: the bottom-left reference sample BL = ref[0] and the above-right reference sample AR = ref[4N] at the two ends of the reference sample group 270, and the above-left reference sample AL = ref[2N] at the center of the reference sample group 270, located at the upper left of the target block. The 4N+1 reference samples are then interpolated as follows.
ref'[0] = ref[0]   (1)
ref'[i] = BL + (i*(AL-BL)+N)/2N   (i = 1 to 2N-1)   (2)
ref'[2N] = ref[2N]   (3)
ref'[2N+i] = AL + (i*(AR-AL)+N)/2N   (i = 1 to 2N-1)   (4)
ref'[4N] = ref[4N]   (5)
Here ref'[x] (x = 0 to 4N) denotes the values of the interpolated reference samples. Equations (2) and (4) may also be rewritten as equations (2)' and (4)', respectively.
ref'[i] = ((2N-i)*BL + i*AL + N)/2N   (i = 1 to 2N-1)   (2)'
ref'[2N+i] = ((2N-i)*AL + i*AR + N)/2N   (i = 1 to 2N-1)   (4)'
 In this way, the reference samples between BL and AL are generated by bilinear interpolation from the key reference samples BL and AL, and the reference samples between AL and AR are generated by bilinear interpolation from the key reference samples AL and AR, so that the level of the interpolated reference sample values adjacent to the target block changes gradually. As a result, propagation of block noise into the prediction signal can be suppressed.
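The interpolation of equations (1) to (5) can be sketched in Python as follows. This is an illustrative sketch, not part of the specification: the function name and list-based sample array are assumptions, integer sample values are assumed, and Python's floor division stands in for the rounded integer division of equations (2)' and (4)' (whose numerators are non-negative for non-negative samples).

```python
def bilinear_interpolate_refs(ref):
    # ref holds the 4N+1 reference samples ref[0..4N];
    # key samples: BL = ref[0], AL = ref[2N], AR = ref[4N]
    n4 = len(ref) - 1          # 4N
    assert n4 > 0 and n4 % 4 == 0
    n, n2 = n4 // 4, n4 // 2   # N and 2N
    bl, al, ar = ref[0], ref[n2], ref[n4]
    out = list(ref)            # eqs (1), (3), (5): key samples unchanged
    for i in range(1, n2):
        out[i] = ((n2 - i) * bl + i * al + n) // n2       # eq. (2)'
        out[n2 + i] = ((n2 - i) * al + i * ar + n) // n2  # eq. (4)'
    return out
```

For N = 2 and key samples 0, 8, 16, for example, the interpolated group is the linear ramp 0, 2, 4, ..., 16 regardless of the original intermediate sample values, which is exactly the gradual level change described above.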
 Next, the criterion for determining the reference samples to which bilinear interpolation is applied is described with reference to FIG. 7. In this embodiment, the determination uses the three key reference samples, two reference samples at block boundaries, and two thresholds. THRESHOLD_ABOVE and THRESHOLD_LEFT are the thresholds used to decide whether bilinear interpolation is applied to the reference samples ref[x] (x = 2N+1 to 4N-1) above the target block and the reference samples ref[x] (x = 1 to 2N-1) to its left, respectively. Bilinear interpolation is applied to the reference samples that satisfy the criterion.
 In this embodiment, the following criteria are used. Interpolate_Above and Interpolate_Left in the two expressions below are Boolean values: when the expression on the right-hand side is satisfied, the value is true (1) and bilinear interpolation is applied; otherwise the value is false (0) and the conventional intra smoothing by the 121 filter is applied.
Interpolate_Left = abs(BL+AL-2*ref[N]) < THRESHOLD_LEFT   (6)
Interpolate_Above = abs(AL+AR-2*ref[3N]) < THRESHOLD_ABOVE   (7)
When the values of BL, AL, and ref[N] lie on a straight line, the value of BL+AL-2*ref[N] is 0. Similarly, when the values of AL, AR, and ref[3N] lie on a straight line, the value of AL+AR-2*ref[3N] is also 0. In other words, the two expressions compare the magnitude of the deviation of ref[N] from the straight line connecting BL and AL, and the magnitude of the deviation of ref[3N] from the straight line connecting AL and AR, with the respective thresholds. If a calculated deviation is smaller than the corresponding threshold THRESHOLD_LEFT or THRESHOLD_ABOVE, the Boolean value (Interpolate_Left or Interpolate_Above) is true, and bilinear interpolation is applied to the reference samples. In equations (6) and (7), abs(x) computes the absolute value of x.
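The decision of equations (6) and (7) can be sketched as follows; as before, this is an illustrative sketch, and the function name and the way the thresholds are passed are assumptions rather than part of the specification.

```python
def interpolation_decisions(ref, threshold_left, threshold_above):
    # ref holds the 4N+1 reference samples; BL = ref[0], AL = ref[2N], AR = ref[4N]
    n4 = len(ref) - 1
    n = n4 // 4
    bl, al, ar = ref[0], ref[2 * n], ref[4 * n]
    # deviations of ref[N] and ref[3N] from the BL-AL and AL-AR straight lines
    interpolate_left = abs(bl + al - 2 * ref[n]) < threshold_left       # eq. (6)
    interpolate_above = abs(al + ar - 2 * ref[3 * n]) < threshold_above  # eq. (7)
    return interpolate_left, interpolate_above
```

For a reference group that is already a straight ramp, both deviations are 0 and both Booleans are true; a bump at ref[N] or ref[3N] larger than the corresponding threshold makes that side's Boolean false.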
 The two threshold values (THRESHOLD_ABOVE and THRESHOLD_LEFT) may be fixed values set in advance, or may be encoded on a frame basis or on a slice basis (a slice being a group of blocks) and restored by the decoder. They may also be encoded on a block basis and restored by the decoder. In FIG. 2, the data analyzer 202 decodes the two thresholds and outputs them to the prediction signal generator 208, where they are used for generating the intra-picture prediction signal described in detail with reference to FIGS. 3 and 4 below.
 FIG. 3 shows a flowchart of the process of estimating the intra prediction samples by extrapolation (in the intra prediction direction). First, in step 510, the prediction signal generator (103 or 208; reference numerals are hereinafter omitted) obtains the reference samples ref[x] (x = 0 to 4N), shown as the pixel group 270 in FIG. 7, from the block memory (113 or 215; reference numerals are hereinafter omitted). If, for reasons such as the encoding order, a neighboring block has not yet been reproduced and all 4N+1 reference samples cannot be obtained, the missing samples are generated by a padding process (copying nearby existing sample values) so that 4N+1 reference samples are prepared. Details of the padding process are described in Non-Patent Document 1. Next, in step 560, the two Boolean values Interpolate_Above and Interpolate_Left are calculated based on equations (6) and (7).
 Next, in step 520, the prediction signal generator determines whether the target block satisfies the criteria for applying bilinear interpolation. Specifically, it determines whether the size of the target block is larger than a predetermined value M, and whether the calculated Interpolate_Above and Interpolate_Left are both true. The block size is used as a criterion because the pseudo contours addressed here tend to occur at large block sizes. Setting M to a large value has the effect of suppressing unnecessary modification of the reference samples.
 When these two criteria are satisfied (block size >= M, Interpolate_Above == true, and Interpolate_Left == true), the process proceeds to step 530; otherwise it proceeds to step 540. In step 530, the bilinear interpolation of equations (1) to (5) is applied to the reference samples ref[x] (x = 0 to 4N) to generate the interpolated reference samples ref'[x] (x = 0 to 4N). In step 540, intra smoothing by the 121 filter is applied to the reference samples ref[x] (x = 0 to 4N) according to equations (8) and (9).
ref'[i] = ref[i]   (i = 0 and 4N)   (8)
ref'[i] = (ref[i-1]+2*ref[i]+ref[i+1]+2)/4   (i = 1 to 4N-1)   (9)
Here ref'[x] (x = 0 to 4N) denotes the values of the smoothed reference samples.
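A minimal sketch of the 121 filter of equations (8) and (9), under the same assumptions as above (illustrative function name, integer samples, floor division for the rounded divide by 4):

```python
def smooth_refs_121(ref):
    # 121 intra smoothing over ref[0..4N]; the end samples are copied
    # unchanged per eq. (8), interior samples filtered per eq. (9)
    out = list(ref)
    for i in range(1, len(ref) - 1):
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) // 4
    return out
```

Unlike the bilinear interpolation, this filter only smooths each sample against its two neighbors, so an isolated step in the reference group is attenuated but not removed.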
 Finally, in step 550, the intra prediction samples of the target block are estimated by extrapolation (in the intra prediction direction) using the already determined intra prediction mode and the interpolated or smoothed reference samples ref'[x] (x = 0 to 4N).
 FIG. 4 describes FIG. 3 in more detail, showing a flowchart of the process of estimating the intra prediction samples by extrapolation (in the intra prediction direction) when the switching between bilinear interpolation and the 121 filter is carried out independently for the left reference samples (ref[x], x = 0 to 2N) and the above reference samples (ref[x], x = 2N to 4N). First, in step 610, the prediction signal generator (103 or 208; reference numerals are hereinafter omitted) obtains the reference samples ref[x] (x = 0 to 4N), shown as the pixel group 270 in FIG. 7, from the block memory (113 or 215; reference numerals are hereinafter omitted). If, for reasons such as the encoding order, a neighboring block has not yet been reproduced and all 4N+1 reference samples cannot be obtained, the missing samples are generated by a padding process (copying nearby existing sample values) so that 4N+1 reference samples are prepared. Details of the padding process are described in Non-Patent Document 1.
 Next, in step 680, the two Boolean values Interpolate_Above and Interpolate_Left are calculated based on equations (6) and (7).
 Next, in step 620, the prediction signal generator determines whether the target block satisfies the criteria for applying bilinear interpolation. Specifically, it determines whether the size of the target block is larger than the predetermined value M, and whether at least one of the calculated Interpolate_Above and Interpolate_Left is true. When these two criteria are satisfied (block size >= M, and Interpolate_Above == true or Interpolate_Left == true), the process proceeds to step 625; otherwise it proceeds to step 660. In step 660, intra smoothing by the 121 filter is applied to the reference sample group according to equations (8) and (9).
 In step 625, it is determined whether the criterion of equation (6) for applying bilinear interpolation to the left reference samples is satisfied. That is, when Interpolate_Left is true (1), the process proceeds to step 630, and the bilinear interpolation of equations (1) and (2) is applied to the reference samples ref[x] (x = 0 to 2N) to generate the interpolated reference samples ref'[x] (x = 0 to 2N). When the criterion of equation (6) is not satisfied, the process proceeds to step 635, and intra smoothing by the 121 filter is applied to the left reference samples ref[x] (x = 0 to 2N) according to equations (10) and (11).
ref'[0] = ref[0]   (10)
ref'[i] = (ref[i-1]+2*ref[i]+ref[i+1]+2)/4   (i = 1 to 2N-1)   (11)
Here ref'[x] (x = 0 to 2N) denotes the values of the smoothed reference samples.
 Next, in step 640, it is determined whether the criterion of equation (7) for applying bilinear interpolation to the above reference samples is satisfied. That is, when Interpolate_Above is true (1), the process proceeds to step 650, and bilinear interpolation is applied to the above reference samples ref[i] (i = 2N+1 to 4N) based on equations (3), (4), and (5). When the criterion of equation (7) is not satisfied, the process proceeds to step 655, and intra smoothing by the 121 filter is applied to the above reference samples ref[x] (x = 2N+1 to 4N) based on equations (12), (13), and (14).
ref'[2N] = ref[2N]   (12)
ref'[i] = (ref[i-1]+2*ref[i]+ref[i+1]+2)/4   (i = 2N+1 to 4N-1)   (13)
ref'[4N] = ref[4N]   (14)
Here ref'[x] (x = 2N+1 to 4N) denotes the values of the smoothed reference samples.
 Finally, in step 670, the intra prediction samples of the target block are estimated by extrapolation (in the intra prediction direction) using the already determined intra prediction mode and the interpolated or smoothed reference samples ref'[x] (x = 0 to 4N). In the extrapolation, a line is projected in the intra prediction direction from the position of each sample in the target block to be extrapolated toward the interpolated or smoothed reference samples, and the interpolated or smoothed reference samples at positions close to the projected line are used.
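The per-side switching of steps 620 through 660 might be sketched as below. This is again an illustrative sketch under the same assumptions (integer samples, floor division); the function and parameter names, including m for the block-size threshold, are not from the specification.

```python
def prepare_refs_per_side(ref, block_size, m, thr_left, thr_above):
    # ref holds the 4N+1 reference samples; BL = ref[0], AL = ref[2N], AR = ref[4N]
    n4 = len(ref) - 1
    n, n2 = n4 // 4, n4 // 2
    bl, al, ar = ref[0], ref[n2], ref[n4]
    interp_left = abs(bl + al - 2 * ref[n]) < thr_left        # eq. (6)
    interp_above = abs(al + ar - 2 * ref[3 * n]) < thr_above  # eq. (7)
    out = list(ref)
    if block_size >= m and (interp_left or interp_above):      # step 620
        if interp_left:       # steps 625/630: interpolate ref[1..2N-1]
            for i in range(1, n2):
                out[i] = ((n2 - i) * bl + i * al + n) // n2
        else:                 # step 635: 121 filter, eqs (10)-(11)
            for i in range(1, n2):
                out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) // 4
        if interp_above:      # steps 640/650: interpolate ref[2N+1..4N-1]
            for i in range(1, n2):
                out[n2 + i] = ((n2 - i) * al + i * ar + n) // n2
        else:                 # step 655: 121 filter, eqs (12)-(14)
            for i in range(n2 + 1, n4):
                out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) // 4
    else:                     # step 660: 121 filter on the whole group
        for i in range(1, n4):
            out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) // 4
    return out
```

Note that the key samples ref[0], ref[2N], and ref[4N] are left unchanged on the interpolation and per-side smoothing paths, matching equations (1), (3), (5), (10), (12), and (14).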
 A video predictive encoding program for causing a computer to function as the video predictive encoding device 100 described above can be provided stored on a recording medium. Similarly, a video predictive decoding program for causing a computer to function as the video predictive decoding device 200 described above can be provided stored on a recording medium. Examples of the recording medium include recording media such as a USB memory, a flexible disk, a CD-ROM, a DVD, or a ROM, or a semiconductor memory.
 For example, as shown in FIG. 16, a video predictive encoding program P100 includes a block division module P101, a prediction signal generation module P102, a residual signal generation module P103, a residual signal compression module P104, a residual signal restoration module P105, an encoding module P106, and a block storage module P107.
 Also, for example, as shown in FIG. 17, a video predictive decoding program P200 includes a decoding module P201, a prediction signal generation module P202, a residual signal restoration module P203, and a block storage module P204.
 The video predictive encoding program P100 or the video predictive decoding program P200 configured as described above is stored on the recording medium 10 shown in FIGS. 5 and 6 described below, and is executed by the computer described below.
 FIG. 5 shows the hardware configuration of a computer 30 for executing a program recorded on a recording medium, and FIG. 6 is an overview of the computer 30 for executing a program stored on a recording medium. The computer 30 here broadly includes devices that have a CPU and perform information processing and control by software, such as DVD players, set-top boxes, and mobile phones.
 As shown in FIG. 6, the computer 30 includes a reading device 12 such as a flexible disk drive, a CD-ROM drive, or a DVD drive; a working memory (RAM) 14 in which an operating system is resident; a memory 16 that stores the program stored on the recording medium 10; a display device 18 such as a display; a mouse 20 and a keyboard 22 as input devices; a communication device 24 for transmitting and receiving data and the like; and a CPU 26 that controls execution of the program. When the recording medium 10 is inserted into the reading device 12, the computer 30 can access the video predictive encoding program stored on the recording medium 10 through the reading device 12, and by means of that program can operate as the video predictive encoding device 100 described above. Similarly, when the recording medium 10 is inserted into the reading device 12, the computer 30 can access the video predictive decoding program stored on the recording medium 10 through the reading device 12, and by means of that program can operate as the video predictive decoding device 200 described above.
 In the present invention, the following modifications are further possible.
 (A) Criteria for applying bilinear interpolation
 The criteria for applying bilinear interpolation are not limited to the methods described in the above embodiment. For example, the result of the interpolation-application determination may always be set to true, and steps 520, 620, 625, and 640 may be omitted. In this case, the interpolation process is always applied instead of the smoothing process by the 121 filter.
 The intra prediction mode may be added to the criteria. For example, since pseudo contours occurring at block boundaries are reduced by the block noise removal process, the result of the interpolation-application determination may always be set to false when the prediction direction of the extrapolation is vertical or horizontal.
 The block size may be removed from the criteria. Also, instead of the block size of the target block, the relative relationship between the block sizes of the target block and a neighboring block may be used as a criterion. In the example of FIG. 7, the block 260 adjacent to the left of the target block 210 has a larger block size than the target block 210; in this case, no block noise occurs around ref[N]. Thus, when the block size of the neighboring block is larger than that of the target block, the interpolation-application determination may be set to false regardless of the result of equation (6) or (7). On the other hand, the blocks 230, 240, and 250 adjacent above the target block 210 are smaller than the target block 210; in this case, block noise may occur around ref[3N] or ref[2N+N/2], so the application of interpolation is determined from the result of equation (6) or (7). This relative relationship between the block sizes of the target block and the neighboring blocks may also be used as a criterion together with the block size of the target block.
 The thresholds THRESHOLD_ABOVE and THRESHOLD_LEFT in Expressions (6) and (7) may be determined individually for different block sizes, block shapes (differences between the vertical and horizontal block sizes), and different intra prediction modes, encoded, and restored by the decoder. Alternatively, THRESHOLD_ABOVE and THRESHOLD_LEFT may be set to the same value, and only one of them encoded and restored by the decoder. In the decoder, the threshold restored by the data analyzer 202 in FIG. 2 is input to the prediction signal generator 208, which calculates the values of Interpolate_Above and Interpolate_Left based on the input threshold (step 560 in FIG. 3 or step 680 in FIG. 4).
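As a rough illustration of this kind of flatness test, a decision in the spirit of Expressions (6) and (7) can be sketched as follows. The exact expressions are not reproduced in this excerpt, so the absolute-difference form and the function and parameter names below are assumptions for illustration only:

```python
def decide_interpolation(ref, N, threshold_left, threshold_above):
    """Decide Interpolate_Left / Interpolate_Above from the key reference samples.

    ref holds the 4N+1 reference samples ref[0]..ref[4N]; the key reference
    samples are ref[0], ref[2N], and ref[4N].  The test checks whether the
    reference samples vary almost linearly between the key samples
    (assumed form, in the spirit of Expressions (6) and (7)).
    """
    # Left side: compare the midpoint sample ref[N] with the straight line
    # between the key samples ref[0] and ref[2N].
    interpolate_left = abs(ref[0] + ref[2 * N] - 2 * ref[N]) < threshold_left
    # Above side: the same test on ref[3N] against ref[2N] and ref[4N].
    interpolate_above = abs(ref[2 * N] + ref[4 * N] - 2 * ref[3 * N]) < threshold_above
    return interpolate_left, interpolate_above
```

On a perfectly linear ramp both flags come out true; a block-noise bump near ref[N] turns only the left-side flag false.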
 Also, instead of providing decision criteria in steps 520, 620, 625, and 640, the decision results may be included in the bitstream, encoded, and restored by the decoder. In this case, the prediction signal generator 103 of FIG. 1 obtains the two values of Interpolate_Above and Interpolate_Left (0 or 1) based on the size of the target block and the results of Expressions (6) and (7), and encodes them as prediction information necessary for prediction, either per block or per block group in which a plurality of blocks are grouped. That is, the values are sent via line L112 to the entropy encoder 111, encoded, and sent out from the output terminal 112. When obtaining the values of Interpolate_Above and Interpolate_Left (0 or 1), the relative relationship between the block sizes of the target block and its adjacent blocks described above, the size of the target block, and the intra prediction mode may be used.
 The data analyzer 202 of FIG. 2 decodes the values of Interpolate_Above and Interpolate_Left per block or per block group in which a plurality of blocks are grouped, and inputs them to the prediction signal generator 208. The two values may be encoded and decoded individually, or encoded and decoded as a pair.
 The processing of the intra prediction method in the prediction signal generator 208 of FIG. 2 is described with reference to FIG. 15. In this case, FIG. 15 replaces FIG. 4. In FIG. 14, in step S406, the values of Interpolate_Above and Interpolate_Left decoded together with the intra prediction mode are acquired. First, in step 710, the prediction signal generator (103 or 208; the numbers are omitted below) acquires from the block memory (113 or 215; the numbers are omitted below) the reference samples ref[x] (x = 0 to 4N) shown as pixel group 270 in FIG. 7. If, for reasons such as the coding order, an adjacent block has not yet been reproduced and not all 4N+1 reference samples can be acquired, the missing samples are generated by padding (copying nearby existing sample values) so that 4N+1 reference samples are prepared. The details of the padding process are described in Non-Patent Document 1.
 Next, in step 790, the values of Interpolate_Above and Interpolate_Left are acquired. In step 720, the prediction signal generator determines whether either of Interpolate_Above and Interpolate_Left is 1. If either value is 1, the process proceeds to step 725; otherwise, it proceeds to step 760. In step 760, intra smoothing with the 121 filter is applied to the reference sample group according to Expressions (8) and (9).
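The 121 filter used for intra smoothing has the [1 2 1]/4 form that also appears in Expression (16) later in this section. A minimal sketch, with the function name and the treatment of the end samples assumed for illustration:

```python
def intra_smoothing_121(ref):
    """Apply the 121 filter to a list of reference samples.

    Each interior sample is replaced by
    (ref[x-1] + 2*ref[x] + ref[x+1] + 2) / 4 with integer rounding,
    matching the form of Expression (16); the two end samples are
    left unchanged (an assumption of this sketch).
    """
    out = list(ref)
    for x in range(1, len(ref) - 1):
        out[x] = (ref[x - 1] + 2 * ref[x] + ref[x + 1] + 2) // 4
    return out
```

The filter attenuates isolated spikes while leaving already-flat sample runs untouched.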
 In step 725, if the value of Interpolate_Left is 1, the process proceeds to step 730, where the bilinear interpolation shown in Expressions (1) and (2) is applied to the reference samples ref[x] (x = 0 to 2N) to generate the interpolated reference samples ref'[x] (x = 0 to 2N). If the value of Interpolate_Left is 0, the process proceeds to step 735, where intra smoothing with the 121 filter is applied to the left reference samples ref[x] (x = 0 to 2N) according to Expressions (10) and (11).
 Next, in step 740, if the value of Interpolate_Above is 1, the process proceeds to step 750, where bilinear interpolation is applied to the upper reference samples ref[i] (i = 2N+1 to 4N) based on Expressions (3), (4), and (5). If the value of Interpolate_Above is 0, the process proceeds to step 755, where intra smoothing with the 121 filter is applied to the upper reference samples ref[x] (x = 2N+1 to 4N) based on Expressions (12), (13), and (14).
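Bilinear interpolation between the key reference samples replaces each segment of reference samples with the straight line joining its two key samples. A minimal sketch, assuming a linear integer-weighting form (the exact Expressions (1)-(5) are not reproduced in this excerpt, and the function name is illustrative):

```python
def bilinear_interpolate_segment(ref, lo, hi):
    """Replace ref[lo+1..hi-1] by the straight line between the key
    reference samples ref[lo] and ref[hi], with integer rounding.

    Applied to ref[0..2N] this removes block noise near ref[N];
    applied to ref[2N..4N] it removes block noise near ref[3N].
    """
    out = list(ref)
    span = hi - lo
    for x in range(lo + 1, hi):
        w = x - lo
        # Weighted average of the two key samples, rounded to nearest integer.
        out[x] = ((span - w) * ref[lo] + w * ref[hi] + span // 2) // span
    return out
```

Only the key samples survive; every sample between them is regenerated from the line, which is why block noise at the segment interior disappears.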
 Finally, in step 770, the intra prediction samples of the target block are estimated by extrapolation (in the intra prediction direction) using the decoded intra prediction mode and the interpolated or smoothed reference samples ref'[x] (x = 0 to 4N).
(B) Interpolation process
In the above description, bilinear interpolation is used for the interpolation process, but any other interpolation process may be used as long as it removes noise at the block boundary. For example, all reference samples may be replaced with the average value of the key reference samples. The interpolation method may be switched depending on the block size or intra prediction type, and the interpolation method to apply may be included in the bitstream, encoded, and decoded.
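The average-value alternative mentioned above can be sketched as follows; the function name and the rounding convention are illustrative assumptions:

```python
def replace_with_key_average(ref, keys):
    """Replace every reference sample with the average of the key
    reference samples (the alternative interpolation noted above).

    `keys` lists the indices of the key reference samples,
    e.g. [0, 2*N, 4*N].  Integer rounding is assumed for illustration.
    """
    avg = (sum(ref[k] for k in keys) + len(keys) // 2) // len(keys)
    return [avg] * len(ref)
```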
(C) Process flow of intra prediction from reference samples
The flow of estimating the intra prediction samples by extrapolation (in the intra prediction direction) is not limited to the procedure in FIG. 4. For example, steps 625, 630, and 635 may be swapped in order with steps 640, 650, and 655. Expressions (3) and (12) may be carried out in steps 630 and 635 instead of steps 650 and 655. Also, since the processing results of Expressions (1), (3), and (5) are the same as those of Expressions (10), (12), and (14), these may be carried out together either immediately before step 625 (between steps 620 and 625) or immediately after steps 650 and 655 (between step 650 or 655 and step 670).
 Alternatively, the criterion in step 620 may be the block size alone. In that case, replacing Expression (12) with Expressions (15) and (16) gives the same processing result as FIG. 4, so this may also be done:

ref'[2N] = ref[2N]   if Interpolate_Above == true || Interpolate_Left == true   (15)
ref'[2N] = (ref[2N-1] + 2*ref[2N] + ref[2N+1] + 2) / 4   otherwise   (16)

Here, ref'[2N] denotes the value of the smoothed reference sample.
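Expressions (15) and (16) amount to a conditional update of the corner key reference sample, which can be sketched as follows (function name assumed):

```python
def smooth_corner_sample(ref, N, interpolate_above, interpolate_left):
    """Compute ref'[2N] per Expressions (15) and (16): leave the corner key
    reference sample untouched when either interpolation flag is set,
    otherwise apply the 121 filter to it."""
    if interpolate_above or interpolate_left:
        return ref[2 * N]                                            # Expression (15)
    return (ref[2 * N - 1] + 2 * ref[2 * N] + ref[2 * N + 1] + 2) // 4  # Expression (16)
```

Skipping the smoothing when interpolation is applied keeps the key sample exact, since the interpolated segments on both sides are anchored to it.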
(D) Block size
In the above description, the target block is a square block, but the interpolation of reference samples according to the present invention can equally be applied to non-square blocks. FIG. 12 shows an example in which target block 290 has a block size of N × 2N. In this case, the number of samples ref[x] is 3N+1.
(E) Key reference samples
In the above description, three key reference samples are used, at the ends and the center of the reference sample group, but their number and positions are not limited to this. For example, the number and positions may be changed according to the size of the reference block or the relative relationship between the reference block and adjacent blocks, or the number and positions of the key reference samples may be included in the bitstream, encoded, and decoded. Alternatively, the three key reference samples at the ends and the center of the reference sample group may be set as the default, and instruction information indicating whether the default or other key reference samples are used may be encoded and decoded; the data analyzer 202 of FIG. 2 then updates the key reference samples. As updated key reference samples, ref[N+N/2] and ref[2N+N/2] in FIG. 7 may be added, or they may be used instead of ref[2N]. Also, ref[N/2] and ref[3N+N/2] may be used instead of ref[0] and ref[4N], with the 121 filter applied to ref[1] to ref[N/2-1] and ref[3N+N/2] to ref[4N-1].
(F) Criterion expressions
The criterion expressions used in steps 520, 620, 625, and 640 are not limited to Expressions (6) and (7). For example, ref[N+1] and ref[3N+1] may be used instead of ref[N] and ref[3N] in FIG. 7.
DESCRIPTION OF SYMBOLS: 100: video prediction encoding device; 101: input terminal; 102: block divider; 103: prediction signal generator; 104: frame memory; 105: subtractor; 106: transformer; 107: quantizer; 108: inverse quantizer; 109: inverse transformer; 110: adder; 111: entropy encoder; 112: output terminal; 113: block memory; 114: loop filter; 200: video prediction decoding device; 201: input terminal; 202: data analyzer; 203: inverse quantizer; 204: inverse transformer; 205: adder; 206: output terminal; 207: frame memory; 208: prediction signal generator; 209: loop filter; 215: block memory.

Claims (8)

  1.  A video prediction encoding device comprising:
     block dividing means for dividing an input image into a plurality of blocks;
     prediction signal generating means for generating an intra prediction signal of a block having a high correlation with a target block to be encoded, among the blocks divided by the block dividing means, using already-reproduced reference samples adjacent to the target block;
     residual signal generating means for generating a residual signal between the prediction signal of the target block and the pixel signal of the target block;
     residual signal compressing means for compressing the residual signal generated by the residual signal generating means;
     residual signal restoring means for generating a reproduced residual signal by restoring compressed data of the residual signal;
     encoding means for encoding the compressed data of the residual signal; and
     block storing means for restoring the pixel signal of the target block by adding the prediction signal to the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples,
     wherein the prediction signal generating means
     acquires reference samples from already-reproduced blocks surrounding the target block stored in the block storing means,
     selects two or more key reference samples from the reference samples,
     performs interpolation between the key reference samples to generate interpolated reference samples,
     determines a direction of intra prediction, and
     extrapolates the interpolated reference samples based on the determined direction of intra prediction to generate the intra prediction signal, and
     wherein the encoding means encodes information on the direction of intra prediction, included in the compressed data.
  2.  The video prediction encoding device according to claim 1, wherein the prediction signal generating means adaptively switches between interpolation of the reference samples and smoothing of the reference samples based on a comparison between the key reference samples and a predetermined threshold.
  3.  The video prediction encoding device according to claim 1, wherein the key reference samples are reference samples located at the ends of the reference sample group, and the interpolation is bilinear interpolation applied to the reference samples between the key reference samples.
  4.  A video prediction decoding device comprising:
     decoding means for decoding, from compressed data obtained by dividing an image into a plurality of blocks and encoding them, information on a direction of intra prediction used for intra prediction of a target block to be decoded, and compressed data of a residual signal;
     prediction signal generating means for generating an intra prediction signal using the information on the direction of intra prediction and already-reproduced reference samples adjacent to the target block;
     residual signal restoring means for restoring a reproduced residual signal of the target block from the compressed data of the residual signal; and
     block storing means for restoring the pixel signal of the target block by adding the prediction signal to the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples,
     wherein the prediction signal generating means
     acquires reference samples from already-reproduced blocks surrounding the target block stored in the block storing means,
     selects two or more key reference samples from the reference samples,
     performs interpolation between the key reference samples to generate interpolated reference samples, and
     extrapolates the interpolated reference samples based on the direction of intra prediction to generate the intra prediction signal.
  5.  The video prediction decoding device according to claim 4, wherein the prediction signal generating means adaptively switches between interpolation of the reference samples and smoothing of the reference samples based on a comparison between the key reference samples and a predetermined threshold.
  6.  The video prediction decoding device according to claim 4, wherein the key reference samples are reference samples located at the ends of the reference sample group, and the interpolation is bilinear interpolation applied to the reference samples between the key reference samples.
  7.  A video prediction encoding method executed by a video prediction encoding device, comprising:
     a block dividing step of dividing an input image into a plurality of blocks;
     a prediction signal generating step of generating an intra prediction signal of a block having a high correlation with a target block to be encoded, among the blocks divided in the block dividing step, using already-reproduced reference samples adjacent to the target block;
     a residual signal generating step of generating a residual signal between the prediction signal of the target block and the pixel signal of the target block;
     a residual signal compressing step of compressing the residual signal generated in the residual signal generating step;
     a residual signal restoring step of generating a reproduced residual signal by restoring compressed data of the residual signal;
     an encoding step of encoding the compressed data of the residual signal; and
     a block storing step of restoring the pixel signal of the target block by adding the prediction signal to the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples,
     wherein the prediction signal generating step
     acquires reference samples from stored already-reproduced blocks surrounding the target block,
     selects two or more key reference samples from the reference samples,
     performs interpolation between the key reference samples to generate interpolated reference samples,
     determines a direction of intra prediction, and
     extrapolates the interpolated reference samples based on the determined direction of intra prediction to generate the intra prediction signal, and
     wherein the encoding step encodes information on the direction of intra prediction, included in the compressed data.
  8.  A video prediction decoding method executed by a video prediction decoding device, comprising:
     a decoding step of decoding, from compressed data obtained by dividing an image into a plurality of blocks and encoding them, information on a direction of intra prediction used for intra prediction of a target block to be decoded, and compressed data of a residual signal;
     a prediction signal generating step of generating an intra prediction signal using the information on the direction of intra prediction and already-reproduced reference samples adjacent to the target block;
     a residual signal restoring step of restoring a reproduced residual signal of the target block from the compressed data of the residual signal; and
     a block storing step of restoring the pixel signal of the target block by adding the prediction signal to the reproduced residual signal, and storing the restored pixel signal of the target block for use as the reference samples,
     wherein the prediction signal generating step
     acquires reference samples from stored already-reproduced blocks surrounding the target block,
     selects two or more key reference samples from the reference samples,
     performs interpolation between the key reference samples to generate interpolated reference samples, and
     extrapolates the interpolated reference samples based on the direction of intra prediction to generate the intra prediction signal.
PCT/JP2013/066616 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method WO2014045651A1 (en)

Priority Applications (37)

Application Number Priority Date Filing Date Title
CA2885802A CA2885802C (en) 2012-09-24 2013-06-17 Video prediction encoding and decoding device and method using intra-prediction direction information and key reference samples to generate interpolated reference samples
MX2016010755A MX351764B (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method.
PL17152385T PL3179722T3 (en) 2012-09-24 2013-06-17 Intra prediction based on interpolated reference samples
MX2015003512A MX341412B (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method.
EP17152385.5A EP3179722B1 (en) 2012-09-24 2013-06-17 Intra prediction based on interpolated reference samples
KR1020157010543A KR101662655B1 (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
KR1020167019567A KR101764235B1 (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
AU2013319537A AU2013319537B2 (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
CN201380041882.XA CN104604238B (en) 2012-09-24 2013-06-17 Video prediction encoding device and method, and video prediction decoding device and method
EP19218030.5A EP3654650B1 (en) 2012-09-24 2013-06-17 Intra prediction based on interpolated reference samples
PL13839473T PL2899982T3 (en) 2012-09-24 2013-06-17 Intra prediction based on interpolated reference samples
IN3265DEN2015 IN2015DN03265A (en) 2012-09-24 2013-06-17
RU2015115487/08A RU2602978C1 (en) 2012-09-24 2013-06-17 Video predictive encoding device, video predictive encoding method, video predictive decoding device and video predictive decoding method
KR1020177032938A KR101869830B1 (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
SG11201502234VA SG11201502234VA (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
KR1020177018972A KR101799846B1 (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
BR122016013354-0A BR122016013354A2 (en) 2012-09-24 2013-06-17 VIDEO FORECASTING CODING DEVICE, VIDEO FORECASTING CODING METHOD, VIDEO FORECASTING DECODING DEVICE AND VIDEO FORECASTING DECODING METHOD
EP23162588.0A EP4221222A1 (en) 2012-09-24 2013-06-17 Intra prediction based on interpolated reference samples
ES13839473.9T ES2637502T3 (en) 2012-09-24 2013-06-17 Intraprediction based on interpolated reference samples
BR122016013292-7A BR122016013292B1 (en) 2012-09-24 2013-06-17 Video predictive coding device, video predictive coding method, video predictive decoding device and video predictive decoding method
KR1020167026526A KR101755363B1 (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
EP13839473.9A EP2899982B1 (en) 2012-09-24 2013-06-17 Intra prediction based on interpolated reference samples
KR1020187016861A KR101964171B1 (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
BR112015006109-5A BR112015006109B1 (en) 2012-09-24 2013-06-17 video predictive coding device, video predictive coding method, video predictive decoding device and video predictive decoding method
PH12015500622A PH12015500622A1 (en) 2012-09-24 2015-03-20 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
US14/665,545 US9736494B2 (en) 2012-09-24 2015-03-23 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
AU2016202132A AU2016202132B2 (en) 2012-09-24 2016-04-06 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
US15/445,533 US10123042B2 (en) 2012-09-24 2017-02-28 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
US15/445,552 US10110918B2 (en) 2012-09-24 2017-02-28 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
AU2017248485A AU2017248485B2 (en) 2012-09-24 2017-10-18 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
US16/152,025 US10382783B2 (en) 2012-09-24 2018-10-04 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
US16/152,002 US10477241B2 (en) 2012-09-24 2018-10-04 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
US16/152,009 US10477242B2 (en) 2012-09-24 2018-10-04 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
AU2019210532A AU2019210532B2 (en) 2012-09-24 2019-07-30 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
AU2019210530A AU2019210530B2 (en) 2012-09-24 2019-07-30 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
AU2019210520A AU2019210520B2 (en) 2012-09-24 2019-07-30 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
AU2019210516A AU2019210516B2 (en) 2012-09-24 2019-07-30 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-209626 2012-09-24
JP2012209626A JP5798539B2 (en) 2012-09-24 2012-09-24 Moving picture predictive coding apparatus, moving picture predictive coding method, moving picture predictive decoding apparatus, and moving picture predictive decoding method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/665,545 Continuation US9736494B2 (en) 2012-09-24 2015-03-23 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method

Publications (1)

Publication Number Publication Date
WO2014045651A1 true WO2014045651A1 (en) 2014-03-27

Family

ID=50340983

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/066616 WO2014045651A1 (en) 2012-09-24 2013-06-17 Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method

Country Status (22)

Country Link
US (6) US9736494B2 (en)
EP (4) EP3179722B1 (en)
JP (1) JP5798539B2 (en)
KR (6) KR101799846B1 (en)
CN (6) CN107071442B (en)
AU (7) AU2013319537B2 (en)
BR (3) BR122016013354A2 (en)
CA (6) CA3195958A1 (en)
DK (1) DK3654650T3 (en)
ES (3) ES2637502T3 (en)
FI (1) FI3654650T3 (en)
HU (1) HUE062434T2 (en)
IN (1) IN2015DN03265A (en)
MX (2) MX351764B (en)
MY (2) MY161733A (en)
PH (1) PH12015500622A1 (en)
PL (3) PL3179722T3 (en)
PT (3) PT3179722T (en)
RU (7) RU2602978C1 (en)
SG (1) SG11201502234VA (en)
TW (7) TW201415905A (en)
WO (1) WO2014045651A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6504604B2 (en) 2015-08-25 2019-04-24 KDDI Corporation Moving picture coding apparatus, moving picture decoding apparatus, moving picture processing system, moving picture coding method, moving picture decoding method, and program
WO2017043786A1 (en) * 2015-09-10 2017-03-16 LG Electronics Inc. Intra prediction method and device in video coding system
CN117255194A (en) * 2016-09-30 2023-12-19 Rosedale Dynamics LLC Image processing method and device
DE112017006638B4 (en) * 2016-12-28 2023-05-11 Arris Enterprises Llc Improved video bitstream encoding
WO2018124850A1 (en) * 2017-01-02 2018-07-05 Industry-University Cooperation Foundation Hanyang University Intra prediction method considering redundancy of prediction blocks, and image decoding apparatus for performing intra prediction
JP7036628B2 (en) * 2017-03-10 2022-03-15 Panasonic Intellectual Property Corporation of America Encoding device, decoding device, coding method and decoding method
US10225578B2 (en) 2017-05-09 2019-03-05 Google Llc Intra-prediction edge filtering
US10992939B2 (en) 2017-10-23 2021-04-27 Google Llc Directional intra-prediction coding
JP2020120141A (en) * 2017-05-26 2020-08-06 Sharp Corporation Dynamic image encoding device, dynamic image decoding device, and filter device
WO2019009506A1 (en) * 2017-07-04 2019-01-10 LG Electronics Inc. Method and device for decoding image according to intra prediction in image coding system
CN107592539B (en) * 2017-08-21 2019-10-22 北京奇艺世纪科技有限公司 A kind of method for video coding and device
US11438579B2 (en) 2018-07-02 2022-09-06 Lg Electronics Inc. Method and apparatus for processing video signal by using intra-prediction
WO2020007747A1 (en) * 2018-07-05 2020-01-09 Telefonaktiebolaget Lm Ericsson (Publ) Deblocking of intra-reference samples
US10778972B1 (en) 2019-02-27 2020-09-15 Google Llc Adaptive filter intra prediction modes in image/video compression
JP7145793B2 (en) * 2019-03-11 2022-10-03 KDDI Corporation Image decoding device, image decoding method and program
CN110035289B (en) * 2019-04-24 2022-04-01 润电能源科学技术有限公司 Layered compression method, system and related device for screen image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6765964B1 (en) 2000-12-06 2004-07-20 Realnetworks, Inc. System and method for intracoding video data
US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
WO2012077719A1 (en) * 2010-12-09 2012-06-14 Sharp Corporation Image decoding device and image coding device

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MXPA04006814A (en) * 2002-01-14 2004-12-06 Nokia Corp Coding dynamic filters.
US20040008775A1 (en) * 2002-07-12 2004-01-15 Krit Panusopone Method of managing reference frame and field buffers in adaptive frame/field encoding
CN101218829A (en) * 2005-07-05 2008-07-09 NTT DoCoMo, Inc. Dynamic image encoding device, dynamic image encoding method, dynamic image encoding program, dynamic image decoding device, dynamic image decoding method, and dynamic image decoding program
JP2007043651A (en) * 2005-07-05 2007-02-15 NTT DoCoMo, Inc. Dynamic image encoding device, dynamic image encoding method, dynamic image encoding program, dynamic image decoding device, dynamic image decoding method, and dynamic image decoding program
AU2006320064B2 (en) * 2005-11-30 2010-09-09 Kabushiki Kaisha Toshiba Image encoding/image decoding method and image encoding/image decoding apparatus
TWI432035B (en) * 2006-01-11 2014-03-21 Nokia Corp Backward-compatible aggregation of pictures in scalable video coding
US8494052B2 (en) * 2006-04-07 2013-07-23 Microsoft Corporation Dynamic selection of motion estimation search ranges and extended motion vector ranges
JP2008048272A (en) * 2006-08-18 2008-02-28 Canon Inc Reproducing apparatus and reproducing method
JP4325708B2 (en) * 2007-07-05 2009-09-02 Sony Corporation Data processing device, data processing method and data processing program, encoding device, encoding method and encoding program, and decoding device, decoding method and decoding program
JP4650461B2 (en) * 2007-07-13 2011-03-16 Sony Corporation Encoding device, encoding method, program, and recording medium
WO2009110160A1 (en) * 2008-03-07 2009-09-11 Kabushiki Kaisha Toshiba Dynamic image encoding/decoding method and device
JP5680283B2 (en) * 2008-09-19 2015-03-04 NTT DoCoMo, Inc. Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program
JP5697301B2 (en) * 2008-10-01 2015-04-08 NTT DoCoMo, Inc. Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, moving picture decoding program, and moving picture encoding / decoding system
JP5686499B2 (en) * 2009-01-22 2015-03-18 NTT DoCoMo, Inc. Image predictive encoding apparatus, method and program, image predictive decoding apparatus, method and program, and encoding / decoding system and method
JP2010268259A (en) * 2009-05-15 2010-11-25 Sony Corp Image processing device and method, and program
JP5199196B2 (en) * 2009-08-04 2013-05-15 Nippon Telegraph and Telephone Corporation Moving picture decoding method, moving picture decoding apparatus, and moving picture decoding program
KR20110068792A (en) * 2009-12-16 2011-06-22 Electronics and Telecommunications Research Institute Adaptive image coding apparatus and method
JP2011151682A (en) 2010-01-22 2011-08-04 Sony Corp Image processing apparatus and method
JP2011166592A (en) * 2010-02-12 2011-08-25 Mitsubishi Electric Corp Image encoding device, and image decoding device
JP5393573B2 (en) * 2010-04-08 2014-01-22 NTT DoCoMo, Inc. Moving picture predictive coding apparatus, moving picture predictive decoding apparatus, moving picture predictive coding method, moving picture predictive decoding method, moving picture predictive coding program, and moving picture predictive decoding program
EP2559239A2 (en) * 2010-04-13 2013-02-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus for intra predicting a block, apparatus for reconstructing a block of a picture, apparatus for reconstructing a block of a picture by intra prediction
US20110301976A1 (en) * 2010-06-03 2011-12-08 International Business Machines Corporation Medical history diagnosis system and method
US20110310976A1 (en) * 2010-06-17 2011-12-22 Qualcomm Incorporated Joint Coding of Partition Information in Video Coding
CN105120280B (en) * 2010-07-20 2018-04-20 NTT DoCoMo, Inc. Image prediction encoding device and method, image prediction/decoding device and method
KR101373814B1 (en) * 2010-07-31 2014-03-18 M&K Holdings Inc. Apparatus of generating prediction block
EP3125559B1 (en) * 2010-08-17 2018-08-08 M&K Holdings Inc. Apparatus for decoding an intra prediction mode
US8923395B2 (en) * 2010-10-01 2014-12-30 Qualcomm Incorporated Video coding using intra-prediction
US20120163457A1 (en) 2010-12-28 2012-06-28 Viktor Wahadaniah Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus
CA2961824C (en) * 2011-01-12 2019-07-23 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method
WO2012096622A1 (en) * 2011-01-14 2012-07-19 Telefonaktiebolaget Lm Ericsson (Publ) Methods and devices for intra coding of video
CN103329531A (en) * 2011-01-21 2013-09-25 汤姆逊许可公司 Methods and apparatus for geometric-based intra prediction
CN102685505B (en) * 2011-03-10 2014-11-05 Huawei Technologies Co., Ltd. Intra-frame prediction method and prediction device
KR101383775B1 (en) * 2011-05-20 2014-04-14 KT Corporation Method And Apparatus For Intra Prediction

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6765964B1 (en) 2000-12-06 2004-07-20 Realnetworks, Inc. System and method for intracoding video data
US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
WO2012077719A1 (en) * 2010-12-09 2012-06-14 Sharp Corporation Image decoding device and image coding device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
B. BROSS ET AL.: "High efficiency video coding (HEVC) text specification draft 8", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-J1003, 10TH MEETING, 11 July 2012 (2012-07-11)
JIE ZHAO ET AL.: "On Intra Coding and MDIS", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 JCTVC-E437, ITU-T, 16 March 2011 (2011-03-16), pages 1 - 4, XP030008943 *
VIKTOR WAHADANIAH ET AL.: "Constrained Intra Prediction Scheme for Flexible-Sized Prediction Units in HEVC", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 JCTVC-D094, ITU-T, 21 January 2011 (2011-01-21), pages 1 - 8, XP030008134 *

Also Published As

Publication number Publication date
CN106131558A (en) 2016-11-16
CN107087181B (en) 2020-05-05
PT3179722T (en) 2020-04-02
TWI648981B (en) 2019-01-21
MY161733A (en) 2017-05-15
TW201909636A (en) 2019-03-01
CA3195958A1 (en) 2014-03-27
AU2019210532A1 (en) 2019-08-15
US10477241B2 (en) 2019-11-12
AU2019210532B2 (en) 2021-03-04
RU2642810C1 (en) 2018-01-26
CN107071442B (en) 2020-03-27
DK3654650T3 (en) 2023-07-03
FI3654650T3 (en) 2023-06-28
PH12015500622B1 (en) 2015-05-11
CN104604238B (en) 2017-03-22
CN106131559B (en) 2018-04-17
TWI577182B (en) 2017-04-01
TW201720162A (en) 2017-06-01
TW201937933A (en) 2019-09-16
KR20180069128A (en) 2018-06-22
KR20160116036A (en) 2016-10-06
KR101869830B1 (en) 2018-06-21
CA3118832A1 (en) 2014-03-27
RU2715015C1 (en) 2020-02-21
MY178848A (en) 2020-10-21
AU2013319537B2 (en) 2016-03-03
EP3654650A1 (en) 2020-05-20
TWI622287B (en) 2018-04-21
US20170180751A1 (en) 2017-06-22
BR112015006109A2 (en) 2017-05-16
EP2899982B1 (en) 2017-08-02
CN106878733A (en) 2017-06-20
AU2017248485B2 (en) 2019-07-11
CA2885802A1 (en) 2014-03-27
AU2019210520B2 (en) 2021-02-04
CN104604238A (en) 2015-05-06
US9736494B2 (en) 2017-08-15
BR112015006109A8 (en) 2018-02-06
US20190037235A1 (en) 2019-01-31
CA3118841A1 (en) 2014-03-27
TW201937932A (en) 2019-09-16
AU2019210530A1 (en) 2019-08-15
CN107087181A (en) 2017-08-22
TW201415905A (en) 2014-04-16
KR101964171B1 (en) 2019-04-01
KR101662655B1 (en) 2016-10-05
EP3179722B1 (en) 2020-02-26
ES2949638T3 (en) 2023-10-02
PT2899982T (en) 2017-08-30
US10477242B2 (en) 2019-11-12
TWI679881B (en) 2019-12-11
AU2016202132B2 (en) 2017-07-20
CN106878733B (en) 2020-01-07
EP2899982A1 (en) 2015-07-29
TWI678919B (en) 2019-12-01
MX2015003512A (en) 2015-07-17
RU2015115487A (en) 2016-11-20
TW201701668A (en) 2017-01-01
US20150201213A1 (en) 2015-07-16
US10123042B2 (en) 2018-11-06
AU2017248485A1 (en) 2017-11-09
TWI562614B (en) 2016-12-11
JP5798539B2 (en) 2015-10-21
ES2637502T3 (en) 2017-10-13
KR20150060877A (en) 2015-06-03
EP3179722A1 (en) 2017-06-14
EP4221222A1 (en) 2023-08-02
JP2014064249A (en) 2014-04-10
PL3179722T3 (en) 2020-07-13
BR112015006109B1 (en) 2018-08-14
CA3118832C (en) 2023-06-20
SG11201502234VA (en) 2015-05-28
HUE062434T2 (en) 2023-11-28
KR101764235B1 (en) 2017-08-03
EP2899982A4 (en) 2016-06-15
AU2013319537A1 (en) 2015-05-14
PL2899982T3 (en) 2017-10-31
TW201818726A (en) 2018-05-16
CA3118836C (en) 2023-06-27
TWI666923B (en) 2019-07-21
BR122016013292B1 (en) 2018-08-14
MX341412B (en) 2016-08-19
KR101755363B1 (en) 2017-07-07
CA2957095A1 (en) 2014-03-27
BR122016013354A2 (en) 2018-02-06
KR101799846B1 (en) 2017-11-22
CN107071442A (en) 2017-08-18
RU2715017C1 (en) 2020-02-21
CA3118836A1 (en) 2014-03-27
MX351764B (en) 2017-10-27
BR122016013292A2 (en) 2018-02-06
AU2016202132A1 (en) 2016-04-28
KR20160088450A (en) 2016-07-25
PT3654650T (en) 2023-07-05
RU2673393C1 (en) 2018-11-26
US10110918B2 (en) 2018-10-23
CN106131558B (en) 2018-02-27
ES2781556T3 (en) 2020-09-03
AU2019210530B2 (en) 2021-02-18
CA2957095C (en) 2021-07-13
US20190037236A1 (en) 2019-01-31
CN106131559A (en) 2016-11-16
CA3118841C (en) 2023-01-24
AU2019210516B2 (en) 2021-02-11
KR20170128635A (en) 2017-11-22
AU2019210516A1 (en) 2019-08-15
US20170180750A1 (en) 2017-06-22
US10382783B2 (en) 2019-08-13
CA2885802C (en) 2017-08-15
RU2699675C1 (en) 2019-09-09
EP3654650B1 (en) 2023-05-31
PL3654650T3 (en) 2023-10-02
PH12015500622A1 (en) 2015-05-11
KR20170083167A (en) 2017-07-17
RU2602978C1 (en) 2016-11-20
IN2015DN03265A (en) 2015-10-09
AU2019210520A1 (en) 2019-08-15
US20190037237A1 (en) 2019-01-31
RU2701052C1 (en) 2019-09-24

Similar Documents

Publication Publication Date Title
AU2019210516B2 (en) Video prediction encoding device, video prediction encoding method, video prediction decoding device and video prediction decoding method
JP6602931B2 (en) Video predictive decoding method
JP6408681B2 (en) Video predictive decoding method
JP6088689B2 (en) Moving picture predictive coding apparatus, moving picture predictive coding method, moving picture predictive decoding apparatus, and moving picture predictive decoding method
JP6242517B2 (en) Moving picture predictive decoding apparatus and moving picture predictive decoding method
JP5933086B2 (en) Moving picture predictive coding apparatus, moving picture predictive coding method, moving picture predictive decoding apparatus, and moving picture predictive decoding method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13839473

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: MX/A/2015/003512

Country of ref document: MX

WWE Wipo information: entry into national phase

Ref document number: 12015500622

Country of ref document: PH

ENP Entry into the national phase

Ref document number: 2885802

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112015006109

Country of ref document: BR

REEP Request for entry into the european phase

Ref document number: 2013839473

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2013839473

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20157010543

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2015115487

Country of ref document: RU

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2013319537

Country of ref document: AU

Date of ref document: 20130617

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 112015006109

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20150319