WO2010100672A1 - Compressed moving image encoding device, compressed moving image decoding device, compressed moving image encoding method, and compressed moving image decoding method - Google Patents

Compressed moving image encoding device, compressed moving image decoding device, compressed moving image encoding method, and compressed moving image decoding method Download PDF

Info

Publication number
WO2010100672A1
Authority
WO
WIPO (PCT)
Prior art keywords
reference image
screen
image
generated
compressed
Prior art date
Application number
PCT/JP2009/000969
Other languages
English (en)
Japanese (ja)
Inventor
望月誠二
木村淳一
江浜真和
Original Assignee
ルネサスエレクトロニクス株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ルネサスエレクトロニクス株式会社 filed Critical ルネサスエレクトロニクス株式会社
Priority to EP09841039.2A priority Critical patent/EP2405655B1/fr
Priority to JP2011502500A priority patent/JP5426655B2/ja
Priority to KR1020167001752A priority patent/KR101671676B1/ko
Priority to PCT/JP2009/000969 priority patent/WO2010100672A1/fr
Priority to EP14170348.8A priority patent/EP2773123B1/fr
Priority to KR1020117020536A priority patent/KR101589334B1/ko
Priority to CN200980157830.2A priority patent/CN102369730B/zh
Priority to US13/203,727 priority patent/US8958479B2/en
Publication of WO2010100672A1 publication Critical patent/WO2010100672A1/fr
Priority to US14/478,661 priority patent/US9813703B2/en

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/563 Motion estimation with padding, i.e. with filling of non-object values in an arbitrarily shaped picture block or region for estimation purposes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • The present invention relates to a compressed moving image encoding device, a compressed moving image decoding device, a compressed moving image encoding method, and a compressed moving image decoding method, and in particular to a technique effective in improving the accuracy of an extended reference image when motion compensation from a peripheral region outside the screen is enabled in a compressed moving image encoding process or a compressed moving image decoding process.
  • The frame encoding modes include an I frame, which is encoded without using correlation between frames; a P frame, which is predicted from one previously encoded frame; and a B frame, which is predicted from two previously encoded frames.
  • The I frame is called an intra-frame independent frame, the P frame is called a unidirectional prediction frame, and the B frame is called a bidirectional prediction frame.
  • A motion-compensated reference image (predicted image) is subtracted from the moving image, and the prediction residual produced by this subtraction is encoded.
  • The encoding process includes the processes of an orthogonal transform such as the DCT (Discrete Cosine Transform), quantization, and variable length coding.
  • Motion compensation includes processing for spatially moving a block of the reference frame used for inter-frame prediction, and the motion compensation processing is performed in units of blocks of the frame to be encoded. When there is no motion in the image content, no displacement is applied and the pixel at the same position is used as the predicted pixel. When there is motion, the best-matching block is searched for, and the amount of displacement is expressed as a motion vector.
  • In the MPEG-2 encoding method, the motion compensation block is a block of 16×16 / 16×8 pixels.
  • In the H.263 encoding method, it is a block of 16×16 / 8×8 pixels.
  • In the MPEG-4 encoding method, it is a block of 16×16 / 16×8 / 8×8 pixels.
  • In the H.264 encoding method, blocks of 16×16 / 16×8 / 8×16 / 8×8 / 8×4 / 4×8 / 4×4 pixels are used.
  • Patent Document 1 describes providing, in a reference image output unit that outputs motion compensation pixels corresponding to a motion vector, a peripheral pixel predicting unit that predicts peripheral pixels of the reference image, so that a motion vector indicating that all or part of the motion compensation pixel block lies outside the reference image can be handled.
  • Patent Document 1 also describes setting the extended reference image, which is a peripheral image of the reference image, to the average value of all pixel values of the reference image or to the pixel value of the closest pixel of the reference image.
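  • As a concrete illustration of this conventional closest-pixel extension, here is a minimal sketch (our own illustration, not code from Patent Document 1; the function name and the use of numpy are assumptions) that fills every off-screen pixel with the value of the nearest in-screen pixel:

        import numpy as np

        def pad_reference_closest(ref: np.ndarray, w: int) -> np.ndarray:
            # "edge" mode replicates the nearest border pixel outward, which is
            # the closest-pixel rule described above for Patent Document 1.
            return np.pad(ref, pad_width=w, mode="edge")

        # A 4x4 frame extended by 2 pixels on every side becomes 8x8.
        frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
        assert pad_reference_closest(frame, 2).shape == (8, 8)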
  • the present inventors conducted research and development on the next generation international standard video coding system.
  • FIG. 1 is a diagram showing the configuration of a compressed moving image encoding device (encoder) based on the international standard moving picture coding system, which was examined by the present inventors prior to the present invention.
  • The compressed moving image encoding device (encoder) 1000 shown in FIG. 1 includes a subtractor 1, an orthogonal transformer 2, a quantizer 3, an encoder 4, an inverse quantizer 5, an inverse orthogonal transformer 6, an adder 7, a motion compensator 8, a motion vector searcher 9, and a frame memory 10.
  • The video input signal to be encoded is supplied to one input terminal of the subtractor 1 and to the input terminal of the motion vector searcher 9. The motion vector searcher 9 performs motion estimation (ME: Motion Estimation), so a motion vector is generated at its output terminal, and this motion vector is supplied to the motion compensator 8, which performs motion compensation (MC), and to the encoder 4, which performs the signal processing of variable length coding (VLC: Variable Length Coding). The output signal of the motion compensator 8 is supplied to the other input terminal of the subtractor 1 and to one input terminal of the adder 7.
  • The output signal of the subtractor 1 is supplied to the input terminal of the orthogonal transformer 2, which performs an orthogonal transform such as the discrete cosine transform (DCT), and the output signal of the orthogonal transformer 2 is supplied to the input terminal of the quantizer 3, which performs quantization processing.
  • The output signal of the quantizer 3 is supplied to the input terminal of the encoder 4, and also to the inverse quantizer 5, which performs inverse quantization processing, and then to the inverse orthogonal transformer 6, which performs an inverse discrete cosine transform (IDCT: Inverse Discrete Cosine Transform) or the like.
  • An MPEG video stream as the encoded video output signal is generated at the output terminal of the encoder 4, while a reference image (local decoded image) is generated at the output of the adder 7 and stored in the frame memory 10.
  • The reference image read from the frame memory 10 is supplied to the motion compensator 8 and the motion vector searcher 9. The motion vector searcher 9 searches the reference image for the block that best matches the block of the video input signal to be encoded, and outputs the amount of movement as a motion vector.
  • The motion compensator 8 generates a motion-compensated reference image (predicted image) from the motion vector and the reference image read from the frame memory 10, and supplies it to the other input terminal of the subtractor 1.
  • A prediction residual is generated by the subtraction between the video input signal and the reference image (predicted image).
  • The prediction residual is subjected to the encoding processes of the orthogonal transform by the orthogonal transformer 2, quantization by the quantizer 3, and variable length coding by the encoder 4, so that an MPEG video stream as the encoded video output signal is generated at the output terminal of the encoder 4.
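  • The loop just described can be summarized in a short sketch (an illustration under our own assumptions, not the patent's implementation: a scalar quantization step qstep stands in for real quantization matrices, and scipy's DCT stands in for the orthogonal transformer 2):

        import numpy as np
        from scipy.fft import dctn, idctn  # 2-D type-II DCT and its inverse

        def encode_block(block, predicted, qstep=16.0):
            # subtractor 1: prediction residual between input block and predicted image
            residual = block.astype(np.float64) - predicted
            # orthogonal transformer 2 and quantizer 3
            levels = np.round(dctn(residual, norm="ortho") / qstep)
            # inverse quantizer 5, inverse orthogonal transformer 6, adder 7:
            # the local decoded image that is stored in frame memory 10
            local_decoded = idctn(levels * qstep, norm="ortho") + predicted
            return levels, local_decoded  # levels go on to variable length coding (encoder 4)

        block = np.full((16, 16), 128.0)      # current macroblock
        predicted = np.full((16, 16), 120.0)  # motion-compensated reference image
        levels, reference = encode_block(block, predicted)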
  • The encoding process described above is performed for each video screen (frame or field), and a block obtained by subdividing the screen (usually 16×16 pixels, called a "macroblock" in MPEG) is the processing unit. That is, for each block to be encoded, the most similar block (predicted image) is selected from an already encoded reference image, and the difference signal between the block to be encoded and the predicted image is encoded (orthogonal transform, quantization, etc.). The difference between the relative positions within the screen of the block to be encoded and the prediction signal is called a motion vector. A sketch of this block-matching search follows.
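  • The following sketch shows such a block-matching search (illustrative only; the exhaustive full search over a small window and the SAD cost are our assumptions, since the standards leave the search strategy to the encoder):

        import numpy as np

        def find_motion_vector(cur_block, ref, top, left, search=8):
            # Exhaustive search: minimize the sum of absolute differences (SAD)
            # between the macroblock and every candidate position in the window.
            h, w = cur_block.shape
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    # Without an extended reference image (UMV), candidates that
                    # fall outside the screen must simply be skipped.
                    if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                        continue
                    sad = int(np.abs(ref[y:y+h, x:x+w].astype(np.int32)
                                     - cur_block.astype(np.int32)).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            return best_mv  # the motion vector (vertical, horizontal)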
  • the MPEG-4 encoding method employs an unrestricted motion vector (UMV) that enables motion compensation from a peripheral area outside the screen.
  • FIG. 2 is a diagram explaining the extended reference image in the case where the extended reference image, which is a peripheral image of the reference image described in Patent Document 1, is set to the pixel value of the closest pixel of the reference image in order to realize the unrestricted motion vector (UMV) employed in the MPEG-4 encoding method.
  • In FIG. 2, a boundary line 20 is a line indicating the boundary between the inside and the outside of the screen, and a reference image 21 of an object exists in the screen inside the boundary line 20.
  • An extended reference image 22, generated from the pixel values of the closest pixels of the reference image of the object, exists outside the screen, outside the boundary line 20.
  • Since the extended reference image 22, which is the off-screen image generated by the method shown in FIG. 2, does not take into account the arrangement direction of the reference image 21 of the object in the screen, the shape of the extended reference image 22 is often very different from the actual shape.
  • As a result, the code amount of the motion vectors increases, and the amount of information of the encoded video output signal generated by the compressed moving image encoding device (encoder) 1000 shown in FIG. 1 increases remarkably.
  • In other words, at a constant code amount the reproduction image quality deteriorates, while the amount of MPEG video stream information increases if a constant reproduction image quality is to be maintained.
  • Therefore, an object of the present invention is to improve the accuracy of the extended reference image when motion compensation from a peripheral region outside the screen is enabled in a compressed moving image encoding process or a compressed moving image decoding process.
  • Another object of the present invention is to reduce both the deterioration of the reproduction image quality and the increase in the amount of MPEG video stream information needed to maintain a constant reproduction image quality.
  • In a representative embodiment of the present invention, a motion vector is generated by searching the reference image read from the frame memory (10) for the image area most similar to an image area of the video input signal to be encoded.
  • A motion-compensated reference image as a predicted image is generated from the motion vector and the reference image read from the frame memory (10).
  • A prediction residual is generated by the subtraction between the motion-compensated reference image and the video input signal.
  • The reference image stored in the frame memory (10) is generated by adding the motion-compensated reference image to the result of the orthogonal transform, quantization, inverse quantization, and inverse orthogonal transform processes applied to the prediction residual.
  • The compressed moving image encoding device (1000) generates an encoded video output signal by the orthogonal transform, quantization, and variable length coding processes of the prediction residual (see FIG. 3).
  • The reference image includes in-screen reference images (A, B, C) inside the video display screen and an off-screen reference image (D) outside the video display screen, and the off-screen reference image (D) is generated based on the positional relationship of a plurality of mutually similar reference images (A, B) among the in-screen reference images (A, B, C) (see FIG. 5).
  • Thus, the accuracy of the extended reference image can be improved.
  • FIG. 1 is a diagram showing the configuration of a compressed moving image encoding device (encoder) based on the international standard moving picture coding system, which was examined by the present inventors prior to the present invention.
  • FIG. 2 is a diagram explaining the extended reference image in the case where the extended reference image, which is a peripheral image of the reference image described in Patent Document 1, is set to the pixel value of the closest pixel of the reference image in order to realize the unrestricted motion vector (UMV) employed in the MPEG-4 encoding method.
  • FIG. 3 is a diagram showing the configuration of a compressed moving image encoding device (encoder) according to Embodiment 1 of the present invention.
  • FIG. 4 is a diagram showing the configuration of the reference image screen expansion unit 11 added to the compressed moving image encoding device (encoder) according to Embodiment 1 of the present invention shown in FIG. 3.
  • FIG. 5 is a diagram explaining a method by which an extended reference image, which is a peripheral image of a reference image, is generated in the compressed moving image encoding device according to Embodiment 1 of the present invention shown in FIG. 3, to which the reference image screen expansion unit 11 shown in FIG. 4 is added in order to realize the unrestricted motion vector (UMV) employed in the MPEG-4 encoding method.
  • FIG. 6 is a diagram explaining how reference images of objects inside and outside the screen are generated by the extended reference image generation method according to Embodiment 1 of the present invention shown in FIG. 5.
  • FIG. 7 is a diagram showing the configuration of a compressed moving image decoding device (decoder) according to Embodiment 2 of the present invention for decoding an MPEG video stream, which is the encoded video output signal generated by the compressed moving image encoding device (encoder) according to Embodiment 1 of the present invention shown in FIG. 3.
  • FIG. 8 is a diagram showing another configuration, of a compressed moving image decoding device (decoder) according to Embodiment 3 of the present invention, for decoding an MPEG video stream, which is the encoded video output signal generated by the compressed moving image encoding device (encoder) according to Embodiment 1 of the present invention shown in FIG. 3.
  • FIG. 9 is a diagram explaining another method by which an extended reference image, which is a peripheral image of a reference image, is generated in the compressed moving image encoding device according to Embodiment 1 of the present invention shown in FIG. 3, to which the reference image screen expansion unit 11 is added in order to realize the unrestricted motion vector (UMV) employed in the MPEG-4 encoding method.
  • DESCRIPTION OF SYMBOLS:
    D ... extended reference image outside the boundary line 50
    w ... width of the extended reference image
    60 ... boundary between the inside and the outside of the reference screen
    61 ... reference image of an object inside the boundary line
    62 ... extended reference image outside the boundary line
    70 ... compressed moving image decoding device (decoder)
    71 ... decoder
    72 ... inverse quantizer
    73 ... inverse orthogonal transformer
    74 ... motion compensator
    75 ... adder
    76 ... frame memory
    77 ... reference image screen expansion unit
    80 ... compressed moving image decoding device (decoder)
    81 ... decoder
    82 ... inverse quantizer
    83 ... inverse orthogonal transformer
    84 ... motion compensator
    85 ... adder
    86 ... frame memory
    L0 ... boundary between the inside and the outside of the reference screen
    L1, L2, L3 ... extension straight lines
    90 ... extended reference image outside the boundary line L0
    91 to 99 ... reference images in the screen inside the reference screen
  • In a typical embodiment of the present invention, a motion vector is generated by searching the reference image read from the frame memory (10) for the image area most similar to an image area of the video input signal to be encoded.
  • A motion-compensated reference image as a predicted image is generated from the motion vector and the reference image read from the frame memory (10).
  • A prediction residual is generated by the subtraction between the motion-compensated reference image and the video input signal to be encoded.
  • The reference image stored in the frame memory (10) is generated by adding the motion-compensated reference image to the result of the orthogonal transform, quantization, inverse quantization, and inverse orthogonal transform processes applied to the prediction residual.
  • The compressed moving image encoding device (1000) generates an encoded video output signal by the orthogonal transform, quantization, and variable length coding processes of the prediction residual (see FIG. 3).
  • The reference image includes in-screen reference images (A, B, C) inside the video display screen and an off-screen reference image (D) outside the video display screen.
  • The off-screen reference image (D) is generated based on the positional relationship of a plurality of mutually similar reference images (A, B) among the in-screen reference images (A, B, C) (see FIG. 5).
  • Thus, the accuracy of the extended reference image can be improved.
  • In a preferred embodiment, one reference image (A) of the plurality of mutually similar reference images (A, B) is close to the boundary line (50) between the in-screen reference image and the off-screen reference image.
  • The other reference image (B) of the plurality of reference images (A, B) is located inside the screen, farther from the boundary line (50) than the one reference image (A).
  • The off-screen reference image (D) is closest to the one reference image (A) across the boundary line (50).
  • Another reference image (C) among the in-screen reference images (A, B, C) is close to the other reference image (B) in the same positional relationship as that between the one reference image (A) and the off-screen reference image (D).
  • The image information of the off-screen reference image (D) is generated based on the image information of this reference image (C) (see FIG. 5).
  • In another preferred embodiment, the in-screen reference image includes a plurality of start reference images (91, 92, 93).
  • A plurality of extended straight lines (L1, L2, L3) pass through the off-screen reference image (90) and the plurality of start reference images (91, 92, 93).
  • The similarities of the reference images (91, 96, 97; 92, 94, 98; 93, 95, 99) on each of the plurality of extended straight lines (L1, L2, L3) are calculated, and the extended straight line (L2) having the highest similarity among the plurality of similarities is selected.
  • The image information of the off-screen reference image (90) is generated based on the image information of the reference images (92, 94, 98) on the extended straight line (L2) having the highest similarity (see FIG. 9).
  • In a more preferred embodiment, the image information of the off-screen reference image (90) is generated based on a statistical processing result of the image information of the reference images (92, 94, 98) on the extended straight line (L2) having the highest similarity (see FIG. 9).
  • In another more preferred embodiment, information indicating whether or not the off-screen reference image is generated in each of the upward, downward, left, and right directions of the in-screen reference image is added to the encoded video output signal.
  • In a typical embodiment on the decoding side, a motion vector is extracted by decoding the encoded video input signal.
  • A motion-compensated reference image as a predicted image is generated from the motion vector and the reference image read from the frame memory (76, 86).
  • The reference image stored in the frame memory (76, 86) is generated by adding the motion-compensated reference image to the result of the decoding, inverse quantization, and inverse orthogonal transform processes applied to the encoded video input signal.
  • The reference image includes in-screen reference images (A, B, C) inside the video display screen and an off-screen reference image (D) outside the video display screen.
  • The off-screen reference image (D) is generated based on the positional relationship of a plurality of mutually similar reference images (A, B) among the in-screen reference images (A, B, C) (see FIG. 5).
  • Thus, the accuracy of the extended reference image can be improved.
  • In a preferred embodiment, one reference image (A) of the plurality of mutually similar reference images (A, B) is close to the boundary line (50) between the in-screen reference image and the off-screen reference image.
  • The other reference image (B) of the plurality of reference images (A, B) is located inside the screen, farther from the boundary line (50) than the one reference image (A).
  • The off-screen reference image (D) is closest to the one reference image (A) across the boundary line (50).
  • Another reference image (C) among the in-screen reference images (A, B, C) is close to the other reference image (B) in the same positional relationship as that between the one reference image (A) and the off-screen reference image (D).
  • The image information of the off-screen reference image (D) is generated based on the image information of this reference image (C) (see FIG. 5).
  • In another preferred embodiment, the in-screen reference image includes a plurality of start reference images (91, 92, 93).
  • A plurality of extended straight lines (L1, L2, L3) pass through the off-screen reference image (90) and the plurality of start reference images (91, 92, 93).
  • The similarities of the reference images (91, 96, 97; 92, 94, 98; 93, 95, 99) on each of the plurality of extended straight lines (L1, L2, L3) are calculated, and the extended straight line (L2) having the highest similarity among the plurality of similarities is selected.
  • The image information of the off-screen reference image (90) is generated based on the image information of the reference images (92, 94, 98) on the extended straight line (L2) having the highest similarity (see FIG. 9).
  • In a more preferred embodiment, the image information of the off-screen reference image (90) is generated based on a statistical processing result of the image information of the reference images (92, 94, 98) on the extended straight line (L2) having the highest similarity (see FIG. 9).
  • In another more preferred embodiment, information indicating whether or not the off-screen reference image is generated in each of the upward, downward, left, and right directions of the in-screen reference image is extracted from the encoded video input signal.
  • Generation of the off-screen reference image is omitted for a direction in which this information indicates that no off-screen reference image is generated.
  • FIG. 3 is a diagram showing a configuration of a compressed moving image encoding apparatus (encoder) according to Embodiment 1 of the present invention.
  • Like the compressed moving image encoding device shown in FIG. 1, the compressed moving image encoding device (encoder) 1000 according to Embodiment 1 of the present invention shown in FIG. 3 includes a subtractor 1, an orthogonal transformer 2, a quantizer 3, an encoder 4, an inverse quantizer 5, an inverse orthogonal transformer 6, an adder 7, a motion compensator 8, a motion vector searcher 9, and a frame memory 10; in addition, it includes the reference image screen expansion unit 11.
  • FIG. 4 is a diagram showing the configuration of the reference image screen expansion unit 11 added to the compressed moving image encoding device (encoder) 1000 according to Embodiment 1 of the present invention shown in FIG. 3.
  • the reference image screen expansion unit 11 illustrated in FIG. 4 includes a similarity calculation unit 111, a most similar pixel search unit 112, and a reference screen outside pixel generation unit 113.
  • In-screen pixel values are supplied from the frame memory 10 to the similarity calculation unit 111 of the reference image screen expansion unit 11, difference information is supplied from the similarity calculation unit 111 to the most similar pixel search unit 112, and most-similar position information is supplied from the most similar pixel search unit 112 to the reference screen outside pixel generation unit 113.
  • In-screen pixel values are also supplied from the frame memory 10 to the reference screen outside pixel generation unit 113, and off-screen pixel values are supplied from the reference screen outside pixel generation unit 113 back to the frame memory 10.
  • FIG. 5 is a diagram explaining a method for generating an extended reference image, which is a peripheral image of a reference image, in the compressed moving image encoding device according to Embodiment 1 of the present invention shown in FIG. 3, to which the reference image screen expansion unit 11 shown in FIG. 4 is added in order to realize the unrestricted motion vector (UMV) employed in the MPEG-4 encoding method.
  • In FIG. 5, a boundary line 50 is a line indicating the boundary between the inside and the outside of the screen.
  • A large number of reference images A, B, C ... exist in the screen inside the boundary line 50, and in order to realize the unrestricted motion vector (UMV) employed in the MPEG-4 encoding method, it is necessary to generate the extended reference image D outside the boundary line 50.
  • When the extended reference image D is generated outside the boundary line 50, immediately adjacent to the in-screen reference image A close to the boundary line 50, the many reference images inside the boundary line 50 are searched.
  • The in-screen reference image B that is most similar to the in-screen reference image A close to the boundary line 50 is selected.
  • The movement amount and movement direction are determined from the positional relationship between the boundary-line reference image A and the most similar reference image B, and a vector V is generated.
  • The extended reference image D is then generated by copying the in-screen reference image C, which is adjacent to the most similar reference image B in the same positional relationship as that between the boundary-line reference image A and the extended reference image D, and pasting it at the position of the extended reference image D.
  • An extended reference image with the necessary horizontal width w is generated by repeating the same processing for the plurality of boundary-line reference images along the vertical boundary line 50. It is likewise possible to generate an extended reference image with the required vertical width by repeating similar processing for the plurality of boundary-line reference images along a horizontal boundary line. A sketch of this procedure for one screen edge follows.
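  • The following sketch illustrates this procedure for the right-hand screen edge (an illustration under our own assumptions: square blocks of size bs, a SAD similarity measure, a block-granular search, and screen dimensions that are multiples of bs, none of which are fixed by the description above):

        import numpy as np

        def extend_right_edge(ref, bs=8):
            h, w = ref.shape
            ext = np.zeros((h, w + bs), dtype=ref.dtype)  # room for one block column D
            ext[:, :w] = ref
            for top in range(0, h - bs + 1, bs):
                A = ref[top:top + bs, w - bs:w]           # block A touching boundary line 50
                best_sad, by, bx = None, top, w - 2 * bs
                for y in range(0, h - bs + 1, bs):        # search in-screen blocks for B
                    for x in range(0, w - 2 * bs + 1, bs):  # keep room for C right of B
                        sad = int(np.abs(ref[y:y+bs, x:x+bs].astype(np.int32)
                                         - A.astype(np.int32)).sum())
                        if best_sad is None or sad < best_sad:
                            best_sad, by, bx = sad, y, x  # vector V points from B to A
                C = ref[by:by + bs, bx + bs:bx + 2 * bs]  # same A->D offset applied from B
                ext[top:top + bs, w:w + bs] = C           # paste C at the position of D
            return ext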
  • FIG. 6 is a diagram explaining how reference images of objects inside and outside the screen are generated by the extended reference image generation method according to Embodiment 1 of the present invention shown in FIG. 5.
  • In FIG. 6, a boundary line 60 is a line indicating the boundary between the inside and the outside of the screen.
  • A reference image 61 of an object exists in the screen inside the boundary line 60, while an extended reference image 62 exists outside the screen, outside the boundary line 60.
  • Since the extended reference image 62, which is an off-screen image, takes into consideration the arrangement direction (vector V) of the reference image 61 of the object in the screen, the shape of the extended reference image 62 outside the screen matches the actual shape with high accuracy. As a result, two effects are obtained: (1) the difference signal in the portion of the extended reference image becomes smaller, so the power of the difference signal is reduced and the coding efficiency is improved; and (2) when pixels of the extended reference image are selected, the magnitude and direction of the motion vector of the block to be encoded become the same as those of the motion vectors of the surrounding blocks, so the code amount of the motion vectors becomes small.
  • FIG. 7 is a diagram showing the configuration of a compressed moving image decoding device (decoder) according to Embodiment 2 of the present invention for decoding an MPEG video stream, which is the encoded video output signal generated by the compressed moving image encoding device (encoder) according to Embodiment 1 of the present invention shown in FIG. 3.
  • The compressed moving image decoding device (decoder) 70 of FIG. 7 includes a decoder 71, an inverse quantizer 72, an inverse orthogonal transformer 73, a motion compensator 74, an adder 75, a frame memory 76, and a reference image screen expansion unit 77.
  • The MPEG video stream as the encoded video input signal is supplied to the decoder 71, which performs the signal processing of variable length decoding (VLD: Variable Length Decoding); the output signal of the decoder 71 is supplied to the inverse quantizer 72, which performs inverse quantization; and the output signal of the inverse quantizer 72 is supplied to the inverse orthogonal transformer 73, which performs an inverse orthogonal transform (IDCT).
  • The motion compensator 74 generates a motion-compensated reference image (predicted image) from the motion vector and the reference image read from the frame memory 76, and supplies it to the other input terminal of the adder 75. The adder 75 adds the inverse orthogonal transform output of the inverse orthogonal transformer 73 and the predicted image, and the decoded video signal is generated from the frame memory 76.
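  • In sketch form, one block of this decoding loop mirrors the encoder's local decoding (again an illustration under the same assumptions as the encoder sketch above; a scalar qstep stands in for the real inverse quantization):

        import numpy as np
        from scipy.fft import idctn

        def decode_block(levels, predicted, qstep=16.0):
            # inverse quantizer 72 and inverse orthogonal transformer 73
            residual = idctn(levels * qstep, norm="ortho")
            # adder 75: add the motion-compensated predicted image from
            # motion compensator 74, then clip to the valid pixel range
            recon = residual + predicted
            return np.clip(np.round(recon), 0, 255).astype(np.uint8)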
  • In the compressed moving image decoding device of FIG. 7, the reference image screen expansion unit 77 is added in order to decode an MPEG video stream that employs the unrestricted motion vector (UMV), just as the reference image screen expansion unit 11 of FIG. 4 is used in the encoder as shown in FIGS. 5 and 6.
  • That is, in the compressed moving image decoding device of FIG. 7, the reference image screen expansion unit 77 generates an extended reference image outside the screen from the in-screen reference image during the decoding process, in accordance with the supplied MPEG video stream that adopts the unrestricted motion vector (UMV).
  • Using this extended reference image outside the screen, the device can perform the compressed moving image decoding process on an MPEG-4 encoded video stream that employs the unrestricted motion vector (UMV) enabling motion compensation from a peripheral region outside the screen.
  • Only a compressed moving image decoding device, such as that of FIG. 7, that performs the same reference image screen expansion method as the encoder can decode the video stream accurately; an accurate image cannot be decoded unless the encoding device and the decoding device always hold the same reference screen. Therefore, the reference image screen expansion unit 77 of the compressed moving image decoding device of FIG. 7 can generate an extended reference image outside the screen from the in-screen reference image during the compressed moving image decoding process by the same method as the reference image screen expansion unit of the compressed moving image encoding device shown in FIGS. 5 and 6. The reference image screen expansion unit 77 can likewise generate an extended reference image outside the screen from the in-screen reference image by the same method as that shown in FIG. 9, described later, when the compressed moving image encoding device uses the method shown in FIG. 9.
  • The method shown in FIGS. 5 and 6 can be used in combination with the method shown in FIG. 9, or two or three types of methods, including the image expansion of conventional MPEG-4, can be used in combination.
  • In that case, the screen expansion processing is performed by the designated method, and an accurate reproduced image can be obtained by decoding each frame after that processing.
  • FIG. 8 is a diagram showing another configuration, of a compressed moving image decoding device (decoder) according to Embodiment 3 of the present invention, for decoding an MPEG video stream, which is the encoded video output signal generated by the compressed moving image encoding device (encoder) according to Embodiment 1 of the present invention shown in FIG. 3.
  • The decoder 81, the inverse quantizer 82, the inverse orthogonal transformer 83, the adder 85, and the frame memory 86 of FIG. 8 have operation functions equivalent to those of the decoder 71, the inverse quantizer 72, the inverse orthogonal transformer 73, the adder 75, and the frame memory 76 included in the compressed moving image decoding device (decoder) 70 of FIG. 7, respectively.
  • The motion compensator 84 included in the compressed moving image decoding device (decoder) 80 of FIG. 8 has both the function of the motion compensator 74 and the function of the reference image screen expansion unit 77 of the compressed moving image decoding device (decoder) 70 of FIG. 7. Therefore, in the compressed moving image decoding device of FIG. 8, as in that of FIG. 7, the motion compensator 84 can generate an extended reference image outside the screen from the in-screen reference image during the decoding process, in accordance with an MPEG video stream that employs the unrestricted motion vector (UMV). Using the extended reference image outside the screen, the compressed moving image decoding device of FIG. 8 can also perform the compressed moving image decoding process on an MPEG-4 encoded video stream that employs the unrestricted motion vector (UMV) enabling motion compensation from a peripheral region outside the screen.
  • Like the reference image screen expansion unit 77 of the compressed moving image decoding device shown in FIG. 7, the motion compensator 84 of the compressed moving image decoding device of FIG. 8 can generate an extended reference image outside the screen from the in-screen reference image during the compressed moving image decoding process by the same method as that shown in FIGS. 5 and 6.
  • The motion compensator 84 can also generate an extended reference image outside the screen from the in-screen reference image by the same method as that shown in FIG. 9, described next.
  • FIG. 9 is a diagram explaining another method by which an extended reference image, which is a peripheral image of a reference image, is generated in the compressed moving image encoding device according to Embodiment 1 of the present invention shown in FIG. 3, to which the reference image screen expansion unit 11 is added in order to realize the unrestricted motion vector (UMV) employed in the MPEG-4 encoding method.
  • In FIG. 9, a boundary line L0 is a line indicating the boundary between the inside and the outside of the screen.
  • A large number of reference images 91 to 99 ... exist in the screen inside the boundary line L0, and in order to realize the unrestricted motion vector (UMV) employed in the MPEG-4 encoding method, it is necessary to generate the extended reference image 90 outside the boundary line L0.
  • The similarity of the plurality of first reference images 91, 96, 97 in the screen, arranged on the straight line L1 passing through the extended reference image 90 and the first start reference image 91 in the screen, is calculated by the reference image screen expansion unit 11 of the compressed moving image encoding device according to Embodiment 1 of the present invention shown in FIG. 3.
  • The similarity of the plurality of second reference images 92, 94, 98 in the screen, arranged on the straight line L2 passing through the extended reference image 90 and the second start reference image 92 in the screen, is likewise calculated by the reference image screen expansion unit 11.
  • The similarity of the plurality of third reference images 93, 95, 99 in the screen, arranged on the straight line L3 passing through the extended reference image 90 and the third start reference image 93 in the screen, is likewise calculated by the reference image screen expansion unit 11.
  • The extended reference image 90 outside the boundary line L0 is then determined according to the highest of the similarities calculated in this way.
  • For example, when the straight line L2 has the highest similarity, the extended reference image 90 can be determined by multiplying the luminance and hue signals of the plurality of second reference images 92, 94, 98 in the screen arranged on the straight line L2 by predetermined coefficients and then summing and averaging the resulting values. A sketch of this line-selection method follows.
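  • The following sketch shows the line selection and weighted averaging (our own illustration: the negated sum of pairwise SADs as the similarity measure, three blocks per line, and the coefficients 0.5/0.3/0.2 are all assumptions; the description above only requires "predetermined coefficients"):

        import numpy as np

        def _line_similarity(blocks):
            # Blocks that resemble each other give a small pairwise SAD,
            # so the negated sum acts as a "higher is more similar" score.
            return -sum(int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())
                        for a, b in zip(blocks, blocks[1:]))

        def extend_by_best_line(ref, d_pos, directions, bs=8, depth=3,
                                coeffs=(0.5, 0.3, 0.2)):
            h, w = ref.shape
            best_score, best_blocks = None, None
            for dy, dx in directions:               # one (dy, dx) step per line L1, L2, L3 ...
                blocks, (y, x) = [], d_pos
                for _ in range(depth):
                    y, x = y + dy * bs, x + dx * bs  # step inward from off-screen block 90
                    if 0 <= y <= h - bs and 0 <= x <= w - bs:
                        blocks.append(ref[y:y+bs, x:x+bs])
                if len(blocks) < depth:
                    continue                        # this line leaves the screen too soon
                score = _line_similarity(blocks)
                if best_score is None or score > best_score:
                    best_score, best_blocks = score, blocks
            if best_blocks is None:
                raise ValueError("no extension line stays inside the screen")
            # Weighted average of the blocks on the best line gives block 90's pixels.
            est = sum(c * b.astype(np.float64) for c, b in zip(coeffs, best_blocks))
            return np.clip(np.round(est), 0, 255).astype(np.uint8)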
  • A compressed moving image decoding device as shown in FIG. 8 can also be used in this case.
  • The motion compensator 8, the motion vector searcher 9, and the reference image screen expansion unit 11 are not limited to signal processing by dedicated hardware; this signal processing can instead be replaced by software processing executed by a central processing unit (CPU) included in the compressed moving image encoding device 1000.
  • Likewise, the decoder 71, the inverse quantizer 72, the inverse orthogonal transformer 73, the motion compensator 74, the adder 75, and the reference image screen expansion unit 77 included in the compressed moving image decoding device (decoder) 70 shown in FIG. 7 are not limited to signal processing by dedicated hardware; this signal processing can be replaced by software processing executed by a central processing unit (CPU) included in the compressed moving image decoding device 70.
  • The expansion direction when the extended reference image D is generated outside the boundary line 50 between the inside and the outside of the screen of the reference image, as shown in FIG. 5, is not necessarily limited to all four sides of the rectangular reference image.
  • For example, the generation direction of the extended reference image can be restricted based on the moving direction of the moving image shooting device output from an acceleration sensor mounted on the moving image shooting device.
  • In other words, the generation direction of the extended reference image can be limited to the X direction and the Y direction of the movement vector of the moving image shooting device. That is, when the generation direction is limited, information on the directions in which extended reference image generation is restricted and the directions in which it is not restricted is superimposed on the video stream, which makes it possible to simplify the decoding process of the moving image decoding device that decodes the stream.
  • The information (restriction information) for restricting the generation of the extended reference image described above is 4 bits of information, one bit per direction of the screen arranged in a predetermined order, where "1" indicates that extended reference image generation is not restricted for that direction and "0" indicates that it is restricted. For example, when the 4-bit information is "0001", it indicates that the extended reference image needs to be generated only in the right direction of the screen, as in the case where the camera is moving in the right direction in a P frame (unidirectional prediction frame).
  • This restriction information can be added for each frame of the encoded image, or can be added in units of a plurality of frames.
  • When it is added for each frame, the restriction direction can be set appropriately in accordance with the movement of the camera on a frame-by-frame basis, so that the generation of the extended reference image can be restricted without lowering the encoding efficiency.
  • For a direction in which extended reference image generation is restricted, the extension method used in MPEG-4 or the like can be adopted. A sketch of packing and parsing these restriction bits follows.
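  • The following sketch packs and parses the 4-bit restriction information (the up/down/left/right bit order is our assumption; the text above fixes only one bit per direction and the meaning of "1" and "0"):

        DIRECTIONS = ("up", "down", "left", "right")  # assumed bit order

        def pack_restriction_bits(generate):
            # "1" = extended reference image is generated for that direction,
            # "0" = generation is restricted.
            return int("".join("1" if generate[d] else "0" for d in DIRECTIONS), 2)

        def unpack_restriction_bits(bits):
            flags = format(bits & 0b1111, "04b")
            return {d: flags[i] == "1" for i, d in enumerate(DIRECTIONS)}

        # Example from the text: "0001" -> generate only toward the right,
        # e.g. when the camera pans right in a P frame.
        assert unpack_restriction_bits(0b0001) == {
            "up": False, "down": False, "left": False, "right": True}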
  • the video input signal to be encoded is divided into a top field and a bottom field.
  • Furthermore, the present invention is not limited to the MPEG-4 encoding method and decoding method that employ the unrestricted motion vector (UMV) enabling motion compensation from a peripheral region outside the screen.
  • The present invention can be widely applied to compressed moving image encoding devices, compressed moving image decoding devices, compressed moving image encoding methods, and compressed moving image decoding methods that use inter-frame prediction coding utilizing temporal correlation, motion vector detection, motion compensation, extended reference images, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image area most similar to the image area of a video input signal is searched for in a reference image of a frame memory (10) so as to generate a motion vector. A motion-compensated reference image is generated from the motion vector and the reference image from the frame memory. A prediction residual is generated by subtraction between the motion-compensated reference image and the video input signal. A reference image to be stored in memory is generated by adding the results of an orthogonal transform process, a quantization process, an inverse quantization process, and an inverse orthogonal transform process of the prediction residual to the motion-compensated reference image. The compressed dynamic image encoding device generates an encoded video output signal through the orthogonal transform process, the quantization process, and a variable length coding process of the prediction residual. The reference image includes in-screen reference images A, B, C located inside the video display screen and an off-screen reference image D located outside the video display screen. The off-screen reference image D is generated from the positional relationship between the plurality of similar reference images A and B among the in-screen reference images A, B, C.
PCT/JP2009/000969 2009-03-04 2009-03-04 Dispositif de codage d'image dynamique compressée, dispositif de décodage d'image dynamique compressée, procédé de codage d'image dynamique compressée et procédé de décodage d'image dynamique compressée WO2010100672A1 (fr)

Priority Applications (9)

Application Number Priority Date Filing Date Title
EP09841039.2A EP2405655B1 (fr) 2009-03-04 2009-03-04 Bourrage de limites d'images pour vecteurs de mouvement illimité
JP2011502500A JP5426655B2 (ja) 2009-03-04 2009-03-04 圧縮動画符号化装置、圧縮動画復号化装置、圧縮動画符号化方法および圧縮動画復号化方法
KR1020167001752A KR101671676B1 (ko) 2009-03-04 2009-03-04 압축 동화상 부호화 장치, 압축 동화상 복호화 장치, 압축 동화상 부호화 방법 및 압축 동화상 복호화 방법
PCT/JP2009/000969 WO2010100672A1 (fr) 2009-03-04 2009-03-04 Dispositif de codage d'image dynamique compressée, dispositif de décodage d'image dynamique compressée, procédé de codage d'image dynamique compressée et procédé de décodage d'image dynamique compressée
EP14170348.8A EP2773123B1 (fr) 2009-03-04 2009-03-04 Remplissage de pixel de bordure pour codage vidéo
KR1020117020536A KR101589334B1 (ko) 2009-03-04 2009-03-04 압축 동화상 부호화 장치, 압축 동화상 복호화 장치, 압축 동화상 부호화 방법 및 압축 동화상 복호화 방법
CN200980157830.2A CN102369730B (zh) 2009-03-04 2009-03-04 动态图像编码装置、动态图像解码装置、动态图像编码方法及动态图像解码方法
US13/203,727 US8958479B2 (en) 2009-03-04 2009-03-04 Compressed dynamic image encoding device, compressed dynamic image decoding device, compressed dynamic image encoding method and compressed dynamic image decoding method
US14/478,661 US9813703B2 (en) 2009-03-04 2014-09-05 Compressed dynamic image encoding device, compressed dynamic image decoding device, compressed dynamic image encoding method and compressed dynamic image decoding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2009/000969 WO2010100672A1 (fr) 2009-03-04 2009-03-04 Dispositif de codage d'image dynamique compressée, dispositif de décodage d'image dynamique compressée, procédé de codage d'image dynamique compressée et procédé de décodage d'image dynamique compressée

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US13/203,727 A-371-Of-International US8958479B2 (en) 2009-03-04 2009-03-04 Compressed dynamic image encoding device, compressed dynamic image decoding device, compressed dynamic image encoding method and compressed dynamic image decoding method
US14/478,661 Division US9813703B2 (en) 2009-03-04 2014-09-05 Compressed dynamic image encoding device, compressed dynamic image decoding device, compressed dynamic image encoding method and compressed dynamic image decoding method

Publications (1)

Publication Number Publication Date
WO2010100672A1 true WO2010100672A1 (fr) 2010-09-10

Family

ID=42709252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/000969 WO2010100672A1 (fr) 2009-03-04 2009-03-04 Dispositif de codage d'image dynamique compressée, dispositif de décodage d'image dynamique compressée, procédé de codage d'image dynamique compressée et procédé de décodage d'image dynamique compressée

Country Status (6)

Country Link
US (2) US8958479B2 (fr)
EP (2) EP2405655B1 (fr)
JP (1) JP5426655B2 (fr)
KR (2) KR101589334B1 (fr)
CN (1) CN102369730B (fr)
WO (1) WO2010100672A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015151513A1 (fr) * 2014-04-04 2015-10-08 日本電気株式会社 Appareil, procédé et programme de codage d'image vidéo, et appareil, procédé et programme de décodage d'image
WO2019064640A1 (fr) * 2017-09-27 2019-04-04 株式会社Jvcケンウッド Dispositif de détection de vecteur de mouvement

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7319795B2 (en) * 2003-09-23 2008-01-15 Broadcom Corporation Application based adaptive encoding
US9892782B1 (en) 2011-05-25 2018-02-13 Terra Prime Technologies, Llc Digital to analog converters and memory devices and related methods
US9589633B2 (en) 2011-05-25 2017-03-07 Peter K. Nagey Memory devices and related methods
US8773887B1 (en) * 2011-05-25 2014-07-08 Peter K. Naji Resistive memory devices and related methods
CN109905710B (zh) * 2012-06-12 2021-12-21 太阳专利托管公司 动态图像编码方法及装置、动态图像解码方法及装置
CN104104960B (zh) * 2013-04-03 2017-06-27 华为技术有限公司 多级双向运动估计方法及设备
US11315326B2 (en) * 2019-10-15 2022-04-26 At&T Intellectual Property I, L.P. Extended reality anchor caching based on viewport prediction
JP7359653B2 (ja) * 2019-11-06 2023-10-11 ルネサスエレクトロニクス株式会社 動画像符号化装置
CN112733616B (zh) * 2020-12-22 2022-04-01 北京达佳互联信息技术有限公司 一种动态图像的生成方法、装置、电子设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06351001A (ja) 1993-06-08 1994-12-22 Matsushita Electric Ind Co Ltd 動きベクトル検出方法および動き補償予測方法並びにその装置
JP2003219417A (ja) * 2002-01-18 2003-07-31 Nippon Telegr & Teleph Corp <Ntt> 動画像符号化方法と装置、並びにこの方法の実行プログラムとこの方法の実行プログラムを記録した記録媒体
WO2006016788A1 (fr) * 2004-08-13 2006-02-16 Industry Academic Cooperation Foundation Kyunghee University Procede et dispositif permettant d'estimer et compenser le mouvement dans une image panoramique
EP1729520A2 (fr) * 2005-05-30 2006-12-06 Samsung Electronics Co., Ltd. Appareil et procédé pour coder et décoder une image avec des macroblocs non carrés

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001251632A (ja) * 1999-12-27 2001-09-14 Toshiba Corp 動きベクトル検出方法および装置並びに動きベクトル検出プログラム
US7450641B2 (en) 2001-09-14 2008-11-11 Sharp Laboratories Of America, Inc. Adaptive filtering based upon boundary strength
EP1503597A3 (fr) 2003-07-28 2007-01-03 Matsushita Electric Industrial Co., Ltd. Dispositif de décodage vidéo
KR100688383B1 (ko) * 2004-08-13 2007-03-02 경희대학교 산학협력단 파노라마 영상의 움직임 추정 및 보상
KR100694137B1 (ko) * 2005-07-08 2007-03-12 삼성전자주식회사 동영상 부호화 장치, 동영상 복호화 장치, 및 그 방법과,이를 구현하기 위한 프로그램이 기록된 기록 매체

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06351001A (ja) 1993-06-08 1994-12-22 Matsushita Electric Ind Co Ltd 動きベクトル検出方法および動き補償予測方法並びにその装置
JP2003219417A (ja) * 2002-01-18 2003-07-31 Nippon Telegr & Teleph Corp <Ntt> 動画像符号化方法と装置、並びにこの方法の実行プログラムとこの方法の実行プログラムを記録した記録媒体
WO2006016788A1 (fr) * 2004-08-13 2006-02-16 Industry Academic Cooperation Foundation Kyunghee University Procede et dispositif permettant d'estimer et compenser le mouvement dans une image panoramique
EP1729520A2 (fr) * 2005-05-30 2006-12-06 Samsung Electronics Co., Ltd. Appareil et procédé pour coder et décoder une image avec des macroblocs non carrés

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2405655A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015151513A1 (fr) * 2014-04-04 2015-10-08 日本電気株式会社 Appareil, procédé et programme de codage d'image vidéo, et appareil, procédé et programme de décodage d'image
JPWO2015151513A1 (ja) * 2014-04-04 2017-04-13 日本電気株式会社 映像符号化装置、方法及びプログラム、並びに映像復号装置、方法及びプログラム
WO2019064640A1 (fr) * 2017-09-27 2019-04-04 株式会社Jvcケンウッド Dispositif de détection de vecteur de mouvement

Also Published As

Publication number Publication date
EP2405655B1 (fr) 2015-02-25
CN102369730B (zh) 2015-12-16
US20110305280A1 (en) 2011-12-15
CN102369730A (zh) 2012-03-07
JPWO2010100672A1 (ja) 2012-09-06
US9813703B2 (en) 2017-11-07
KR101671676B1 (ko) 2016-11-01
EP2405655A4 (fr) 2012-09-12
US20140376636A1 (en) 2014-12-25
US8958479B2 (en) 2015-02-17
KR101589334B1 (ko) 2016-01-27
EP2405655A1 (fr) 2012-01-11
EP2773123A1 (fr) 2014-09-03
KR20160017100A (ko) 2016-02-15
KR20110122167A (ko) 2011-11-09
JP5426655B2 (ja) 2014-02-26
EP2773123B1 (fr) 2016-06-29

Similar Documents

Publication Publication Date Title
JP5426655B2 (ja) 圧縮動画符号化装置、圧縮動画復号化装置、圧縮動画符号化方法および圧縮動画復号化方法
KR20110008653A (ko) 움직임 벡터 예측 방법과 이를 이용한 영상 부호화/복호화 장치 및 방법
JP2004336369A (ja) 動画像符号化装置、動画像復号化装置、動画像符号化方法、動画像復号化方法、動画像符号化プログラム及び動画像復号化プログラム
JP2009182623A (ja) 画像符号化方法
JP2006279573A (ja) 符号化装置と方法、ならびに復号装置と方法
JP2009267689A (ja) 動画像符号化装置、及び動画像符号化方法
US20070133689A1 (en) Low-cost motion estimation apparatus and method thereof
JP2007013298A (ja) 画像符号化装置
JP2013115583A (ja) 動画像符号化装置及びその制御方法並びにプログラム
JP2004242309A (ja) 飛越走査方式の動画符号化/復号化方法及びその装置
US8411749B1 (en) Optimized motion compensation and motion estimation for video coding
JP2007243784A (ja) 動画像復号装置および動画像復号方法
JP4126044B2 (ja) 動画像符号化装置及び方法
JP2006246277A (ja) 再符号化装置、再符号化方法、および再符号化用プログラム
JP2009284058A (ja) 動画像符号化装置
JP4802928B2 (ja) 画像データ処理装置
JP5718438B2 (ja) 圧縮動画符号化装置、圧縮動画復号化装置、圧縮動画符号化方法および圧縮動画復号化方法
JP5171675B2 (ja) 画像処理装置、およびそれを搭載した撮像装置
JP2014200078A (ja) 動画像符号化装置及びその制御方法
JP2006295502A (ja) 再符号化装置、再符号化方法、および再符号化用プログラム
JP4533157B2 (ja) 画像復号方法
JPWO2011099242A1 (ja) 画像符号化装置、画像復号装置、画像符号化方法及び画像復号方法
JP4301495B2 (ja) 動画像圧縮符号化装置
JP2007221202A (ja) 動画像符号化装置及び動画像符号化プログラム
JP2010011185A (ja) 動画像復号装置及び動画像復号方法

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200980157830.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09841039

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2011502500

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2009841039

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 13203727

Country of ref document: US

ENP Entry into the national phase

Ref document number: 20117020536

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE