WO2011122659A1 - Encoder apparatus and decoder apparatus - Google Patents

Encoder apparatus and decoder apparatus

Info

Publication number
WO2011122659A1
WO2011122659A1 (PCT/JP2011/058008; JP 2011058008 W)
Authority
WO
WIPO (PCT)
Prior art keywords
skip
parameter
value
decoding
prediction
Prior art date
Application number
PCT/JP2011/058008
Other languages
English (en)
Japanese (ja)
Inventor
Tomoyuki Yamamoto (山本 智幸)
Original Assignee
Sharp Corporation (シャープ株式会社)
Priority date
Filing date
Publication date
Application filed by Sharp Corporation (シャープ株式会社)
Publication of WO2011122659A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/513: Processing of motion vectors
    • H04N 19/517: Processing of motion vectors by encoding
    • H04N 19/52: Processing of motion vectors by encoding by predictive encoding

Definitions

  • the present invention relates to an encoding device that encodes an image and generates encoded data.
  • the present invention also relates to a decoding apparatus that decodes encoded data generated using such an encoding apparatus.
  • In order to efficiently transmit or record moving images, a moving image encoding device is used.
  • As a specific moving picture encoding method, H.264/MPEG-4 AVC (hereinafter abbreviated as H.264/AVC), developed by the VCEG (Video Coding Expert Group), is used, for example.
  • An image (picture) constituting a moving image is usually managed in a hierarchical structure consisting of slices obtained by dividing the image, macroblocks obtained by dividing a slice, sub-macroblocks obtained by dividing a macroblock, and blocks obtained by dividing a macroblock or sub-macroblock.
  • In such an encoding method, a prediction image is usually generated based on a locally decoded image obtained by encoding and then decoding the input image, and the difference image between the prediction image and the input image is encoded.
  • As methods for generating a prediction image, methods called inter-frame prediction (inter prediction) and intra-frame prediction (intra prediction) are known.
  • In inter prediction, a prediction image for the frame being encoded is generated by motion-compensated prediction from a reference image in a frame that has already been encoded and decoded.
  • Various degrees of freedom are provided in the prediction method, such as which frame motion-compensated prediction is performed from.
  • Therefore, a prediction parameter indicating which prediction method is used to perform motion-compensated prediction is encoded for each macroblock (or each sub-macroblock).
  • In intra prediction, a prediction image for the macroblock (or sub-macroblock) being encoded, included in the frame being encoded, is generated by extrapolation from the locally decoded image of an already encoded and decoded region included in the same frame.
  • Various degrees of freedom are provided in the prediction method (prediction mode), such as from which direction the extrapolation is performed, and a prediction parameter indicating which prediction method (prediction mode) is used is encoded for each block.
  • In both inter prediction and intra prediction, the prediction parameters indicating the prediction method used to generate the prediction image need to be encoded and included in the encoded data. This causes a problem that the amount of encoded data increases.
  • In H.264/AVC, a technique is employed that reduces the code amount of the encoded data by omitting the encoding of the prediction parameters for specific macroblocks or sub-macroblocks.
  • macroblocks and sub-macroblocks for which encoding of prediction parameters is omitted are called skip blocks
  • macroblocks and sub-macroblocks for which prediction parameters are encoded are called non-skip blocks.
  • Table 1 is a table exemplifying skip block types.
  • Here, sp, dsp, and dtp denote spatial prediction (spatial pred), prediction in spatial direct mode (direct spatial pred), and prediction in temporal direct mode (direct temporal pred), respectively.
  • FIG. 10A is a diagram schematically showing a process for deriving a motion vector regarding each sub macroblock (SMB1 to SMB4) included in the macroblock MB.
  • FIG. 10B is a diagram showing the positions of blocks that are referred to in order to estimate the motion vector (mv) related to the skip block.
  • SMB2 and SMB3 are B_Direct_8x8 sub-macroblocks.
  • In the example shown in FIG. 10(a), the motion vectors of blocks included in already-encoded macroblocks adjacent to the macroblock being encoded are referred to.
  • In the example shown in FIG. 10(b), the motion vectors of the blocks adjacent to the left, top, and top-right of the sub-macroblock (SMB1) located at the top-left corner of the macroblock being encoded are referred to.
  • As described above, conventionally, the prediction parameters of the blocks adjacent to the right of or below a skip block were not referred to.
  • As a result, the prediction residual of the prediction parameter to be encoded becomes large, resulting in the problem that the code amount increases.
  • The present invention has been made in view of the above problems, and its main object is to realize an encoding device and a decoding device capable of accurately estimating a parameter (for example, a prediction parameter) related to a skip region (for example, a skip block).
  • In order to solve the above problems, a decoding device according to the present invention decodes encoded data to set parameters relating to each unit region constituting a decoded image, and comprises: classification means for classifying each unit region constituting the decoded image into non-skip regions, for which a code obtained by encoding the parameter relating to the unit region is included in the encoded data, and skip regions, for which such a code is not included in the encoded data; decoding means for setting the parameter relating to each non-skip region included in the decoded image to the value obtained by decoding the corresponding code included in the encoded data; and derivation means for setting the parameter relating to each skip region included in the decoded image to an estimated value derived by referring to the values, set by the decoding means, of the parameters relating to the non-skip regions located before and after that skip region in a predetermined decoding order.
  • In other words, the decoding device of the present invention derives the estimated value of the parameter relating to a skip region with reference to both the value of the parameter relating to a non-skip region that follows the skip region in the predetermined decoding order and the value of the parameter relating to a non-skip region that precedes it.
  • In contrast, a conventional decoding device derives the estimated value of the parameter relating to a skip region with reference only to the values of parameters relating to non-skip regions that precede the skip region in the predetermined decoding order; the values of parameters relating to non-skip regions that follow the skip region are not referred to.
  • For example, in the H.264/AVC system, in order to derive the motion vector of a skip block, the motion vector of a non-skip block located to the right of or below the skip block (that is, a non-skip block located later in the raster scan order) is not referred to.
  • Therefore, according to the decoding device of the present invention, it is possible to statistically improve the estimation accuracy of the parameter relating to a skip region as compared with the conventional technique.
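  • As an informal illustration of this statistical advantage (all motion-vector values below are hypothetical horizontal components chosen for the example, not taken from the patent), the following sketch contrasts a median estimate from causal neighbours alone with one that also uses neighbours decoded after the skip block:

```python
# Hypothetical example: a skip block sits on a motion boundary, so the
# neighbours decoded before it (left, top, top-right) move differently
# from the neighbours decoded after it (right, below). Values are
# illustrative horizontal mv components only.
from statistics import median

left, top, top_right = 0, 0, 0   # decoded before the skip block
right, below = 8, 8              # decoded after the skip block

# Conventional, causal-only estimate (H.264/AVC-style median).
causal_estimate = median([left, top, top_right])
# Estimate using neighbours on both sides in decoding order.
both_sides_estimate = median([left, top, right, below])

print(causal_estimate, both_sides_estimate)  # 0 4.0
```

When the true motion of the skip block matches the later-decoded neighbours, the both-sides estimate lands between the two motions rather than ignoring the later ones entirely.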
  • the unit region may be a known unit defined in Non-Patent Document 1 or the like, or a new unit not defined in Non-Patent Document 1 or the like.
  • The unit region is typically a "sub-macroblock", but is not limited thereto; it may be a smaller unit (for example, a block) or a larger unit (for example, a macroblock).
  • Similarly, in order to solve the above problems, an encoding device according to the present invention encodes parameters relating to each unit region constituting an image, and comprises: classification means for classifying the unit regions constituting the image into non-skip regions, for which a code obtained by encoding the parameter relating to the unit region is included in the encoded data, and skip regions, for which such a code is not included in the encoded data; generation means for generating the parameter relating to each non-skip region included in the image with reference to the values of parameters relating to non-skip regions that precede that non-skip region in a predefined encoding order; and derivation means for setting the value of the parameter relating to each skip region to an estimated value derived by referring to the values of the parameters relating to the non-skip regions located before and after that skip region in the predefined encoding order.
  • In other words, the encoding device of the present invention derives the estimated value of the parameter relating to a skip region with reference also to the value of the parameter relating to a non-skip region that follows the skip region in the predefined encoding order.
  • In contrast, a conventional encoding device derives the estimated value of the parameter relating to a skip region with reference only to the values of parameters relating to non-skip regions that precede the skip region in the predefined encoding order; parameters relating to non-skip regions that follow the skip region are not referred to. For example, in the H.264/AVC system, in order to derive the motion vector of a skip block, the motion vector of a non-skip block located to the right of or below the skip block is not referred to.
  • Therefore, according to the encoding device of the present invention, it is possible to statistically improve the estimation accuracy of the parameter relating to a skip region as compared with the conventional technique.
  • As described above, the encoding device and the decoding device according to the present invention can accurately estimate a parameter (for example, a prediction parameter) related to a skip region (for example, a skip block).
  • (a) is a block diagram showing the configuration of the in-MB mv decoding unit provided in the moving picture decoding apparatus according to the embodiment, and (b) is a block diagram showing a conventional mv decoding unit.
  • (a) to (c) are diagrams schematically showing the derivation of the motion vector of each block in a target macroblock.
  • FIG. 2A, relating to a conventional technique, is a diagram schematically showing a process for deriving a prediction parameter for each sub-macroblock included in a macroblock; the accompanying figure shows the positions of the prediction blocks referred to in order to do so.
  • The moving picture decoding apparatus 1 is, in part, a moving picture decoding apparatus that uses techniques adopted in the H.264/MPEG-4 AVC standard.
  • the moving picture decoding apparatus 1 is an apparatus that generates and outputs a decoded image # 2 by decoding input encoded data # 1.
  • The moving picture decoding apparatus 1 divides a certain region (partial region) of the image indicated by the encoded data # 1 into a plurality of prediction target regions (unit regions), and generates the decoded image # 2 using a prediction image generated for each prediction target region.
  • In the following, the description is given taking as an example the case where the above-mentioned region is a macroblock in the H.264/MPEG-4 AVC standard and the prediction target region is a block within the macroblock, but the present invention is not limited to this.
  • the certain unit area may be an area larger than the macro block or an area smaller than the macro block.
  • the prediction target area may be larger than the block or smaller than the block.
  • FIG. 1 is a block diagram showing a configuration of the moving picture decoding apparatus 1.
  • the moving picture decoding apparatus 1 includes a variable length code demultiplexing unit 11, a header information decoding unit 12, an MB setting unit 13, an MB decoding unit 14, and a frame memory 15.
  • the encoded data # 1 input to the video decoding device 1 is input to the variable length code demultiplexing unit 11.
  • The variable-length code demultiplexing unit 11 demultiplexes the input encoded data # 1, thereby separating the encoded data # 1 into header encoded data # 11a, which is encoded data relating to header information, and encoded data # 11b, which is encoded data relating to slices.
  • The encoded data # 11a is output to the header information decoding unit 12, and the encoded data # 11b is output to the MB setting unit 13.
  • the header information decoding unit 12 decodes the header information # 12 from the encoded header data # 11a.
  • the header information # 12 is information including the size of the input image.
  • The MB setting unit 13 separates the encoded data # 11b into encoded data # 13 corresponding to each macroblock based on the input header information # 12, and sequentially outputs it to the MB decoding unit 14.
  • the encoded data # 13 includes skip information for each block (unit area) included in the corresponding macroblock.
  • the skip information is information indicating whether a block corresponds to a skip block (skip area) or a non-skip block (non-skip area).
  • the MB decoding unit 14 generates and outputs a decoded image # 2 corresponding to each macroblock by sequentially decoding the encoded data # 13 corresponding to each input macroblock.
  • the decoded image # 2 is also output to the frame memory 15.
  • The configuration of the MB decoding unit 14 will be described later.
  • the decoded image # 2 is recorded in the frame memory 15.
  • At the time of decoding a specific macroblock, decoded images corresponding to all macroblocks preceding that macroblock in the raster scan order are recorded in the frame memory 15.
  • FIG. 2 is a block diagram showing a configuration of the MB decoding unit 14.
  • The MB decoding unit 14 includes an in-MB mv decoding unit 141, a motion compensation prediction unit 142, a prediction residual decoding unit 143, and an MB decoded image generation unit 144.
  • The in-MB mv decoding unit 141 derives and outputs the motion vector data # 141 of each block included in the macroblock corresponding to the input encoded data # 13 (hereinafter referred to as the "target macroblock").
  • The in-MB mv decoding unit 141 refers to the encoded motion vector data relating to non-skip blocks included in the encoded data # 13 in order to derive the motion vector data # 141 relating to a non-skip block.
  • On the other hand, in order to derive the motion vector data # 141 relating to a skip block, the in-MB mv decoding unit 141 refers to motion vector data relating to blocks other than that skip block, because the encoded data # 13 does not include motion vector data relating to skip blocks. Details of the in-MB mv decoding unit 141 will be described later.
  • The motion compensation prediction unit 142 generates a prediction image # 142 corresponding to each block based on the motion vector data # 141, the decoded image # 2, and the decoded image # 15 recorded in the frame memory 15, and outputs it to the subsequent stage.
  • The prediction residual decoding unit 143 generates transform coefficients for each block by applying variable-length code decoding to the encoded data of each block included in the target macroblock. The prediction residual decoding unit 143 then generates a decoding residual # 143 by applying an inverse DCT (Discrete Cosine Transform) to the generated transform coefficients, and outputs it to the subsequent stage.
  • The MB decoded image generation unit 144 generates the decoded image # 2 of the target macroblock based on the prediction image # 142 and the decoding residual # 143 output for each block, and outputs it to the subsequent stage.
  • FIG. 3A is a block diagram showing the configuration of the in-MB mv decoding unit 141.
  • The in-MB mv decoding unit 141 includes a block scan unit 141a, an mv estimation unit 141b, a skip information decoding unit 141c, a switch 141d, an mv decoding unit 141e, a skip mv estimation unit 141f, and an mv buffer 141g.
  • The block scan unit 141a selects blocks from the target macroblock in raster scan order (hereinafter, the selected block is referred to as the "target block"), and outputs the differential motion vector (difference value, difference mv) information and the skip information of each selected block, in the selected order, to the subsequent stage.
  • For each target block, the mv estimation unit 141b calculates a predicted motion vector (predicted value, pmv) based on the mv information recorded in the mv buffer 141g, and outputs the pmv information indicating the pmv, together with the difference mv information, to the subsequent stage.
  • the skip information decoding unit 141c decodes the encoded skip information and outputs it to the subsequent stage.
  • the switch 141d outputs the input information as it is from the switching destination, and switches the switching destination according to whether the skip information indicates a skip block or a non-skip block.
  • the mv decoding unit 141e decodes the input encoded difference mv information (encoding parameter), generates mv information (parameter) by adding pmv and the difference mv, and sends the mv information to the subsequent stage. Output.
  • Information on each block is recorded in the mv buffer 141g; that is, the skip information, the mv information, and order information indicating the order in which the target block was scanned are recorded in association with each other.
  • the skip mv estimation unit 141f estimates mv (estimated value) of the skip block in the selected order for each skip block selected in raster order from the target macroblock.
  • The in-MB mv decoding unit 141 is characterized in that it includes the skip mv estimation unit 141f.
  • FIG. 4 is a flowchart showing an mv derivation process for deriving a motion vector of each block in the target macroblock.
  • FIGS. 5(a) to 5(c) are diagrams schematically showing the derivation of the motion vector of each block (SB1 to SB16) in the target macroblock MB before the start of the mv derivation process, at the time the process of S7 is performed, and after completion of the mv derivation process, respectively.
  • The large arrows in FIGS. 5(b) and 5(c) indicate the mvs actually used for generating the prediction image, and the small arrows in FIG. 5(b) indicate provisional mvs (temporary parameters) that are not used for generating the prediction image but are used only for the generation of the pmv in non-skip blocks.
  • The block scan unit 141a sets as the target block the block next in raster scan order after the block previously set as the target block in S1 (or, when the processing of S1 is performed on the target macroblock for the first time, the block located in the upper-left corner of the target macroblock), and outputs the order information indicating the order in which the target block was scanned, together with the difference mv information of the target block, to the mv estimation unit 141b. At the same time, the block scan unit 141a outputs the skip information of the target block to the skip information decoding unit 141c (step S1).
  • The order information takes the values 1 to 16 in raster scan order from the upper-left block to the lower-right block in the target macroblock.
  • In step S1, the block scan unit 141a also determines whether the target block is the last block (block SB16).
  • Next, the mv estimation unit 141b generates pmv information from the mv information (or provisional mv information indicating a provisional mv) recorded in the mv buffer 141g, and outputs it, together with the input difference mv information and order information, to the switch 141d (step S2).
  • The pmv information generated in step S2 is based on the mv information (or provisional mv information) of the three blocks around the target block (the left, top, and top-right blocks); the median value of those mvs (or provisional mvs) is set as the pmv.
  • For example, when the target block is block SB11, pmv information indicating the median value pmv of the mv of block SB7, the provisional mv of block SB8, and the provisional mv of block SB10 is generated.
  • Here, the median value pmv(mv) of n two-dimensional vectors is defined component-wise: the horizontal component of pmv(mv) is the median of the horizontal components of the vectors, and the vertical component of pmv(mv) is the median of their vertical components.
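  • A minimal sketch of this component-wise median, assuming motion vectors are represented as (horizontal, vertical) tuples (the representation is illustrative, not specified by the source):

```python
from statistics import median

def component_median(vectors):
    """Component-wise median of two-dimensional motion vectors: the
    horizontal (vertical) component of the result is the median of
    the horizontal (vertical) components of the inputs."""
    return (median(v[0] for v in vectors),
            median(v[1] for v in vectors))

# pmv of a target block from its left, top, and top-right neighbours:
print(component_median([(4, 0), (2, 2), (0, 2)]))  # (2, 2)
```

Note that the result need not equal any of the input vectors, since each component is taken independently.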
  • After step S2, the skip information decoding unit 141c decodes the input skip information and outputs it to the switch 141d.
  • the switch 141d determines whether the skip information indicates a skip block or a non-skip block (step S3).
  • If the skip information indicates a non-skip block, the switch 141d switches and outputs the input skip information, order information, pmv information, and difference mv information to the mv decoding unit 141e. The mv decoding unit 141e then decodes the input difference mv information, generates mv information from the pmv information and the decoded difference mv information (step S4), and records the order information, the skip information, and the mv information in association with each other in the mv buffer 141g (step S5). For example, in FIG. 5(b), when the target block is block SB11, the generated mv information is recorded in the mv buffer 141g in association with the order information value 11 and skip information indicating a non-skip block.
  • If the skip information indicates a skip block, the switch 141d switches and records the input pmv information, as provisional mv information, in the mv buffer 141g in association with the skip information and the order information (step S5). For example, in FIG. 5(b), when the target block is block SB10, the input pmv information is recorded in the mv buffer 141g as provisional mv information, in association with the order information value 10 and skip information indicating a skip block. After the processing of step S5, the flow proceeds to step S6.
  • In step S6, if it was determined in the immediately preceding step S1 that the target block is not the last block (NO in step S6), the process returns to step S1.
  • If the target block is the last block (YES in step S6), the block scan unit 141a outputs to the skip mv estimation unit 141f a trigger signal that causes the skip mv estimation unit 141f to start the process of deriving the mvs of the skip blocks (step S7).
  • Upon receiving the trigger signal, the skip mv estimation unit 141f refers to the mv buffer 141g and derives the mv of the skip block next in raster scan order after the skip block whose mv was last derived (or, when the processing of S8 is performed on the target macroblock for the first time, the skip block with the smallest order information value) (step S8). This process is described in detail later with reference to another drawing.
  • Next, the skip mv estimation unit 141f updates the provisional mv information of the skip block (the provisional mv information recorded in the mv buffer 141g) with the mv derived in step S8 (step S9).
  • the skip mv estimating unit 141f determines whether or not the processes of S8 and S9 have been performed for all skip blocks in the target macroblock (step S10).
  • In step S10, if it is determined that there is a skip block that has not yet been subjected to the processes of S8 and S9 (NO in step S10), the process returns to S8.
  • If the processes of S8 and S9 have been performed for all skip blocks (YES in step S10), the skip mv estimation unit 141f refers to the mv buffer 141g and outputs the mv information # 141 of the mv derived for each block of the target macroblock (the mvs represented by the 16 arrows in FIG. 5(c)) to the motion compensation prediction unit 142.
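  • The overall two-pass flow of steps S1 to S10 can be sketched as follows. This is a hedged simplification: the block representation, the handling of neighbours outside the macroblock (here simply ignored), and all names are illustrative, and are not taken from the patent or from the H.264/AVC specification.

```python
# Two-pass mv derivation over a 4x4 macroblock (sketch of S1-S10).
from statistics import median

def med2d(vs):
    """Component-wise median of (x, y) vectors."""
    return (median(v[0] for v in vs), median(v[1] for v in vs))

def derive_macroblock_mvs(blocks):
    """blocks: raster-ordered list of 16 dicts with key 'skip' (bool)
    and, for non-skip blocks, 'diff_mv' = (dx, dy)."""
    mvs = [None] * 16
    # Pass 1 (S1-S6): raster scan. Non-skip blocks get pmv + difference
    # mv; skip blocks record the pmv itself as a provisional mv.
    for i, blk in enumerate(blocks):
        r, c = divmod(i, 4)
        cand = []
        if c > 0:
            cand.append(mvs[i - 1])              # left neighbour
        if r > 0:
            cand.append(mvs[i - 4])              # top neighbour
        if r > 0 and c < 3:
            cand.append(mvs[i - 3])              # top-right neighbour
        pmv = med2d(cand) if cand else (0, 0)
        if blk['skip']:
            mvs[i] = pmv                         # provisional mv (S5)
        else:
            dx, dy = blk['diff_mv']
            mvs[i] = (pmv[0] + dx, pmv[1] + dy)  # decoded mv (S4-S5)
    # Pass 2 (S7-S10): re-estimate each skip block from the blocks
    # adjacent above, below, left and right of it.
    for i, blk in enumerate(blocks):
        if blk['skip']:
            r, c = divmod(i, 4)
            cand = [mvs[j] for j in
                    (i - 1 if c > 0 else None, i + 1 if c < 3 else None,
                     i - 4 if r > 0 else None, i + 4 if r < 3 else None)
                    if j is not None]
            mvs[i] = med2d(cand) if cand else (0, 0)
    return mvs
```

For example, a macroblock whose first block decodes to mv (2, 0) and whose other non-skip blocks carry a zero difference mv propagates that motion through the pmv chain into its skip blocks in the second pass.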
  • FIG. 9 is a diagram schematically showing a skip block to be predicted and four blocks A to D adjacent vertically and horizontally.
  • In step S8, the mv of the skip block to be predicted is derived as follows, according to the number of blocks among A to D for which an mv has already been derived in step S5 or step S8 (hereinafter referred to as "available blocks").
  • When the number of available blocks is four, the median value of the mvs of the four available blocks is taken as the derived mv.
  • When the number of available blocks is three (for example, when the skip block to be predicted is SB12 in FIG. 5(b)), the median value of the mvs of the three available blocks (SB8, SB11, SB16) is taken as the derived mv.
  • When the number of available blocks is one, the mv of that block itself is taken as the derived mv, and when no available block exists, the derived mv is the zero vector.
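  • The case analysis above can be sketched as follows. The behaviour for exactly two available blocks is not fully legible in the source text; taking the component-wise median (which for two vectors averages them) is an assumption made here:

```python
from statistics import median

def derive_skip_mv(available_mvs):
    """available_mvs: (x, y) mvs of the adjacent blocks A-D whose mv
    has already been derived. Mirrors the case analysis in the text."""
    if not available_mvs:
        return (0, 0)               # no available block: zero vector
    if len(available_mvs) == 1:
        return available_mvs[0]     # single available block: copy it
    # Three or four (assumed also two) available blocks:
    # component-wise median of the available mvs.
    return (median(mv[0] for mv in available_mvs),
            median(mv[1] for mv in available_mvs))

print(derive_skip_mv([(1, 0), (2, 2), (3, 4)]))  # (2, 2)
```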
  • the motion vector is given as a specific example of the encoding parameter to be derived.
  • encoding parameters are not limited to motion vectors, and may be other prediction parameters referred to for generating a predicted image by motion compensation prediction.
  • The encoding parameter may be, for example, a reference image index, a reference image list number, or a flag indicating whether or not to perform bi-directional prediction on a B picture.
  • Also, a weighting coefficient in weighted prediction may be used as the encoding parameter. Note that the frame number of the reference image referred to in motion-compensated prediction can be specified by the reference image list number and the reference image index.
  • weighted prediction refers to generating a predicted image by taking a weighted average of predicted images generated by motion compensated prediction from each of a plurality of reference images.
  • the weighting coefficient for weighted prediction is a weighting coefficient used in this weighted average.
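  • As a sketch of what such a weighting coefficient does (pixel values and weights below are illustrative; real codecs operate on integer samples with additional offset and rounding terms):

```python
def weighted_prediction(pred_blocks, weights):
    """Blend motion-compensated predictions from several reference
    images by a per-reference weighted average. pred_blocks: list of
    equal-length pixel lists; weights: one coefficient per reference."""
    n = len(pred_blocks[0])
    return [sum(w * blk[i] for w, blk in zip(weights, pred_blocks))
            for i in range(n)]

# Bi-prediction with equal weights over two reference predictions:
print(weighted_prediction([[100, 120], [80, 100]], [0.5, 0.5]))
# [90.0, 110.0]
```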
  • Further, the DC component of the prediction residual transform coefficients may be used as the encoding parameter.
  • In that case, the DC component of the prediction residual transform coefficients of a skip block is derived by taking a weighted average of the DC components of the prediction residual transform coefficients of the adjacent blocks, weighted according to the distance between the skip block and each adjacent block. The same derivation is performed in the moving picture decoding apparatus; that is, the prediction residual decoding unit 143 decodes only the AC-component prediction residual transform coefficients for a skip block.
  • Further, the encoding parameter generation processing according to the embodiment may be performed on two or more of the above-described specific examples (for example, motion vectors and reference image indexes).
  • In the above description, the mv decoding unit 141e decodes the mv from the pmv information and the decoded difference mv information; however, when encoded mv information itself is input, the mv information may simply be decoded directly.
  • In the above description, the mv estimation unit 141b derives a provisional mv for the derivation of the pmv information in non-skip blocks; however, a moving picture decoding device that does not derive a provisional mv for the derivation of the pmv information is also included in the scope of the present invention.
  • In the above description, the mv of a skip block is derived by the skip mv estimation unit 141f using the mvs of the non-skip blocks adjacent to its right and below it; however, the mvs of the non-skip blocks adjacent to the lower left or lower right of the skip block may also be used.
  • Further, the skip mv estimation unit 141f may derive the mv of a skip block in the input image as follows: the mv of the collocated block (the block located, in the reference image, at the same position as the skip block occupies in the input image) and the mvs of the four vertically and horizontally adjacent blocks may be used, and the derived mv may be the median of these five mvs.
  • Further, the skip mv estimation unit 141f may derive the mv of a skip block as follows: the derived mv may be the median of the mvs of the four blocks vertically and horizontally adjacent to the skip block and the mv corresponding to the mode value of all the mvs already derived (decoded) in the macroblock including the skip block.
  • Note that the timing at which the skip mv estimation unit 141f starts deriving the mv of each skip block may be the point at which the mv decoding unit 141e has decoded all the mvs of the non-skip blocks within a prescribed range from that skip block.
  • Here, the prescribed range means a range including the blocks whose mv may be referred to by the skip mv estimation unit 141f in order to derive the mv of the skip block. That is, in the above-described embodiment, the blocks included in the prescribed range are the blocks adjacent above, below, to the left of, and to the right of the skip block whose mv is to be derived. For example, in the target macroblock described above, the timing at which the skip mv estimation unit 141f starts deriving the mv of skip block SB10 is after the point at which the mv decoding unit 141e has decoded all the mvs of the four non-skip blocks SB6, SB9, SB11, and SB14 included in the prescribed range from skip block SB10.
  • Similarly, the timing at which the skip mv estimation unit 141f starts deriving the mv of skip block SB8 is after the point at which the mv decoding unit 141e has decoded both mvs of the two non-skip blocks SB4 and SB7 included in the prescribed range from skip block SB8.
  • Note that provisional mv information may be used when deriving the pmv information required for decoding the mv of a non-skip block.
  • Even in that case, pmv information equivalent to the conventionally used pmv information can be derived, for the reason described below.
  • Conventionally, the pmv information of a block is derived based on the mvs of the blocks preceding that block in the raster scan order.
  • On the other hand, provisional mv information is used to derive the pmv information of a non-skip block when a block adjacent to the left, top, or top-right of the non-skip block set as the target block is a skip block. The provisional mv of such a skip block is itself derived based on the mvs of the blocks preceding that block in the raster scan order.
  • Therefore, the pmv information of the non-skip block is equivalent, because in either case it is derived based on the mvs of the blocks preceding the block in the raster scan order. That is, according to the moving picture decoding apparatus 1, it is possible to improve the estimation accuracy of the mv of skip blocks while keeping the accuracy of the pmv information of non-skip blocks (that is, the estimation accuracy of the mv) equivalent to that of the conventional method.
  • the block scanning unit 141a encodes each block included in the encoded data # 13, and the code obtained by encoding the mv related to the block is the encoded data # 13.
  • the non-skip block included is classified into a skip block in which the code obtained by encoding mv related to the block is not included in the encoded data # 13.
  • the mv decoding unit 141e decodes the mv related to the skip block and the corresponding code included in the encoded data # 13. Set to.
  • The skip mv estimation unit 141f sets the mv related to each skip block to an estimated value derived by referring to the values of mv set by the mv decoding unit 141e for the non-skip blocks preceding and following the skip block in the prescribed decoding order.
  • Since the moving image decoding apparatus 1 can thus derive mv using the mvs of blocks that were not used in the conventional mv estimation method, the possibility of deriving a more appropriate mv increases. That is, the possibility increases that the code amount of the prediction residual to be received, and hence the code amount of the encoded data received by the video decoding device 1, is reduced compared to the conventional case.
  • As a result, the decoded image can be output with a smaller processing amount.
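The decoder-side behaviour can be sketched as a two-pass procedure. This is an illustrative simplification, not the patent's exact algorithm: pass 1 decodes the mvs of all non-skip blocks, and pass 2 estimates each skip block's mv from the nearest decoded mvs on both sides, so blocks that follow the skip block in decoding order also contribute.

```python
# Illustrative two-pass sketch (a simplification under assumed data
# structures): blocks is a list of dicts {'skip': bool,
# 'code': (mvx, mvy) or None}, in decoding order.
def decode_block_mvs(blocks):
    mvs = [None] * len(blocks)
    # Pass 1: set the mv of each non-skip block from its decoded code.
    for i, b in enumerate(blocks):
        if not b['skip']:
            mvs[i] = b['code']
    # Pass 2: estimate each skip mv from the nearest decoded mvs on both
    # sides (a component-wise average stands in for the real estimator).
    for i, b in enumerate(blocks):
        if b['skip']:
            prev = next((mvs[j] for j in range(i - 1, -1, -1)
                         if mvs[j] is not None), None)
            nxt = next((mvs[j] for j in range(i + 1, len(blocks))
                        if mvs[j] is not None), None)
            refs = [v for v in (prev, nxt) if v is not None]
            mvs[i] = tuple(sum(c) // len(refs) for c in zip(*refs))
    return mvs

blocks = [{'skip': False, 'code': (2, 2)},
          {'skip': True, 'code': None},
          {'skip': False, 'code': (4, 0)}]
print(decode_block_mvs(blocks))  # -> [(2, 2), (3, 1), (4, 0)]
```

Note that in pass 2 an already-estimated skip mv becomes available to later skip blocks, mirroring the derivation from both non-skip blocks and previously derived skip blocks described elsewhere in this text.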
  • FIG. 6 is a block diagram illustrating a configuration of the moving image encoding device 2.
  • The moving image encoding device 2 includes a header information determination unit 21, a header information encoding unit 22, an MB setting unit 23, an MB encoding unit 24, a variable length code multiplexing unit 25, an MB decoding unit 26, and a frame memory 27.
  • In brief, the video encoding device 2 is a device that generates and outputs encoded data #1 by encoding the input image #100.
  • The moving image encoding apparatus 2 is a moving picture coding apparatus that uses, as a part thereof, the technology adopted in the H.264/MPEG-4 AVC standard.
  • the header information determination unit 21 determines header information based on the input image # 100. The determined header information is output as header information # 21.
  • the header information # 21 includes the image size of the input image # 100.
  • the header information # 21 is input to the MB setting unit 23 and supplied to the header information encoding unit 22.
  • the header information encoding unit 22 encodes header information # 21 and outputs encoded header information # 22.
  • the encoded header information # 22 is supplied to the variable length code multiplexer 25.
  • the MB setting unit 23 divides the input image # 100 into a plurality of macro blocks based on the header information # 21, and outputs a macro block image # 23 related to each macro block.
  • the macro block image # 23 is sequentially supplied to the MB encoding unit 24.
  • the MB encoding unit 24 encodes sequentially input macroblock images # 23 to generate MB encoded data # 24.
  • the generated MB encoded data # 24 is supplied to the variable length code multiplexer 25. Since the configuration of the MB encoding unit 24 will be described later, the description thereof is omitted here.
  • variable length code multiplexer 25 generates encoded data # 1 by multiplexing the encoded header information # 22 and the MB encoded data # 24, and outputs the encoded data # 1.
  • the MB decoding unit 26 generates and outputs a decoded image # 26 corresponding to each macroblock by sequentially decoding the MB encoded data # 24 corresponding to each input macroblock.
  • the decoded image # 26 is supplied to the frame memory 27.
  • the input decoded image # 26 is recorded in the frame memory 27.
  • decoded images corresponding to all macroblocks preceding the macroblock in the raster scan order are recorded in the frame memory 27.
  • FIG. 7 is a block diagram showing a configuration of the MB encoding unit 24.
  • The MB encoding unit 24 includes a skip region selection unit 24a, a motion search unit 24b, a skip mv estimation unit 24c, an mv buffer 24d, a motion compensation prediction unit 24e, an mv encoding unit 24f, a prediction residual encoding unit 24g, and a skip information encoding unit 24h.
  • The skip region selection unit 24a divides the input macroblock image #23 (hereinafter, the macroblock related to the macroblock image #23 input to the MB encoding unit 24 is referred to as the "target macroblock") into a plurality of block images, and determines whether each block is a skip block or a non-skip block based on a predetermined algorithm. Then, the skip region selection unit 24a outputs the skip information and order information of each block, selected in raster scan order (hereinafter, the selected block is referred to as the "target block"), to the subsequent stage in the order of selection.
  • Examples of the predetermined algorithm include an algorithm that performs the following processing for all 2^n combinations that can be determined for the n blocks constituting the target macroblock. That is, the motion search unit 24b to the motion compensation prediction unit 24e and the prediction residual encoding unit 24g execute the processing of each unit described later, the 2^n sets of MB encoded data #24 generated for the respective combinations are decoded by the MB decoding unit 26, and the 2^n decoded images #26 are recorded in the frame memory 27. Then, for each decoded image #26, the skip region selection unit 24a determines the optimum combination based on the distortion of the decoded image and the code amount of the prediction residual encoded by the prediction residual encoding unit 24g to generate that decoded image, so that both the distortion and the code amount of the prediction residual are as small as possible.
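The exhaustive selection above can be sketched as a rate-distortion search. This is a hedged sketch: `encode_and_measure()` is a hypothetical helper standing in for the trial run through units 24b-24g and the MB decoding unit 26, and the Lagrangian cost is the usual way of trading off the two criteria, not a formula stated in this text.

```python
from itertools import product

# Hedged sketch: try every one of the 2**n skip/non-skip assignments of
# the n blocks and keep the one with the lowest rate-distortion cost.
# encode_and_measure(pattern) is a hypothetical helper returning
# (distortion, residual_bits) for one assignment.
def select_skip_pattern(n_blocks, encode_and_measure, lam=1.0):
    best_cost, best_pattern = float('inf'), None
    for pattern in product([False, True], repeat=n_blocks):  # 2**n patterns
        distortion, bits = encode_and_measure(pattern)
        cost = distortion + lam * bits  # Lagrangian trade-off
        if cost < best_cost:
            best_cost, best_pattern = cost, pattern
    return best_pattern

# Toy model: skipping a block adds 1 unit of distortion, while coding its
# mv costs 2 bits; under this model skipping everything is cheapest.
print(select_skip_pattern(2, lambda p: (sum(p), 2 * (2 - sum(p)))))
# -> (True, True)
```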
  • The motion search unit 24b generates pmv based on the derived mvs or provisional mvs recorded in the mv buffer 24d, and derives mv by performing a motion search with reference to the decoded image #26 recorded in the frame memory 27.
  • The skip mv estimation unit 24c generates pmv based on the derived mvs or provisional mvs recorded in the mv buffer 24d (the mvs or provisional mvs of the blocks to the left of, above, and at the top right of the target block), and sets it as the provisional mv of the target block.
  • The skip mv estimation unit 24c also estimates the mv (estimated value) of the target block based on the derived mvs recorded in the mv buffer 24d (the mvs of at most three of the blocks A to D in FIG. 9).
  • In the mv buffer 24d, skip information, mv information, and order information are recorded in association with each other.
  • the motion compensation prediction unit 24e reads each mv information from the mv buffer 24d and also reads the decoded image # 26 related to the target macroblock from the frame memory 27 to generate a predicted image of the target macroblock.
  • the mv encoding unit 24f encodes the mv information of each non-skip block read from the mv buffer 24d.
  • The prediction residual encoding unit 24g generates a prediction residual image from the macroblock image #23 and the decoded image #26 related to the target macroblock, and applies a DCT (Discrete Cosine Transform) to the prediction residual image to generate transform coefficients. The prediction residual encoding unit 24g also quantizes and variable-length encodes the generated transform coefficients.
  • the skip information encoding unit 24h encodes the skip information of each block read from the mv buffer 24d.
  • FIG. 8 is a flowchart showing an mv generation process for generating a motion vector of each block in the target macroblock.
  • the skip area selection unit 24a divides the input macroblock image # 23 into a plurality of block images, and determines whether each block is a skip block or a non-skip block based on the above-described predetermined algorithm. (Step S21).
  • The skip area selection unit 24a sets, as the target block, the block next in raster scan order to the block last set as the target block in S22 (or, if the processing of S22 is performed for the first time for the target macroblock, the block positioned at the upper left corner of the target macroblock) (step S22), and determines whether the target block is a skip block or a non-skip block (step S23).
  • The skip area selection unit 24a also determines whether the target block is the last block (block SB16) (step S28).
  • If it is determined that the target block is a non-skip block (NO in step S23), the skip region selection unit 24a generates skip information and order information and outputs them to the motion search unit 24b.
  • When the skip information and order information of the target block are input, the motion search unit 24b generates pmv based on the derived mvs or provisional mvs recorded in the mv buffer 24d, and derives mv by performing a motion search with reference to the decoded image #26 recorded in the frame memory 27 (step S24). Then, the generated pmv, the derived mv, the skip information, and the order information are associated with one another and recorded in the mv buffer 24d (step S25). Thereafter, the process proceeds to step S28.
  • When it is determined that the target block is a skip block (YES in step S23), the skip region selection unit 24a generates skip information and order information and outputs them to the skip mv estimation unit 24c.
  • The skip mv estimation unit 24c generates pmv based on the derived mvs or provisional mvs recorded in the mv buffer 24d (the mvs or provisional mvs of the blocks to the left of, above, and at the top right of the target block) (step S26), and records the generated pmv in the mv buffer 24d as the provisional mv of the target block, in association with the skip information and order information (step S27).
  • If it is determined in step S28 that the target block is not the last block (NO in step S28), the process returns to step S22.
  • If it is determined that the target block is the last block (YES in step S28), the skip area selection unit 24a outputs a signal serving as a trigger for starting the process of deriving the mvs of the skip blocks to the skip mv estimation unit 24c (step S29). Then, the skip mv estimation unit 24c refers to the mv buffer 24d and estimates mv for the skip block next in raster scan order after the skip block whose mv was estimated last (or, if the processing of S30 is performed for the first time for the target macroblock, the skip block with the smallest order information value) (step S30). The process of S30 is the same as the process of S8 in the video decoding device 1.
  • The skip mv estimation unit 24c then updates the provisional mv information (the provisional mv information recorded in the mv buffer 24d) of the skip block whose mv has been estimated to mv information, using the mv estimated in step S30 (step S31).
  • the skip mv estimation unit 24c determines whether or not the processing of S30 and S31 has been performed for all skip blocks in the target macroblock (step S32).
  • If it is determined in step S32 that there is a skip block that has not yet been subjected to the processes of S30 and S31 (NO in step S32), the process returns to S30.
  • When it is determined that the processing of S30 and S31 has been performed for all skip blocks (YES in step S32), the skip mv estimation unit 24c refers to the mv buffer 24d and outputs the mv information of each block of the target macroblock (the mvs represented by the 16 arrows in FIG. 5C) to the motion compensation prediction unit 24e.
  • In addition, the mv information of the derived mvs and the pmv information of the generated pmvs are output to the mv encoding unit 24f.
  • the motion compensation prediction unit 24e generates a motion compensation prediction image based on the input mv information and the decoded image # 26 read from the frame memory 27 (step S33).
  • The prediction residual encoding unit 24g generates a prediction residual from the input motion compensated prediction image and the macroblock image #23, performs DCT transform, quantization, and variable-length encoding on it, and supplies the obtained encoded prediction residual data to the variable length code multiplexing unit 25 (step S34).
  • The mv encoding unit 24f generates and encodes difference mv information based on the mv information and pmv information input from the skip mv estimation unit 24c, and supplies the encoded difference mv information to the variable length code multiplexing unit 25 (step S35). That is, the mv encoding unit 24f encodes the difference mv information of the non-skip blocks, but does not encode difference mv information for the skip blocks.
  • the skip information encoding unit 24h reads the skip information of each block from the mv buffer 24d, encodes it, and supplies it to the variable length code multiplexing unit 25 (step S36).
  • The variable length code multiplexing unit 25 multiplexes the MB encoded data #24 input from the MB encoding unit 24 (that is, the encoded prediction residual, the encoded difference mv information, and the encoded skip information) and the encoded header information #22 input from the header information encoding unit 22, and outputs the result to the outside of the video encoding device 2 (step S37).
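The asymmetry in step S35 — difference mvs for non-skip blocks only — can be sketched as follows. This assumes the usual AVC-style convention that the transmitted value is mv minus pmv; the record layout is an illustrative assumption, not the patent's data structure.

```python
# Sketch of step S35 under an assumed difference-mv convention: only
# non-skip blocks contribute a difference mv (mv - pmv) to the bitstream.
def make_mvd_list(records):
    """records: list of dicts {'skip': bool, 'mv': (x, y), 'pmv': (x, y)}."""
    mvds = []
    for r in records:
        if r['skip']:
            continue  # no mv code is emitted for a skip block
        mvds.append((r['mv'][0] - r['pmv'][0], r['mv'][1] - r['pmv'][1]))
    return mvds

records = [{'skip': False, 'mv': (5, 2), 'pmv': (4, 2)},
           {'skip': True, 'mv': (4, 1), 'pmv': (4, 1)},
           {'skip': False, 'mv': (3, 0), 'pmv': (4, 1)}]
print(make_mvd_list(records))  # -> [(1, 0), (-1, -1)]
```

The decoder inverts this by adding each decoded difference back to its own pmv, which is why the encoder and decoder must derive identical pmv (and provisional mv) values.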
  • In the above, a motion vector has been described as a specific example of the encoding parameter to be generated.
  • However, the encoding parameter is not limited to a motion vector, and may be another prediction parameter. That is, the encoding parameter may be, for example, a reference image index, a reference image list number, or a flag indicating whether or not to perform bi-directional prediction on a B picture.
  • A weighting coefficient in weighted prediction, which compensates for luminance changes between frames in motion compensation prediction, may also be used as an encoding parameter.
  • The encoding parameters further include the transform coefficients (prediction residual transform coefficients) for reconstructing the prediction residual.
  • The number of prediction residual transform coefficients depends on the size of the unit region on which the frequency transform is performed (the number of pixels included in the unit region). For example, when the unit region is a block of 4 pixels × 4 lines and a 4×4 DCT is applied, the number of transform coefficients is 16.
  • the spatial correlation of the prediction residual transform coefficient is not high, but the spatial correlation of the DC component is relatively high.
  • the prediction residual transform coefficient C_DC of the DC component in the skip block may be calculated by Equation 1.
  • Here, i is an index indicating each of the eight blocks (adjacent blocks) adjacent to the skip block for which the prediction residual transform coefficient is calculated.
  • L_i denotes the distance between the adjacent block indicated by index i and that skip block.
  • C_i is the prediction residual transform coefficient of the DC component obtained by the DCT transform in the adjacent block indicated by index i.
  • That is, the prediction residual encoding unit 24g calculates the prediction residual transform coefficient of the DC component of the target skip block by taking a weighted average of the DC-component prediction residual transform coefficients of the adjacent blocks, weighted according to the distance between the skip block and each adjacent block. Then, for the skip block, the prediction residual encoding unit 24g encodes only the prediction residual transform coefficients of the AC components.
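Equation 1 itself is not reproduced in this excerpt, so the sketch below assumes a plain inverse-distance weighted average, which matches the description (nearer neighbours weigh more, via the distances L_i); the patent's actual weighting may differ.

```python
# Assumed inverse-distance weighted average of the neighbours' DC
# coefficients (a stand-in for the unreproduced Equation 1).
def dc_from_neighbours(neighbours):
    """neighbours: list of (C_i, L_i) pairs, one per adjacent block, where
    C_i is the neighbour's DC transform coefficient and L_i > 0 is its
    distance from the skip block."""
    num = sum(c / l for c, l in neighbours)
    den = sum(1.0 / l for _, l in neighbours)
    return num / den

# Example with two neighbours at distances 1 and 2: the nearer one dominates.
print(dc_from_neighbours([(8.0, 1.0), (2.0, 2.0)]))  # -> 6.0
```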
  • Note that the encoding parameter generation processing according to the present embodiment may be performed on two or more of the above-described specific examples (for example, motion vectors and reference image indexes).
  • In the above description, the mv encoding unit 24f generates and encodes the difference mv from the pmv information and the searched mv information; however, the searched mv information itself may be encoded instead of the difference mv.
  • In the above description, the skip mv estimation unit 24c generates the provisional mv for generating the pmv information of non-skip blocks; however, the process of generating the provisional mv does not necessarily have to be performed.
  • In the above description, the mv of a skip block is estimated by the skip mv estimation unit 24c using the mvs of the non-skip blocks adjacent to its right or below; however, the mvs of the non-skip blocks adjacent to the lower left or lower right of the skip block may also be used.
  • Alternatively, the skip mv estimation unit 24c may estimate the mv of a skip block in the input image as follows. That is, the estimated mv may be the median of five mvs: the mv of the collocated block (a block in the reference image located at the same position as the skip block in the input image) and the mvs of the four blocks adjacent to it above, below, to the left, and to the right.
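The collocated-block variant above can be sketched directly. The component-wise treatment of the median is an assumption (the text does not specify whether the median is taken per component or over whole vectors):

```python
# Sketch of the collocated-block variant: the estimated mv is the
# component-wise median of five mvs (the collocated block plus its four
# neighbours in the reference picture).
def estimate_skip_mv(collocated, up, down, left, right):
    mvs = (collocated, up, down, left, right)
    xs = sorted(v[0] for v in mvs)
    ys = sorted(v[1] for v in mvs)
    return (xs[2], ys[2])  # median of five values per component

print(estimate_skip_mv((1, 1), (0, 3), (2, 2), (5, 0), (3, 1)))  # -> (2, 1)
```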
  • Alternatively, the skip mv estimation unit 24c may estimate the mv of a skip block as follows. That is, the estimated mv may be the median of five mvs: the mvs of the four blocks adjacent to the skip block above, below, to the left, and to the right, and the mode value of all the mvs already generated in the macroblock including the skip block.
  • In the above description, the skip region selection unit 24a causes the skip mv estimation unit 24c to start estimating mv at the timing when the motion search unit 24b has derived all the mvs; however, the timing is not limited to this.
  • For example, the timing at which the skip mv estimation unit 24c starts to estimate the mv of each skip block may be the timing at which the motion search unit 24b has derived all the mvs of the non-skip blocks within the specified range from that skip block.
  • Here, the specified range means a range including the blocks whose mvs may be referred to by the skip mv estimation unit 24c in order to derive the mv of the skip block. That is, in the above-described embodiment, the blocks included in the specified range are the blocks adjacent, above, below, to the left, and to the right, to the skip block whose mv is to be estimated.
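The timing condition above can be sketched as a small readiness check. The grid geometry and data structures here are assumptions made for illustration; the point is that estimation of a skip block can start as soon as its non-skip 4-neighbours are decoded, which lets decoding and estimation proceed in parallel.

```python
# Illustrative helper under an assumed grid geometry: estimation of a
# skip block's mv may start once every non-skip block in its specified
# range (here, the four 4-neighbours) has a decoded mv.
def ready_to_estimate(pos, skip_map, decoded):
    """pos: (x, y) of the skip block; skip_map: {(x, y): is_skip};
    decoded: set of positions whose mv has already been decoded."""
    x, y = pos
    for n in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if n in skip_map and not skip_map[n] and n not in decoded:
            return False  # a non-skip neighbour is still undecoded
    return True

skip_map = {(1, 1): True, (0, 1): False, (2, 1): False}
print(ready_to_estimate((1, 1), skip_map, {(0, 1)}))          # -> False
print(ready_to_estimate((1, 1), skip_map, {(0, 1), (2, 1)}))  # -> True
```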
  • The macroblock in the above embodiment corresponds to the macroblock of H.264/MPEG-4 AVC, and the sub-macroblock and the block correspond to the HEVC CU (Coding Unit, sometimes referred to as a coding tree leaf), PU (Prediction Unit), or TU (Transform Unit).
  • As described above, the skip area selection unit 24a classifies each block included in the macroblock image #23 into a non-skip block, for which the code obtained by encoding the mv related to the block is to be included in the encoded data #1, or a skip region, for which that code is not included in the encoded data #1.
  • The motion search unit 24b generates the mv related to each non-skip block included in the macroblock image #23 with reference to the parameter values related to the non-skip blocks earlier in encoding order than that non-skip block.
  • The skip mv estimation unit 24c sets the mv related to each skip region included in the macroblock image #23 to an estimated value derived by referring to the values of mv generated by the motion search unit 24b for the non-skip blocks preceding and following the skip region in the predetermined encoding order.
  • Since the moving image encoding apparatus 2 can estimate mv using the mvs of blocks that were not used in the conventional mv estimation method, the possibility of estimating a more appropriate mv increases. That is, the code amount of the encoded data to be transmitted can be reduced.
  • the deriving unit further derives the estimated value with reference to parameter values relating to other skip areas already derived by the deriving unit.
  • That is, the decoding apparatus of the present invention derives the estimated value of the parameter related to a skip region with reference to both the parameter values related to non-skip regions and the parameter values related to skip regions that have already been derived.
  • Therefore, according to the decoding device of the present invention, it is possible to further improve the estimation accuracy of the parameters relating to the skip regions.
  • In the decoding device of the present invention, it is desirable that the parameters referred to by the deriving means in order to derive the parameter relating to each skip area are limited to parameters relating to unit areas included in the specified range from that skip area, and that the timing at which the deriving means starts derivation of the parameter relating to each skip area is after the time when the decoding means has finished setting the parameters relating to the non-skip areas included in the specified range from that skip area.
  • With this configuration, the decoding apparatus has the additional effect that the decoding means and the deriving means can be caused to perform processing in parallel after the time when the decoding means finishes setting the parameters related to the non-skip areas included in the specified range from the skip area positioned first in the decoding order.
  • Note that, for example, non-skip areas included in the same partial area (for example, the same macroblock) as the skip area, and non-skip areas whose distance from the skip area is equal to or less than a predetermined threshold, correspond to non-skip areas within the specified range.
  • In the decoding device of the present invention, it is desirable that the decoded image is divided into a plurality of partial areas, and that the deriving means derives the parameter relating to each skip area with reference to the values of parameters relating to non-skip areas belonging to the same partial area as that skip area.
  • the decoding device of the present invention derives the parameter related to each skip area included in the partial area by referring to the value of the parameter related to the non-skip area included in the same partial area.
  • the decoding device of the present invention has the further effect that the accuracy of parameter estimation regarding the skip region can be further improved.
  • the partial area may be an arbitrary partial area included in the decoded image excluding the entire decoded image.
  • the partial area may be a macro block or a slice.
  • In the decoding device of the present invention, it is desirable that the timing at which the deriving means starts deriving the parameter related to each skip region is after the time when the decoding means has set all the parameters related to the non-skip regions belonging to the same partial region as that skip region.
  • With this configuration, the decoding apparatus has the further effect that, at the timing of starting the derivation of the parameter related to a skip area, it can refer to the value of any parameter related to a non-skip area that has already been set in order to derive that parameter.
  • In the decoding device of the present invention, it is desirable that the deriving means derives the parameter related to each skip area belonging to a partial area based on the mode value of the parameters of the non-skip areas belonging to that partial area that have already been decoded by the decoding means at the timing when the deriving means starts derivation of the parameters related to the skip areas belonging to the partial area.
  • With this configuration, the parameter related to each skip area belonging to the partial area corresponds to the mode value of the parameters set by the decoding means for the non-skip areas included in the same partial area. Therefore, the decoding device according to the present invention has the further effect that the accuracy of the derived parameters can be further improved.
  • In the decoding device of the present invention, it is desirable that the decoding order is a raster scan order within each partial region, and that the deriving means derives and sets the parameter relating to each skip region based on the parameter values set by the decoding means for the non-skip regions adjacent to the skip region on the right, below, at the lower left, or at the lower right.
  • With this configuration, since the parameters referred to in order to derive the parameter related to a skip region are parameters related to non-skip regions adjacent to that skip region, the decoding apparatus has the further effect that the accuracy of the derived encoding parameters can be further improved.
  • the partial area is preferably a macroblock or a slice.
  • the parameter is preferably a prediction parameter that is referred to in order to generate a prediction image by motion compensation prediction.
  • the prediction parameters include a motion vector, a reference image index, a reference image list number, a flag indicating whether or not to perform bi-directional prediction, and a weighting coefficient for weighted prediction.
  • the parameter is preferably a DC component prediction residual transform coefficient.
  • In the decoding device of the present invention, it is desirable that the code included in the encoded data includes a code obtained by encoding a difference value between a prediction value for predicting the prediction parameter and the value of the prediction parameter.
  • In the decoding device of the present invention, it is desirable that, before starting the derivation of the estimated values, the deriving means derives the value of a temporary parameter relating to each skip region by referring to the values of the prediction parameters set by the decoding means and the values of the temporary parameters already derived by the deriving means, and that the decoding means calculates the value of the prediction parameter relating to each non-skip region from the difference value obtained by decoding the code and a prediction value derived by referring to the values of the prediction parameters set by the decoding means and the values of the temporary parameters.
  • the decoding means derives the prediction value of the prediction parameter related to each non-skip block based on the value of the temporary parameter related to the skip region.
  • Here, the value of the temporary parameter related to a skip area is derived by referring to the values of the parameters related to the non-skip areas preceding the skip area in the predetermined decoding order, as in the conventional decoding device.
  • Therefore, the decoding apparatus has the further effect that the accuracy of the generated prediction value can be improved even when there are few non-skip regions whose prediction parameters the decoding means can refer to in order to derive the prediction value.
  • the encoding device of the present invention includes an encoding unit that encodes the parameter generated by the generating unit, and the encoding unit does not encode the estimated value derived by the deriving unit.
  • the encoding apparatus can further reduce the code amount of the entire encoded data transmitted to the decoding apparatus.
  • In the encoding device of the present invention, it is desirable that the parameter is a prediction parameter referred to in order to generate a predicted image by motion compensation prediction, and that the encoding device encodes a difference image between the encoding target image and the predicted image of that image based on the prediction parameter.
  • With this configuration, since the code amount of the difference image to be encoded decreases as the prediction accuracy improves, the encoding apparatus of the present invention has the further effect that the code amount of the entire encoded data is reduced.
  • In the encoding device of the present invention, it is desirable that a prediction value for predicting the prediction parameter relating to each non-skip region is generated by referring to the values of the prediction parameters relating to the non-skip regions earlier in encoding order than that non-skip region, and that the code included in the encoded data includes a code obtained by encoding a difference value between the value of the prediction parameter and the prediction value.
  • With this configuration, since the encoding apparatus of the present invention includes the code of the difference value in the encoded data instead of the code of the prediction parameter itself, it has the further effect that the code amount of the entire encoded data is reduced.
  • In the encoding device of the present invention, it is desirable that the deriving means generates the value of the temporary parameter relating to each skip region based on the values of the prediction parameters generated by the generating means and the values of the temporary parameters already derived by the deriving means, and that the generating means generates the prediction value based on the values of the temporary parameters.
  • the generating unit generates a prediction value of a prediction parameter related to each non-skip block based on a value of a temporary parameter related to a skip region.
  • Here, the value of the temporary parameter related to a skip region is derived by referring to the values of the parameters related to the non-skip regions preceding the skip region in the predetermined encoding order, as in a conventional encoding device.
  • Therefore, even when there are few non-skip regions whose prediction parameters the generating means can refer to in order to generate the prediction value, the accuracy of the generated prediction value can be improved.
  • the present invention can be suitably applied to an encoding device that encodes an image and generates encoded data, and a decoding device that decodes encoded data generated using such an image encoding device.
  • Reference signs: 1 video decoding device (decoding device); 14 MB decoding unit; 141 mv decoding unit; 141a block scanning unit (classification means); 141b mv estimation unit; 141c skip information decoding unit; 141d switch; 141e mv decoding unit (decoding means); 141f skip mv estimation unit (derivation means); 141g mv buffer; 2 video encoding device (encoding device); 24 MB encoding unit (generation means); 24a skip area selection unit (classification means); 24b motion search unit (generation means); 24c skip mv estimation unit (derivation means); 24d mv buffer; 24e motion compensation prediction unit; 24f mv encoding unit (encoding means); 24g prediction residual encoding unit (encoding means); 24h skip information encoding unit


Abstract

The invention relates to a moving image decoding apparatus comprising: a block scanning unit (141a) for classifying blocks into non-skip blocks, for each of which a code obtained by encoding a parameter relating to the block is included in the encoded data (#13), and skip blocks, for each of which that code is not included in the encoded data (#13); an mv decoding unit (141e) for setting a motion vector (mv) relating to each non-skip block to a value obtained by decoding the corresponding code included in the encoded data (#13); and a skip mv estimation unit (141f) for obtaining the mv relating to each skip block by referring to parameter values set by the mv decoding unit (141e), namely the values of the parameters relating to the preceding and following non-skip blocks in raster scan order.
PCT/JP2011/058008 2010-03-30 2011-03-30 Encoder apparatus and decoder apparatus WO2011122659A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-079580 2010-03-30
JP2010079580 2010-03-30

Publications (1)

Publication Number Publication Date
WO2011122659A1 true WO2011122659A1 (fr) 2011-10-06

Family

ID=44712353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/058008 WO2011122659A1 (fr) 2010-03-30 2011-03-30 Encoder apparatus and decoder apparatus

Country Status (1)

Country Link
WO (1) WO2011122659A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014087861A1 (fr) * 2012-12-06 2014-06-12 Sony Corporation Image processing device, image processing method, and program
WO2018173432A1 (fr) * 2017-03-21 2018-09-27 Sharp Corporation Prediction image generation device, moving image decoding device, and moving image encoding device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03217185A (ja) * 1990-01-23 1991-09-24 Victor Co Of Japan Ltd Method of transmitting motion vector information, and transmitter and receiver therefor
JPH04234283A (ja) * 1990-08-28 1992-08-21 Philips Gloeilampenfab:Nv Method and apparatus for reducing motion estimation hardware and the data transmission capacity requirements of video systems
JP2000023190A (ja) * 1998-06-26 2000-01-21 Sony Corp Motion vector generation method, image encoding device, motion compensation method, motion compensation device, and providing medium
JP2006525766A (ja) * 2003-05-02 2006-11-09 Koninklijke Philips Electronics N.V. Biased motion vector interpolation for reduced video artifacts
JP2009021673A (ja) * 2007-07-10 2009-01-29 Nippon Telegr & Teleph Corp <Ntt> Encoding parameter determination method, encoding parameter determination device, encoding parameter determination program, and computer-readable recording medium on which the program is recorded
WO2009050658A2 (fr) * 2007-10-15 2009-04-23 Nokia Corporation Motion skip and single-loop encoding for multi-view video content

Similar Documents

Publication Publication Date Title
RU2679116C1 Video encoding device, video decoding device, video encoding method, and video decoding method
JP4908180B2 Moving image encoding device
KR101911012B1 Method for managing a reference picture list, and device using same
KR101420957B1 Image encoding device, image decoding device, image encoding method, and image decoding method
KR101473278B1 Image predictive encoding device, image predictive decoding device, image predictive encoding method, image predictive decoding method, image predictive encoding program, and image predictive decoding program
US20110182523A1 (en) Method and apparatus for image encoding/decoding
US20100118945A1 (en) Method and apparatus for video encoding and decoding
JP4703449B2 Encoding method
JP5194119B2 Image processing method and corresponding electronic device
KR20130099243A Inter prediction encoding method
WO2012098845A1 (fr) Image encoding method, image encoding device, image decoding method, and image decoding device
US9036918B2 (en) Image processing apparatus and image processing method
WO2009133845A1 (fr) Video encoding/decoding device and method
JP2012028863A Moving image encoding device
WO2011122659A1 (fr) Encoder apparatus and decoder apparatus
JP4719854B2 Moving image encoding device, moving image encoding program, moving image decoding device, and moving image decoding program
JP5388977B2 Image encoding method, image decoding method, image encoding device, image decoding device, and program
JP6608904B2 Moving image encoding device and method
JP6181242B2 Image decoding method
JP5946980B1 Image decoding method
JP5951915B2 Image decoding method
JP5911982B2 Image decoding method
JP5750191B2 Image decoding method
JP2023549182A Method and apparatus for adaptive reordering of reference frames
JP2018074603A Moving image encoding device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11762883

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11762883

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP