WO2015152504A1 - Method and apparatus for deriving inter-view motion merging candidates - Google Patents
Method and apparatus for deriving inter-view motion merging candidates
- Publication number
- WO2015152504A1 PCT/KR2015/000450 KR2015000450W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- inter
- block
- view
- reference block
- motion merging
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/521—Processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/56—Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
Definitions
- the present invention relates to a method and apparatus for deriving an inter-view motion merging candidate, and more particularly, to a method and apparatus for deriving a motion merging candidate for a current block using the encoding information of a reference block.
- JCT-VC Joint Collaborative Team on Video Coding
- 3D video vividly provides the user, through a three-dimensional stereoscopic display device, with the same sense of depth that is seen and felt in the real world.
- JCT-3V Joint Collaborative Team on 3D Video Coding Extension Development
- the 3D video standard includes standards for advanced data formats and related technologies that can support not only stereoscopic images but also autostereoscopic images using real images and their depth information maps.
- 3D-HEVC which is being standardized as a 3D extension standard of HEVC, may use motion merge as a prediction encoding tool.
- Motion merging is a method of inheriting, as-is, motion information derived from neighboring blocks of the current block and using it as the motion information of the current block.
- the motion merging of 3D-HEVC is based on HEVC.
- 3D-HEVC may use inter-view motion merging based on images from multiple viewpoints. That is, in 3D-HEVC, motion information may be derived from a block (hereinafter, referred to as a reference block) at a position corresponding to the current block among blocks of adjacent views.
- a reference block: a block at a position corresponding to the current block among the blocks of an adjacent view.
- 3D-HEVC cannot derive motion information for the current block from the reference block in certain cases, and in such cases there is a problem that inter-view motion merging cannot be used.
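The inter-view lookup described above — projecting the current block into the adjacent view through its disparity vector — can be sketched as follows. The coordinate convention, block size, and boundary clipping here are illustrative assumptions for the sketch, not details taken from the 3D-HEVC specification.

```python
def locate_reference_block(cur_x, cur_y, disparity_vector, frame_w, frame_h,
                           block_size=8):
    """Return the top-left position of the inter-view reference block in the
    previously coded view, obtained by shifting the current block's position
    by the disparity vector and clipping to the frame boundary."""
    dvx, dvy = disparity_vector
    ref_x = min(max(cur_x + dvx, 0), frame_w - block_size)
    ref_y = min(max(cur_y + dvy, 0), frame_h - block_size)
    return ref_x, ref_y
```

If the block found at this position is intra coded, it carries no motion information, which is exactly the failure case the invention addresses.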
- Korean Patent Publication No. 10-2013-7027419 (title of the invention: method and apparatus for predicting and compensating motion vectors and disparity vectors for 3D video coding)
- in 3D video coding, an MV (motion vector)/MVP (motion vector predictor) or DV (disparity vector)/DVP (disparity vector predictor) associated with skip mode, merge mode, or inter mode is obtained for a block of the current picture
- MV motion vector
- MVP motion vector predictor
- DV disparity vector
- DVP disparity vector predictor
- when performing inter-view motion merging for the current block, if the reference block is intra coded, predetermined information, such as motion information from an adjacent block spatially adjacent to the reference block, is inherited and used for motion merging of the current block.
- the method for generating a motion merging candidate according to an embodiment of the present invention includes: determining whether inter-view motion merging is possible for the current block based on the encoding information of an inter-view reference block derived through the disparity vector of the current block; and, when inter-view motion merging is impossible for the current block, generating an inter-view motion merging candidate for the current block using the encoding information of an adjacent block that is spatially adjacent to the inter-view reference block.
- the apparatus for generating motion merging candidates may include: a block search unit that obtains encoding information from the inter-view reference block derived through the disparity vector of the current block and from at least one neighboring block adjacent to the reference block; an information analyzer that determines whether inter-view motion merging is possible for the current block based on the encoding information of the reference block; and a candidate generator that, when inter-view motion merging is impossible for the current block, generates an inter-view motion merging candidate for the current block using the encoding information of the adjacent block.
- the method for generating a motion merging candidate may further include generating an inter-view motion merging candidate for the current block using the encoding information of the inter-view reference block when inter-view motion merging is possible for the current block.
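The claimed decision-and-fallback logic can be sketched as follows. The block records (a dict with an `intra` flag and an optional `motion` vector) are hypothetical data structures for illustration, not the 3D-HEVC syntax.

```python
def derive_motion_merge_candidate(ref_block, neighbors):
    """If the inter-view reference block is inter coded, inherit its motion
    vector; otherwise fall back to the first inter-coded spatially adjacent
    block. Returns the inherited motion vector, or None if no candidate
    can be generated."""
    if not ref_block["intra"]:
        return ref_block["motion"]      # normal inter-view inheritance
    for nb in neighbors:                # fallback: adjacent blocks
        if not nb["intra"] and nb.get("motion") is not None:
            return nb["motion"]
    return None                         # no inter-view candidate available
```

The neighbor list would be ordered by the inheritance priority described below; here any ordering the caller chooses is used as-is.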
- the step of determining whether inter-view motion merging is possible for the current block based on the encoding information of the inter-view reference block derived through the disparity vector of the current block may determine whether the inter-view reference block is intra coded based on the encoding information of the inter-view reference block.
- the step of generating the inter-view motion merging candidate for the current block using the encoding information of a neighboring block spatially adjacent to the inter-view reference block may generate the candidate using the encoding information of a highly correlated neighboring block included in the object region containing the reference block, among the plurality of adjacent blocks spatially adjacent to the inter-view reference block.
- the step of generating the inter-view motion merging candidate for the current block using the encoding information of an adjacent block spatially adjacent to the inter-view reference block may generate the candidate using the encoding information of a highly correlated neighboring block determined according to an inheritance priority, and the inheritance priority may be preset according to the encoding order of the inter-view reference block and each neighboring block.
- the highly correlated neighboring block may be a neighboring block encoded after the inter-view reference block.
- the candidate generator may generate the inter-view motion merging candidate for the current block by using encoding information of the inter-view reference block.
- the information analyzer may determine whether the inter-view reference block is intra encoded based on the encoding information of the inter-view reference block.
- the information analyzer may make the determination by referring to a header including a flag indicating whether to use the encoding information.
- the header may be a video parameter set extension.
- the encoding information of the reference block may include depth information and motion information of the reference block.
- the encoding information of the neighboring block may include depth information and motion information of the neighboring block.
- the candidate generator may generate the candidate using the encoding information of the highly correlated neighboring block included in the object region containing the reference block, among the plurality of adjacent blocks spatially adjacent to the inter-view reference block.
- the candidate generator generates the candidate using the encoding information of a highly correlated neighboring block determined according to the inheritance priority among the plurality of adjacent blocks spatially adjacent to the inter-view reference block, and the inheritance priority may be preset in the order of the inter-view reference block, a neighboring block encoded after the inter-view reference block, and a neighboring block encoded before the inter-view reference block.
- the present invention uses a method of deriving motion information from an adjacent block of the inter-view reference block. Therefore, the present invention can increase the coding efficiency of motion merging in 3D video encoding. In addition, the present invention can reduce computational complexity and memory complexity in decoding.
- FIG. 1 is a block diagram illustrating an example of an image encoding apparatus.
- FIG. 2 is a block diagram illustrating an example of an image decoding apparatus.
- FIG. 3 illustrates a case in which it is impossible to generate an inter-view motion merging candidate in the conventional method.
- FIG. 4 illustrates an example of an intra coded block.
- FIG. 5 is a block diagram of an apparatus for generating inter-view motion merging candidates according to an embodiment of the present invention.
- FIG. 6 is a schematic diagram to which the method of generating the inter-view motion merge candidate according to an embodiment of the present invention is applied.
- FIG. 7 is a flowchart of a method of generating an inter-view motion merging candidate according to an embodiment of the present invention.
- FIG. 8 is a schematic diagram to which the method of generating the inter-view motion merging candidate according to another embodiment of the present invention is applied.
- FIG. 9 is a flowchart of a method for generating an inter-view motion merging candidate according to another embodiment of the present invention.
- the method and apparatus disclosed in the embodiments of the present invention can be applied to both the encoding process and the decoding process performed in image processing, and the term 'coding' used throughout this specification is a parent concept encompassing both the encoding process and the decoding process.
- a person skilled in the art will be able to easily understand the decoding process with reference to the contents described as the encoding process and vice versa.
- encoding means converting a form or format of an image into another form or format for standardization, security, compression, or the like.
- Decoding also means converting the encoded video back into a form or format before being encoded.
- FIG. 1 is a block diagram illustrating an example of an image encoding apparatus 100.
- the image encoding apparatus 100 may include a predictor 110, a subtractor 120, a transformer 130, a quantizer 140, an encoder 150, an inverse quantizer 160, It may include an inverse transform unit 170, an adder 180, and a memory 190.
- the predictor 110 generates a predicted block by predicting a current block to be currently encoded in an image. That is, the prediction unit 110 may generate a pixel value of each pixel of the current block as a prediction block having a pixel value predicted according to the motion information determined based on the motion estimation. In addition, the prediction unit 110 may transmit the information about the prediction mode to the encoder such that the encoding unit 150 encodes the information about the prediction mode.
- the subtraction unit 120 may generate a residual block by subtracting the prediction block from the current block.
- the converter 130 may convert the residual block into the frequency domain to convert each pixel value of the residual block into a frequency coefficient.
- the transform unit 130 may convert the time-domain image signal into the frequency domain based on a transform method such as a Hadamard transform or a discrete cosine transform (DCT)-based transform.
- the quantization unit 140 may quantize the residual block transformed into the frequency domain by the transform unit 130.
- the encoder 150 may encode the quantized residual block based on an encoding technique and output the encoded quantized block in a bit stream.
- the encoding technique may be an entropy coding technique.
- the encoder 150 may encode information about the prediction mode for the current block received from the predictor 110 together.
- the inverse quantization unit 160 may inverse quantize the residual block quantized by the quantization unit 140. That is, the inverse quantization unit 160 may inverse-quantize the quantized residual block to restore the residual block transformed into the frequency domain.
- the inverse transform unit 170 may inverse transform the residual block inversely quantized by the inverse quantizer 160. That is, the inverse transform unit 170 may restore the residual block in the frequency domain to the residual block having the pixel value. In this case, the inverse transform unit 170 may perform inverse transform on the transform method of the transform unit 130.
- the adder 180 may reconstruct the current block by adding the prediction block predicted by the predictor 110 and the residual block inversely transformed and reconstructed by the inverse transformer 170.
- the restored current block is stored in the memory 190, and the restored current block stored in the memory 190 may be transferred to the predictor 110 to be used to predict the next block as a reference block.
- the image encoding apparatus 100 may include a deblocking filter (not shown).
- the deblocking filter (not shown) may improve the image quality of the current block restored by the adder 180 before it is stored in the memory.
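The encoder loop of FIG. 1 — predict, subtract, quantize, then inverse-quantize and add back to reconstruct — can be sketched numerically. A scalar quantizer and an identity transform stand in for the real DCT-based pipeline; both are simplifying assumptions for illustration.

```python
def encode_block(block, prediction, qstep=4):
    """Toy one-dimensional encoder pass over pixel lists.
    Returns (quantized coefficients, reconstructed block)."""
    residual = [b - p for b, p in zip(block, prediction)]        # subtractor 120
    quantized = [round(r / qstep) for r in residual]             # quantizer 140
    dequant = [q * qstep for q in quantized]                     # inverse quantizer 160
    reconstructed = [p + d for p, d in zip(prediction, dequant)] # adder 180
    return quantized, reconstructed
```

The reconstructed block, not the original, is what gets stored in the memory 190 as a reference — mirroring the decoder so both sides predict from identical data.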
- FIG. 2 is a block diagram illustrating an example of an image decoding apparatus 200.
- the image decoding apparatus 200 may decode a bitstream to extract a residual block and a prediction mode before encoding by the image encoding apparatus 100.
- the image decoding apparatus 200 may include a decoder 210, an inverse quantizer 220, an inverse transformer 230, an adder 240, a predictor 250, and a memory 260.
- the decoder 210 may restore motion information of the encoded residual block and the current block from the input bitstream. That is, the decoder 210 may restore the encoded residual block to the quantized residual block based on the encoding technique.
- the coding technique used by the decoder 210 may be an entropy coding technique.
- the inverse quantizer 220 may inverse quantize the quantized residual block. That is, the inverse quantizer 220 may inversely quantize the quantized residual block and restore the residual block transformed into the frequency domain.
- the inverse transform unit 230 may inversely transform an inverse quantized residual block restored from the inverse quantizer 220 to restore the residual block.
- the inverse transformer 230 may perform inverse transformation by performing a transformation technique used by the transformer 130 of the image encoding apparatus 100 in reverse.
- the predictor 250 may generate a prediction block based on the motion information of the current block that is extracted from the bitstream and decoded and reconstructed by the decoder 210.
- the adder 240 may reconstruct the current block by adding the prediction block and the reconstructed residual block. That is, the adder 240 adds the predicted pixel value of the prediction block output from the predictor 250 and the residual signal of the reconstructed residual block output from the inverse transform unit 230 to restore the current block with reconstructed pixel values.
- the current block restored by the adder 240 may be stored in the memory 260.
- the stored current block may be kept as a reference block so that the predictor 250 may use it to predict the next block.
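The decoder path of FIG. 2 is the mirror image of the encoder's reconstruction branch. As in the encoder sketch, a scalar inverse quantizer and an identity inverse transform are simplifying assumptions.

```python
def decode_block(quantized, prediction, qstep=4):
    """Toy one-dimensional decoder pass: inverse-quantize the received
    coefficients and add the prediction to reconstruct the block."""
    residual = [q * qstep for q in quantized]          # inverse quantizer 220
    return [p + r for p, r in zip(prediction, residual)]  # adder
```

Feeding it the coefficients produced by the toy encoder above reproduces the encoder's own reconstruction exactly, which is the property the real codec relies on.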
- FIG. 3 illustrates a case in which it is impossible to generate an inter-view motion merging candidate in the conventional method.
- for the current block 321, which is the block to be currently encoded within the current view frame 320, the reference block 311 corresponding to the current block 321 may be found in the previous view frame 310.
- the conventional inter-view motion merging candidate generation method may use a disparity vector based on the disparity that compensates for position differences between frames of different views.
- the conventional inter-view motion merging candidate generation method may inherit the motion information of the reference block 311 and use it as an inter-view motion merging candidate for the current block 321.
- in the conventional inter-view motion merging candidate generation method, when generating the inter-view motion merging candidate of the current block 321, motion information cannot be inherited from the reference block 311 if the reference block 311 is intra coded. Therefore, in this case, the conventional method cannot use inter-view motion merging.
- FIG. 4 illustrates an example of an intra coded block.
- when the current block X2' is encoded, it may refer to the encoding information of adjacent blocks that are spatially adjacent to it.
- when the current block X2' is encoded, it may refer to the blocks A21, A22, A23, B21, and C21 encoded before it.
- however, the upper blocks A21, A22, A23, B21, and C21 of FIG. 4 may not be blocks to which the current block X2' can refer. In that case, the current block X2' is encoded in the intra mode.
- the correlation here is the same concept as the correlation coefficient between two variables in probability and statistics.
- the correlation may indicate similarity between pixel values in a block in an image processing field.
- the current block may be expressed as having a high correlation with the first neighboring block.
- the same object region described above may be determined using depth information obtained through a depth camera, but is not limited thereto.
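The correlation notion used above — similarity between the pixel values of two blocks — corresponds to the Pearson correlation coefficient, which can be computed directly. Flattening each block to a list of pixel values and assuming non-zero variance are the only assumptions here.

```python
def block_correlation(a, b):
    """Pearson correlation coefficient between two equal-length lists of
    pixel values (each a flattened block). Ranges from -1 to 1; values
    near 1 indicate highly correlated blocks."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    std_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (std_a * std_b)
```

Two blocks covering the same moving object tend to score near 1, which is why the method prefers neighbors inside the same object region.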
- the apparatus and method for generating the inter-view motion merge candidate according to the present invention may be performed by inheriting motion information among adjacent blocks of the reference block. For this reason, the apparatus and method for generating the inter-view motion merge candidate may increase the coding efficiency of the motion merge for the current block. In addition, the apparatus and method for generating inter-view motion merge candidates may reduce computational complexity and memory complexity when decoding or encoding.
- the encoding efficiency may be a value considering the difference in image quality from the original image and the bit rate of the compressed video stream when the video is compressed.
- the image quality difference may be determined by a peak signal-to-noise ratio (PSNR).
- PSNR peak signal-to-noise ratio
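The PSNR measure mentioned above can be computed as follows for 8-bit video (peak value 255); treating frames as flat pixel lists is a simplification for the sketch.

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length lists of
    pixel values. Returns infinity for identical inputs."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")          # lossless reconstruction
    return 10 * math.log10(peak ** 2 / mse)
```

Coding efficiency comparisons then trade this quality number off against the bit rate of the compressed stream.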
- hereinafter, the inter-view motion merging candidate generating apparatus 500 and method according to an embodiment of the present invention will be described in detail.
- FIG. 5 is a block diagram of an apparatus 500 for deriving inter-view motion merging candidates according to an embodiment of the present invention.
- the apparatus 500 for deriving inter-view motion merging candidates may include a block search unit 510, an information analyzer 520, and a candidate generator 530.
- the block search unit 510 may obtain encoding information from the inter-view reference block derived from the disparity vector of the current block and at least one adjacent block spatially adjacent to the reference block.
- the information analyzer 520 may determine whether inter-view motion merging of the current block is possible based on encoding information of the inter-view reference block.
- the information analyzer 520 may determine whether the inter-view reference block is intra-encoded based on the encoding information of the inter-view reference block.
- the information analyzer 520 may make the determination by referring to a header including a flag indicating whether to use the encoding information.
- the header may be a video parameter set extension.
- the above-described encoding information of the reference block may include motion information and depth information of the reference block.
- the candidate generator 530 may generate the inter-view motion merging candidate for the current block by using encoding information of the adjacent block.
- the encoding information of the neighboring block may include motion information and depth information of the neighboring block.
- the candidate generator 530 may generate an inter-view motion merging candidate for the current block by using encoding information of the inter-view reference block.
- the candidate generator 530 may generate the candidate using the encoding information of the highly correlated neighboring block included in the object region containing the inter-view reference block, among the plurality of adjacent blocks spatially adjacent to the inter-view reference block.
- the highly correlated neighboring block may be determined according to whether it is included in the object region containing the inter-view reference block, and the above-described depth information may be used when determining the object region.
- the candidate generator 530 may generate the candidate using the encoding information of the highly correlated neighboring block determined according to the inheritance priority, among the plurality of adjacent blocks spatially adjacent to the inter-view reference block.
- the inheritance priority may be preset in the order of the inter-view reference block, the neighboring block encoded after the inter-view reference block, and the neighboring block encoded before the inter-view reference block.
- the high correlation neighboring block may be determined according to this inheritance priority.
- the inter-view motion merging candidate deriving apparatus 500 may be included in the image encoding apparatus 100 illustrated in FIG. 1 or the image decoding apparatus 200 illustrated in FIG. 2.
- the inter-view motion merging candidate derivation apparatus 500 may be mounted in the image encoding apparatus 100 or the image decoding apparatus 200 as a single component.
- each component of the inter-view motion merging candidate derivation apparatus 500, or a program that performs the operation of each component, may be included in existing components such as the predictor 110 and the adder 180 of the image encoding apparatus 100, or in existing components such as the predictor 250 and the adder 240 of the image decoding apparatus 200.
- FIG. 6 is a schematic diagram to which the method of generating the inter-view motion merge candidate according to an embodiment of the present invention is applied.
- motion information for the current block X4 may be derived according to an embodiment of the present invention.
- the above-described reference block X4' may be an inter-view reference block derived through the disparity vector 630 of the current block X4.
- the inter-view motion merging candidate generation method may use inter-view motion merging even when motion information does not exist, such as when the reference block X4' is encoded in the intra mode.
- in the method of generating the inter-view motion merging candidate, a block that is encoded after the reference block X4' and belongs to the same object as the reference block X4' has a high correlation with the motion information of the current block X4.
- the inter-view motion merging candidate generation method cannot derive motion information from the reference block X4' itself.
- therefore, in the method of generating the inter-view motion merging candidate, when the reference block X4' is encoded in the intra mode, motion information may be derived from the blocks B43, C42, and C43, which are encoded after the reference block X4' among its spatially adjacent blocks and have a high correlation with the reference block; in this way, an inter-view motion merging candidate with high encoding efficiency may be generated.
- FIG. 7 is a flowchart of a method of generating an inter-view motion merging candidate according to an embodiment of the present invention.
- in the method of generating the inter-view motion merging candidate, it may be determined whether inter-view motion merging is possible for the current block based on the encoding information of the inter-view reference block derived through the disparity vector of the current block (S720). In addition, when inter-view motion merging is impossible for the current block, an inter-view motion merging candidate for the current block may be generated using the encoding information of an adjacent block spatially adjacent to the inter-view reference block (S750).
- the inter-view motion merging candidate derivation method may calculate the position of the reference block of the previous view corresponding to the current block by using the disparity vector of the current block of the current view (S710).
- when inter-view motion merging is possible for the current block, an inter-view motion merging candidate for the current block may be generated using the encoding information of the inter-view reference block (S730).
- in determining whether inter-view motion merging is possible for the current block based on the encoding information of the inter-view reference block derived through the disparity vector of the current block (S720), the method of deriving the inter-view motion merging candidate may determine whether the inter-view reference block is intra coded based on the encoding information of the reference block.
- in the method of deriving the inter-view motion merging candidate, if it is determined that inter-view motion merging is impossible for the current block based on the encoding information of the inter-view reference block derived through the disparity vector of the current block, it may be determined whether the encoding information of an adjacent block is available (S740).
- FIG. 8 illustrates a method of generating inter-view motion merging candidates according to another embodiment of the present invention.
- the method for generating the inter-view motion merging candidate may derive motion information for the current block X6.
- the reference block X6' corresponding to the current block X6 may be encoded in the intra mode.
- the above-described reference block X6' may be an inter-view reference block derived through the disparity vector 830 of the current block X6, and an arrow 812 shown in each block may represent the motion information of that block.
- the shaded area 811 may be an object area divided using predetermined information.
- the predetermined information may be depth information input by a depth camera.
- FIG. 9 is a flowchart of a method for generating an inter-view motion merging candidate according to another embodiment of the present invention.
- In the inter-view motion merging candidate derivation method, the position of the reference block in the previous view corresponding to the current block may be calculated using the disparity vector of the current block in the current view (S910). Based on the encoding information of the inter-view reference block derived from the disparity vector of the current block in the current view, it may be determined whether inter-view motion merging is possible for the current block (S920). When inter-view motion merging is impossible for the current block, an inter-view motion merging candidate for the current block may be generated using the encoding information of an adjacent block spatially adjacent to the inter-view reference block (S950).
- When inter-view motion merging is possible for the current block, an inter-view motion merging candidate for the current block may be generated using the encoding information of the inter-view reference block (S930).
- In the inter-view motion merging candidate derivation method, it may be determined whether inter-view motion merging is possible for the current block, based on the encoding information of the inter-view reference block derived through the disparity vector of the current block (S920). Specifically, it may be determined whether the inter-view reference block is intra-coded, based on the encoding information of the reference block.
- In the inter-view motion merging candidate derivation method, when motion merging is impossible for the adjacent blocks encoded after the reference block, it may be determined whether motion merging is possible for at least one of the adjacent blocks encoded before the reference block (S960).
- The inter-view motion merging candidate derivation method may then generate an inter-view motion merging candidate using the encoding information of that adjacent block (S970).
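The inheritance-priority fallback of steps S950 through S970 can be sketched as follows. This is a minimal illustration under assumed interfaces: `pick_by_inheritance_priority`, the `(block_id, motion_info)` pairs, and the `encode_order` mapping from block ids to coding order are hypothetical, not names from the patent.

```python
from typing import Dict, List, Optional, Tuple

def pick_by_inheritance_priority(ref_id: str,
                                 neighbors: List[Tuple[str, Optional[Tuple[int, int]]]],
                                 encode_order: Dict[str, int]
                                 ) -> Optional[Tuple[int, int]]:
    """Sketch of S950-S970: among the blocks spatially adjacent to the
    inter-view reference block, first try those encoded after the reference
    block (S950); if none of them can supply motion information, try those
    encoded before it (S960); return the inherited motion information
    (S970), or None if no adjacent block qualifies."""
    ref_t = encode_order[ref_id]
    encoded_after = [n for n in neighbors if encode_order[n[0]] > ref_t]
    encoded_before = [n for n in neighbors if encode_order[n[0]] < ref_t]
    for group in (encoded_after, encoded_before):   # inheritance priority
        for _, motion in group:
            if motion is not None:                  # motion merging possible
                return motion
    return None
```

The two-level loop mirrors the preset priority described above: adjacent blocks encoded after the reference block are consulted before those encoded earlier.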
- 'Components' included in an embodiment of the present invention are not limited to software or hardware; each component may be configured to reside in an addressable storage medium or configured to execute on one or more processors.
- Thus, a component may include, by way of example, software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
- Components and the functionality provided within those components may be combined into a smaller number of components or further separated into additional components.
- Computer readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media.
- Computer readable media may include both computer storage media and communication media.
- Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
- Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transmission mechanism, and includes any information delivery media.
- The above-described inter-view motion merging candidate generation method according to the present invention may be embodied as computer-readable code on a computer-readable recording medium.
- Computer-readable recording media include all kinds of recording media storing data that can be read by a computer system. Examples include read-only memory (ROM), random access memory (RAM), magnetic tape, magnetic disks, flash memory, and optical data storage devices.
- The computer-readable recording medium can also be distributed over computer systems connected through a computer network, so that the computer-readable code is stored and executed in a distributed fashion.
Claims (14)
- 1. A method of deriving an inter-view motion merging candidate, comprising: (a) determining whether inter-view motion merging is possible for a current block, based on encoding information of an inter-view reference block derived through a disparity vector of the current block; and (b) when inter-view motion merging is impossible for the current block, generating an inter-view motion merging candidate for the current block using encoding information of an adjacent block spatially adjacent to the inter-view reference block.
- 2. The method of claim 1, further comprising: (c) when inter-view motion merging is possible for the current block, generating an inter-view motion merging candidate for the current block using the encoding information of the inter-view reference block.
- 3. The method of claim 1, wherein step (a) determines whether the inter-view reference block is intra-coded, based on the encoding information of the inter-view reference block.
- 4. The method of claim 1, wherein step (b) generates the candidate using encoding information of a highly correlated adjacent block included in the object region containing the reference block, among a plurality of adjacent blocks spatially adjacent to the inter-view reference block.
- 5. The method of claim 1, wherein step (b) generates the candidate using encoding information of a highly correlated adjacent block determined according to an inheritance priority among a plurality of adjacent blocks spatially adjacent to the inter-view reference block, and the inheritance priority is preset according to the order in which the inter-view reference block and each adjacent block were encoded.
- 6. The method of claim 5, wherein the highly correlated adjacent block is an adjacent block encoded after the inter-view reference block.
- 7. An apparatus for deriving an inter-view motion merging candidate, comprising: a block search unit configured to obtain encoding information from an inter-view reference block derived through a disparity vector of a current block and from at least one adjacent block spatially adjacent to the reference block; an information analysis unit configured to determine whether inter-view motion merging is possible for the current block, based on the encoding information of the inter-view reference block; and a candidate generation unit configured to generate, when inter-view motion merging is impossible for the current block, an inter-view motion merging candidate for the current block using the encoding information of the adjacent block.
- 8. The apparatus of claim 7, wherein, when inter-view motion merging is possible for the current block, the candidate generation unit generates an inter-view motion merging candidate for the current block using the encoding information of the inter-view reference block.
- 9. The apparatus of claim 7, wherein the information analysis unit determines whether the inter-view reference block is intra-coded, based on the encoding information of the inter-view reference block.
- 10. The apparatus of claim 9, wherein the information analysis unit makes the determination by referring to a header that includes a flag indicating whether the encoding information is used.
- 11. The apparatus of claim 10, wherein the header is a Video Parameter Set Extension.
- 12. The apparatus of claim 7, wherein the encoding information of the reference block includes depth information and motion information of the reference block, and the encoding information of the adjacent block includes depth information and motion information of the adjacent block.
- 13. The apparatus of claim 7, wherein the candidate generation unit generates the candidate using encoding information of a highly correlated adjacent block included in the object region containing the reference block, among a plurality of adjacent blocks spatially adjacent to the inter-view reference block.
- 14. The apparatus of claim 7, wherein the candidate generation unit generates the candidate using encoding information of a highly correlated adjacent block determined according to an inheritance priority among a plurality of adjacent blocks spatially adjacent to the inter-view reference block, and the inheritance priority is preset in the order of the inter-view reference block, adjacent blocks encoded after the inter-view reference block, and adjacent blocks encoded before the inter-view reference block.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010032169.0A CN111343459B (zh) | 2014-03-31 | 2015-01-15 | 用于解码/编码视频信号的方法以及可读存储介质 |
CN201580024064.8A CN106464898B (zh) | 2014-03-31 | 2015-01-15 | 用于导出视图间运动合并候选的方法和装置 |
US15/126,028 US10616602B2 (en) | 2014-03-31 | 2015-01-15 | Method and device for deriving inter-view motion merging candidate |
US16/800,268 US11729421B2 (en) | 2014-03-31 | 2020-02-25 | Method and device for deriving inter-view motion merging candidate |
US18/319,186 US20230291931A1 (en) | 2014-03-31 | 2023-05-17 | Method and device for deriving inter-view motion merging candidate |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2014-0038097 | 2014-03-31 | ||
KR1020140038097A KR102260146B1 (ko) | 2014-03-31 | 2014-03-31 | 시점 간 움직임 병합 후보 유도 방법 및 장치 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/126,028 A-371-Of-International US10616602B2 (en) | 2014-03-31 | 2015-01-15 | Method and device for deriving inter-view motion merging candidate |
US16/800,268 Continuation US11729421B2 (en) | 2014-03-31 | 2020-02-25 | Method and device for deriving inter-view motion merging candidate |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015152504A1 true WO2015152504A1 (ko) | 2015-10-08 |
Family
ID=54240785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/000450 WO2015152504A1 (ko) | 2014-03-31 | 2015-01-15 | 시점 간 움직임 병합 후보 유도 방법 및 장치 |
Country Status (4)
Country | Link |
---|---|
US (3) | US10616602B2 (ko) |
KR (5) | KR102260146B1 (ko) |
CN (2) | CN111343459B (ko) |
WO (1) | WO2015152504A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10616602B2 (en) | 2014-03-31 | 2020-04-07 | Intellectual Discovery Co., Ltd. | Method and device for deriving inter-view motion merging candidate |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020098812A1 (en) * | 2018-11-16 | 2020-05-22 | Beijing Bytedance Network Technology Co., Ltd. | Pruning method for history-based affine parameters |
WO2020169109A1 (en) | 2019-02-22 | 2020-08-27 | Beijing Bytedance Network Technology Co., Ltd. | Sub-table for history-based affine mode |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080036910A (ko) * | 2006-10-24 | 2008-04-29 | 엘지전자 주식회사 | 비디오 신호 디코딩 방법 및 장치 |
KR20120066741A (ko) * | 2010-12-15 | 2012-06-25 | 에스케이 텔레콤주식회사 | 움직임정보 병합을 이용한 부호움직임정보생성/움직임정보복원 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 |
US20120177125A1 (en) * | 2011-01-12 | 2012-07-12 | Toshiyasu Sugio | Moving picture coding method and moving picture decoding method |
KR20130139827A (ko) * | 2011-02-09 | 2013-12-23 | 엘지전자 주식회사 | 영상 부호화 및 복호화 방법과 이를 이용한 장치 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7961963B2 (en) * | 2005-03-18 | 2011-06-14 | Sharp Laboratories Of America, Inc. | Methods and systems for extended spatial scalability with picture-level adaptation |
KR20070069615A (ko) * | 2005-12-28 | 2007-07-03 | 삼성전자주식회사 | 움직임 추정장치 및 움직임 추정방법 |
CN101473655B (zh) * | 2006-06-19 | 2011-06-08 | Lg电子株式会社 | 用于处理多视点视频信号的方法和装置 |
KR101370919B1 (ko) * | 2006-07-12 | 2014-03-11 | 엘지전자 주식회사 | 신호처리방법 및 장치 |
US9571851B2 (en) * | 2009-09-25 | 2017-02-14 | Sk Telecom Co., Ltd. | Inter prediction method and apparatus using adjacent pixels, and image encoding/decoding method and apparatus using same |
JP5368631B2 (ja) * | 2010-04-08 | 2013-12-18 | 株式会社東芝 | 画像符号化方法、装置、及びプログラム |
CN103597837B (zh) | 2011-06-15 | 2018-05-04 | 寰发股份有限公司 | 推导运动和视差矢量的方法及装置 |
KR102029401B1 (ko) * | 2011-11-11 | 2019-11-08 | 지이 비디오 컴프레션, 엘엘씨 | 깊이-맵 추정 및 업데이트를 사용한 효율적인 멀티-뷰 코딩 |
KR101806341B1 (ko) | 2011-12-14 | 2017-12-08 | 연세대학교 산학협력단 | 예측 움직임 벡터 선정에 따른 영상 부호화 방법 및 장치, 및 영상 복호화 방법 및 장치 |
WO2013159326A1 (en) * | 2012-04-27 | 2013-10-31 | Mediatek Singapore Pte. Ltd. | Inter-view motion prediction in 3d video coding |
CN102769748B (zh) * | 2012-07-02 | 2014-12-24 | 华为技术有限公司 | 运动矢量预测方法、装置及*** |
WO2014005280A1 (en) * | 2012-07-03 | 2014-01-09 | Mediatek Singapore Pte. Ltd. | Method and apparatus to improve and simplify inter-view motion vector prediction and disparity vector prediction |
WO2014005548A1 (en) * | 2012-07-05 | 2014-01-09 | Mediatek Inc. | Method and apparatus of unified disparity vector derivation for 3d video coding |
WO2014053086A1 (en) * | 2012-10-05 | 2014-04-10 | Mediatek Singapore Pte. Ltd. | Method and apparatus of motion vector derivation 3d video coding |
EP2966868B1 (en) * | 2012-10-09 | 2018-07-18 | HFI Innovation Inc. | Method for motion information prediction and inheritance in video coding |
CN102946536B (zh) * | 2012-10-09 | 2015-09-30 | 华为技术有限公司 | 候选矢量列表构建的方法及装置 |
US9936219B2 (en) * | 2012-11-13 | 2018-04-03 | Lg Electronics Inc. | Method and apparatus for processing video signals |
EP2941867A4 (en) * | 2013-01-07 | 2016-07-06 | Mediatek Inc | METHOD AND DEVICE FOR DERIVING A PREDICTION OF SPATIAL MOTION VECTORS FOR DIRECT AND SKIP MODES IN A THREE-DIMENSIONAL VIDEO-CORDING |
US9288507B2 (en) * | 2013-06-21 | 2016-03-15 | Qualcomm Incorporated | More accurate advanced residual prediction (ARP) for texture coding |
US9800895B2 (en) * | 2013-06-27 | 2017-10-24 | Qualcomm Incorporated | Depth oriented inter-view motion vector prediction |
US9554150B2 (en) * | 2013-09-20 | 2017-01-24 | Qualcomm Incorporated | Combined bi-predictive merging candidates for 3D video coding |
US9967592B2 (en) * | 2014-01-11 | 2018-05-08 | Qualcomm Incorporated | Block-based advanced residual prediction for 3D video coding |
KR102260146B1 (ko) | 2014-03-31 | 2021-06-03 | 인텔렉추얼디스커버리 주식회사 | 시점 간 움직임 병합 후보 유도 방법 및 장치 |
-
2014
- 2014-03-31 KR KR1020140038097A patent/KR102260146B1/ko active IP Right Grant
-
2015
- 2015-01-15 CN CN202010032169.0A patent/CN111343459B/zh active Active
- 2015-01-15 US US15/126,028 patent/US10616602B2/en active Active
- 2015-01-15 WO PCT/KR2015/000450 patent/WO2015152504A1/ko active Application Filing
- 2015-01-15 CN CN201580024064.8A patent/CN106464898B/zh active Active
-
2020
- 2020-02-25 US US16/800,268 patent/US11729421B2/en active Active
-
2021
- 2021-05-28 KR KR1020210068860A patent/KR102363415B1/ko active IP Right Grant
-
2022
- 2022-02-10 KR KR1020220017709A patent/KR102480955B1/ko active IP Right Grant
- 2022-12-20 KR KR1020220179606A patent/KR102572012B1/ko active IP Right Grant
-
2023
- 2023-05-17 US US18/319,186 patent/US20230291931A1/en active Pending
- 2023-08-24 KR KR1020230111043A patent/KR20230129320A/ko not_active Application Discontinuation
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080036910A (ko) * | 2006-10-24 | 2008-04-29 | 엘지전자 주식회사 | 비디오 신호 디코딩 방법 및 장치 |
KR20120066741A (ko) * | 2010-12-15 | 2012-06-25 | 에스케이 텔레콤주식회사 | 움직임정보 병합을 이용한 부호움직임정보생성/움직임정보복원 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 |
US20120177125A1 (en) * | 2011-01-12 | 2012-07-12 | Toshiyasu Sugio | Moving picture coding method and moving picture decoding method |
KR20130139827A (ko) * | 2011-02-09 | 2013-12-23 | 엘지전자 주식회사 | 영상 부호화 및 복호화 방법과 이를 이용한 장치 |
Non-Patent Citations (1)
Title |
---|
E. MORA ET AL.: "Modification of the merge candidate list for dependent views in 3D-HEVC''.", IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, 2013, pages 1709 - 1713, XP032565647 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10616602B2 (en) | 2014-03-31 | 2020-04-07 | Intellectual Discovery Co., Ltd. | Method and device for deriving inter-view motion merging candidate |
US11729421B2 (en) | 2014-03-31 | 2023-08-15 | Dolby Laboratories Licensing Corporation | Method and device for deriving inter-view motion merging candidate |
Also Published As
Publication number | Publication date |
---|---|
US10616602B2 (en) | 2020-04-07 |
KR20210068338A (ko) | 2021-06-09 |
US11729421B2 (en) | 2023-08-15 |
CN106464898A (zh) | 2017-02-22 |
US20170078698A1 (en) | 2017-03-16 |
CN106464898B (zh) | 2020-02-11 |
KR102572012B1 (ko) | 2023-08-28 |
KR20150113713A (ko) | 2015-10-08 |
KR20220024338A (ko) | 2022-03-03 |
CN111343459B (zh) | 2023-09-12 |
KR102480955B1 (ko) | 2022-12-22 |
KR102363415B1 (ko) | 2022-02-15 |
US20200195968A1 (en) | 2020-06-18 |
CN111343459A (zh) | 2020-06-26 |
KR102260146B1 (ko) | 2021-06-03 |
KR20230129320A (ko) | 2023-09-08 |
KR20230002235A (ko) | 2023-01-05 |
US20230291931A1 (en) | 2023-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013070006A1 (ko) | 스킵모드를 이용한 동영상 부호화 및 복호화 방법 및 장치 | |
WO2015142054A1 (ko) | 다시점 비디오 신호 처리 방법 및 장치 | |
WO2010068020A9 (ko) | 다시점 영상 부호화, 복호화 방법 및 그 장치 | |
WO2013069932A1 (ko) | 영상의 부호화 방법 및 장치, 및 복호화 방법 및 장치 | |
WO2013005941A2 (ko) | 영상 부호화 및 복호화 방법과 장치 | |
WO2011145819A2 (ko) | 영상 부호화/복호화 장치 및 방법 | |
WO2010087620A2 (ko) | 보간 필터를 적응적으로 사용하여 영상을 부호화 및 복호화하는 방법 및 장치 | |
WO2012144829A2 (en) | Method and apparatus for encoding and decoding motion vector of multi-view video | |
WO2011019246A2 (en) | Method and apparatus for encoding/decoding image by controlling accuracy of motion vector | |
EP3566453A1 (en) | Encoding optimization with illumination compensation and integer motion vector restriction | |
WO2012015275A2 (ko) | 블록 분할예측을 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2013141671A1 (ko) | 인터 레이어 인트라 예측 방법 및 장치 | |
WO2013157814A1 (ko) | 영상의 레퍼런스 픽쳐 세트를 결정하기 위한 방법 및 장치 | |
WO2016056821A1 (ko) | 3d 비디오 코딩을 위한 움직임 정보 압축 방법 및 장치 | |
WO2017043766A1 (ko) | 비디오 부호화, 복호화 방법 및 장치 | |
WO2016056782A1 (ko) | 비디오 코딩에서 뎁스 픽처 코딩 방법 및 장치 | |
KR20140043032A (ko) | 움직임 벡터와 변이 벡터를 예측하는 영상 처리 방법 및 장치 | |
WO2012053796A2 (ko) | 차분 움직임벡터 부호화/복호화 장치 및 방법, 및 그것을 이용한 영상 부호화/복호화 장치 및 방법 | |
US20230291931A1 (en) | Method and device for deriving inter-view motion merging candidate | |
WO2014010918A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2011071316A2 (ko) | 영상 부호화 장치 및 방법, 및 거기에 이용되는 변환 부호화 장치 및 방법, 변환기저 생성장치 및 방법, 및 영상 복호화 장치 및 방법 | |
WO2014073877A1 (ko) | 다시점 비디오 신호의 처리 방법 및 이에 대한 장치 | |
WO2018070568A1 (ko) | 복호화기 기반의 화면 내 예측 모드 추출 기술을 사용하는 비디오 코딩 방법 및 장치 | |
WO2015009091A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014107098A1 (ko) | 영상을 부호화/복호화하기 위한 파라미터 세트 생성 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15774114 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15126028 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: IDP00201607051 Country of ref document: ID |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15774114 Country of ref document: EP Kind code of ref document: A1 |