WO2012167711A1 - Method and apparatus of scalable video coding - Google Patents

Method and apparatus of scalable video coding

Info

Publication number
WO2012167711A1
Authority
WO
WIPO (PCT)
Prior art keywords
mode
information
mvp
intra prediction
motion information
Prior art date
Application number
PCT/CN2012/076316
Other languages
French (fr)
Inventor
Tzu-Der Chuang
Ching-Yeh Chen
Chih-Ming Fu
Yu-Wen Huang
Shaw-Min Lei
Original Assignee
Mediatek Inc.
Priority date
Filing date
Publication date
Application filed by Mediatek Inc. filed Critical Mediatek Inc.
Priority to US14/005,555 priority Critical patent/US9860528B2/en
Priority to CN201280024337.5A priority patent/CN103621081B/en
Priority to EP12796116.7A priority patent/EP2719181A4/en
Priority to RU2013154579/08A priority patent/RU2575411C2/en
Priority to BR112013031215A priority patent/BR112013031215B8/en
Publication of WO2012167711A1 publication Critical patent/WO2012167711A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/30: using hierarchical techniques, e.g. scalability
    • H04N 19/33: scalability in the spatial domain
    • H04N 19/51: motion estimation or motion compensation
    • H04N 19/517: processing of motion vectors by encoding
    • H04N 19/52: processing of motion vectors by predictive encoding
    • H04N 19/61: transform coding in combination with predictive coding
    • H04N 19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/82: filtering within a prediction loop, e.g. for pixel interpolation
    • H04N 19/86: pre-processing or post-processing involving reduction of coding artifacts, e.g. of blockiness
    • H04N 19/91: entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N 19/96: tree coding, e.g. quad-tree coding
    • H04N 19/119: adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/187: adaptive coding where the coding unit is a scalable video layer
    • H04N 19/463: embedding additional information in the video signal by compressing encoding parameters before transmission

Definitions

  • Inter-layer residual prediction uses the up-sampled BL residual information to reduce the EL residual information.
  • The co-located residual of the BL can be block-wise up-sampled using a bilinear filter and used as the prediction for the residual of a current macroblock in the EL.
  • The up-sampling of the reference layer residual is done on a transform block basis in order to ensure that no filtering is applied across transform block boundaries.
  • Inter-layer intra prediction reduces the redundant texture information of the EL.
  • The prediction in the EL is generated by block-wise up-sampling the co-located BL reconstruction signal.
  • In the inter-layer intra prediction up-sampling procedure, 4-tap and 2-tap FIR filters are applied for the luma and chroma components, respectively. Unlike inter-layer residual prediction, filtering for inter-layer intra prediction is always performed across sub-block boundaries. For decoding simplicity, inter-layer intra prediction can be restricted to intra-coded macroblocks in the BL.
  • Quality scalability is realized by coding multiple quality ELs that are composed of refinement coefficients.
  • The scalable video bitstream can be easily truncated or extracted to provide different video bitstreams with different video qualities or bitstream sizes.
  • Quality scalability (also called SNR scalability) can be provided via two strategies: coarse grain scalability (CGS) and medium grain scalability (MGS).
  • CGS can be regarded as a special case of spatial scalability in which the spatial resolutions of the BL and the EL are the same, but the quality of the EL is better (the QP of the EL is smaller than the QP of the BL).
  • The same inter-layer prediction mechanisms as in spatial scalable coding can be employed; however, no corresponding up-sampling or deblocking operations are performed, and the inter-layer intra and residual prediction are performed directly in the transform domain. For inter-layer prediction in CGS, a refinement of texture information is typically achieved by re-quantizing the residual signal in the EL with a smaller quantization step size than that used for the preceding CGS layer. CGS can provide multiple pre-defined quality points.
  • MGS is also used by H.264 SVC.
  • MGS can be considered an extension of CGS, in which the quantized coefficients in one CGS slice can be divided into several MGS slices.
  • The quantized coefficients in CGS are classified into 16 categories based on their scan positions in the zig-zag scan order. These 16 categories of coefficients can be distributed into different slices to provide more quality extraction points than CGS.
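As a rough illustration of how MGS distributes coefficients, the following Python sketch splits the 16 zig-zag scan positions of a 4x4 block into MGS slices. The grouping boundaries are an encoder choice; the sizes used here are purely illustrative, not values mandated by H.264 SVC.

```python
def split_into_mgs_slices(coeffs_in_scan_order, group_sizes=(4, 4, 8)):
    """Distribute the 16 quantized coefficients of a 4x4 block
    (already in zig-zag scan order) over several MGS slices.

    With group_sizes = (4, 4, 8), the first MGS slice carries scan
    positions 0-3, the second positions 4-7, and the third positions
    8-15, yielding more quality extraction points than one CGS slice.
    """
    assert sum(group_sizes) == 16 and len(coeffs_in_scan_order) == 16
    slices, start = [], 0
    for size in group_sizes:
        # Positions outside this slice's scan range carry zero here.
        slices.append([c if start <= i < start + size else 0
                       for i, c in enumerate(coeffs_in_scan_order)])
        start += size
    return slices
```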
  • A method and apparatus for scalable video coding that exploit Base Layer (BL) information for the Enhancement Layer (EL) are disclosed, where the EL has higher resolution and/or better quality than the BL.
  • Embodiments of the present invention exploit various pieces of the BL information to improve coding efficiency of the EL.
  • In one embodiment, the method and apparatus utilize CU structure information, mode information, or motion information of the BL to derive the respective CU structure information, mode information, or motion vector predictor (MVP) information for the EL.
  • In another embodiment, the method and apparatus derive Motion Vector Predictor (MVP) candidates or merge candidates of the EL based on the MVP candidates or merge candidates of the BL.
  • In yet another embodiment, the method and apparatus derive the intra prediction mode of the EL based on the intra prediction mode of the BL.
  • An embodiment of the present invention utilizes Residual Quadtree Structure information of the BL to derive the Residual Quadtree Structure for the EL.
  • Another embodiment of the present invention derives the texture of the EL by re-sampling the texture of the BL.
  • a further embodiment of the present invention derives the predictor of residual of the EL by re-sampling the residual of the BL.
  • One aspect of the present invention addresses the coding efficiency of context-based adaptive entropy coding for the EL.
  • An embodiment of the present invention determines context information for processing a syntax element of the EL using the information of the BL.
  • Another aspect of the present invention addresses coding efficiency related to in-loop processing.
  • An embodiment of the present invention derives the ALF information, the SAO information, or the DF information for the EL using the ALF information, the SAO information, or the DF information of the BL respectively.
  • Fig. 1 illustrates an example of temporal scalable video coding using hierarchical B-pictures.
  • Fig. 2 illustrates an example of a combined scalable video coding system that provides spatial scalability as well as quality scalability, where three spatial layers are provided.
  • FIG. 3 illustrates an example of CU structure reuse for scalable video coding where a CU structure for the base layer is scaled and used as an initial CU structure for the enhancement layer.
  • FIG. 4 illustrates an exemplary flow chart of CU structure coding or motion information coding for scalable video coding according to an embodiment of the present invention.
  • Fig. 5 illustrates an exemplary flow chart of MVP derivation or merge candidate derivation for scalable video coding according to an embodiment of the present invention.
  • FIG. 6 illustrates an exemplary flow chart of intra prediction mode derivation for scalable video coding according to an embodiment of the present invention.
  • Fig. 7 illustrates an exemplary flow chart of Residual Quadtree Structure coding for scalable video coding according to an embodiment of the present invention.
  • Fig. 8 illustrates an exemplary flow chart of texture prediction and re- sampling for scalable video coding according to an embodiment of the present invention.
  • Fig. 9 illustrates an exemplary flow chart of residual prediction and re-sampling for scalable video coding according to an embodiment of the present invention.
  • Fig. 10 illustrates an exemplary flow chart of context adaptive entropy coding for scalable video coding according to an embodiment of the present invention.
  • FIG. 11 illustrates an exemplary flow chart of ALF information coding, SAO information coding and DF information coding for scalable video coding according to an embodiment of the present invention.
  • In High Efficiency Video Coding (HEVC), the coding unit (CU) structure was introduced as a new block structure for the coding process.
  • a picture is divided into largest CUs (LCUs) and each LCU is adaptively partitioned into CUs until a leaf CU is obtained or a minimum CU size is reached.
  • the CU structure information has to be conveyed to the decoder side so that the same CU structure can be recovered at the decoder side.
  • An embodiment according to the present invention allows the CU structure of the BL to be reused by the EL. At the EL LCU or CU level, one flag is transmitted to indicate whether the CU structure is reused from the corresponding CU of the BL.
  • The BL CU structure is scaled to match the resolution of the EL, and the scaled BL CU structure is reused by the EL.
  • the CU structure information that can be reused by the EL includes the CU split flag and residual quad-tree split flag.
  • the leaf CU of scaled CU structures can be further split into sub-CUs.
  • Fig. 3 illustrates an example of CU partition reuse. Partition 310 corresponds to the CU structure of the BL.
  • In this example, the video resolution of the EL is twice the video resolution of the BL, both horizontally and vertically.
  • The CU structure of the corresponding CU partition 315 of the BL is therefore scaled up by a factor of 2.
  • the scaled CU structure 320 is then used as the initial CU structure for the EL LCU.
  • the leaf CUs of the scaled CU in the EL can be further split into sub-CUs and the result is indicated by 330 in Fig. 3.
  • a flag may be used to indicate whether the leaf CU is further divided into sub-CUs.
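The following Python sketch shows one plausible way to realize this reuse for dyadic spatial scalability: the BL split topology is kept and every block size is doubled, giving the initial EL CU structure whose leaves may then be split further. The CUNode class and the factor of 2 are illustrative assumptions, not structures defined by any standard.

```python
class CUNode:
    """A node of a CU quadtree: a leaf CU, or a CU split into four."""
    def __init__(self, size, children=None):
        self.size = size          # block width/height in luma samples
        self.children = children  # None for a leaf CU

def scale_cu_tree(bl_node, ratio=2):
    """Reuse the BL split topology at EL resolution: the tree shape is
    copied and every block size is multiplied by the spatial ratio.

    The result is only the initial EL structure; each leaf may carry
    an extra split flag allowing it to be divided into sub-CUs.
    """
    children = None
    if bl_node.children is not None:
        children = [scale_cu_tree(c, ratio) for c in bl_node.children]
    return CUNode(bl_node.size * ratio, children)
```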
  • While Fig. 3 illustrates an example in which the CU structure is reused, other information may also be reused, for example, the prediction type, prediction size, merge index, inter reference direction, reference picture index, motion vectors, MVP index, and intra mode.
  • The information/data can be scaled, when needed, before it is reused in the EL.
  • the mode information for a leaf CU is reused.
  • the mode information may include skip flag, prediction type, prediction size, inter reference direction, reference picture index, motion vectors, motion vector index, merge flag, merge index, skip mode, merge mode, and intra mode.
  • the mode information of the leaf CU in the EL can share the same or scaled mode information of the corresponding CU in the BL. One flag can be used to indicate whether the EL will reuse the mode information from the BL or not.
  • the motion information of corresponding Prediction Unit (PU) or Coding Unit (CU) in the BL is reused to derive the motion information of a PU or CU in the EL.
  • the motion information may include inter prediction direction, reference picture index, motion vectors (MVs), Motion Vector Predictors (MVPs), MVP index, merge index, merge candidates, and intra mode.
  • the motion information for the BL can be utilized as predictors or candidates for the motion vector predictor (MVP) information in the EL.
  • the BL MVs and BL MVPs can be added into the MVP list and/or merge list for EL MVP derivation.
  • the aforementioned MVs of BL can be the MVs of the corresponding PU in the BL, the MVs of neighboring PUs of the corresponding PUs in the BL, the MVs of merge candidates of the corresponding PUs in the BL, the MVP of the corresponding PUs in the BL, or the co-located MVs of the corresponding PUs in the BL.
  • the merge candidate derivation for the EL can utilize the motion information of the BL.
  • the merge candidates of a corresponding PU in the BL can be added into the merge candidate list and/or MVP list.
  • the aforementioned motion information of the BL can be the motion information of the corresponding PU in the BL, the motion information associated with a neighboring PU of the corresponding PU in the BL, merge candidates of the corresponding PUs in the BL, MVP of the corresponding PUs in the BL, or the co-located PU of the corresponding PU in the BL.
  • the motion information includes inter prediction direction, reference picture index, and motion vectors.
  • the intra mode of a corresponding PU or CU in the BL can be reused for the EL.
  • the intra mode of a corresponding PU or CU in the BL can be added into the intra most probable mode list.
  • An embodiment according to the present invention uses the motion information of the BL to predict the intra mode for the EL.
  • the order for the most probable mode list in the EL can be adaptively changed according to the intra prediction mode information in the BL.
  • the codeword lengths for codewords in the most probable mode list in the EL can be adaptively changed according to the intra prediction mode information in the BL.
  • The codewords of the remaining intra modes with prediction directions close to the prediction direction of the coded BL intra mode are assigned shorter lengths.
  • the neighboring direction modes of BL intra mode can also be added into intra Most Probable Mode (MPM) list of the EL intra mode coding.
  • the intra prediction mode information of the BL can be the intra prediction mode of the corresponding PU in the BL, or the neighboring direction modes of BL intra mode, or the intra prediction mode of a neighboring PU of the corresponding PU in the BL.
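A minimal sketch of such a BL-biased most probable mode (MPM) list is given below. The list size, the mode numbering, and the use of the two adjacent angular directions of the BL mode are all illustrative assumptions.

```python
def build_el_mpm_list(bl_intra_mode, el_neighbor_modes, list_size=3):
    """Place the co-located BL intra mode first, then fill the list
    with neighbour-derived EL modes and the angular directions
    adjacent to the BL mode, skipping duplicates."""
    candidates = ([bl_intra_mode] + list(el_neighbor_modes)
                  + [bl_intra_mode - 1, bl_intra_mode + 1])
    mpm = []
    for mode in candidates:
        if mode >= 0 and mode not in mpm:
            mpm.append(mode)
        if len(mpm) == list_size:
            break
    return mpm

print(build_el_mpm_list(10, [26, 10]))   # -> [10, 26, 9]
```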
  • the selected MVP index, merge index, and intra mode index of BL motion information can be utilized to adaptively change the indices order in EL MVP list, merge index list, and intra most probable mode list.
  • the order of the MVP list is ⁇ left MVP, above MVP, co-located MVP ⁇ . If the corresponding BL PU selects the above MVP, the order of the above MVP will be moved forward in the EL. Accordingly, the MVP list in the EL will become ⁇ above MVP, left MVP, co-located MVP ⁇ .
  • the BL coded MV, scaled coded MV, MVP candidates, scaled MVP candidates, merge candidates, and scaled merge candidates can replace part of EL MVP candidates and/or merge candidates.
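The sketch below combines two of these ideas in Python: the scaled BL motion vector is appended as an additional candidate, and the candidate at the index selected by the co-located BL PU is promoted to the front of the EL list. The tuple representation and the candidate values are illustrative only.

```python
def derive_el_mvp_list(el_candidates, bl_mv, bl_selected_idx, ratio=2):
    """Sketch of EL MVP-list derivation using BL motion information:
    (1) the BL motion vector, scaled to EL resolution, is added as an
    extra candidate; (2) the candidate at the index the co-located BL
    PU selected is moved to the front of the list."""
    scaled_bl_mv = (bl_mv[0] * ratio, bl_mv[1] * ratio)
    mvp_list = list(el_candidates) + [scaled_bl_mv]
    if 0 <= bl_selected_idx < len(el_candidates):
        # Promote the candidate favoured in the BL, e.g. the above MVP.
        mvp_list.insert(0, mvp_list.pop(bl_selected_idx))
    return mvp_list

# Example: {left, above, co-located} with the BL having chosen index 1
print(derive_el_mvp_list([(1, 0), (0, 2), (3, 3)], (2, -1), 1))
# -> [(0, 2), (1, 0), (3, 3), (4, -2)]
```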
  • the process of deriving the motion information for a PU or CU in the EL based on the motion information for a corresponding PU or CU in the BL is invoked when an MVP candidate or a merge candidate for a PU or CU in the EL is needed for encoding or decoding.
  • the CU structure information for the BL can be used to determine the CU structure information for the EL. Furthermore, the CU structure information, the mode information and the motion information for the BL can be used jointly to determine the CU structure information, the mode information and the motion information for the EL. The mode information or the motion information for the BL may also be used to determine the mode information or the motion information for the EL. The process of deriving the CU structure information, the mode information, the motion information or any combination for the EL based on corresponding information for the BL can be invoked when the CU structure information, the mode information, the motion information or any combination for the EL needs to be encoded or decoded.
  • The prediction residual is further processed using quadtree partitioning, and a coding type is selected for each block resulting from the residual quadtree partitioning.
  • Both residual quadtree partition information and coding block pattern (CBP) information have to be incorporated into the bitstream so that the decoder can recover the residual quadtree information.
  • An embodiment according to the present invention reuses the residual quadtree partition and CBP of a corresponding CU in the BL for the EL.
  • the residual quadtree partition and CBP can be scaled and utilized as the predictor for the EL residual quadtree partition and CBP coding.
  • the unit for block transform is termed as Transform Unit (TU) and a TU can be partitioned into smaller TUs.
  • One flag at the root TU level or a TU level of the EL is transmitted to indicate whether the Residual Quadtree Coding (RQT) structure of the corresponding TU in the BL is utilized to predict the RQT structure of the current TU in the EL. If it is, the RQT structure of the corresponding TU in the BL is scaled and used as the initial RQT structure of the current TU in the EL. For a leaf TU of the initial RQT structure of the EL, one split flag can be transmitted to indicate whether the TU is divided into sub-TUs.
  • the process of deriving the RQT structure of the EL based on the information of the RQT structure of the BL is performed when an encoder needs to encode the RQT structure of the EL or a decoder needs to decode the RQT structure of the EL.
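As a sketch of this signalling, the following Python fragment mimics a decoder deciding from one flag whether to start from the scaled BL RQT, then reading one split flag per initial leaf TU. Here read_flag is a stand-in for entropy-decoding a single bin, the leaves are represented only by their sizes, and the single-level leaf split is a simplification.

```python
def decode_el_rqt(read_flag, scaled_bl_rqt_leaves):
    """Signalling sketch for EL residual-quadtree (RQT) prediction.

    If the root-level flag is set, the scaled BL RQT is taken as the
    initial EL structure and one split flag per initial leaf TU
    refines it; otherwise the EL RQT is coded normally (not shown).
    """
    if not read_flag():              # BL structure not reused
        return None                  # fall back to normal RQT coding
    el_leaves = []
    for tu_size in scaled_bl_rqt_leaves:
        if read_flag():              # this leaf TU is split further
            el_leaves.extend([tu_size // 2] * 4)
        else:
            el_leaves.append(tu_size)
    return el_leaves
```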
  • In the H.264/AVC scalable extension, 4-tap and 2-tap FIR filters are adopted for the up-sampling operation of the texture signal for the luma and chroma components, respectively.
  • An embodiment according to the present invention re-samples the BL texture as the predictor of the EL texture, where the re-sampling utilizes improved up-sampling methods to replace the 4-tap and 2-tap FIR filters of the H.264/AVC scalable extension.
  • the filter according to the present invention uses one of the following filters or a combination of the following filters: Discrete Cosine Transform Interpolation Filter (DCTIF), Discrete Sine Transform Interpolation Filter (DSTIF), Wiener filter, non-local mean filter, smoothing filter, and bilateral filter.
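As one concrete possibility, the sketch below performs dyadic 1-D up-sampling with the 8-tap half-sample DCTIF coefficients used for luma interpolation in HEVC; a 2-D up-sampler would apply it separably to rows and then columns. The clamped edge handling and the omission of output clipping are simplifications of this sketch, not prescriptions.

```python
# HEVC's 8-tap half-sample DCTIF coefficients (they sum to 64)
DCTIF_HALF = [-1, 4, -11, 40, 40, -11, 4, -1]

def upsample_1d_dyadic(samples, taps=DCTIF_HALF):
    """Up-sample a 1-D texture row by 2 with a DCTIF-style filter.

    Even output positions copy BL samples; odd positions are
    interpolated half-sample values computed from the eight
    surrounding samples, with edges handled by clamping.
    """
    n, half = len(samples), len(taps) // 2
    out = []
    for i in range(n):
        out.append(samples[i])                       # integer position
        acc = sum(t * samples[min(max(i - half + 1 + k, 0), n - 1)]
                  for k, t in enumerate(taps))
        out.append((acc + 32) >> 6)                  # normalize by 64
    return out
```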
  • the filter according to the present invention can cross TU boundaries or can be restricted within TU boundaries.
  • An embodiment according to the present invention may skip the padding and deblocking procedures in inter-layer intra prediction to alleviate computation and data dependency problems.
  • the Sample Adaptive Offset (SAO), Adaptive Loop Filter (ALF), non-local mean filter, and/or smoothing filter in the BL could also be skipped.
  • the skipping of padding, deblocking, SAO, ALF, non-local mean filter, and smoothing filter can be applied to the entire LCU, leaf CU, PU, TU, pre-defined region, LCU boundary, leaf CU boundary, PU boundary, TU boundary, or boundary of a pre-defined region.
  • In one embodiment, where the texture of the BL has the same resolution as the texture of the EL, the texture of the BL is processed using a filter to produce a filtered BL texture, which is used as the predictor of the texture of the EL.
  • Wiener filter, ALF (Adaptive Loop Filter), non-local mean filter, smoothing filter, or SAO (Sample Adaptive Offset) can be applied to the texture of the BL before the texture of BL is utilized as the predictor of the texture of the EL.
  • an embodiment of the present invention applies Wiener filter or adaptive filter to the texture of the BL before the texture of the BL is re-sampled.
  • the Wiener filter or adaptive filter can be applied to the texture of the BL after the texture of the BL is re- sampled.
  • an embodiment of the present invention applies SAO or ALF to the texture of the BL before the texture of the BL is re-sampled.
  • Another embodiment according to the present invention utilizes LCU-based or CU-based Wiener filter and/or adaptive offset for inter-layer intra prediction.
  • the filtering can be applied to BL texture data or up-sampled BL texture data.
  • In H.264 SVC, a 2-tap FIR filter is adopted for the up-sampling operation of the residual signal for both luma and chroma components.
  • An embodiment according to the present invention uses improved up-sampling methods to replace the 2-tap FIR filter of H.264 SVC.
  • the filter can be one of the following filters or a combination of the following filters: Discrete Cosine Transform Interpolation Filter (DCTIF), Discrete Sine Transform Interpolation Filter (DSTIF), Wiener filter, non-local mean filter, smoothing filter, and bilateral filter.
  • the residual prediction can be performed in either the spatial domain or the frequency domain if the BL and the EL have the same resolution or the EL has a higher resolution than the BL.
  • the residual of the BL can be re-sampled in frequency domain to form predictors for the EL residual.
  • the process of deriving the predictor of residual of the EL by re- sampling the residual of the BL can be performed when an encoder or a decoder needs to derive the predictor of the residual of the EL based on the re-sampled residual of the BL.
  • An embodiment according to the present invention may utilize the BL information for context- based adaptive entropy coding in the EL.
  • The context formation or binarization of Context-based Adaptive Binary Arithmetic Coding (CABAC) can exploit the information of the BL.
  • the EL can use different context models, different context formation methods, or different context sets based on corresponding information in the BL.
  • the EL PU can use different context models depending on whether the corresponding PU in the BL is coded in skip mode or not.
  • The probability or most probable symbol (MPS) of part of the context models for CABAC in the BL can be reused to derive the initial probability and MPS of part of the context models for CABAC in the EL.
  • The syntax element can be a split flag, skip flag, merge flag, merge index, chroma intra mode, luma intra mode, partition size, prediction mode, inter prediction direction, motion vector difference, motion vector predictor index, reference index, delta quantization parameter, significant flag, last significant position, coefficient-greater-than-one, coefficient-magnitude-minus-one, ALF (Adaptive Loop Filter) control flag, ALF flag, ALF footprint size, ALF merge flag, ALF ON/OFF decision, ALF coefficient, sample adaptive offset (SAO) flag, SAO type, SAO offset, SAO merge flag, SAO run, SAO ON/OFF decision, transform subdivision flag, residual quadtree CBF (Coded Block Flag), or residual quadtree root CBF.
  • A codeword corresponding to a syntax element of the EL can be adaptively changed according to the information of the BL, and the codeword order corresponding to the syntax elements of the EL in a look-up codeword table can also be adaptively changed according to the information of the BL.
  • the process of determining context information for processing the syntax element of the EL using the information of the BL is performed when the syntax element of the EL needs to be encoded or decoded.
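A minimal sketch of BL-aware context selection is shown below for the EL skip flag. The layout of context indices (three contexts derived from EL neighbours, plus a second set of three when the co-located BL PU is skipped) is an illustrative assumption, not a defined index scheme.

```python
def select_skip_flag_context(bl_colocated_is_skip, el_neighbor_skips):
    """Sketch of BL-aware CABAC context selection for the EL skip flag.

    A conventional context index might count skipped EL neighbours
    (0-2); an extra offset then selects a different context set when
    the co-located BL PU was coded in skip mode, letting the entropy
    coder exploit the BL/EL correlation."""
    ctx = sum(1 for s in el_neighbor_skips if s)     # 0, 1, or 2
    if bl_colocated_is_skip:
        ctx += 3                                     # second context set
    return ctx
```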
  • An embodiment of the present invention uses some ALF information in the BL to derive the ALF information in the EL.
  • the ALF information may include filter adaptation mode, filter coefficients, filter footprint, region partition, ON/OFF decision, enable flag, and merge results.
  • the EL can use part of ALF parameters in the BL as the ALF parameters or predictors of ALF parameters in the EL.
  • a flag can be used to indicate whether the ALF information for the EL is predicted from the ALF information of the BL.
  • the ALF information of the BL can be scaled and used as the predictor for the ALF information of the EL.
  • a value can be used to denote the difference between the predictor of the ALF information and the ALF information of the EL.
  • the process of deriving the ALF information for the EL using the ALF information of the BL is performed when an encoder or a decoder needs to derive the ALF information of the EL.
  • An embodiment of the present invention uses some SAO information in the BL to derive the SAO information in the EL.
  • the SAO information may include offset type, offsets, region partition, ON/OFF decision, enable flag, and merge results.
  • the EL can use part of SAO parameters in the BL as the SAO parameters for the EL.
  • a flag can be used to indicate whether the SAO information for the EL is predicted from the SAO information of the BL.
  • the SAO information of the BL can be scaled and used as the predictor for the SAO information of the EL.
  • a value can be used to denote the difference between the predictor of the SAO information and the SAO information of the EL.
  • the process of deriving the SAO information for the EL using the SAO information of the BL is performed when an encoder or a decoder needs to derive the SAO information of the EL.
  • An embodiment of the present invention uses some Deblocking Filter (DF) information in the BL to derive the DF information in EL.
  • The DF information may include threshold values, such as the thresholds α, β, and tc that are used to determine the Boundary Strength (BS).
  • the DF information may also include filter parameters, ON/OFF filter decision, Strong/Weak filter selection, or filter strength.
  • a flag can be used to indicate whether the DF information for the EL is predicted from the DF information of the BL.
  • the DF information of the BL can be scaled and used as the predictor for the DF information of the EL.
  • a value can be used to denote the difference between the predictor of the DF information and the DF information of the EL.
  • the process of deriving the DF information for the EL using the DF information of the BL is performed when an encoder or a decoder needs to derive the DF information of the EL.
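The ALF, SAO, and DF cases above share one signalling pattern: a flag selects inter-layer prediction, the BL parameters (optionally scaled) serve as predictors, and transmitted differences refine them. The Python sketch below captures that shared pattern; the parameter lists, the scale factor, and the SAO-offset example are illustrative.

```python
def reconstruct_el_filter_params(predict_flag, bl_params, deltas,
                                 scale=1.0):
    """Shared signalling pattern for EL ALF / SAO / DF parameters.

    If predict_flag is set, each EL parameter is the (optionally
    scaled) BL parameter plus a transmitted difference; otherwise the
    EL parameters are coded independently (not shown)."""
    if not predict_flag:
        return None                  # fall back to normal coding
    return [b * scale + d for b, d in zip(bl_params, deltas)]

# Example: predict EL SAO offsets from BL offsets plus small deltas
print(reconstruct_el_filter_params(True, [2, -1, 0, 3], [0, 1, 0, -1]))
# -> [2.0, 0.0, 0.0, 2.0]
```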
  • Figs. 4 through 11 illustrate exemplary flow charts for scalable video coding according to various embodiments of the present invention.
  • Fig. 4 illustrates an exemplary flow chart of CU structure coding or motion information coding for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL.
  • the CU structure (Coding Unit structure), motion information, or a combination of the CU structure and the motion information for a CU (Coding Unit) in the BL is determined in step 410.
  • CU structure, motion vector predictor (MVP) information, or a combination of the CU structure and the MVP information for a corresponding CU in the EL based on the CU structure, the motion information, or the combination of the CU structure and the motion information for the CU in the BL is respectively determined in step 420.
  • Fig. 5 illustrates an exemplary flow chart of MVP derivation or merge candidate derivation for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL.
  • the motion information for the BL is determined in step 510.
  • The Motion Vector Predictor (MVP) candidates or merge candidates in the EL based on the motion information of the BL are derived in step 520.
  • Fig. 6 illustrates an exemplary flow chart of intra prediction mode derivation for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL.
  • the information of intra prediction mode of the BL is determined in step 610.
  • the intra prediction mode of the EL based on the information of the intra prediction mode of the BL is derived in step 620.
  • Fig. 7 illustrates an exemplary flow chart of Residual Quadtree Structure coding for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL.
  • the information of RQT structure (Residual Quadtree Coding structure) of the BL is determined in step 710.
  • the RQT structure of the EL based on the information of the RQT structure of the BL is derived in step 720.
  • FIG. 8 illustrates an exemplary flow chart of texture prediction and re-sampling for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution than the BL or better video quality than the BL.
  • the information of texture of the BL is determined in step 810.
  • a predictor of texture of the EL based on the information of the texture of the BL is derived in step 820.
  • FIG. 9 illustrates an exemplary flow chart of residual prediction and resampling for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution than the BL or better video quality than the BL.
  • the residual information of the BL is determined in step 910.
  • a predictor of residual of the EL by re-sampling the residual of the BL is derived in step 920.
  • Fig. 10 illustrates an exemplary flow chart of context adaptive entropy coding for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL.
  • the information of the BL is determined in step 1010.
  • the context information for processing a syntax element of the EL using the information of the BL is determined in step 1020.
  • FIG. 11 illustrates an exemplary flow chart of ALF information coding, SAO information coding and DF information coding for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL.
  • the ALF information, SAO information or DF information of the BL is determined in step 1110.
  • the ALF information, SAO information, or DF information for the EL using the ALF information, SAO information, or DF information of the BL is respectively derived in step 1120.
  • Embodiments of scalable video coding, where the enhancement layer coding exploits the information of the base layer, according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA).
  • processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • The software code or firmware codes may be developed in different programming languages and different formats or styles.
  • the software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A method and apparatus for scalable video coding are disclosed, wherein the video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. According to embodiments of the present invention, information from the base layer is exploited for coding the enhancement layer. The information coding for the enhancement layer includes CU structure, motion vector predictor (MVP) information, MVP/merge candidates, intra prediction mode, residual quadtree information, texture information, residual information, context adaptive entropy coding, Adaptive Loop Filter (ALF), Sample Adaptive Offset (SAO), and deblocking filter.

Description

METHOD AND APPARATUS OF SCALABLE VIDEO CODING
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present invention claims priority to U.S. Provisional Patent Application, Serial No. 61/495,740, filed June 10, 2011, entitled "Scalable Coding of High Efficiency Video Coding" and U.S. Provisional Patent Application, Serial No. 61/567,774, filed December 7, 2011. The U.S. Provisional Patent Applications are hereby incorporated by reference in their entirety.
TECHNICAL FIELD
[0002] The present invention relates to video coding. In particular, the present invention relates to scalable video coding that utilizes information of the base layer for enhancement layer coding.
BACKGROUND
[0003] Compressed digital video has been widely used in various applications such as video streaming over digital networks and video transmission over digital channels. Very often, a single video content may be delivered over networks with different characteristics. For example, a live sport event may be carried in a high-bandwidth streaming format over broadband networks for premium video service. In such applications, the compressed video usually preserves high resolution and high quality so that the video content is suited for high-definition devices such as an HDTV or a high-resolution LCD display. The same content may also be carried through a cellular data network so that the content can be watched on a portable device such as a smart phone or a network-connected portable media device. In such applications, due to network bandwidth concerns as well as the typical low-resolution displays of smart phones and portable devices, the video content is usually compressed into lower resolution and lower bitrates. Therefore, for different network environments and for different applications, the video resolution and video quality requirements are quite different. Even for the same type of network, users may experience different available bandwidths due to different network infrastructures and network traffic conditions. Therefore, a user may desire to receive video at higher quality when the available bandwidth is high and receive lower-quality, but smooth, video when network congestion occurs. In another scenario, a high-end media player can handle high-resolution and high-bitrate compressed video, while a low-cost media player can only handle low-resolution and low-bitrate compressed video due to limited computational resources. Accordingly, it is desirable to construct the compressed video in a scalable manner so that video at different spatial-temporal resolutions and/or qualities can be derived from the same compressed bitstream.
[0004] In the current H.264/AVC video standard, there is an extension of the H.264/AVC standard, named Scalable Video Coding (SVC). SVC provides temporal, spatial, and quality scalabilities based on a single bitstream. The SVC bitstream contains scalable video information ranging from low frame-rate, low resolution, and low quality to high frame-rate, high definition, and high quality. Accordingly, SVC is suitable for various video applications, such as video broadcasting, video streaming, and video surveillance, to adapt to network infrastructure, traffic conditions, user preferences, etc.
[0005] In SVC, three types of scalabilities, i.e., temporal scalability, spatial scalability, and quality scalability, are provided. SVC uses a multi-layer coding structure to realize the three dimensions of scalability. A main goal of SVC is to generate one scalable bitstream that can be easily and rapidly adapted to the bit-rate requirements associated with various transmission channels, diverse display capabilities, and different computational resources without trans-coding or re-encoding. An important feature of the SVC design is that the scalability is provided at the bitstream level. In other words, bitstreams for deriving video with a reduced spatial and/or temporal resolution can be simply obtained by extracting, from a scalable bitstream, the Network Abstraction Layer (NAL) units (or network packets) required for decoding the intended video. NAL units for quality refinement can additionally be truncated in order to reduce the bit-rate, at the cost of the associated video quality.
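As an illustration of this bitstream-level adaptation, the following Python sketch extracts an operating point by simple packet selection. The NalUnit fields are a simplified stand-in for the scalability information carried in SVC NAL unit headers, not the actual header syntax.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NalUnit:
    """Hypothetical NAL unit fields relevant to scalability."""
    temporal_id: int   # temporal layer of this unit
    quality_id: int    # quality (SNR) layer of this unit
    payload: bytes

def extract_sub_bitstream(nal_units: List[NalUnit],
                          max_temporal_id: int,
                          max_quality_id: int) -> List[NalUnit]:
    """Keep only the NAL units needed to decode the target operating
    point: no transcoding or re-encoding, just packet selection."""
    return [u for u in nal_units
            if u.temporal_id <= max_temporal_id
            and u.quality_id <= max_quality_id]
```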
[0006] For example, temporal scalability can be derived from a hierarchical coding structure based on B-pictures according to the H.264/AVC standard. Fig. 1 illustrates an example of a hierarchical B-picture structure with 4 temporal layers and a Group of Pictures (GOP) of eight pictures. Pictures 0 and 8 in Fig. 1 are called key pictures. Inter prediction of key pictures only uses previous key pictures as references. Other pictures between two key pictures are predicted hierarchically. Video having only the key pictures forms the coarsest temporal resolution of the scalable system. Temporal scalability is achieved by progressively refining a lower-level (coarser) video by adding more B-pictures corresponding to enhancement layers of the scalable system. In the example of Fig. 1, picture 4 is first bi-directionally predicted using the key pictures, i.e., pictures 0 and 8, after the two key pictures are coded. After picture 4 is processed, pictures 2 and 6 are processed. Picture 2 is bi-directionally predicted using pictures 0 and 4, and picture 6 is bi-directionally predicted using pictures 4 and 8. After pictures 2 and 6 are coded, the remaining pictures, i.e., pictures 1, 3, 5 and 7, are processed bi-directionally using their two respective neighboring pictures, as shown in Fig. 1. Accordingly, the processing order for the GOP is 0, 8, 4, 2, 6, 1, 3, 5, and 7. The pictures processed according to the hierarchical process of Fig. 1 result in four hierarchical levels of pictures, where pictures 0 and 8 belong to the first temporal order, picture 4 belongs to the second temporal order, pictures 2 and 6 belong to the third temporal order, and pictures 1, 3, 5, and 7 belong to the fourth temporal order. Decoding the base-level pictures and adding the higher temporal-order pictures provides a higher-level video. For example, base-level pictures 0 and 8 can be combined with second temporal-order picture 4 to form the second-level video. Further adding the third temporal-order pictures to the second-level video forms the third-level video. Similarly, adding the fourth temporal-order pictures to the third-level video forms the fourth-level video. Accordingly, temporal scalability is achieved. If the original video has a frame rate of 30 frames per second, the base-level video has a frame rate of 30/8 = 3.75 frames per second. The second-level, third-level and fourth-level videos correspond to 7.5, 15, and 30 frames per second, respectively. The first temporal-order pictures are also called the base-level video or base-level pictures. The second temporal-order pictures through fourth temporal-order pictures are also called the enhancement-level video or enhancement-level pictures. In addition to enabling temporal scalability, the coding structure of hierarchical B-pictures also improves the coding efficiency over the typical IBBP GOP structure at the cost of increased encoding-decoding delay.
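The processing order and temporal layers described above follow directly from the interval-bisection structure of the hierarchy, as the short Python sketch below shows for a GOP of eight pictures.

```python
def hierarchical_gop_order(gop_size=8):
    """Return (decoding_order, temporal_layer) for a hierarchical-B GOP.

    Key pictures (0 and gop_size) are coded first; each level then
    bisects the remaining intervals, reproducing the order
    0, 8, 4, 2, 6, 1, 3, 5, 7 described above for gop_size = 8.
    """
    order = [0, gop_size]
    layer = {0: 0, gop_size: 0}
    step, level = gop_size // 2, 1
    while step >= 1:
        for pic in range(step, gop_size, 2 * step):
            order.append(pic)
            layer[pic] = level
        step //= 2
        level += 1
    return order, layer

order, layer = hierarchical_gop_order(8)
print(order)   # [0, 8, 4, 2, 6, 1, 3, 5, 7]
print(layer)   # picture number -> temporal layer (0..3)
```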
[0007] In SVC, spatial scalability is supported based on the pyramid coding scheme shown in Fig. 2. In an SVC system with spatial scalability, the video sequence is first down-sampled to obtain smaller pictures at different spatial resolutions (layers). For example, picture 210 at the original resolution can be processed by spatial decimation 220 to obtain resolution-reduced picture 211. The resolution-reduced picture 211 can be further processed by spatial decimation 221 to obtain further resolution-reduced picture 212, as shown in Fig. 2. In addition to dyadic spatial resolution, where the spatial resolution is halved at each level, SVC also supports arbitrary resolution ratios, which is called extended spatial scalability (ESS). The SVC system in Fig. 2 illustrates an example of a spatially scalable system with three layers, where layer 0 corresponds to the pictures with the lowest spatial resolution and layer 2 corresponds to the pictures with the highest resolution. The layer-0 pictures are coded without reference to other layers, i.e., with single-layer coding. For example, the lowest-layer picture 212 is coded using motion-compensated and intra prediction 230.
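The dyadic down-sampling chain of Fig. 2 can be sketched as follows; the 2x2 averaging used here is a simple stand-in for whatever anti-alias filter an actual encoder applies, and the picture dimensions are illustrative:

```python
import numpy as np

def spatial_decimation(picture: np.ndarray) -> np.ndarray:
    """Halve the resolution by 2x2 averaging (a simple anti-alias stand-in)."""
    h, w = picture.shape
    return picture.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

layer2 = np.random.rand(720, 1280)   # original resolution (layer 2)
layer1 = spatial_decimation(layer2)  # 360 x 640  (layer 1)
layer0 = spatial_decimation(layer1)  # 180 x 320  (layer 0)
```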
[0008] The motion-compensated and intra prediction 230 generates syntax elements as well as coding-related information, such as motion information, for further entropy coding 240. Fig. 2 actually illustrates a combined SVC system that provides spatial scalability as well as quality scalability (also called SNR scalability). The system may also provide temporal scalability, which is not explicitly shown. For each single-layer coding, the residual coding errors can be refined using SNR enhancement layer coding 250. The SNR enhancement layer in Fig. 2 may provide multiple quality levels (quality scalability). Each supported resolution layer can be coded with its own single-layer motion-compensated and intra prediction, like a non-scalable coding system. Each higher spatial layer may also be coded using inter-layer coding based on one or more lower spatial layers. For example, layer 1 video can be adaptively coded using either inter-layer prediction based on layer 0 video or single-layer coding, on a macroblock-by-macroblock basis or for another block unit. Similarly, layer 2 video can be adaptively coded using either inter-layer prediction based on reconstructed layer 1 video or single-layer coding. As shown in Fig. 2, layer-1 pictures 211 can be coded by motion-compensated and intra prediction 231, base layer entropy coding 241 and SNR enhancement layer coding 251. Similarly, layer-2 pictures 210 can be coded by motion-compensated and intra prediction 232, base layer entropy coding 242 and SNR enhancement layer coding 252. Coding efficiency can be improved by inter-layer coding. Furthermore, the information required to code spatial layer 1 may depend on reconstructed layer 0 (inter-layer prediction). The inter-layer differences are termed the enhancement layers. H.264 SVC provides three types of inter-layer prediction tools: inter-layer motion prediction, inter-layer intra prediction, and inter-layer residual prediction.
[0009] In SVC, the enhancement layer (EL) can reuse the motion information of the base layer (BL) to reduce inter-layer motion data redundancy. For example, EL macroblock coding may use a flag, such as base_mode_flag signaled before mb_type, to indicate whether the EL motion information is derived directly from the BL. If base_mode_flag is equal to 1, the partitioning data of the EL macroblock, together with the associated reference indexes and motion vectors, are derived from the corresponding data of the co-located 8x8 block in the BL. The reference picture index of the BL is used directly in the EL. The motion vectors of the EL are scaled from the data associated with the BL. In addition, the scaled BL motion vector can be used as an additional motion vector predictor for the EL.
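This motion reuse amounts to copying the BL reference index and scaling the BL motion vector by the spatial resolution ratio. A minimal sketch follows, with illustrative names; the function and its arguments are not taken from any standard API:

```python
def derive_el_motion(bl_ref_idx: int, bl_mv: tuple, ratio: int = 2):
    """Return (ref_idx, mv) for the EL derived from the co-located BL block.

    The BL reference index is reused directly; the BL motion vector is
    scaled by the spatial resolution ratio between the EL and the BL.
    """
    mvx, mvy = bl_mv
    return bl_ref_idx, (mvx * ratio, mvy * ratio)

ref_idx, mv = derive_el_motion(bl_ref_idx=0, bl_mv=(3, -5))
# The scaled BL MV (6, -10) may also serve as an extra MVP candidate.
```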
[0010] Inter-layer residual prediction uses the up-sampled BL residual information to reduce the information of the EL residual. The co-located residual of the BL can be block-wise up-sampled using a bilinear filter and used as the prediction for the residual of a current macroblock in the EL. The up-sampling of the reference layer residual is done on a transform block basis in order to ensure that no filtering is applied across transform block boundaries.
[0011] Similar to inter-layer residual prediction, inter-layer intra prediction reduces the redundant texture information of the EL. The prediction in the EL is generated by block-wise up-sampling the co-located BL reconstruction signal. In the inter-layer intra prediction up-sampling procedure, 4-tap and 2-tap FIR filters are applied to the luma and chroma components, respectively. Different from inter-layer residual prediction, filtering for inter-layer intra prediction is always performed across sub-block boundaries. For decoding simplicity, inter-layer intra prediction can be restricted to only intra-coded macroblocks in the BL.
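The block-wise constraint of paragraph [0010] can be sketched as follows; nearest-neighbour repetition stands in for the bilinear filter, since the point illustrated is only that each transform block is up-sampled independently:

```python
import numpy as np

def upsample_residual(bl_res: np.ndarray, tb: int = 4) -> np.ndarray:
    """Up-sample each tb x tb transform block by a factor of 2, block by block.

    Each block is processed independently, so no filtering ever crosses
    a transform block boundary (nearest-neighbour stands in for bilinear).
    """
    h, w = bl_res.shape
    out = np.zeros((2 * h, 2 * w), dtype=bl_res.dtype)
    for y in range(0, h, tb):
        for x in range(0, w, tb):
            block = bl_res[y:y + tb, x:x + tb]
            out[2 * y:2 * y + 2 * tb, 2 * x:2 * x + 2 * tb] = \
                np.kron(block, np.ones((2, 2), dtype=bl_res.dtype))
    return out
```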
[0012] In SVC, quality scalability is realized by coding multiple quality ELs that are composed of refinement coefficients. The scalable video bitstream can be easily truncated or extracted to provide different video bitstreams with different video qualities or bitstream sizes. In SVC, quality scalability (also called SNR scalability) can be provided via two strategies: coarse grain scalability (CGS) and medium grain scalability (MGS). CGS can be regarded as a special case of spatial scalability in which the spatial resolutions of the BL and the EL are the same, but the quality of the EL is better (the QP of the EL is smaller than the QP of the BL). The same inter-layer prediction mechanism as for spatial scalable coding can be employed; however, no corresponding up-sampling or deblocking operations are performed. Furthermore, the inter-layer intra and residual predictions are performed directly in the transform domain. For inter-layer prediction in CGS, a refinement of the texture information is typically achieved by re-quantizing the residual signal in the EL with a smaller quantization step size than that used for the preceding CGS layer. CGS can provide multiple pre-defined quality points.
[0013] To provide finer bit-rate granularity while maintaining reasonable complexity for quality scalability, MGS is used by H.264 SVC. MGS can be considered an extension of CGS, where the quantized coefficients in one CGS slice can be divided into several MGS slices. The quantized coefficients in CGS are classified into 16 categories based on their positions in the zig-zag scan order. These 16 categories of coefficients can be distributed into different slices to provide more quality extraction points than CGS.
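The classification can be sketched as follows; the three-group split used here is one possible distribution of the 16 scan-position categories, chosen only for illustration:

```python
# Zig-zag scan order of a 4x4 transform block (row, column) pairs.
ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

# Scan positions 0-2, 3-8 and 9-15 go to three MGS slices (example split).
MGS_GROUPS = [range(0, 3), range(3, 9), range(9, 16)]

def split_into_mgs_slices(coeffs_4x4):
    """Return one coefficient list per MGS slice for a 4x4 block."""
    slices = []
    for group in MGS_GROUPS:
        slices.append([coeffs_4x4[ZIGZAG_4x4[i][0]][ZIGZAG_4x4[i][1]]
                       for i in group])
    return slices
```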
[0014] The current HEVC only provides single-layer coding based on the hierarchical-B coding structure, without any spatial or quality scalability. It is desirable to add spatial scalability and quality scalability capabilities to the current HEVC. Furthermore, it is desirable to provide improved SVC over H.264 SVC to achieve higher efficiency and/or more flexibility.
SUMMARY
[0015] A method and apparatus for scalable video coding that exploit Base Layer (BL) information for the Enhancement Layer (EL) are disclosed, where the EL has higher resolution and/or better quality than the BL. Embodiments of the present invention exploit various pieces of BL information to improve the coding efficiency of the EL. In one embodiment according to the present invention, the method and apparatus utilize the CU structure information, mode information, or motion information of the BL to derive the respective CU structure information, mode information, or motion vector predictor (MVP) information for the EL. A combination of the CU structure, the mode, and the motion information may also be used to derive the respective information for the EL. In another embodiment according to the present invention, the method and apparatus derive MVP candidates or merge candidates of the EL based on the MVP candidates or merge candidates of the BL. In yet another embodiment of the present invention, the method and apparatus derive the intra prediction mode of the EL based on the intra prediction mode of the BL.
[0016] An embodiment of the present invention utilizes Residual Quadtree Structure information of the BL to derive the Residual Quadtree Structure for the EL. Another embodiment of the present invention derives the texture of the EL by re-sampling the texture of the BL. A further embodiment of the present invention derives the predictor of residual of the EL by re-sampling the residual of the BL.
[0017] One aspect of the present invention addresses the coding efficiency of context-based adaptive entropy coding for the EL. An embodiment of the present invention determines context information for processing a syntax element of the EL using the information of the BL. Another aspect of the present invention addresses coding efficiency related to in-loop processing. An embodiment of the present invention derives the ALF information, the SAO information, or the DF information for the EL using the ALF information, the SAO information, or the DF information of the BL, respectively.
BRIEF DESCRIPTION OF DRAWINGS
[0018] Fig. 1 illustrates an example of temporal scalable video coding using hierarchical B-pictures.
[0019] Fig. 2 illustrates an example of a combined scalable video coding system that provides spatial scalability as well as quality scalability, where three spatial layers are provided.
[0020] Fig. 3 illustrates an example of CU structure reuse for scalable video coding where a CU structure for the base layer is scaled and used as an initial CU structure for the enhancement layer.
[0021] Fig. 4 illustrates an exemplary flow chart of CU structure coding or motion information coding for scalable video coding according to an embodiment of the present invention.
[0022] Fig. 5 illustrates an exemplary flow chart of MVP derivation or merge candidate derivation for scalable video coding according to an embodiment of the present invention.
[0023] Fig. 6 illustrates an exemplary flow chart of intra prediction mode derivation for scalable video coding according to an embodiment of the present invention.
[0024] Fig. 7 illustrates an exemplary flow chart of Residual Quadtree Structure coding for scalable video coding according to an embodiment of the present invention.
[0025] Fig. 8 illustrates an exemplary flow chart of texture prediction and re-sampling for scalable video coding according to an embodiment of the present invention.
[0026] Fig. 9 illustrates an exemplary flow chart of residual prediction and re-sampling for scalable video coding according to an embodiment of the present invention.
[0027] Fig. 10 illustrates an exemplary flow chart of context adaptive entropy coding for scalable video coding according to an embodiment of the present invention.
[0028] Fig. 11 illustrates an exemplary flow chart of ALF information coding, SAO information coding and DF information coding for scalable video coding according to an embodiment of the present invention.
DETAILED DESCRIPTION
[0029] In HEVC, the coding unit (CU) structure was introduced as a new block structure for the coding process. A picture is divided into largest CUs (LCUs), and each LCU is adaptively partitioned into CUs until a leaf CU is obtained or a minimum CU size is reached. The CU structure information has to be conveyed to the decoder side so that the same CU structure can be recovered at the decoder side. In order to improve the coding efficiency associated with the CU structure for a scalable HEVC, an embodiment according to the present invention allows the CU structure of the BL to be reused by the EL. At the EL LCU or CU level, one flag is transmitted to indicate whether the CU structure is reused from the corresponding CU of the BL. If the BL CU structure is reused, it is scaled to match the resolution of the EL, and the scaled BL CU structure is reused by the EL. In some embodiments, the CU structure information that can be reused by the EL includes the CU split flag and the residual quad-tree split flag. Moreover, the leaf CUs of the scaled CU structure can be further split into sub-CUs. Fig. 3 illustrates an example of CU partition reuse. Partition 310 corresponds to the CU structure of the BL. The video resolution of the EL is two times the video resolution of the BL both horizontally and vertically. The CU structure of the corresponding CU partition 315 of the BL is scaled up by 2. The scaled CU structure 320 is then used as the initial CU structure for the EL LCU. The leaf CUs of the scaled CU structure in the EL can be further split into sub-CUs, and the result is indicated by 330 in Fig. 3. A flag may be used to indicate whether a leaf CU is further divided into sub-CUs. While Fig. 3 illustrates an example where the CU structure is reused, other information may also be reused, for example, the prediction type, prediction size, merge index, inter reference direction, reference picture index, motion vectors, MVP index, and intra mode. The information/data can be scaled when needed before it is reused in the EL.
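The scaling step of Fig. 3 can be sketched as follows, assuming a dyadic resolution ratio; the nested-tuple tree representation is purely illustrative:

```python
def scale_cu_tree(bl_node, ratio=2):
    """Scale a BL CU split tree so it covers the co-located EL region.

    A node is either ('leaf', size) or ('split', [four child nodes]).
    Only the block sizes change; the split topology is reused as-is, and
    the EL may later split any leaf further.
    """
    kind, payload = bl_node
    if kind == 'leaf':
        return ('leaf', payload * ratio)
    return ('split', [scale_cu_tree(child, ratio) for child in payload])

bl = ('split', [('leaf', 32)] * 3 + [('split', [('leaf', 16)] * 4)])
el_initial = scale_cu_tree(bl)  # leaves of 64 and 32; may be re-split
```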
[0030] In another embodiment according to the present invention, the mode information for a leaf CU is reused. The mode information may include the skip flag, prediction type, prediction size, inter reference direction, reference picture index, motion vectors, motion vector index, merge flag, merge index, skip mode, merge mode, and intra mode. The mode information of the leaf CU in the EL can share the same or scaled mode information of the corresponding CU in the BL. One flag can be used to indicate whether the EL will reuse the mode information from the BL. Alternatively, for each of one or more pieces of mode information, one flag can be used to indicate whether the EL will reuse that piece of mode information from the BL. In yet another embodiment according to the present invention, the motion information of the corresponding Prediction Unit (PU) or Coding Unit (CU) in the BL is reused to derive the motion information of a PU or CU in the EL. The motion information may include the inter prediction direction, reference picture index, motion vectors (MVs), Motion Vector Predictors (MVPs), MVP index, merge index, merge candidates, and intra mode. The motion information of the BL can be utilized as predictors or candidates for the motion vector predictor (MVP) information in the EL. For example, the BL MVs and BL MVPs can be added into the MVP list and/or merge list for EL MVP derivation. The aforementioned MVs of the BL can be the MVs of the corresponding PU in the BL, the MVs of neighboring PUs of the corresponding PU in the BL, the MVs of merge candidates of the corresponding PU in the BL, the MVPs of the corresponding PU in the BL, or the co-located MVs of the corresponding PU in the BL.
[0031] In another example, the merge candidate derivation for the EL can utilize the motion information of the BL. For example, the merge candidates of a corresponding PU in the BL can be added into the merge candidate list and/or MVP list. The aforementioned motion information of the BL can be the motion information of the corresponding PU in the BL, the motion information associated with a neighboring PU of the corresponding PU in the BL, the merge candidates of the corresponding PU in the BL, the MVP of the corresponding PU in the BL, or the motion information of the co-located PU of the corresponding PU in the BL. In this case, the motion information includes the inter prediction direction, reference picture index, and motion vectors.
[0032] In yet another example, the intra mode of a corresponding PU or CU in the BL can be reused for the EL. For example, the intra mode of a corresponding PU or CU in the BL can be added into the intra Most Probable Mode (MPM) list. An embodiment according to the present invention uses the motion information of the BL to predict the intra mode for the EL. The order of the MPM list in the EL can be adaptively changed according to the intra prediction mode information in the BL. Accordingly, the codeword lengths for codewords in the MPM list in the EL can be adaptively changed according to the intra prediction mode information in the BL. For example, the codewords of the intra remaining modes with prediction directions close to the prediction direction of the coded BL intra mode are assigned shorter lengths. As another example, the neighboring direction modes of the BL intra mode can also be added into the intra MPM list for EL intra mode coding. The intra prediction mode information of the BL can be the intra prediction mode of the corresponding PU in the BL, the neighboring direction modes of the BL intra mode, or the intra prediction mode of a neighboring PU of the corresponding PU in the BL.
[0033] The selected MVP index, merge index, and intra mode index of the BL motion information can be utilized to adaptively change the order of indices in the EL MVP list, merge index list, and intra most probable mode list. For example, in the HEVC Test Model Version 3.0 (HM-3.0), the order of the MVP list is {left MVP, above MVP, co-located MVP}. If the corresponding BL PU selects the above MVP, the above MVP is moved forward in the EL, so that the MVP list in the EL becomes {above MVP, left MVP, co-located MVP}. Furthermore, the BL coded MVs, scaled coded MVs, MVP candidates, scaled MVP candidates, merge candidates, and scaled merge candidates can replace part of the EL MVP candidates and/or merge candidates. The process of deriving the motion information for a PU or CU in the EL based on the motion information for a corresponding PU or CU in the BL is invoked when an MVP candidate or a merge candidate for a PU or CU in the EL is needed for encoding or decoding.
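A minimal sketch of this reordering, assuming the three-candidate HM-3.0 list; the string labels stand in for actual MVP candidates:

```python
def reorder_mvp_list(el_list, bl_selected):
    """Move the MVP picked in the BL to the front of the EL MVP list.

    This lets the shortest index codeword land on the candidate that is
    most likely to be selected, given the BL's choice.
    """
    if bl_selected in el_list:
        el_list = [bl_selected] + [c for c in el_list if c != bl_selected]
    return el_list

print(reorder_mvp_list(['left', 'above', 'co-located'], 'above'))
# ['above', 'left', 'co-located']
```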
[0034] As mentioned earlier, the CU structure information for the BL can be used to determine the CU structure information for the EL. Furthermore, the CU structure information, the mode information and the motion information for the BL can be used jointly to determine the CU structure information, the mode information and the motion information for the EL. The mode information or the motion information for the BL may also be used to determine the mode information or the motion information for the EL. The process of deriving the CU structure information, the mode information, the motion information or any combination for the EL based on corresponding information for the BL can be invoked when the CU structure information, the mode information, the motion information or any combination for the EL needs to be encoded or decoded.
[0035] In HM-3.0, the prediction residual is further processed using quadtree partitioning, and a coding type is selected for each block resulting from the residual quadtree partition. Both the residual quadtree partition information and the coded block pattern (CBP) information have to be incorporated into the bitstream so that the decoder can recover the residual quadtree information. An embodiment according to the present invention reuses the residual quadtree partition and CBP of a corresponding CU in the BL for the EL. The residual quadtree partition and CBP can be scaled and utilized as the predictor for the EL residual quadtree partition and CBP coding. In HEVC, the unit for block transform is termed a Transform Unit (TU), and a TU can be partitioned into smaller TUs. In an embodiment of the present invention, one flag at the root TU level or a TU level of the EL is transmitted to indicate whether the Residual Quadtree Coding (RQT) structure of a corresponding TU in the BL is utilized to predict the RQT structure of the current TU in the EL. If it is, the RQT structure of the corresponding TU in the BL is scaled and used as the initial RQT structure of the current TU in the EL. For a leaf TU of the initial RQT structure for the EL, one split flag can be transmitted to indicate whether the TU is divided into sub-TUs. The process of deriving the RQT structure of the EL based on the information of the RQT structure of the BL is performed when an encoder needs to encode, or a decoder needs to decode, the RQT structure of the EL.
[0036] In the H.264/AVC scalable extension, 4-tap and 2-tap FIR filters are adopted for the up-sampling operation of the texture signal for the luma and chroma components, respectively. An embodiment according to the present invention re-samples the BL texture as the predictor of the EL texture, where the re-sampling utilizes improved up-sampling methods to replace the 4-tap and 2-tap FIR filters of the H.264/AVC scalable extension. The filter according to the present invention uses one of the following filters or a combination thereof: Discrete Cosine Transform Interpolation Filter (DCTIF), Discrete Sine Transform Interpolation Filter (DSTIF), Wiener filter, non-local mean filter, smoothing filter, and bilateral filter. The filter according to the present invention can cross TU boundaries or can be restricted within TU boundaries. An embodiment according to the present invention may skip the padding and deblocking procedures in inter-layer intra prediction to alleviate computation and data dependency problems. The Sample Adaptive Offset (SAO), Adaptive Loop Filter (ALF), non-local mean filter, and/or smoothing filter in the BL can also be skipped. The skipping of padding, deblocking, SAO, ALF, non-local mean filter, and smoothing filter can be applied to the entire LCU, leaf CU, PU, TU, pre-defined region, LCU boundary, leaf CU boundary, PU boundary, TU boundary, or boundary of a pre-defined region. In another embodiment, the texture of the BL is processed using a filter to produce filtered BL texture, where the BL texture has the same resolution as the EL texture and is used as the predictor of the texture of the EL. A Wiener filter, ALF, non-local mean filter, smoothing filter, or SAO can be applied to the texture of the BL before the texture of the BL is utilized as the predictor of the texture of the EL.
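As one possible instance of such a re-sampling filter, the sketch below applies a separable 8-tap DCTIF for 2x up-sampling. The half-sample taps match the HEVC luma interpolation filter; using exactly these taps for inter-layer texture up-sampling is an assumption made for illustration only:

```python
import numpy as np

# HEVC luma half-sample DCTIF taps (normalized).
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0

def upsample_1d(line: np.ndarray) -> np.ndarray:
    """Double a 1-D signal: even outputs copy samples, odd ones interpolate."""
    padded = np.pad(line, (3, 4), mode='edge')
    half = np.array([np.dot(HALF_PEL_TAPS, padded[i:i + 8])
                     for i in range(len(line))])
    out = np.empty(2 * len(line))
    out[0::2], out[1::2] = line, half
    return out

def upsample_texture(bl: np.ndarray) -> np.ndarray:
    """Separable 2x up-sampling: filter rows first, then columns."""
    rows = np.array([upsample_1d(r) for r in bl])
    return np.array([upsample_1d(c) for c in rows.T]).T
```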
[0037] To improve picture quality, an embodiment of the present invention applies a Wiener filter or an adaptive filter to the texture of the BL before the texture of the BL is re-sampled. Alternatively, the Wiener filter or adaptive filter can be applied to the texture of the BL after the texture of the BL is re-sampled. Furthermore, an embodiment of the present invention applies SAO or ALF to the texture of the BL before the texture of the BL is re-sampled.
[0038] Another embodiment according to the present invention utilizes LCU-based or CU-based Wiener filter and/or adaptive offset for inter-layer intra prediction. The filtering can be applied to BL texture data or up-sampled BL texture data.
[0039] In H.264 SVC, a 2-tap FIR filter is adopted for the up-sampling operation of the residual signal for both the luma and chroma components. An embodiment according to the present invention uses improved up-sampling methods to replace the 2-tap FIR filter of H.264 SVC. The filter can be one of the following filters or a combination thereof: Discrete Cosine Transform Interpolation Filter (DCTIF), Discrete Sine Transform Interpolation Filter (DSTIF), Wiener filter, non-local mean filter, smoothing filter, and bilateral filter. When the EL has a higher spatial resolution than the BL, the above filters can be applied to re-sample the BL residual. All the above filters can be restricted to cross, or not to cross, TU boundaries. Furthermore, the residual prediction can be performed in either the spatial domain or the frequency domain, whether the BL and the EL have the same resolution or the EL has a higher resolution than the BL. When the EL has a higher spatial resolution than the BL, the residual of the BL can be re-sampled in the frequency domain to form predictors for the EL residual. The process of deriving the predictor of the residual of the EL by re-sampling the residual of the BL can be performed when an encoder or a decoder needs to derive the predictor of the residual of the EL based on the re-sampled residual of the BL.
[0040] An embodiment according to the present invention may utilize the BL information for context-based adaptive entropy coding in the EL. For example, the context formation or binarization of Context-based Adaptive Binary Arithmetic Coding (CABAC) can exploit the information of the BL. The EL can use different context models, different context formation methods, or different context sets based on the corresponding information in the BL. For example, an EL PU can use different context models depending on whether the corresponding PU in the BL is coded in skip mode or not. In another embodiment of the present invention, the probability or most probable symbol (MPS) of part of the context models for CABAC in the BL can be reused to derive the initial probability and MPS of part of the context models for CABAC in the EL. The syntax element can be the split flag, skip flag, merge flag, merge index, chroma intra mode, luma intra mode, partition size, prediction mode, inter prediction direction, motion vector difference, motion vector predictor index, reference index, delta quantization parameter, significant flag, last significant position, coefficient-greater-than-one, coefficient-magnitude-minus-one, ALF (Adaptive Loop Filter) control flag, ALF flag, ALF footprint size, ALF merge flag, ALF ON/OFF decision, ALF coefficient, Sample Adaptive Offset (SAO) flag, SAO type, SAO offset, SAO merge flag, SAO run, SAO ON/OFF decision, transform subdivision flag, residual quadtree Coded Block Flag (CBF), or residual quadtree root CBF. A codeword corresponding to a syntax element can be adaptively changed according to the information of the BL, and the codeword order corresponding to the syntax elements of the EL in a look-up codeword table can also be adaptively changed according to the information of the BL. The process of determining context information for processing the syntax element of the EL using the information of the BL is performed when the syntax element of the EL needs to be encoded or decoded.
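The BL-dependent context selection can be sketched as follows; the context sets, the indices, and the selection rule are illustrative assumptions, not values from any standard:

```python
# Two hypothetical context sets, selected by the BL skip decision.
CONTEXT_SETS = {
    'bl_skip':     {'merge_flag': 0, 'skip_flag': 1},  # context indices
    'bl_not_skip': {'merge_flag': 2, 'skip_flag': 3},
}

def select_context(syntax_element: str, bl_pu_is_skip: bool) -> int:
    """Return a CABAC context index for an EL syntax element, conditioned
    on whether the corresponding BL PU was coded in skip mode."""
    key = 'bl_skip' if bl_pu_is_skip else 'bl_not_skip'
    return CONTEXT_SETS[key][syntax_element]

print(select_context('merge_flag', bl_pu_is_skip=True))  # 0
```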
[0041] An embodiment of the present invention uses some ALF information in the BL to derive the ALF information in the EL. The ALF information may include filter adaptation mode, filter coefficients, filter footprint, region partition, ON/OFF decision, enable flag, and merge results. For example, the EL can use part of ALF parameters in the BL as the ALF parameters or predictors of ALF parameters in the EL. When the ALF information is reused directly from the ALF information of the BL, there is no need to transmit the associated ALF parameters for the EL. A flag can be used to indicate whether the ALF information for the EL is predicted from the ALF information of the BL. If the flag indicates that the ALF information for the EL is predicted from the ALF information of the BL, the ALF information of the BL can be scaled and used as the predictor for the ALF information of the EL. A value can be used to denote the difference between the predictor of the ALF information and the ALF information of the EL. The process of deriving the ALF information for the EL using the ALF information of the BL is performed when an encoder or a decoder needs to derive the ALF information of the EL.
[0042] An embodiment of the present invention uses some SAO information in the BL to derive the SAO information in the EL. The SAO information may include offset type, offsets, region partition, ON/OFF decision, enable flag, and merge results. For example, the EL can use part of SAO parameters in the BL as the SAO parameters for the EL. When the SAO information is reused from the SAO information of the BL directly, there is no need to transmit the associated SAO parameters for the EL. A flag can be used to indicate whether the SAO information for the EL is predicted from the SAO information of the BL. If the flag indicates that the SAO information for the EL is predicted from the SAO information of the BL, the SAO information of the BL can be scaled and used as the predictor for the SAO information of the EL. A value can be used to denote the difference between the predictor of the SAO information and the SAO information of the EL. The process of deriving the SAO information for the EL using the SAO information of the BL is performed when an encoder or a decoder needs to derive the SAO information of the EL.
[0043] An embodiment of the present invention uses some Deblocking Filter (DF) information in the BL to derive the DF information in the EL. The DF information may include threshold values, such as the thresholds α, β, and tc that are used to determine the Boundary Strength (BS). The DF information may also include filter parameters, the ON/OFF filter decision, the Strong/Weak filter selection, or the filter strength. When the DF information is reused directly from the DF information of the BL, there is no need to transmit the associated DF parameters for the EL. A flag can be used to indicate whether the DF information for the EL is predicted from the DF information of the BL. If the flag indicates that the DF information for the EL is predicted from the DF information of the BL, the DF information of the BL can be scaled and used as the predictor for the DF information of the EL. A value can be used to denote the difference between the predictor of the DF information and the DF information of the EL. The process of deriving the DF information for the EL using the DF information of the BL is performed when an encoder or a decoder needs to derive the DF information of the EL.
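The three preceding paragraphs share one prediction pattern: a flag selects BL-based prediction, the BL parameters are (optionally scaled and) used as a predictor, and only a delta is coded for the EL. A hedged sketch of that pattern follows; the field names and the additive delta are illustrative assumptions:

```python
def decode_inloop_params(pred_from_bl: bool, bl_params: dict,
                         delta: dict, scale: float = 1.0) -> dict:
    """Reconstruct EL in-loop filter parameters (ALF/SAO/DF alike).

    If the flag is off, the parameters are taken as coded directly;
    otherwise the (scaled) BL parameters act as the predictor and the
    coded delta refines them.
    """
    if not pred_from_bl:
        return delta  # parameters coded directly for the EL
    return {k: bl_params[k] * scale + delta.get(k, 0) for k in bl_params}

# e.g. SAO offsets predicted from the BL with a small refinement:
el_sao = decode_inloop_params(True, {'offset0': 4, 'offset1': -2},
                              {'offset0': 1})
# {'offset0': 5.0, 'offset1': -2.0}
```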
[0044] Figs. 4 through 11 illustrate exemplary flow charts for scalable video coding according to various embodiments of the present invention. Fig. 4 illustrates an exemplary flow chart of CU structure coding or motion information coding for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. The CU structure (Coding Unit structure), motion information, or a combination of the CU structure and the motion information for a CU (Coding Unit) in the BL is determined in step 410. The CU structure, motion vector predictor (MVP) information, or a combination of the CU structure and the MVP information for a corresponding CU in the EL is respectively determined in step 420 based on the CU structure, the motion information, or the combination of the CU structure and the motion information for the CU in the BL. Fig. 5 illustrates an exemplary flow chart of MVP derivation or merge candidate derivation for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. The motion information for the BL is determined in step 510. The Motion Vector Predictor (MVP) candidates or merge candidates in the EL are derived in step 520 based on the motion information of the BL. Fig. 6 illustrates an exemplary flow chart of intra prediction mode derivation for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. The information of the intra prediction mode of the BL is determined in step 610. The intra prediction mode of the EL is derived in step 620 based on the information of the intra prediction mode of the BL.
[0045] Fig. 7 illustrates an exemplary flow chart of Residual Quadtree Structure coding for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. The information of the RQT structure (Residual Quadtree Coding structure) of the BL is determined in step 710. The RQT structure of the EL is derived in step 720 based on the information of the RQT structure of the BL. Fig. 8 illustrates an exemplary flow chart of texture prediction and re-sampling for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. The information of the texture of the BL is determined in step 810. A predictor of the texture of the EL is derived in step 820 based on the information of the texture of the BL. Fig. 9 illustrates an exemplary flow chart of residual prediction and re-sampling for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. The residual information of the BL is determined in step 910. A predictor of the residual of the EL is derived in step 920 by re-sampling the residual of the BL.
[0046] Fig. 10 illustrates an exemplary flow chart of context adaptive entropy coding for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. The information of the BL is determined in step 1010. The context information for processing a syntax element of the EL using the information of the BL is determined in step 1020. Fig. 11 illustrates an exemplary flow chart of ALF information coding, SAO information coding and DF information coding for scalable video coding according to an embodiment of the present invention, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. The ALF information, SAO information, or DF information of the BL is determined in step 1110. The ALF information, SAO information, or DF information for the EL is respectively derived in step 1120 using the ALF information, SAO information, or DF information of the BL.
[0047] Embodiments of scalable video coding, where the enhancement layer coding exploits the information of the base layer, according to the present invention as described above may be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
[0048] The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

1. A method of CU (Coding Unit) structure coding, mode information coding or motion information coding for scalable video coding, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL, the method comprising:
determining CU structure (Coding Unit structure), mode, motion information, or a combination of the CU structure, the mode and the motion information for a CU (Coding Unit) in the BL; and
determining CU structure, mode, motion vector predictor (MVP) information, or a combination of the CU structure, the mode and the MVP information for a corresponding CU in the EL based on the CU structure, the mode, the motion information, or the combination of the CU structure, the mode and the motion information for the CU in the BL respectively.
2. The method of Claim 1, wherein said determining the CU structure, the mode, the MVP information, or the combination of the CU structure, the mode and the MVP information for the corresponding CU in the EL based on the CU structure, the mode, the motion information, or the combination of the CU structure, the mode and the motion information for the CU in the BL respectively is performed if an encoder needs to encode the CU structure, the mode, the MVP information, or the combination of the CU structure, the mode and the MVP information respectively for the corresponding CU in the EL.
3. The method of Claim 1, wherein said determining the CU structure, the mode, the MVP information, or the combination of the CU structure, the mode and the MVP information for the corresponding CU in the EL based on the CU structure, the mode, the motion information, or the combination of the CU structure, the mode and the motion information for the CU in the BL respectively is performed if a decoder needs to decode the CU structure, the mode, the MVP information, or the combination of the CU structure, the mode and the MVP information respectively for the corresponding CU in the EL.
4. The method of Claim 1, further comprising incorporating a first flag to indicate whether said determining the CU structure, the mode, the MVP information, or the combination of the CU structure, the mode and the MVP information for the corresponding CU in the EL is predicted based on the CU structure, the mode, the motion information, or the combination of the CU structure, the mode and the motion information for the CU in the BL respectively or not.
5. The method of Claim 4, wherein the CU structure for the corresponding CU in the EL is scaled from the CU structure for the CU in the BL and used as an initial CU structure for the CU in the EL if the first flag indicates that said determining the CU structure for the corresponding CU in the EL is predicted based on the CU structure for the CU in the BL.
6. The method of Claim 5, wherein a split flag is incorporated to indicate whether a leaf CU of the corresponding CU in the EL is divided into sub-CUs.
7. The method of Claim 4, wherein the CU structure, the mode and the MVP information for the corresponding CU in the EL are scaled from the CU structure, the mode and the motion information for the CU in the BL if the first flag indicates that said determining the CU structure, the mode and the MVP information for the corresponding CU in the EL is predicted based on the CU structure, the mode and the motion information for the CU in the BL.
8. The method of Claim 1, wherein the CU structure is CU split flag or residual quad-tree split flag; wherein the mode is skip mode, merge mode, or intra mode; and wherein the motion information comprises one or a combination of inter prediction direction, reference picture index, motion vector, merge index, and MVP index when said determining the combination of the CU structure, the mode and the MVP information for the corresponding CU in the EL is based on the combination of the CU structure, the mode and the motion information for the CU in the BL respectively.
9. The method of Claim 1, where the CU is a leaf CU, and wherein said determining the mode or the MVP information for the corresponding CU in the EL is based on the mode or the motion information respectively for the CU in the BL.
10. The method of Claim 9, wherein said determining the mode or the MVP information for the corresponding CU in the EL based on the mode or the motion information for the CU in the BL respectively is performed if an encoder needs to encode the mode or the MVP information respectively for the corresponding CU in the EL.
11. The method of Claim 9, wherein said determining the mode or the MVP information for the corresponding CU in the EL based on the mode or the motion information for the CU in the BL respectively is performed if a decoder needs to decode the mode or the MVP information respectively for the corresponding CU in the EL.
12. The method of Claim 9, further comprising incorporating a first flag to indicate whether said determining the mode or the MVP information for the corresponding leaf CU in the EL is predicted based on the mode or the motion information respectively for the leaf CU in the BL or not.
13. The method of Claim 12, wherein the mode or the MVP information for the corresponding leaf CU in the EL is scaled from the CU structure for the leaf CU in the BL if the first flag indicates that said determining the mode or the MVP information for the corresponding leaf CU in the EL is predicted based on the mode or the motion information for the leaf CU in the BL.
14. The method of Claim 1, wherein the mode is skip mode, merge mode, or intra mode; and wherein the MVP information comprises one or a combination of an MVP candidate list, MVP candidate, order of the MVP candidate list, a merge candidate list, merge candidate, order of the merge candidate list, merge index, and MVP index.
15. An apparatus of CU (Coding Unit) structure coding, mode information coding or motion information coding for scalable video coding, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL, the apparatus comprising:
means for determining CU structure (Coding Unit structure), mode, motion information, or a combination of the CU structure, the mode and the motion information for a CU (Coding Unit) in the BL; and
means for determining CU structure, mode, motion vector predictor (MVP) information, or the combination of the CU structure, the mode and the MVP information for a corresponding CU in the EL based on the CU structure, the mode, the motion information, or the combination of the CU structure, the mode and the motion information for the CU in the BL respectively.
16. The apparatus of Claim 15, further comprising means for incorporating a first flag to indicate whether said determining the CU structure, the mode, the MVP information, or the combination of the CU structure, the mode and the MVP information for the corresponding CU in the EL is predicted based on the CU structure, the mode, the motion information, or the combination of the CU structure, the mode and the motion information for the CU in the BL respectively or not.
17. The apparatus of Claim 15, wherein the CU structure is CU split flag or residual quad-tree split flag; wherein the mode is skip mode, merge mode, or intra mode; and wherein the motion information comprises one or a combination of inter prediction direction, reference picture index, motion vector, merge index, and MVP index when said determining the combination of the CU structure, the mode and the MVP information for the corresponding CU in the EL is based on the combination of the CU structure, the mode and the motion information for the CU in the BL respectively.
18. The apparatus of Claim 15, where the CU is a leaf CU, and wherein said determining the mode and the MVP information for the corresponding CU in the EL is based on the mode and the motion information respectively for the CU in the BL.
19. A method of MVP (Motion Vector Prediction) derivation or merge candidate derivation for scalable video coding, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL, the method comprising:
determining motion information in the BL; and
deriving Motion Vector Predictor (MVP) candidates or merge candidates in the EL based on the motion information in the BL.
20. The method of Claim 19, wherein said deriving the Motion Vector Predictor (MVP) candidates or the merge candidates in the EL based on the motion information in the BL is performed when encoding or decoding of the video data needs to derive the MVP candidates or the merge candidates respectively in the EL.
21. The method of Claim 19, wherein an MVP candidate list in the EL includes at least one MV (motion vector) in the BL.
22. The method of Claim 21, wherein the MV of the BL comprises the MV of a corresponding PU in the BL, the MV of a neighboring PU of the corresponding PU in the BL, the MV of a merge candidate of the corresponding PU in the BL, the MVP of the corresponding PU in the BL, or a co-located MV of the corresponding PU in the BL.
23. The method of Claim 21, wherein the MV of the BL is scaled up for the MVP list according to a video resolution ratio between the EL and the BL.
24. The method of Claim 19, wherein at least a motion vector in the BL replaces at least an MVP candidate of an MVP candidate list in the EL or is added to the MVP candidate list in the EL.
25. The method of Claim 24, wherein the MV of the BL comprises the MV of a corresponding PU in the BL, the MV of a neighboring PU of the corresponding PU in the BL, the MV of a merge candidate of the corresponding PU in the BL, the MVP of the corresponding PU in the BL, or a co-located MV of the corresponding PU in the BL.
26. The method of Claim 24, wherein the MV of the BL is scaled up for the MVP candidate list according to a video resolution ratio between the EL and the BL.
27. An apparatus of MVP (Motion Vector Prediction) derivation or merge candidate derivation for scalable video coding, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL, the apparatus comprising:
means for determining motion information in the BL; and
means for deriving Motion Vector Predictor (MVP) candidates or merge candidates in the EL based on the motion information in the BL.
28. The apparatus of Claim 27, wherein at least a motion vector in the BL replaces at least an MVP candidate of an MVP candidate list in the EL or is added to the MVP candidate list in the EL.
29. A method of intra prediction mode derivation for scalable video coding, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL, the method comprising:
determining information of intra prediction mode of the BL; and
deriving intra prediction mode of the EL based on the information of the intra prediction mode of the BL.
30. The method of Claim 29, wherein said deriving the intra prediction mode of the EL based on the information of the intra prediction mode of the BL is performed when an encoder needs to encode the intra prediction mode of the EL.
31. The method of Claim 29, wherein said deriving the intra prediction mode of the EL based on the information of the intra prediction mode of the BL is performed when a decoder needs to decode the intra prediction mode of the EL.
32. The method of Claim 29, wherein the intra prediction mode of the BL is added to an MPM (Most Probable Mode) list for the EL.
33. The method of Claim 29, wherein the information of the intra prediction mode of the BL comprises one or a combination of an intra prediction mode of a corresponding PU (Prediction Unit) in the BL, a neighboring direction mode of intra prediction mode of the BL, and a neighboring direction mode of intra prediction mode or intra prediction mode of a neighboring PU of the corresponding PU in the BL.
34. The method of Claim 29, wherein order of an MPM (Most Probable Mode) list for the EL is changed adaptively according to the information of the intra prediction mode of the BL.
35. The method of Claim 29, wherein a codeword for a remaining mode associated with the intra prediction mode of the EL is dependent on a prediction direction of the remaining mode; and wherein the codeword is shorter if the prediction direction of the remaining mode is closer to the prediction direction of the intra prediction mode of the BL.
36. The method of Claim 29, wherein the intra prediction mode is luma intra prediction mode or chroma intra prediction mode.
37. An apparatus of intra prediction mode derivation for scalable video coding, wherein video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL, the apparatus comprising:
means for determining information of intra prediction mode of the BL; and
means for deriving intra prediction mode of the EL based on the information of the intra prediction mode of the BL.
PCT/CN2012/076316 2011-06-10 2012-05-31 Method and apparatus of scalable video coding WO2012167711A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US14/005,555 US9860528B2 (en) 2011-06-10 2012-05-31 Method and apparatus of scalable video coding
CN201280024337.5A CN103621081B (en) 2011-06-10 2012-05-31 Scalable video coding method and device
EP12796116.7A EP2719181A4 (en) 2011-06-10 2012-05-31 Method and apparatus of scalable video coding
RU2013154579/08A RU2575411C2 (en) 2011-06-10 2012-05-31 Method and apparatus for scalable video coding
BR112013031215A BR112013031215B8 (en) 2011-06-10 2012-05-31 SCALABLE VIDEO CODING METHOD AND DEVICE

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201161495740P 2011-06-10 2011-06-10
US61/495,740 2011-06-10
US201161567774P 2011-12-07 2011-12-07
US61/567,774 2011-12-07

Publications (1)

Publication Number Publication Date
WO2012167711A1 true WO2012167711A1 (en) 2012-12-13

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/076316 WO2012167711A1 (en) 2011-06-10 2012-05-31 Method and apparatus of scalable video coding

Country Status (5)

Country Link
US (1) US9860528B2 (en)
EP (1) EP2719181A4 (en)
CN (2) CN103621081B (en)
BR (1) BR112013031215B8 (en)
WO (1) WO2012167711A1 (en)

Cited By (22)

Publication number Priority date Publication date Assignee Title
US20130003847A1 (en) * 2011-06-30 2013-01-03 Danny Hong Motion Prediction in Scalable Video Coding
JP2013102296A (en) * 2011-11-07 2013-05-23 Canon Inc Motion vector encoder, motion vector encoding method and program, motion vector decoder, and motion vector decoding method and program
GB2499865A (en) * 2012-03-02 2013-09-04 Canon Kk Method and Devices for Encoding/Decoding an Enhancement Layer Intra Image in a Scalable Video Bit-stream
US20140086329A1 (en) * 2012-09-27 2014-03-27 Qualcomm Incorporated Base layer merge and amvp modes for video coding
GB2506592A (en) * 2012-09-28 2014-04-09 Canon Kk Motion Vector Prediction in Scalable Video Encoder and Decoder
US20140185665A1 (en) * 2012-12-28 2014-07-03 Qualcomm Incorporated High-frequency-pass sample adaptive offset in video coding
WO2014106685A1 (en) * 2013-01-04 2014-07-10 Nokia Corporation An apparatus, a method and a computer program for video coding and decoding
KR20150012223A (en) * 2013-07-24 2015-02-03 삼성전자주식회사 Method and apparatus for determining motion vector
EP2793465A4 (en) * 2011-12-15 2015-07-22 Sony Corp Image processing device and image processing method
EP2829066A4 (en) * 2012-03-22 2015-11-11 Mediatek Inc Method and apparatus of scalable video coding
CN105144720A (en) * 2013-01-04 2015-12-09 Ge视频压缩有限责任公司 Efficient scalable coding concept
US9237359B2 (en) 2013-09-25 2016-01-12 Qualcomm Incorporated Filtering video data in video coding
CN105284113A (en) * 2013-06-17 2016-01-27 高通股份有限公司 Inter-component filtering
EP2903281A4 (en) * 2012-09-28 2016-04-27 Sony Corp Encoding device, encoding method, decoding device, and decoding method
CN105580374A (en) * 2013-09-27 2016-05-11 高通股份有限公司 Inter-view dependency type in MV-HEVC
CN105872538A (en) * 2016-04-18 2016-08-17 广东中星电子有限公司 Time-domain filtering method and time-domain filtering device
US9479778B2 (en) 2012-08-13 2016-10-25 Qualcomm Incorporated Device and method for coding video information using base layer motion vector candidate
US9826244B2 (en) 2013-01-08 2017-11-21 Qualcomm Incorporated Device and method for scalable coding of video information based on high efficiency video coding
CN108055542A (en) * 2012-12-21 2018-05-18 杜比实验室特许公司 High-precision up-sampling in the scalable coding of high bit depth video
US10194158B2 (en) 2012-09-04 2019-01-29 Qualcomm Incorporated Transform basis adjustment in scalable video coding
US11582473B2 (en) 2013-04-08 2023-02-14 Ge Video Compression, Llc Coding concept allowing efficient multi-view/layer coding
JP7524259B2 (en) 2013-01-04 2024-07-29 ジーイー ビデオ コンプレッション エルエルシー An efficient scalable coding concept.

Families Citing this family (29)

Publication number Priority date Publication date Assignee Title
USRE47366E1 (en) 2011-06-23 2019-04-23 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
EP4228264B1 (en) 2011-06-23 2024-07-31 Sun Patent Trust Image decoding device, image encoding device
KR102067683B1 (en) 2011-06-24 2020-01-17 선 페이턴트 트러스트 Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
CN106878722B (en) * 2011-06-24 2019-11-12 太阳专利托管公司 Coding/decoding method, decoding apparatus, coding method, code device
RU2608244C2 (en) 2011-06-27 2017-01-17 Сан Пэтент Траст Image encoding method, image decoding method, image encoding device, image decoding device and apparatus for encoding and decoding images
MX2014000046A (en) 2011-06-28 2014-02-17 Samsung Electronics Co Ltd Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor.
CN103563377B (en) 2011-06-28 2017-05-10 太阳专利托管公司 Decoding method and decoding device
MX2013010892A (en) 2011-06-29 2013-12-06 Panasonic Corp Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device.
AU2012277219A1 (en) 2011-06-30 2013-09-19 Sun Patent Trust Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
MY167090A (en) 2011-06-30 2018-08-10 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
RU2604680C2 (en) 2011-07-11 2016-12-10 Сан Пэтент Траст Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding and decoding device
JP5810700B2 (en) * 2011-07-19 2015-11-11 ソニー株式会社 Image processing apparatus and image processing method
KR20130085088A (en) * 2012-01-19 2013-07-29 한국전자통신연구원 Method for fast mode decision in scalable video coding and apparatus thereof
US20150043639A1 (en) * 2012-03-20 2015-02-12 Samsung Electronics Co., Ltd. Method and device for coding scalable video on basis of coding unit of tree structure, and method and device for decoding scalable video on basis of coding unit of tree structure
US9420285B2 (en) * 2012-04-12 2016-08-16 Qualcomm Incorporated Inter-layer mode derivation for prediction in scalable video coding
US9179145B2 (en) * 2012-07-02 2015-11-03 Vidyo, Inc. Cross layer spatial intra prediction
US10277908B2 (en) * 2012-09-25 2019-04-30 Intel Corporation Inter-layer sample adaptive filter parameters re-use for scalable video coding
US9596465B2 (en) 2013-01-04 2017-03-14 Intel Corporation Refining filter for inter layer prediction of scalable video coding
US20140198846A1 (en) * 2013-01-16 2014-07-17 Qualcomm Incorporated Device and method for scalable coding of video information
JP6625165B2 (en) 2013-02-14 2019-12-25 Canon Inc. Image processing apparatus, image processing method, and program
JP6362333B2 (en) * 2013-02-14 2018-07-25 Canon Inc. Image processing apparatus, image processing method, and program
JP6569677B2 (en) * 2014-08-28 2019-09-04 NEC Corporation Block size determination method and program
US10057574B2 (en) * 2015-02-11 2018-08-21 Qualcomm Incorporated Coding tree unit (CTU) level adaptive loop filter (ALF)
US10681371B2 (en) * 2015-06-07 2020-06-09 Lg Electronics Inc. Method and device for performing deblocking filtering
US10965955B2 (en) * 2016-12-22 2021-03-30 Mediatek Inc. Method and apparatus of motion refinement for video coding
US20190045198A1 (en) * 2017-12-28 2019-02-07 Intel Corporation Region adaptive data-efficient generation of partitioning and mode decisions for video encoding
US11039173B2 (en) * 2019-04-22 2021-06-15 Arlo Technologies, Inc. Method of communicating video from a first electronic device to a second electronic device via a network, and a system having a camera and a mobile electronic device for performing the method
KR102550503B1 (en) * 2019-06-25 2023-06-30 Nippon Hoso Kyokai Encoding device, decoding device, and program
US12033305B2 (en) * 2019-12-17 2024-07-09 Stmicroelectronics (Grenoble 2) Sas Filtering device, associated system and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050157794A1 (en) * 2004-01-16 2005-07-21 Samsung Electronics Co., Ltd. Scalable video encoding method and apparatus supporting closed-loop optimization
CN101167364A (en) * 2005-03-10 2008-04-23 Qualcomm Incorporated Scalable video coding with two layer encoding and single layer decoding

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6980596B2 (en) 2001-11-27 2005-12-27 General Instrument Corporation Macroblock level adaptive frame/field coding for digital video content
CN102695054B (en) 2004-07-20 2015-01-28 Qualcomm Incorporated Method and apparatus for motion vector prediction in temporal video compression
DE102004059993B4 (en) * 2004-10-15 2006-08-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a coded video sequence using interlayer motion data prediction, and computer program and computer readable medium
KR100888962B1 (en) 2004-12-06 2009-03-17 LG Electronics Inc. Method for encoding and decoding video signal
KR100896279B1 (en) 2005-04-15 2009-05-07 LG Electronics Inc. Method for scalably encoding and decoding video signal
KR100772873B1 (en) 2006-01-12 2007-11-02 Samsung Electronics Co., Ltd. Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
EP2118852B1 (en) 2007-03-07 2011-11-30 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Concept for synthesizing texture in a video sequence
US8503527B2 (en) 2008-10-03 2013-08-06 Qualcomm Incorporated Video coding with large macroblocks
CN101888553B (en) 2010-06-30 2012-01-11 Hong Kong Applied Science and Technology Research Institute Company Limited Scalable video coding method and device
US20130222539A1 (en) * 2010-10-08 2013-08-29 Dolby Laboratories Licensing Corporation Scalable frame compatible multiview encoding and decoding methods
US9247249B2 (en) * 2011-04-20 2016-01-26 Qualcomm Incorporated Motion vector prediction in video coding

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050157794A1 (en) * 2004-01-16 2005-07-21 Samsung Electronics Co., Ltd. Scalable video encoding method and apparatus supporting closed-loop optimization
CN101167364A (en) * 2005-03-10 2008-04-23 Qualcomm Incorporated Scalable video coding with two layer encoding and single layer decoding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2719181A4 *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130003847A1 (en) * 2011-06-30 2013-01-03 Danny Hong Motion Prediction in Scalable Video Coding
JP2013102296A (en) * 2011-11-07 2013-05-23 Canon Inc Motion vector encoder, motion vector encoding method and program, motion vector decoder, and motion vector decoding method and program
EP2793465A4 (en) * 2011-12-15 2015-07-22 Sony Corp Image processing device and image processing method
GB2499865B (en) * 2012-03-02 2016-07-06 Canon Kk Method and devices for encoding a sequence of images into a scalable video bit-stream, and decoding a corresponding scalable video bit-stream
GB2499865A (en) * 2012-03-02 2013-09-04 Canon Kk Method and Devices for Encoding/Decoding an Enhancement Layer Intra Image in a Scalable Video Bit-stream
EP2829066A4 (en) * 2012-03-22 2015-11-11 Mediatek Inc Method and apparatus of scalable video coding
US10003810B2 (en) 2012-03-22 2018-06-19 Mediatek Inc. Method and apparatus of scalable video coding
US9479778B2 (en) 2012-08-13 2016-10-25 Qualcomm Incorporated Device and method for coding video information using base layer motion vector candidate
US10194158B2 (en) 2012-09-04 2019-01-29 Qualcomm Incorporated Transform basis adjustment in scalable video coding
US9491459B2 (en) 2012-09-27 2016-11-08 Qualcomm Incorporated Base layer merge and AMVP modes for video coding
US20140086329A1 (en) * 2012-09-27 2014-03-27 Qualcomm Incorporated Base layer merge and amvp modes for video coding
WO2014052631A1 (en) * 2012-09-27 2014-04-03 Qualcomm Incorporated Base layer merge and amvp modes for video coding
GB2506592B (en) * 2012-09-28 2017-06-14 Canon Kk Method, device, and computer program for motion vector prediction in scalable video encoder and decoder
EP2903281A4 (en) * 2012-09-28 2016-04-27 Sony Corp Encoding device, encoding method, decoding device, and decoding method
GB2506592A (en) * 2012-09-28 2014-04-09 Canon Kk Motion Vector Prediction in Scalable Video Encoder and Decoder
CN108055542A (en) * 2012-12-21 2018-05-18 Dolby Laboratories Licensing Corporation High precision up-sampling in scalable coding of high bit-depth video
CN108055542B (en) * 2012-12-21 2021-08-13 Dolby Laboratories Licensing Corporation High precision up-sampling in scalable coding of high bit-depth video
US11792416B2 (en) 2012-12-21 2023-10-17 Dolby Laboratories Licensing Corporation High precision up-sampling in scalable coding of high bit-depth video
US11570455B2 (en) 2012-12-21 2023-01-31 Dolby Laboratories Licensing Corporation High precision up-sampling in scalable coding of high bit-depth video
US20140185665A1 (en) * 2012-12-28 2014-07-03 Qualcomm Incorporated High-frequency-pass sample adaptive offset in video coding
US11800131B2 (en) 2013-01-04 2023-10-24 Nokia Technologies Oy Apparatus, a method and a computer program for video coding and decoding
JP2022172146A (en) * 2013-01-04 2022-11-15 GE Video Compression, LLC Efficient scalable coding concept
US11677966B2 (en) 2013-01-04 2023-06-13 Ge Video Compression, Llc Efficient scalable coding concept
JP2016506196A (en) * 2013-01-04 2016-02-25 GE Video Compression, LLC Efficient scalable coding concept
JP2021078156A (en) * 2013-01-04 2021-05-20 GE Video Compression, LLC Efficient scalable coding concept
US9900609B2 (en) 2013-01-04 2018-02-20 Nokia Technologies Oy Apparatus, a method and a computer program for video coding and decoding
CN105144720A (en) * 2013-01-04 2015-12-09 GE Video Compression, LLC Efficient scalable coding concept
JP7524259B2 (en) 2013-01-04 2024-07-29 GE Video Compression, LLC Efficient scalable coding concept
US10104386B2 (en) 2013-01-04 2018-10-16 Ge Video Compression, Llc Efficient scalable coding concept
US11025928B2 (en) 2013-01-04 2021-06-01 Ge Video Compression, Llc Efficient scalable coding concept
CN105144720B (en) * 2013-01-04 2018-12-28 GE Video Compression, LLC Efficient scalable coding concept
WO2014106685A1 (en) * 2013-01-04 2014-07-10 Nokia Corporation An apparatus, a method and a computer program for video coding and decoding
JP2019050595A (en) * 2013-01-04 2019-03-28 GE Video Compression, LLC Efficient scalable coding concept
JP7126332B2 (en) 2013-01-04 2022-08-26 GE Video Compression, LLC Efficient scalable coding concept
US10506247B2 (en) 2013-01-04 2019-12-10 Nokia Technologies Oy Apparatus, a method and a computer program for video coding and decoding
US10609396B2 (en) 2013-01-04 2020-03-31 Ge Video Compression, Llc Efficient scalable coding concept
US11153592B2 (en) 2013-01-04 2021-10-19 Nokia Technologies Oy Apparatus, a method and a computer program for video coding and decoding
US9826244B2 (en) 2013-01-08 2017-11-21 Qualcomm Incorporated Device and method for scalable coding of video information based on high efficiency video coding
US11582473B2 (en) 2013-04-08 2023-02-14 Ge Video Compression, Llc Coding concept allowing efficient multi-view/layer coding
CN105284113A (en) * 2013-06-17 2016-01-27 Qualcomm Incorporated Inter-component filtering
KR102216128B1 (en) * 2013-07-24 2021-02-16 Samsung Electronics Co., Ltd. Method and apparatus for determining motion vector
CN105594212B (en) * 2013-07-24 2019-04-16 Samsung Electronics Co., Ltd. Method for determining motion vector and apparatus therefor
EP3016392A4 (en) * 2013-07-24 2017-04-26 Samsung Electronics Co., Ltd. Method for determining motion vector and apparatus therefor
CN105594212A (en) * 2013-07-24 2016-05-18 Samsung Electronics Co., Ltd. Method for determining motion vector and apparatus therefor
KR20150012223A (en) * 2013-07-24 2015-02-03 Samsung Electronics Co., Ltd. Method and apparatus for determining motion vector
US9237359B2 (en) 2013-09-25 2016-01-12 Qualcomm Incorporated Filtering video data in video coding
CN105580374B (en) * 2013-09-27 2018-10-26 Qualcomm Incorporated Method for encoding and decoding multi-layer video data, video decoding apparatus, and storage medium
CN105580374A (en) * 2013-09-27 2016-05-11 Qualcomm Incorporated Inter-view dependency type in MV-HEVC
CN105872538B (en) * 2016-04-18 2020-12-29 Guangdong Zhongxing Microelectronics Co., Ltd. Time-domain filtering method and time-domain filtering device
CN105872538A (en) * 2016-04-18 2016-08-17 Guangdong Zhongxing Electronics Co., Ltd. Time-domain filtering method and time-domain filtering device

Also Published As

Publication number Publication date
EP2719181A1 (en) 2014-04-16
EP2719181A4 (en) 2015-04-15
RU2013154579A (en) 2015-07-20
BR112013031215A2 (en) 2017-06-20
CN103621081A (en) 2014-03-05
BR112013031215B8 (en) 2022-07-19
US9860528B2 (en) 2018-01-02
CN103621081B (en) 2016-12-21
US20140003495A1 (en) 2014-01-02
CN106851319B (en) 2020-06-19
BR112013031215B1 (en) 2022-06-28
CN106851319A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
AU2015230740B2 (en) Method and apparatus of scalable video coding
US9860528B2 (en) Method and apparatus of scalable video coding
EP2829066B1 (en) Method and apparatus of scalable video coding
US10091515B2 (en) Method and apparatus for intra mode derivation and coding in scalable video coding
CN108540806B (en) Method, apparatus, and computer-readable medium for encoding/decoding image
KR102219842B1 (en) Method and apparatus for inter-layer prediction based on temporal sub-layer information
WO2015053697A1 (en) Method and arrangement for transcoding a video bitstream
KR20140043240A (en) Method and apparatus for image encoding/decoding
KR102219841B1 (en) Method and Apparatus for Video Encoding and Video Decoding
KR20210022598A (en) Method and Apparatus for Video Encoding and Video Decoding
KR102271878B1 (en) Video encoding and decoding method and apparatus using the same
KR102418524B1 (en) Method and apparatus for image encoding/decoding
RU2575411C2 (en) Method and apparatus for scalable video coding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 12796116
    Country of ref document: EP
    Kind code of ref document: A1
REEP Request for entry into the european phase
    Ref document number: 2012796116
    Country of ref document: EP
WWE Wipo information: entry into national phase
    Ref document number: 14005555
    Country of ref document: US
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2013154579
    Country of ref document: RU
    Kind code of ref document: A
REG Reference to national code
    Ref country code: BR
    Ref legal event code: B01A
    Ref document number: 112013031215
    Country of ref document: BR
ENP Entry into the national phase
    Ref document number: 112013031215
    Country of ref document: BR
    Kind code of ref document: A2
    Effective date: 20131204