WO2020007261A9 - V refinement of video motion vectors in adjacent video data - Google Patents

V refinement of video motion vectors in adjacent video data

Info

Publication number
WO2020007261A9
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
coding
video
current
coding unit
Prior art date
Application number
PCT/CN2019/094218
Other languages
French (fr)
Other versions
WO2020007261A1 (en)
Inventor
Sriram Sethuraman
Jeeva Raj A
Sagar KOTECHA
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020007261A1
Publication of WO2020007261A9

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43Hardware specially adapted for motion estimation or compensation
    • H04N19/433Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access

Definitions

  • JVET-K0041-v2, JVET-M0148-v2 and JVET-L0173-v2 of the Joint Video Experts Team (JVET) , the contents of which are incorporated by reference, and of which the skilled person will be aware.
  • Embodiments of the present application generally relate to the field of video coding, such as HEVC, and more particularly to a method for concurrent processing of Coding units within a Coded Tree Block with decoder side motion vector refinement/derivation.
  • video data is generally compressed before being communicated across modern day telecommunications networks.
  • the size of a video could also be an issue when the video is stored on a storage device because memory resources may be limited.
  • Video compression devices often use software and/or hardware at the source to code the video data prior to transmission or storage, thereby decreasing the quantity of data needed to represent digital video images.
  • the compressed data is then received at the destination by a video decompression device that decodes the video data.
  • Embodiments of the present application provide apparatuses and methods for encoding and decoding. Particular embodiments are outlined in the attached independent claims, with other embodiments in the dependent claims.
  • a decoding method is provided, comprising: determining refined motion vectors of a spatially neighboring set to be available when the refined motion vectors have been computed, wherein the spatially neighboring set comprises at least one coding unit or sub-coding unit and the availability is determined for a current set comprising at least one coding unit or sub-coding unit.
  • the first method may comprise: partitioning a coding unit normatively into as many sub-coding units as the number of concurrency sets that the coding unit spans, so that the data pre-fetch and the decoder-side motion vector refinement and motion compensation for each sub-coding unit can happen independently of the other sub-coding units, but concurrently with other coding or sub-coding units that are part of a current concurrency set.
  • the method may be performed in a data pre-fetch stage of a current set; wherein the refined motion vectors have been computed in a pipeline stage of the current set, wherein the pipeline stage is an ahead stage of the data pre-fetch stage.
  • a method for using unrefined motion vectors is provided, the unrefined motion vectors being for video data of a neighbor coding unit (CU) that falls in a pipeline slot (also described herein as a processing stage or cycle) preceding the current coding unit’s concurrency set, to perform pre-fetch, and for using the refined motion vectors of a neighbor CU that does not fall within the same concurrency set as the starting motion vector center for refinement.
  • further, padded samples are used whenever the refinement requires samples that are not pre-fetched, the pre-fetch being based on a configurable additional sample fetch range around the pre-fetch center.
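As an illustration of the pre-fetch and padding behaviour described in the aspects above, the following Python sketch shows how a reference-sample window might be pre-fetched around the unrefined motion-vector center and how samples requested by the refinement outside that window can be served by border replication (padding). The function and parameter names (prefetch, search_range, extra_fetch, etc.) are illustrative assumptions, not terms defined in this document.

```python
import numpy as np

def prefetch(ref, x0, y0, w, h, mv, search_range, extra_fetch):
    """Pre-fetch reference samples for one CU/sub-CU around its unrefined MV.

    The window is centred on the block position displaced by the unrefined MV
    and enlarged by the refinement search range plus a configurable additional
    fetch margin.  Returns the window origin and the fetched sample buffer."""
    margin = search_range + extra_fetch
    wx, wy = x0 + mv[0] - margin, y0 + mv[1] - margin
    ww, wh = w + 2 * margin, h + 2 * margin
    H, W = ref.shape
    # clamp against the picture borders and copy; hardware would DMA this region
    xs = np.clip(np.arange(wx, wx + ww), 0, W - 1)
    ys = np.clip(np.arange(wy, wy + wh), 0, H - 1)
    return (wx, wy), ref[np.ix_(ys, xs)]

def sample(buf, origin, x, y):
    """Read one sample, replicating (padding) the border of the pre-fetched
    window whenever the refinement asks for a sample outside of it."""
    wx, wy = origin
    h, w = buf.shape
    return buf[min(max(y - wy, 0), h - 1), min(max(x - wx, 0), w - 1)]
```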
  • a decoding apparatus which comprises modules/units/components/circuits to perform at least a part of the steps of the method in the first manner.
  • a decoding apparatus which comprises a memory storing instructions; and a processor coupled to the memory, the processor configured to execute the instructions stored in the memory to cause the processor to perform the method of the first manner.
  • a computer-readable storage medium is provided having a program recorded thereon, where the program makes the computer execute the method of the first manner.
  • in a sixth manner, a computer program is provided, which is configured to cause a computer to execute the method of the first manner.
  • any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
  • Decoder-side motion vector refinement affects concurrent processing of coding units when the refined motion vector of a coding unit becomes a predictor for another coding unit or a starting point for decoder-side refinement of another coding unit; in other words, when there is such a dependency, it can stall the pipeline.
  • the disclosed method determines the availability of refined motion vectors of coding units in such a way that ideally all coding units within a coded tree block can configure their data pre-fetch in a concurrent manner in a given stage of a regular (e.g. CTB-level) pipeline.
  • the corresponding one or more motion vectors of neighboring units are already available (i.e. completed) from an earlier stage in the processing cycle.
  • said available refined motion vectors can be used in obtaining a refined motion vector for another neighboring unit at a later stage.
  • Embodiments of the disclosure additionally perform their refinement process in a concurrent manner in a subsequent (e.g. the next) stage of the CTB-level pipeline.
  • the concept of a lag between the top CTB row and current CTB row is utilized in determining the availability. Also, the concept of a concurrency set is introduced to normatively partition some coding units, if necessary, into sub-coding-units to meet the concurrency requirements of the pipeline.
  • the approach disclosed herein provides a higher coding gain while ensuring that the dependency does not overly constrain the hardware implementation of the refinement process.
  • the pipeline latency is further reduced so that even the refined MVs of the left or top-right CTB can be used for refinement of the current CTB's CUs.
  • the process is also extended to finer granularities than CTB level.
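One possible reading of the availability rule summarized above is sketched below in Python. The data structures and the exact lag test are assumptions made for illustration only: the intent is that a neighbour's refined MV is used as the pre-fetch and refinement-start center only when its concurrency set is guaranteed to have passed its refinement stage (an earlier set in the same CTB row, or a CTB in a row above that leads the current row by at least the configured lag); otherwise the unrefined MV is used so that the pre-fetch stage never stalls.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

MV = Tuple[int, int]

@dataclass
class NeighbourCU:
    ctb_x: int                       # CTB column containing the neighbour
    ctb_y: int                       # CTB row containing the neighbour
    concurrency_set: int             # concurrency set the neighbour belongs to
    unrefined_mv: MV                 # merge/AMVP motion vector before refinement
    refined_mv: Optional[MV] = None  # set once refinement of that set completes

def mv_for_prefetch(nb: NeighbourCU, cur_ctb_x: int, cur_ctb_y: int,
                    cur_set: int, lag_ctbs: int) -> MV:
    """Choose the MV used as the pre-fetch centre / refinement starting point.

    The neighbour's refined MV is treated as available only when its
    refinement stage must have finished before the current set's pre-fetch:
      * same CTB row but an earlier concurrency set, or
      * a CTB row above the current one whose processing leads the current
        row by at least `lag_ctbs` CTB columns.
    Otherwise the unrefined MV is used so the pipeline does not stall."""
    if nb.ctb_y == cur_ctb_y:
        available = nb.concurrency_set < cur_set
    else:  # spatial neighbours lie in the current row or a row above it
        available = nb.ctb_x <= cur_ctb_x + lag_ctbs
    if available and nb.refined_mv is not None:
        return nb.refined_mv
    return nb.unrefined_mv
```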
  • FIG. 1A is a block diagram illustrating an example coding system that may implement embodiments of the invention.
  • FIG. 1B is a block diagram illustrating another example coding system that may implement embodiments of the invention.
  • FIG. 2 is a block diagram illustrating an example video encoder that may implement embodiments of the invention.
  • FIG. 3 is a block diagram illustrating an example of a video decoder that may implement embodiments of the invention.
  • FIG. 4 is a schematic diagram of a network device.
  • FIG. 5 is a simplified block diagram of an apparatus 500 that may be used as either or both of the source device 12 and the destination device 14 from FIG. 1A according to an exemplary embodiment.
  • FIG. 6 is a graphical illustration for MCP based on a translational motion model.
  • FIG. 7 is a graphical illustration for an overview block diagram of the HEVC inter-picture prediction.
  • FIG. 8 is a graphical illustration for the relationship between spatial MVP candidates and spatially neighboring blocks.
  • FIG. 9 is a graphical illustration for the derivation process flow for the two spatial candidates A and B.
  • FIG. 10 is a graphical illustration for CTU partitioning.
  • FIG. 11 is a graphical illustration for sub-CUs.
  • FIG. 12 is a graphical illustration for the relationship between CU and sub-CUs.
  • FIG. 13 is a graphical illustration for the bilateral matching to derive motion information of the current CU.
  • FIG. 14 is a graphical illustration for template matching to derive motion information of the current CU.
  • FIG. 15 is a graphical illustration for the motion associated to the block passing through a 4×4 block.
  • FIG. 16 is a graphical illustration for MV0′ and MV1′ .
  • FIG. 17 is a graphical illustration for a regular pipeline at CTB level for lag0.
  • FIG. 18 is a graphical illustration for the relationship between top row and current row.
  • FIG. 19 is a graphical illustration for another relationship between top row and current row.
  • FIG. 20a is a graphical illustration for another relationship between top row and current row.
  • FIG. 20b is an enlargement of the top half of FIG 20a.
  • FIG. 20c is an enlargement of the top half of FIG 20a.
  • FIG. 21 is a graphical illustration for a regular pipeline at CTB level for lag 2.
  • FIG. 22 is a graphical illustration for pattern.
  • FIG. 23 is a graphical illustration for ErrorSurface.
  • FIG. 1A is a block diagram illustrating an example coding system 10 that may utilize bidirectional prediction techniques.
  • the coding system 10 includes a source device 12 that provides encoded video data to be decoded at a later time by a destination device 14.
  • the source device 12 may provide the video data to destination device 14 via a computer-readable medium 16.
  • Source device 12 and destination device 14 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called “smart” phones, so-called “smart” pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming device, or the like.
  • source device 12 and destination device 14 may be equipped for wireless communication.
  • Computer-readable medium 16 may comprise any type of medium or device capable of moving the encoded video data from source device 12 to destination device 14.
  • computer-readable medium 16 may comprise a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real-time.
  • the encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to destination device 14.
  • the communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines.
  • the communication medium may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet.
  • the communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.
  • encoded data may be output from output interface 22 to a storage device.
  • encoded data may be accessed from the storage device by input interface.
  • the storage device may include any of a variety of distributed or locally accessed data storage media such as a hard drive, Blu-ray discs, digital video disks (DVD) s, Compact Disc Read-Only Memories (CD-ROMs) , flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data.
  • the storage device may correspond to a file server or another intermediate storage device that may store the encoded video generated by source device 12. Destination device 14 may access stored video data from the storage device via streaming or download.
  • the file server may be any type of server capable of storing encoded video data and transmitting that encoded video data to the destination device 14.
  • Example file servers include a web server (e.g., for a website) , a file transfer protocol (FTP) server, network attached storage (NAS) devices, or a local disk drive.
  • Destination device 14 may access the encoded video data through any standard data connection, including an Internet connection. This may include a wireless channel (e.g., a Wi-Fi connection) , a wired connection (e.g., digital subscriber line (DSL) , cable modem, etc. ) , or a combination of both that is suitable for accessing encoded video data stored on a file server.
  • the transmission of encoded video data from the storage device may be a streaming transmission, a download transmission, or a combination thereof.
  • the techniques of this disclosure are not necessarily limited to wireless applications or settings.
  • the techniques may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions, such as dynamic adaptive streaming over HTTP (DASH) , digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications.
  • coding system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
  • source device 12 includes video source 18, video encoder 20, and output interface 22.
  • Destination device 14 includes input interface 28, video decoder 30, and display device 32.
  • video encoder 20 of source device 12 and/or the video decoder 30 of the destination device 14 may be configured to apply the techniques for bidirectional prediction.
  • a source device and a destination device may include other components or arrangements.
  • source device 12 may receive video data from an external video source, such as an external camera.
  • destination device 14 may interface with an external display device, rather than including an integrated display device.
  • the illustrated coding system 10 of FIG. 1A is merely one example.
  • Techniques for bidirectional prediction may be performed by any digital video encoding and/or decoding device.
  • While the techniques of this disclosure generally are performed by a video coding device, the techniques may also be performed by a video encoder/decoder, typically referred to as a “CODEC”.
  • the techniques of this disclosure may also be performed by a video preprocessor.
  • the video encoder and/or the decoder may be a graphics processing unit (GPU) or a similar device.
  • Source device 12 and destination device 14 are merely examples of such coding devices in which source device 12 generates coded video data for transmission to destination device 14.
  • source device 12 and destination device 14 may operate in a substantially symmetrical manner such that each of the source and destination devices 12, 14 includes video encoding and decoding components.
  • coding system 10 may support one-way or two-way video transmission between video devices 12, 14, e.g., for video streaming, video playback, video broadcasting, or video telephony.
  • Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed interface to receive video from a video content provider.
  • video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video.
  • source device 12 and destination device 14 may form so-called camera phones or video phones.
  • the techniques described in this disclosure may be applicable to video coding in general, and may be applied to wireless and/or wired applications.
  • the captured, pre-captured, or computer-generated video may be encoded by video encoder 20.
  • the encoded video information may then be output by output interface 22 onto a computer-readable medium 16.
  • Computer-readable medium 16 may include transient media, such as a wireless broadcast or wired network transmission, or storage media (that is, non-transitory storage media) , such as a hard disk, flash drive, compact disc, digital video disc, Blu-ray disc, or other computer-readable media.
  • a network server (not shown) may receive encoded video data from source device 12 and provide the encoded video data to destination device 14, e.g., via network transmission.
  • a computing device of a medium production facility such as a disc stamping facility, may receive encoded video data from source device 12 and produce a disc containing the encoded video data. Therefore, computer-readable medium 16 may be understood to include one or more computer-readable media of various forms, in various examples.
  • Input interface 28 of destination device 14 receives information from computer-readable medium 16.
  • the information of computer-readable medium 16 may include syntax information defined by video encoder 20, which is also used by video decoder 30, that includes syntax elements that describe characteristics and/or processing of blocks and other coded units, e.g., group of pictures (GOPs) .
  • Display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices such as a cathode ray tube (CRT) , a liquid crystal display (LCD) , a plasma display, an organic light emitting diode (OLED) display, or another type of display device.
  • Video encoder 20 and video decoder 30 may operate according to a video coding standard, such as the High Efficiency Video Coding (HEVC) standard presently under development, and may conform to the HEVC Test Model (HM) .
  • video encoder 20 and video decoder 30 may operate according to other proprietary or industry standards, such as the International Telecommunications Union Telecommunication Standardization Sector (ITU-T) H. 264 standard, alternatively referred to as Motion Picture Expert Group (MPEG) -4, Part 10, Advanced Video Coding (AVC) , H. 265/HEVC, or extensions of such standards.
  • video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate multiplexer-demultiplexer (MUX-DEMUX) units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams.
  • MUX-DEMUX units may conform to the ITU H. 223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP) .
  • a coding tree unit (CTU) is the basic processing unit, which conceptually corresponds in structure to the macroblock units that were used in several previous video standards.
  • HEVC initially divides the picture into CTUs which are then divided for each luma/chroma component into coding tree blocks (CTBs) .
  • a CTB can be 64×64, 32×32, or 16×16 with a larger pixel block size usually increasing the coding efficiency.
  • CTBs are then divided into one or more coding units (CUs) of the same or smaller size, so that the CTU size is also the largest coding unit size.
  • The arrangement of CUs in a CTB is known as a quadtree since a subdivision results in four smaller regions. CUs are then divided into prediction units (PUs) and/or transform units (TUs) of either intra-picture or inter-picture prediction type, which can vary in size from 64×64 to 4×4.
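The quadtree arrangement of CUs within a CTB can be pictured with a short recursive routine. This is only a structural sketch: the split decision callback stands in for parsed split flags at the decoder or the rate-distortion decision at the encoder.

```python
def split_ctb(x, y, size, min_cu_size, should_split):
    """Recursively split a CTB into CUs following a quadtree: every split
    produces four square sub-regions of half the size.

    `should_split(x, y, size)` is a stand-in for the parsed split flag
    (decoder) or the rate-distortion based decision (encoder)."""
    if size > min_cu_size and should_split(x, y, size):
        half = size // 2
        cus = []
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            cus += split_ctb(x + dx, y + dy, half, min_cu_size, should_split)
        return cus
    return [(x, y, size)]  # leaf CU: top-left position and size

# Example: split a 64x64 CTB uniformly down to 16x16 leaf CUs.
cus = split_ctb(0, 0, 64, 8, lambda x, y, s: s > 16)
print(len(cus))  # 16 leaf CUs
```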
  • Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs) , application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , discrete logic, software, hardware, firmware or any combinations thereof.
  • a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure.
  • Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device.
  • a device including video encoder 20 and/or video decoder 30 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone.
  • Fig. 1B is an illustrative diagram of an example video coding system 40 including encoder 20 of fig. 2 and/or decoder 30 of fig. 3 according to an exemplary embodiment.
  • the system 40 can implement techniques of this present application.
  • video coding system 40 may include imaging device (s) 41, video encoder 20, video decoder 30 (and/or a video coder implemented via logic circuitry 54 of processing unit (s) 46) , an antenna 42, one or more processor (s) 43, one or more memory store (s) 44, and/or a display device 45.
  • imaging device (s) 41, antenna 42, processing unit (s) 46, logic circuitry 54, video encoder 20, video decoder 30, processor (s) 43, memory store (s) 44, and/or display device 45 may be capable of communication with one another.
  • video coding system 40 may include only video encoder 20 or only video decoder 30 in various examples.
  • video coding system 40 may include antenna 42. Antenna 42 may be configured to transmit or receive an encoded bitstream of video data, for example. Further, in some examples, video coding system 40 may include display device 45. Display device 45 may be configured to present video data. As shown, in some examples, logic circuitry 54 may be implemented via processing unit (s) 46. Processing unit (s) 46 may include application-specific integrated circuit (ASIC) logic, graphics processor (s) , general purpose processor (s) , or the like. Video coding system 40 also may include optional processor (s) 43, which may similarly include application-specific integrated circuit (ASIC) logic, graphics processor (s) , general purpose processor (s) , or the like.
  • logic circuitry 54 may be implemented via hardware, video coding dedicated hardware, or the like, and processor (s) 43 may implement general purpose software, operating systems, or the like.
  • memory store (s) 44 may be any type of memory such as volatile memory (e.g., Static Random Access Memory (SRAM) , Dynamic Random Access Memory (DRAM) , etc. ) or non-volatile memory (e.g., flash memory, etc. ) , and so forth.
  • memory store (s) 44 may be implemented by cache memory.
  • logic circuitry 54 may access memory store (s) 44 (for implementation of an image buffer for example) .
  • logic circuitry 47 and/or processing unit (s) 46 may include memory stores (e.g., cache or the like) for the implementation of an image buffer or the like.
  • video encoder 20 implemented via logic circuitry may include an image buffer (e.g., via either processing unit (s) 46 or memory store (s) 44) and a graphics processing unit (e.g., via processing unit (s) 46) .
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include video encoder 20 as implemented via logic circuitry 47 to embody the various modules as discussed with respect to FIG. 2 and/or any other encoder system or subsystem described herein.
  • the logic circuitry may be configured to perform the various operations as discussed herein.
  • Video decoder 30 may be implemented in a similar manner as implemented via logic circuitry 47 to embody the various modules as discussed with respect to decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein.
  • video decoder 30, as implemented via logic circuitry, may include an image buffer (e.g., via either processing unit (s) 420 or memory store (s) 44) and a graphics processing unit (e.g., via processing unit (s) 46) .
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include video decoder 30 as implemented via logic circuitry 47 to embody the various modules as discussed with respect to FIG. 3 and/or any other decoder system or subsystem described herein.
  • antenna 42 of video coding system 40 may be configured to receive an encoded bitstream of video data.
  • the encoded bitstream may include data, indicators, index values, mode selection data, or the like associated with encoding a video frame as discussed herein, such as data associated with the coding partition (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed) , and/or data defining the coding partition) .
  • Video coding system 40 may also include video decoder 30 coupled to antenna 42 and configured to decode the encoded bitstream.
  • the display device 45 is configured to present video frames.
  • FIG. 2 is a block diagram illustrating an example of video encoder 20 that may implement the techniques of the present application.
  • Video encoder 20 may perform intra-and inter-coding of video blocks within video slices.
  • Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame or picture.
  • Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames or pictures of a video sequence.
  • Intra-mode may refer to any of several spatial based coding modes.
  • Inter-modes such as uni-directional prediction (P mode) or bi-prediction (B mode) , may refer to any of several temporal-based coding modes.
  • video encoder 20 receives a current video block within a video frame to be encoded.
  • video encoder 20 includes mode select unit 40, reference frame memory 64, summer 50, transform processing unit 52, quantization unit 54, and entropy coding unit 56.
  • Mode select unit 40 includes motion compensation unit 44, motion estimation unit 42, intra-prediction unit 46, and partition unit 48.
  • video encoder 20 also includes inverse quantization unit 58, inverse transform unit 60, and summer 62.
  • a deblocking filter (not shown in FIG. 2) may also be included to filter block boundaries to remove blockiness artifacts from reconstructed video. If desired, the deblocking filter would typically filter the output of summer 62. Additional filters (in loop or post loop) may also be used in addition to the deblocking filter. Such filters are not shown for brevity, but if desired, may filter the output of summer 50 (as an in-loop filter) .
  • video encoder 20 receives a video frame or slice to be coded.
  • the frame or slice may be divided into multiple video blocks.
  • Motion estimation unit 42 and motion compensation unit 44 perform inter-predictive coding of the received video block relative to one or more blocks in one or more reference frames to provide temporal prediction.
  • Intra-prediction unit 46 may alternatively perform intra-predictive coding of the received video block relative to one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial prediction.
  • Video encoder 20 may perform multiple coding passes, e.g., to select an appropriate coding mode for each block of video data.
  • partition unit 48 may partition blocks of video data into sub-blocks, based on evaluation of previous partitioning schemes in previous coding passes. For example, partition unit 48 may initially partition a frame or slice into largest coding units (LCUs) , and partition each of the LCUs into sub-coding units (sub-CUs) based on rate-distortion analysis (e.g., rate-distortion optimization) . Mode select unit 40 may further produce a quadtree data structure indicative of partitioning of a LCU into sub-CUs.
  • Leaf-node CUs of the quadtree may include one or more prediction units (PUs) and one or more transform units (TUs) .
  • a CU includes a coding node, PUs, and TUs associated with the coding node.
  • a size of the CU corresponds to a size of the coding node and is square in shape.
  • the size of the CU may range from 8×8 pixels up to the size of the treeblock with a maximum of 64×64 pixels or greater.
  • Each CU may contain one or more PUs and one or more TUs.
  • Syntax data associated with a CU may describe, for example, partitioning of the CU into one or more PUs. Partitioning modes may differ between whether the CU is skip or direct mode encoded, intra-prediction mode encoded, or inter-prediction mode encoded. PUs may be partitioned to be non-square in shape. Syntax data associated with a CU may also describe, for example, partitioning of the CU into one or more TUs according to a quadtree. In an embodiment, a CU, PU, or TU can be square or non-square (e.g., rectangular) in shape.
  • Mode select unit 40 may select one of the coding modes, intra or inter, e.g., based on error results, and provides the resulting intra-or inter-coded block to summer 50 to generate residual block data and to summer 62 to reconstruct the encoded block for use as a reference frame. Mode select unit 40 also provides syntax elements, such as motion vectors, intra-mode indicators, partition information, and other such syntax information, to entropy coding unit 56.
  • Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes.
  • Motion estimation performed by motion estimation unit 42, is the process of generating motion vectors, which estimate motion for video blocks.
  • a motion vector for example, may indicate the displacement of a PU of a video block within a current video frame or picture relative to a predictive block within a reference frame (or other coded unit) relative to the current block being coded within the current frame (or other coded unit) .
  • a predictive block is a block that is found to closely match the block to be coded, in terms of pixel difference, which may be determined by sum of absolute difference (SAD) , sum of square difference (SSD) , or other difference metrics.
  • video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in reference frame memory 64. For example, video encoder 20 may interpolate values of one-quarter pixel positions, one-eighth pixel positions, or other fractional pixel positions of the reference picture. Therefore, motion estimation unit 42 may perform a motion search relative to the full pixel positions and fractional pixel positions and output a motion vector with fractional pixel precision.
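As a simplified illustration of this block-matching step, the sketch below performs an exhaustive integer-pel search that minimises the SAD; a real encoder would additionally refine the result at half- and quarter-pel positions on interpolated reference samples and would normally use a faster search strategy. The function names and the search range are assumptions for illustration.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_pel_search(cur, ref, x0, y0, bw, bh, search_range):
    """Exhaustive integer-pel motion search around the collocated position.

    Returns the (dx, dy) displacement with the smallest SAD; fractional-pel
    refinement on interpolated samples would follow in a real encoder."""
    block = cur[y0:y0 + bh, x0:x0 + bw]
    best_mv, best_cost = (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            rx, ry = x0 + dx, y0 + dy
            if rx < 0 or ry < 0 or rx + bw > ref.shape[1] or ry + bh > ref.shape[0]:
                continue  # candidate block lies outside the reference picture
            cost = sad(block, ref[ry:ry + bh, rx:rx + bw])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```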
  • Motion estimation unit 42 calculates a motion vector for a PU of a video block in an inter-coded slice by comparing the position of the PU to the position of a predictive block of a reference picture.
  • the reference picture may be selected from a first reference picture list (List 0) or a second reference picture list (List 1) , each of which identify one or more reference pictures stored in reference frame memory 64.
  • Motion estimation unit 42 sends the calculated motion vector to entropy encoding unit 56 and motion compensation unit 44.
  • Motion compensation performed by motion compensation unit 44, may involve fetching or generating the predictive block based on the motion vector determined by motion estimation unit 42. Again, motion estimation unit 42 and motion compensation unit 44 may be functionally integrated, in some examples. Upon receiving the motion vector for the PU of the current video block, motion compensation unit 44 may locate the predictive block to which the motion vector points in one of the reference picture lists. Summer 50 forms a residual video block by subtracting pixel values of the predictive block from the pixel values of the current video block being coded, forming pixel difference values, as discussed below. In general, motion estimation unit 42 performs motion estimation relative to luma components, and motion compensation unit 44 uses motion vectors calculated based on the luma components for both chroma components and luma components. Mode select unit 40 may also generate syntax elements associated with the video blocks and the video slice for use by video decoder 30 in decoding the video blocks of the video slice.
  • Intra-prediction unit 46 may intra-predict a current block, as an alternative to the inter-prediction performed by motion estimation unit 42 and motion compensation unit 44, as described above. In particular, intra-prediction unit 46 may determine an intra-prediction mode to use to encode a current block. In some examples, intra-prediction unit 46 may encode a current block using various intra-prediction modes, e.g., during separate encoding passes, and intra-prediction unit 46 (or mode select unit 40, in some examples) may select an appropriate intra-prediction mode to use from the tested modes.
  • intra-prediction unit 46 may calculate rate-distortion values using a rate-distortion analysis for the various tested intra-prediction modes, and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes.
  • Rate-distortion analysis generally determines an amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as a bitrate (that is, a number of bits) used to produce the encoded block.
  • Intra-prediction unit 46 may calculate ratios from the distortions and rates for the various encoded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block.
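Rate-distortion selection of this kind is commonly expressed as a Lagrangian cost J = D + λ·R, and the mode with the smallest J is chosen. The sketch below illustrates that selection with made-up numbers; the Lagrangian form and the example values are generic encoder practice, not figures taken from this document.

```python
def best_mode(candidates, lam):
    """Pick the candidate with the lowest Lagrangian cost J = D + lam * R.

    `candidates` maps a mode name to (distortion, rate_in_bits); `lam` trades
    distortion against rate and typically depends on the quantisation step."""
    costs = {mode: d + lam * r for mode, (d, r) in candidates.items()}
    return min(costs, key=costs.get), costs

mode, costs = best_mode(
    {"intra_dc": (1450, 96), "intra_angular": (1210, 140), "inter_2Nx2N": (900, 210)},
    lam=2.0,
)
print(mode, costs[mode])  # inter_2Nx2N wins in this toy example (J = 1320)
```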
  • intra-prediction unit 46 may be configured to code depth blocks of a depth map using a depth modeling mode (DMM) .
  • Mode select unit 40 may determine whether an available DMM mode produces better coding results than an intra-prediction mode and the other DMM modes, e.g., using rate-distortion optimization (RDO) .
  • Data for a texture image corresponding to a depth map may be stored in reference frame memory 64.
  • Motion estimation unit 42 and motion compensation unit 44 may also be configured to inter-predict depth blocks of a depth map.
  • intra-prediction unit 46 may provide information indicative of the selected intra-prediction mode for the block to entropy coding unit 56.
  • Entropy coding unit 56 may encode the information indicating the selected intra-prediction mode.
  • Video encoder 20 may include in the transmitted bitstream configuration data, which may include a plurality of intra-prediction mode index tables and a plurality of modified intra-prediction mode index tables (also referred to as codeword mapping tables) , definitions of encoding contexts for various blocks, and indications of a most probable intra-prediction mode, an intra-prediction mode index table, and a modified intra-prediction mode index table to use for each of the contexts.
  • Video encoder 20 forms a residual video block by subtracting the prediction data from mode select unit 40 from the original video block being coded.
  • Summer 50 represents the component or components that perform this subtraction operation.
  • Transform processing unit 52 applies a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising residual transform coefficient values.
  • Transform processing unit 52 may perform other transforms which are conceptually similar to DCT. Wavelet transforms, integer transforms, sub-band transforms or other types of transforms could also be used.
  • Transform processing unit 52 applies the transform to the residual block, producing a block of residual transform coefficients.
  • the transform may convert the residual information from a pixel value domain to a transform domain, such as a frequency domain.
  • Transform processing unit 52 may send the resulting transform coefficients to quantization unit 54.
  • Quantization unit 54 quantizes the transform coefficients to further reduce bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. The degree of quantization may be modified by adjusting a quantization parameter.
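Conceptually, this step can be modelled as uniform scalar quantisation with a step size derived from the quantisation parameter; in HEVC the step size roughly doubles for every increase of 6 in QP. The sketch below is that conceptual model only, not the integer-exact quantiser of any standard.

```python
def qstep(qp):
    """Approximate quantisation step size: doubles every 6 QP steps (HEVC-style)."""
    return 2 ** ((qp - 4) / 6)

def quantize(coeffs, qp):
    """Uniform scalar quantisation of transform coefficients (conceptual sketch)."""
    step = qstep(qp)
    return [int(round(c / step)) for c in coeffs]

def dequantize(levels, qp):
    """Inverse quantisation: scale the levels back to the coefficient domain."""
    step = qstep(qp)
    return [lvl * step for lvl in levels]

levels = quantize([220.0, -37.5, 12.0, 3.0], qp=22)   # qstep(22) == 8
print(levels, dequantize(levels, qp=22))              # [28, -5, 2, 0] and [224.0, -40.0, 16.0, 0.0]
```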
  • quantization unit 54 may then perform a scan of the matrix including the quantized transform coefficients.
  • entropy encoding unit 56 may perform the scan.
  • entropy coding unit 56 entropy codes the quantized transform coefficients.
  • entropy coding unit 56 may perform context adaptive variable length coding (CAVLC) , context adaptive binary arithmetic coding (CABAC) , syntax-based context-adaptive binary arithmetic coding (SBAC) , probability interval partitioning entropy (PIPE) coding or another entropy coding technique.
  • context may be based on neighboring blocks.
  • the encoded bitstream may be transmitted to another device (e.g., video decoder 30) or archived for later transmission or retrieval.
  • Inverse quantization unit 58 and inverse transform unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block.
  • Motion compensation unit 44 may calculate a reference block by adding the residual block to a predictive block of one of the frames of reference frame memory 64. Motion compensation unit 44 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation.
  • Summer 62 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 44 to produce a reconstructed video block for storage in reference frame memory 64.
  • the reconstructed video block may be used by motion estimation unit 42 and motion compensation unit 44 as a reference block to inter-code a block in a subsequent video frame.
  • a non-transform based encoder 20 can quantize the residual signal directly without the transform processing unit 52 for certain blocks or frames.
  • an encoder 20 can have the quantization unit 54 and the inverse quantization unit 58 combined into a single unit.
  • FIG. 3 is a block diagram illustrating an example of video decoder 30 that may implement the techniques of this present application.
  • video decoder 30 includes an entropy decoding unit 70, motion compensation unit 72, intra-prediction unit 74, inverse quantization unit 76, inverse transformation unit 78, reference frame memory 82, and summer 80.
  • Video decoder 30 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 20 (FIG. 2) .
  • Motion compensation unit 72 may generate prediction data based on motion vectors received from entropy decoding unit 70, while intra-prediction unit 74 may generate prediction data based on intra-prediction mode indicators received from entropy decoding unit 70.
  • video decoder 30 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements from video encoder 20.
  • Entropy decoding unit 70 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors or intra-prediction mode indicators, and other syntax elements.
  • Entropy decoding unit 70 forwards the motion vectors and other syntax elements to motion compensation unit 72.
  • Video decoder 30 may receive the syntax elements at the video slice level and/or the video block level.
  • intra prediction unit 74 may generate prediction data for a video block of the current video slice based on a signaled intra prediction mode and data from previously decoded blocks of the current frame or picture.
  • motion compensation unit 72 produces predictive blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 70.
  • the predictive blocks may be produced from one of the reference pictures within one of the reference picture lists.
  • Video decoder 30 may construct the reference frame lists, List 0 and List 1, using default construction techniques based on reference pictures stored in reference frame memory 82.
  • Motion compensation unit 72 determines prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and uses the prediction information to produce the predictive blocks for the current video block being decoded. For example, motion compensation unit 72 uses some of the received syntax elements to determine a prediction mode (e.g., intra-or inter-prediction) used to code the video blocks of the video slice, an inter-prediction slice type (e.g., B slice, P slice, or GPB slice) , construction information for one or more of the reference picture lists for the slice, motion vectors for each inter-encoded video block of the slice, inter-prediction status for each inter-coded video block of the slice, and other information to decode the video blocks in the current video slice.
  • Motion compensation unit 72 may also perform interpolation based on interpolation filters. Motion compensation unit 72 may use interpolation filters as used by video encoder 20 during encoding of the video blocks to calculate interpolated values for sub-integer pixels of reference blocks. In this case, motion compensation unit 72 may determine the interpolation filters used by video encoder 20 from the received syntax elements and use the interpolation filters to produce predictive blocks.
  • Data for a texture image corresponding to a depth map may be stored in reference frame memory 82.
  • Motion compensation unit 72 may also be configured to inter-predict depth blocks of a depth map.
  • the coding system 10 of FIG. 1A is suitable for implementing various video coding or compression techniques.
  • Some video compression techniques such as inter prediction, intra prediction, and loop filters, have demonstrated to be effective. Therefore, the video compression techniques have been adopted into various video coding standards, such as H. 264/AVC and H. 265/HEVC.
  • motion vectors (MVs) for inter prediction may be derived using modes such as adaptive motion vector prediction (AMVP) and merge mode (MERGE) .
  • the MVs noted above are utilized in bi-prediction.
  • two prediction blocks are formed.
  • One prediction block is formed using a MV of list0 (referred to herein as MV0) .
  • Another prediction block is formed using a MV of list1 (referred to herein as MV1) .
  • the two prediction blocks are then combined (e.g., averaged) in order to form a single prediction signal (e.g., a prediction block or a predictor block) .
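The combination step can be written out directly: the sketch below averages the two prediction blocks, and also accepts unequal weights to model weighted bi-prediction. Real codecs perform this in integer arithmetic with a rounding offset and a right shift, so this is an illustrative approximation.

```python
import numpy as np

def bi_predict(pred0, pred1, w0=0.5, w1=0.5):
    """Combine the list0 and list1 prediction blocks into a single predictor.

    With w0 == w1 == 0.5 this is plain averaging; unequal weights model
    weighted bi-prediction (e.g. to compensate for fades)."""
    out = w0 * pred0.astype(np.float64) + w1 * pred1.astype(np.float64)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)  # assumes 8-bit samples
```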
  • the decoder 30 can be used to decode the compressed bitstream.
  • the decoder 30 can produce the output video stream without the loop filtering unit.
  • a non-transform based decoder 30 can inverse-quantize the residual signal directly without the inverse-transform processing unit 78 for certain blocks or frames.
  • the video decoder 30 can have the inverse-quantization unit 76 and the inverse-transform processing unit 78 combined into a single unit.
  • FIG. 4 is a schematic diagram of a network device 400 (e.g., a coding device) according to an embodiment of the disclosure.
  • the network device 400 is suitable for implementing the disclosed embodiments as described herein.
  • the network device 400 may be a decoder such as video decoder 30 of FIG. 1A or an encoder such as video encoder 20 of FIG. 1A.
  • the network device 400 may be one or more components of the video decoder 30 of FIG. 1A or the video encoder 20 of FIG. 1A as described above.
  • the network device 400 comprises ingress ports 410 and receiver units (Rx) 420 for receiving data; a processor, logic unit, or central processing unit (CPU) 430 to process the data; transmitter units (Tx) 440 and egress ports 450 for transmitting the data; and a memory 460 for storing the data.
  • the network device 400 may also comprise optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress ports 410, the receiver units 420, the transmitter units 440, and the egress ports 450 for egress or ingress of optical or electrical signals.
  • the processor 430 is implemented by hardware and software.
  • the processor 430 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor) , FPGAs, ASICs, and DSPs.
  • the processor 430 is in communication with the ingress ports 410, receiver units 420, transmitter units 440, egress ports 450, and memory 460.
  • the processor 430 comprises a coding module 470.
  • the coding module 470 implements the disclosed embodiments described above. For instance, the coding module 470 implements, processes, prepares, or provides the various coding operations. The inclusion of the coding module 470 therefore provides a substantial improvement to the functionality of the network device 400 and effects a transformation of the network device 400 to a different state.
  • the coding module 470 is implemented as instructions stored in the memory 460 and executed by the processor 430.
  • the memory 460 comprises one or more disks, tape drives, and solid-state drives and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
  • the memory 460 may be volatile and/or non-volatile and may be read-only memory (ROM) , random access memory (RAM) , ternary content-addressable memory (TCAM) , and/or static random-access memory (SRAM) .
  • Fig. 5 is a simplified block diagram of an apparatus 500 that may be used as either or both of the source device 12 and the destination device 14 from Fig. 1A according to an exemplary embodiment.
  • the apparatus 500 can implement techniques of this present application.
  • the apparatus 500 can be in the form of a computing system including multiple computing devices, or in the form of a single computing device, for example, a mobile phone, a tablet computer, a laptop computer, a notebook computer, a desktop computer, and the like.
  • a processor 502 in the apparatus 500 can be a central processing unit.
  • the processor 502 can be any other type of device, or multiple devices, capable of manipulating or processing information now-existing or hereafter developed.
  • Although the disclosed implementations can be practiced with a single processor as shown, e.g., the processor 502, advantages in speed and efficiency can be achieved using more than one processor.
  • a memory 504 in the apparatus 500 can be a read only memory (ROM) device or a random access memory (RAM) device in an implementation. Any other suitable type of storage device can be used as the memory 504.
  • the memory 504 can include code and data 506 that is accessed by the processor 502 using a bus 512.
  • the memory 504 can further include an operating system 508 and application programs 510, the application programs 510 including at least one program that permits the processor 502 to perform the methods described here.
  • the application programs 510 can include applications 1 through N, which further include a video coding application that performs the methods described here.
  • the apparatus 500 can also include additional memory in the form of a secondary storage 514, which can, for example, be a memory card used with a mobile computing device. Because the video communication sessions may contain a significant amount of information, they can be stored in whole or in part in the secondary storage 514 and loaded into the memory 504 as needed for processing.
  • the apparatus 500 can also include one or more output devices, such as a display 518.
  • the display 518 may be, in one example, a touch sensitive display that combines a display with a touch sensitive element that is operable to sense touch inputs.
  • the display 518 can be coupled to the processor 502 via the bus 512.
  • Other output devices that permit a user to program or otherwise use the apparatus 500 can be provided in addition to or as an alternative to the display 518.
  • Where the output device is or includes a display, the display can be implemented in various ways, including by a liquid crystal display (LCD) , a cathode-ray tube (CRT) display, a plasma display or light emitting diode (LED) display, such as an organic LED (OLED) display.
  • the apparatus 500 can also include or be in communication with an image-sensing device 520, for example a camera, or any other image-sensing device 520 now existing or hereafter developed that can sense an image such as the image of a user operating the apparatus 500.
  • the image-sensing device 520 can be positioned such that it is directed toward the user operating the apparatus 500.
  • the position and optical axis of the image-sensing device 520 can be configured such that the field of vision includes an area that is directly adjacent to the display 518 and from which the display 518 is visible.
  • the apparatus 500 can also include or be in communication with a sound-sensing device 522, for example a microphone, or any other sound-sensing device now existing or hereafter developed that can sense sounds near the apparatus 500.
  • the sound-sensing device 522 can be positioned such that it is directed toward the user operating the apparatus 500 and can be configured to receive sounds, for example, speech or other utterances, made by the user while the user operates the apparatus 500.
  • FIG. 5 depicts the processor 502 and the memory 504 of the apparatus 500 as being integrated into a single unit, other configurations can be utilized.
  • the operations of the processor 502 can be distributed across multiple machines (each machine having one or more of processors) that can be coupled directly or across a local area or other network.
  • the memory 504 can be distributed across multiple machines such as a network-based memory or memory in multiple machines performing the operations of the apparatus 500.
  • the bus 512 of the apparatus 500 can be composed of multiple buses.
  • the secondary storage 514 can be directly coupled to the other components of the apparatus 500 or can be accessed via a network and can comprise a single integrated unit such as a memory card or multiple units such as multiple memory cards.
  • the apparatus 500 can thus be implemented in a wide variety of configurations.
  • the present disclosure relates to inter-picture prediction, which makes use of the temporal correlation between pictures in order to derive a motion-compensated prediction (MCP) for a block of image samples.
  • a video picture is divided into rectangular blocks. Assuming homogeneous motion inside one block and that moving objects are larger than one block, for each block, a corresponding block in a previously decoded picture can be found that serves as a predictor.
  • the general concept of MCP based on a translational motion model is illustrated in Fig. 6.
  • In a translational motion model, the position of the block in a previously decoded picture is indicated by a motion vector (Δx, Δy) , where Δx specifies the horizontal and Δy specifies the vertical displacement relative to the position of the current block.
  • the motion vector (Δx, Δy) could be of fractional sample accuracy to more accurately capture the movement of the underlying object.
  • Interpolation is applied on the reference pictures to derive the prediction signal when the corresponding motion vector has fractional sample accuracy.
  • the previously decoded picture is referred to as the reference picture and indicated by a reference index Δt to a reference picture list.
  • These translational motion model parameters i.e. motion vectors and reference indices, are further referred to as motion data.
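A minimal motion-compensated prediction under the translational model is a displaced block copy. In the sketch below, bilinear interpolation stands in for the longer interpolation filters a real codec applies when (Δx, Δy) has fractional sample accuracy, and picture borders are handled by sample replication; it is a conceptual sketch only.

```python
import numpy as np

def mcp_translational(ref, x0, y0, bw, bh, dx, dy):
    """Motion-compensated prediction of a bw x bh block with a translational
    model: copy the block at the current position displaced by (dx, dy).

    Fractional displacements are handled with bilinear interpolation here;
    standardized codecs use longer separable interpolation filters."""
    ix, iy = int(np.floor(dx)), int(np.floor(dy))   # integer part of the MV
    fx, fy = dx - ix, dy - iy                       # fractional part

    H, W = ref.shape
    def s(x, y):                                    # border-replicated sample read
        return float(ref[min(max(y, 0), H - 1), min(max(x, 0), W - 1)])

    pred = np.empty((bh, bw))
    for j in range(bh):
        for i in range(bw):
            x, y = x0 + i + ix, y0 + j + iy
            a, b = s(x, y), s(x + 1, y)
            c, d = s(x, y + 1), s(x + 1, y + 1)
            pred[j, i] = ((1 - fx) * (1 - fy) * a + fx * (1 - fy) * b +
                          (1 - fx) * fy * c + fx * fy * d)
    return pred
```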
  • Two kinds of inter-picture prediction are allowed in modern video coding standards, namely uni-prediction and bi-prediction.
  • in bi-prediction, two sets of motion data, i.e. (Δx₀, Δy₀, Δt₀) and (Δx₁, Δy₁, Δt₁) , are used to generate two MCPs, possibly from different pictures, which are then combined to obtain the final prediction.
  • by default this is done by averaging, but in case of weighted prediction, different weights can be applied to the MCPs, e.g., to compensate for scene fade outs.
  • the reference pictures that can be used in bi-prediction are stored in two separate lists, namely list 0 and list 1.
  • the HEVC standard restricts PUs with 4×8 and 8×4 luma prediction blocks to use uni-prediction only.
  • Motion data is derived at the encoder using a motion estimation process. Motion estimation is not specified within video standards so different encoders can utilize different complexity-quality tradeoffs in their implementations.
  • An overview block diagram of the HEVC inter-picture prediction is shown in Fig. 7.
  • the motion data of a block is correlated with the neighboring blocks. To exploit this correlation, motion data is not directly coded in the bitstream but predictively coded based on neighboring motion data.
  • two concepts are used for that.
  • the predictive coding of the motion vectors was improved in HEVC by introducing a new tool called advanced motion vector prediction (AMVP) where the best predictor for each motion block is signaled to the decoder.
  • inter-prediction block merging derives all motion data of a block from the neighboring blocks replacing the direct and skip modes in H. 264/AVC.
  • Different kinds of inter prediction methods are implemented in the motion data coding module. Generally, these methods are called inter prediction modes. Several inter prediction modes are introduced in the following.
  • In Advanced Motion Vector Prediction (AMVP) , a motion vector predictor (MVP) is derived for the motion vector of the current block.
  • Motion vectors of the current block are usually correlated with the motion vectors of neighboring blocks in the current picture or in the earlier coded pictures. This is because neighboring blocks are likely to correspond to the same moving object with similar motion and the motion of the object is not likely to change abruptly over time. Consequently, using the motion vectors in neighboring blocks as predictors reduces the size of the signaled motion vector difference.
  • the MVPs are usually derived from already decoded motion vectors from spatial neighboring blocks or from temporally neighboring blocks in the co-located picture. In some cases, the zero motion vector can also be used as MVP. In H. 264/AVC, this is done by doing a component wise median of three spatially neighboring motion vectors. Using this approach, no signaling of the predictor is required.
  • Temporal MVPs from a co-located picture are only considered in the so called temporal direct mode of H. 264/AVC. The H.264/AVC direct modes are also used to derive other motion data than the motion vectors.
  • the variable coding quadtree block structure in HEVC can result in one block having several neighboring blocks with motion vectors as potential MVP candidates.
  • the initial design of AMVP includes five MVPs from three different classes of predictors: three motion vectors from spatial neighbors, the median of the three spatial predictors and a scaled motion vector from a co-located, temporally neighboring block.
  • the list of predictors was modified by reordering to place the most probable motion predictor in the first position and by removing redundant candidates to assure minimal signaling overhead.
  • the final design of the AMVP candidate list construction includes only two MVP candidates, selected from the following: a. up to two spatial candidate MVPs that are derived from five spatial neighboring blocks; b. one temporal candidate MVP derived from two temporal, co-located blocks when both spatial candidate MVPs are not available or when they are identical; c. zero motion vectors when the spatial, the temporal or both candidates are not available.
  • two spatial MVP candidates A and B are derived from five spatially neighboring blocks which are shown in the right part of Fig. 8.
  • the locations of the spatial candidate blocks are the same for both AMVP and inter-prediction block merging.
  • the derivation process flow for the two spatial candidates A and B is depicted in Fig. 9.
  • for candidate A, motion data from the two blocks A0 and A1 at the bottom left corner is taken into account in a two-pass approach. In the first pass, it is checked whether any of the candidate blocks contains a reference index that is equal to the reference index of the current block. The first motion vector found is taken as candidate A.
  • the scaling operation is basically the same scheme that is used for the temporal direct mode in H. 264/AVC. This factoring allows pre-computation of ScaleFactor at slice-level since it only depends on the reference picture list structure signaled in the slice header. Note that the MV scaling is only performed when the current reference picture and the candidate reference picture are both short term reference pictures. Parameter td is defined as the POC difference between the co-located picture and the reference picture of the co-located candidate block.
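As an illustration of the scaling just described, the following C++ sketch follows the commonly cited HEVC-style derivation, where tb denotes the POC distance between the current picture and its reference and td the POC distance for the candidate; the clipping ranges and helper names are assumptions for this example, not a normative specification.

    #include <algorithm>
    #include <cstdlib>

    // Hedged sketch of HEVC-style MV scaling for AMVP/TMVP candidates.
    static int clip3(int lo, int hi, int v) { return std::min(hi, std::max(lo, v)); }

    static int scaleMv(int mv, int tb, int td) {
        int tx = (16384 + (std::abs(td) >> 1)) / td;                 // depends only on td
        int scaleFactor = clip3(-4096, 4095, (tb * tx + 32) >> 6);   // the "ScaleFactor"
        int scaled = scaleFactor * mv;
        int sign = (scaled >= 0) ? 1 : -1;
        return clip3(-32768, 32767, sign * ((std::abs(scaled) + 127) >> 8));
    }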
  • for candidate B, the candidates B0 to B2 are checked sequentially in the same way as A0 and A1 were checked in the first pass.
  • the second pass is only performed when blocks A0 and A1 do not contain any motion information, i.e. are not available or coded using intra-picture prediction.
  • candidate A is set equal to the non-scaled candidate B, if found, and candidate B is set equal to a second, non-scaled or scaled variant of candidate B.
  • the second pass searches for non-scaled as well as for scaled MVs derived from candidates B0 to B2. Overall, this design allows A0 and A1 to be processed independently from B0, B1, and B2.
  • the derivation of B only needs to be aware of the availability of both A0 and A1 in order to search for a scaled or an additional non-scaled MV derived from B0 to B2. This dependency is acceptable given that it significantly reduces the complex motion vector scaling operations for candidate B. Reducing the number of motion vector scaling operations represents a significant complexity reduction in the motion vector predictor derivation process.
  • TMVP temporal motion vector predictor
  • HEVC offers the possibility to indicate for each picture which reference picture is considered as the co-located picture. This is done by signaling in the slice header the co-located reference picture list and reference picture index as well as requiring that these syntax elements in all slices in a picture should specify the same reference picture.
  • inter_pred_idc signals whether reference list 0, 1 or both are used.
  • the corresponding reference picture ( ⁇ t) is signaled by an index to the reference picture list, ref_idx_l0/1
  • the MV ( ⁇ x, ⁇ y) is represented by an index to the MVP, mvp_l0/1_flag, and its MVD.
  • a newly introduced flag in the slice header, mvd_l1_zero_flag indicates whether the MVD for the second reference picture list is equal to zero and therefore not signaled in the bitstream.
  • the AMVP list only contains motion vectors for one reference list while a merge candidate contains all motion data including the information whether one or two reference picture lists are used as well as a reference index and a motion vector for each list.
  • the merge candidate list is constructed based on the following candidates: a. up to four spatial merge candidates that are derived from five spatial neighboring blocks; b. one temporal merge candidate derived from two temporal, co-located blocks; c. additional merge candidates including combined bi-predictive candidates and zero motion vector candidates.
  • the first candidates in the merge candidate list are the spatial neighbors. Up to four candidates are inserted in the merge list by sequentially checking A1, B1, B0, A0 and B2, in that order, according to the right part of Fig. 8.
  • redundancy checks are performed before taking all the motion data of the neighboring block as a merge candidate. These redundancy checks can be divided into two categories for two different purposes: a. avoid having candidates with redundant motion data in the list; b. prevent merging two partitions that could be expressed by other means which would create redundant syntax.
  • with N being the number of spatial merge candidates, a complete redundancy check would consist of N· (N−1) /2 motion data comparisons; for N equal to five, ten motion data comparisons would be needed to assure that all candidates in the merge list have different motion data.
  • the checks for redundant motion data have been reduced to a subset in a way that the coding efficiency is kept while the comparison logic is significantly reduced.
  • no more than two comparisons are performed per candidate, resulting in five overall comparisons: given the order of {A1, B1, B0, A0, B2} , B1 only checks A1, B0 only checks B1, A0 only checks A1, and B2 only checks A1 and B1.
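The reduced checking order can be sketched as follows; this is an illustrative C++ fragment, not the normative process, and the MotionData structure and the treatment of unavailable blocks are assumptions for the example.

    #include <array>
    #include <vector>

    struct MotionData { int mvx[2], mvy[2], refIdx[2], interDir; };

    static bool sameMotion(const MotionData& a, const MotionData& b) {
        return a.interDir == b.interDir &&
               a.refIdx[0] == b.refIdx[0] && a.refIdx[1] == b.refIdx[1] &&
               a.mvx[0] == b.mvx[0] && a.mvy[0] == b.mvy[0] &&
               a.mvx[1] == b.mvx[1] && a.mvy[1] == b.mvy[1];
    }

    // Candidates are passed in checking order {A1, B1, B0, A0, B2}; a null
    // pointer marks an unavailable (or intra-coded) neighboring block.
    static std::vector<MotionData>
    spatialMergeCandidates(const std::array<const MotionData*, 5>& c) {
        enum { A1, B1, B0, A0, B2 };
        std::vector<MotionData> list;
        auto ok = [&](int i) { return c[i] != nullptr; };
        if (ok(A1)) list.push_back(*c[A1]);
        if (ok(B1) && !(ok(A1) && sameMotion(*c[B1], *c[A1]))) list.push_back(*c[B1]);  // B1 vs A1
        if (ok(B0) && !(ok(B1) && sameMotion(*c[B0], *c[B1]))) list.push_back(*c[B0]);  // B0 vs B1
        if (ok(A0) && !(ok(A1) && sameMotion(*c[A0], *c[A1]))) list.push_back(*c[A0]);  // A0 vs A1
        if (list.size() < 4 && ok(B2) &&
            !(ok(A1) && sameMotion(*c[B2], *c[A1])) &&
            !(ok(B1) && sameMotion(*c[B2], *c[B1]))) list.push_back(*c[B2]);            // B2 vs A1, B1
        return list;
    }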
  • the second category of check prevents, for example, the bottom PU of a 2N×N partitioning from being merged with the top one by choosing candidate B1. This would result in one CU with two PUs having the same motion data, which could equally be signaled as a 2N×2N CU.
  • this check applies for all second PUs of the rectangular and asymmetric partitions 2N ⁇ N, 2N ⁇ nU, 2N ⁇ nD, N ⁇ 2N, nR ⁇ 2N and nL ⁇ 2N. It is noted that for the spatial merge candidates, only the redundancy checks are performed and the motion data is copied from the candidate blocks as it is. Hence, no motion vector scaling is needed here.
  • the derivation of the motion vectors for the temporal merge candidate is the same as for the TMVP. Since a merge candidate comprises all motion data and the TMVP is only one motion vector, the derivation of the whole motion data only depends on the slice type. For bi-predictive slices, a TMVP is derived for each reference picture list. Depending on the availability of the TMVP for each list, the prediction type is set to bi-prediction or to the list for which the TMVP is available. All associated reference picture indices are set equal to zero. Consequently for uni-predictive slices, only the TMVP for list 0 is derived together with the reference picture index equal to zero.
  • the length of the merge candidate list is fixed. After the spatial and the temporal merge candidates have been added, it can happen that the list does not yet have the fixed length. In order to compensate for the coding efficiency loss that comes along with the non-length adaptive list index signaling, additional candidates are generated. Depending on the slice type, up to two kinds of candidates are used to fully populate the list: a. combined bi-predictive candidates; b. zero motion vector candidates.
  • additional candidates can be generated based on the existing ones by combining the reference picture list 0 motion data of one candidate with the list 1 motion data of another one. This is done by copying (Δx0, Δy0, Δt0) from one candidate, e.g. the first one, and (Δx1, Δy1, Δt1) from another, e.g. the second one.
  • the different combinations are predefined and given in Table 1.1.
  • zero motion vector candidates are calculated to complete the list. All zero motion vector candidates have one zero displacement motion vector for uni-predictive slices and two for bi-predictive slices.
  • the reference indices are set equal to zero and are incremented by one for each additional candidate until the maximum number of reference indices is reached. If that is the case and there are still additional candidates missing, a reference index equal to zero is used to create these. For all the additional candidates, no redundancy checks are performed as it turned out that omitting these checks will not introduce a coding efficiency loss.
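A minimal sketch of this filling step is given below, assuming a simple MergeCand structure; the earlier list-construction steps are omitted and the names are illustrative only.

    #include <vector>

    struct MergeCand {
        int mvx[2] = {0, 0};
        int mvy[2] = {0, 0};
        int refIdx[2] = {0, 0};
    };

    // Append zero-MV candidates with reference indices 0, 1, ... until the
    // fixed list length is reached; once the maximum reference index is
    // exhausted, further candidates reuse reference index 0.
    static void appendZeroMvCandidates(std::vector<MergeCand>& list,
                                       int maxNumMergeCand, int numRefIdx) {
        int r = 0;
        while (static_cast<int>(list.size()) < maxNumMergeCand) {
            MergeCand c;                                   // zero displacement MV(s)
            c.refIdx[0] = c.refIdx[1] = (r < numRefIdx) ? r : 0;
            list.push_back(c);
            ++r;
        }
    }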
  • a so called merge_flag indicates that block merging is used to derive the motion data.
  • the merge_idx further determines the candidate in the merge list that provides all the motion data needed for the MCP. Besides this PU-level signaling, the number of candidates in the merge list is signaled in the slice header. Since the default value is five, it is represented as a difference to five (five_minus_max_num_merge_cand) . That way, a list of five candidates is signaled with a short codeword for the value 0, whereas using only one candidate is signaled with a longer codeword for the value 4.
  • the overall process remains the same although it terminates after the list contains the maximum number of merge candidates.
  • in an earlier design, the maximum value for the merge index coding was given by the number of available spatial and temporal candidates in the list. When, e.g., only two candidates are available, the index can be efficiently coded as a flag. But, in order to parse the merge index, the whole merge candidate list has to be constructed to know the actual number of candidates. Assuming unavailable neighboring blocks due to transmission errors, it would no longer be possible to parse the merge index.
  • a crucial application of the block merging concept in HEVC is its combination with a skip mode.
  • the skip mode was used to indicate for a block that the motion data is inferred instead of explicitly signaled and that the prediction residual is zero, i.e. no transform coefficients are transmitted.
  • a skip_flag is signaled that implies the following: a. the CU only contains one PU (2N ⁇ 2N partition type) ; b. the merge mode is used to derive the motion data (merge_flag equal to 1) ; c. no residual data is present in the bitstream.
  • a parallel merge estimation level was introduced in HEVC that indicates the region in which merge candidate lists can be independently derived by checking whether a candidate block is located in that merge estimation region (MER) .
  • MER merge estimation region
  • a candidate block that is in the same MER is not included in the merge candidate list. Hence, its motion data does not need to be available at the time of the list construction.
  • if this level is e.g. 32, all prediction units in a 32×32 area can construct the merge candidate list in parallel, since all merge candidates that are in the same 32×32 MER are not inserted in the list.
  • merge candidate lists of PUs 2-6 cannot include motion data from these PUs when the merge estimation inside that MER should be independent. Therefore, when looking at PU 5 for example, no merge candidates are available and hence none are inserted in the merge candidate list. In that case, the merge list of PU 5 consists only of the temporal candidate (if available) and zero MV candidates.
  • the parallel merge estimation level is adaptive and signaled as log2_parallel_merge_level_minus2 in the picture parameter set.
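The MER membership test reduces to comparing the candidate and current PU positions at the MER granularity; the following sketch assumes the MER size in samples is derived from log2_parallel_merge_level_minus2 + 2 and is illustrative rather than normative.

    // A spatial candidate at (xN, yN) is dropped for a PU at (xP, yP) when
    // both fall into the same MER, so lists inside one MER can be built in
    // parallel without waiting for each other's motion data.
    static bool candidateInSameMer(int xP, int yP, int xN, int yN, int log2MerSize) {
        return (xP >> log2MerSize) == (xN >> log2MerSize) &&
               (yP >> log2MerSize) == (yN >> log2MerSize);
    }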
  • another one of the inter prediction modes is Sub-CU based motion vector prediction.
  • each CU can have at most one set of motion parameters for each prediction direction.
  • Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU.
  • Alternative temporal motion vector prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture.
  • STMVP spatial-temporal motion vector prediction
  • motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighboring motion vector.
  • the motion compression for the reference frames is currently disabled.
  • Sub-CU based motion vector prediction includes alternative temporal motion vector prediction, spatial-temporal motion vector prediction, combination with the merge mode, and pattern matched motion vector derivation, which will be introduced in the following:
  • in the alternative temporal motion vector prediction (ATMVP) method, the temporal motion vector prediction is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU.
  • the sub-CUs are square N ⁇ N blocks (N is set to 4 by default) .
  • ATMVP predicts the motion vectors of the sub-CUs within a CU in two steps.
  • the first step is to identify the corresponding block in a reference picture with a so-called temporal vector.
  • the reference picture is called the motion source picture.
  • the second step is to split the current CU into sub-CUs and obtain the motion vectors as well as the reference indices of each sub-CU from the block corresponding to each sub-CU, as shown in Figure 6.
  • a reference picture and the corresponding block is determined by the motion information of the spatial neighboring blocks of the current CU.
  • the first merge candidate in the merge candidate list of the current CU is used.
  • the first available motion vector as well as its associated reference index are set to be the temporal vector and the index to the motion source picture. This way, in ATMVP, the corresponding block may be more accurately identified, compared with TMVP, wherein the corresponding block (sometimes called collocated block) is always in a bottom-right or center position relative to the current CU.
  • a corresponding block of the sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinate of the current CU.
  • the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU.
  • after the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as TMVP of HEVC, wherein motion scaling and other procedures apply.
  • the decoder checks whether the low-delay condition (i.e. the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) is fulfilled and possibly uses motion vector MVx (the motion vector corresponding to reference picture list X) to predict motion vector MVy (with X being equal to 0 or 1 and Y being equal to 1−X) for each sub-CU.
  • in STMVP, the motion vectors of the sub-CUs are derived recursively, following raster scan order. As shown in Fig. 12, consider an 8×8 CU which contains four 4×4 sub-CUs A, B, C, and D.
  • the neighboring 4 ⁇ 4 blocks in the current frame are labelled as a, b, c, and d.
  • the motion derivation for sub-CU A starts by identifying its two spatial neighbors.
  • the first neighbor is the N×N block above sub-CU A (block c) . If this block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c) .
  • the second neighbor is a block to the left of the sub-CU A (block b) . If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b) .
  • the motion information obtained from the neighboring blocks for each list is scaled to the first reference frame for a given list.
  • temporal motion vector predictor (TMVP) of sub-block A is derived by following the same procedure of TMVP derivation as specified in HEVC.
  • the motion information of the collocated block at location D is fetched and scaled accordingly.
  • all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
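The averaging step can be sketched as below for one reference list; the Mv structure and the plain integer average are assumptions for illustration (the reference software may round differently).

    #include <vector>

    struct Mv { int x, y; };

    // Average the available candidate MVs (spatial-above, spatial-left, TMVP,
    // i.e. up to three) for one reference list; the result is assigned to the
    // current sub-CU.
    static Mv averageMvs(const std::vector<Mv>& cands) {
        Mv avg{0, 0};
        if (cands.empty()) return avg;
        for (const Mv& m : cands) { avg.x += m.x; avg.y += m.y; }
        avg.x /= static_cast<int>(cands.size());
        avg.y /= static_cast<int>(cands.size());
        return avg;
    }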
  • the sub-CU modes are enabled as additional merge candidates and there is no additional syntax element required to signal the modes.
  • Two additional merge candidates are added to merge candidates list of each CU to represent the ATMVP mode and STMVP mode. Up to seven merge candidates are used, if the sequence parameter set indicates that ATMVP and STMVP are enabled.
  • the encoding logic of the additional merge candidates is the same as for the merge candidates in HM, which means that, for each CU in a P or B slice, two more RD checks are needed for the two additional merge candidates.
  • Pattern matched motion vector derivation (PMMVD) mode is based on Frame-Rate Up Conversion (FRUC) techniques. With this mode, motion information of a block is not signaled but derived at decoder side.
  • FRUC Frame-Rate Up Conversion
  • a FRUC flag is signaled for a CU when its merge flag is true.
  • when the FRUC flag is false, a merge index is signaled and the regular merge mode is used.
  • when the FRUC flag is true, an additional FRUC mode flag is signaled to indicate which method (bilateral matching or template matching) is to be used to derive motion information for the block.
  • the decision on whether to use FRUC merge mode for a CU is based on RD cost selection, as done for a normal merge candidate. That is, the two matching modes (bilateral matching and template matching) are both checked for a CU by using RD cost selection. The one leading to the minimal cost is further compared to other CU modes. If a FRUC matching mode is the most efficient one, the FRUC flag is set to true for the CU and the related matching mode is used.
  • Motion derivation process in FRUC merge mode has two steps.
  • a CU-level motion search is first performed, then followed by a Sub-CU level motion refinement.
  • an initial motion vector is derived for the whole CU based on bilateral matching or template matching.
  • a list of MV candidates is generated and the candidate which leads to the minimum matching cost is selected as the starting point for further CU level refinement.
  • a local search based on bilateral matching or template matching around the starting point is performed, and the MV that results in the minimum matching cost is taken as the MV for the whole CU.
  • the motion information is further refined at sub-CU level with the derived CU motion vectors as the starting points.
  • the following derivation process is performed for the motion information derivation of a W×H CU.
  • MV for the whole W ⁇ H CU is derived.
  • the CU is further split into M ⁇ M sub-CUs.
  • the value of M is calculated as in Eq. (1.8)
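Eq. (1.8) itself is not reproduced in the text; a commonly cited form from the JEM algorithm description, assumed here rather than taken from this document, is

\[ M = \max\left\{4,\ \frac{\min\{W, H\}}{2^{D}}\right\} \]

so that, for example, a 16×16 CU with the default D = 3 gives M = 4 and hence 4×4 sub-CUs.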
  • D is a predefined splitting depth which is set to 3 by default in the JEM. Then the MV for each sub-CU is derived.
  • the bilateral matching is used to derive motion information of the current CU by finding the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures.
  • the motion vectors MV0 and MV1 pointing to the two reference blocks shall be proportional to the temporal distances, i.e., TD0 and TD1, between the current picture and the two reference pictures.
  • when the current picture is temporally between the two reference pictures and the temporal distances from the current picture to the two reference pictures are the same, the bilateral matching becomes mirror based bi-directional MV.
  • the encoder can choose among uni-prediction from list0, uni-prediction from list1 or bi-prediction for a CU. The selection is based on a template matching cost as follows:
  • if costBi <= factor * min (cost0, cost1) , bi-prediction is used; otherwise, if cost0 <= cost1, uni-prediction from list0 is used; otherwise, uni-prediction from list1 is used.
  • cost0 is the SAD of list0 template matching
  • cost1 is the SAD of list1 template matching
  • costBi is the SAD of bi-prediction template matching.
  • the value of factor is equal to 1.25, which means that the selection process is biased toward bi-prediction.
  • the inter prediction direction selection is only applied to the CU-level template matching process.
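The decision rule above can be sketched in C++ as follows; the enum names and the use of integer SAD costs are assumptions for illustration.

    #include <algorithm>

    enum class PredDir { Bi, UniList0, UniList1 };

    // Bi-prediction is chosen when its template-matching SAD is not much
    // worse than the better uni-prediction SAD; factor = 1.25 biases the
    // decision towards bi-prediction.
    static PredDir selectPredDir(int cost0, int cost1, int costBi,
                                 double factor = 1.25) {
        if (costBi <= factor * std::min(cost0, cost1)) return PredDir::Bi;
        return (cost0 <= cost1) ? PredDir::UniList0 : PredDir::UniList1;
    }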
  • template matching is used to derive motion information of the current CU by finding the closest match between a template (top and/or left neighbouring blocks of the current CU) in the current picture and a block (of the same size as the template) in a reference picture. In addition to the aforementioned FRUC merge mode, the template matching is also applied to AMVP mode. With the template matching method, a new candidate is derived. If the candidate newly derived by template matching is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list and then the list size is set to two (meaning the second existing AMVP candidate is removed) . When applied to AMVP mode, only the CU level search is applied.
  • the MV candidate set at CU level consists of: a. Original AMVP candidates if the current CU is in AMVP mode; b. all merge candidates; c. several MVs in the interpolated MV field; d. top and left neighbouring motion vectors.
  • the interpolated MV field mentioned above is generated before coding a picture for the whole picture based on unilateral ME. Then the motion field may be used later as CU level or sub-CU level MV candidates.
  • the motion field of each reference picture in both reference lists is traversed at 4×4 block level. For each 4×4 block, if the motion associated with the block passes through a 4×4 block in the current picture, as shown in Fig. 15, and the block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current picture according to the temporal distances TD0 and TD1 (in the same way as the MV scaling of TMVP in HEVC) and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4×4 block, the block’s motion is marked as unavailable in the interpolated motion field.
  • each valid MV of a merge candidate is used as an input to generate a MV pair with the assumption of bilateral matching.
  • one valid MV of a merge candidate is (MVa, refa) at reference list A.
  • the reference picture refb of its paired bilateral MV is found in the other reference list B so that refa and refb are temporally at different sides of the current picture. If such a refb is not available in reference list B, refb is determined as a reference which is different from refa and its temporal distance to the current picture is the minimal one in list B.
  • MVb is derived by scaling MVa based on the temporal distance between the current picture and refa, refb.
  • MVs from the interpolated MV field are also added to the CU level candidate list. More specifically, the interpolated MVs at the position (0, 0) , (W/2, 0) , (0, H/2) and (W/2, H/2) of the current CU are added.
  • the original AMVP candidates are also added to CU level MV candidate set.
  • the MV candidate set at sub-CU level consists of: a. an MV determined from a CU-level search; b. top, left, top-left and top-right neighbouring MVs; c. scaled versions of collocated MVs from reference pictures; d. up to 4 ATMVP candidates; e. up to 4 STMVP candidates.
  • the scaled MVs from reference pictures are derived as follows. All the reference pictures in both lists are traversed. The MVs at a collocated position of the sub-CU in a reference picture are scaled to the reference of the starting CU-level MV.
  • ATMVP and STMVP candidates are limited to the four first ones.
  • Motion vectors can be refined by different methods in combination with the different inter prediction modes.
  • MV refinement is a pattern based MV search with the criterion of bilateral matching cost or template matching cost.
  • two search patterns are supported –an unrestricted center-biased diamond search (UCBDS) and an adaptive cross search for MV refinement at the CU level and sub-CU level, respectively.
  • UCBDS unrestricted center-biased diamond search
  • the MV is directly searched at quarter luma sample MV accuracy, and this is followed by one-eighth luma sample MV refinement.
  • the search range of MV refinement for the CU and sub-CU steps is set equal to 8 luma samples.
  • in the bi-prediction operation, for the prediction of one block region, two prediction blocks, formed using a MV of list0 and a MV of list1, respectively, are combined to form a single prediction signal.
  • in the decoder-side motion vector refinement (DMVR) method, the two motion vectors of the bi-prediction are further refined by a bilateral template matching process.
  • the bilateral template matching is applied in the decoder to perform a distortion-based search between a bilateral template and the reconstruction samples in the reference pictures in order to obtain a refined MV without transmission of additional motion information.
  • a bilateral template is generated as the weighted combination (i.e. average) of the two prediction blocks, from the initial MV0 of list0 and MV1 of list1, respectively, as shown in Fig. 16.
  • the template matching operation consists of calculating cost measures between the generated template and the sample region (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that yields the minimum template cost is considered as the updated MV of that list to replace the original one.
  • nine MV candidates are searched for each list. The nine MV candidates include the original MV and 8 surrounding MVs with one luma sample offset to the original MV in either the horizontal or vertical direction, or both.
  • the two new MVs i.e., MV0′ and MV1′ as shown in Fig. 16, are used for generating the final bi-prediction results.
  • a sum of absolute differences (SAD) is used as the cost measure.
  • DMVR is applied for the merge mode of bi-prediction with one MV from a reference picture in the past and another from a reference picture in the future, without the transmission of additional syntax elements.
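A minimal sketch of the integer-offset search for one reference list is shown below; the bilateral template generation and the prediction block fetch are abstracted behind a cost callable, which is an assumption for this example rather than the normative DMVR process.

    #include <limits>

    struct Mv { int x, y; };

    // CostFn maps a candidate MV to the SAD between the bilateral template
    // and the prediction block fetched with that MV.
    template <typename CostFn>
    static Mv dmvrIntegerSearch(const Mv& mvInit, CostFn sadToTemplate) {
        Mv best = mvInit;
        int bestCost = std::numeric_limits<int>::max();
        for (int dy = -1; dy <= 1; ++dy) {
            for (int dx = -1; dx <= 1; ++dx) {
                Mv cand{ mvInit.x + dx, mvInit.y + dy };   // original MV + 8 neighbours
                int cost = sadToTemplate(cand);
                if (cost < bestCost) { bestCost = cost; best = cand; }
            }
        }
        return best;
    }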
  • motion data storage reduction for the TMVP
  • HEVC uses motion data storage reduction (MDSR) to reduce the size of the motion data buffer and the associated memory access bandwidth by sub-sampling motion data in the reference pictures. While H. 264/AVC is storing these information on a 4 ⁇ 4 block basis, HEVC uses a 16 ⁇ 16 block where, in case of sub-sampling a 4 ⁇ 4 grid, the information of the top-left 4 ⁇ 4 block is stored. Due to this sub-sampling, MDSR impacts on the quality of the temporal prediction.
  • MDSR motion data storage reduction
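The MDSR addressing can be sketched as follows: any luma position maps to the stored entry of the top-left 4×4 block of its 16×16 region (the function and structure names are illustrative assumptions).

    struct GridPos { int x, y; };

    // Motion data queried at luma position (x, y) is read from the entry
    // stored for the top-left 4x4 block of the containing 16x16 region.
    static GridPos mdsrStoragePos(int x, int y) {
        return { (x >> 4) << 4, (y >> 4) << 4 };
    }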
  • motion vector accuracy is one-quarter pel (one-quarter luma sample and one-eighth chroma sample for 4:2:0 video) .
  • accuracy for the internal motion vector storage and the merge candidate increases to 1/16 pel.
  • the higher motion vector accuracy (1/16 pel) is used in motion compensation inter prediction for the CU coded with skip/merge mode.
  • the integer-pel or quarter-pel motion is used for the CU coded with normal AMVP mode.
  • motion compensated interpolation is needed.
  • an 8-tap separable DCT-based interpolation filter is used for 2/4 precision samples and a 7-tap separable DCT-based interpolation filter is used for 1/4 precision samples, as shown in Table 1.2.
  • a 4-tap separable DCT-based interpolation filter is used for the chroma interpolation filter, as shown in Table 1.3.
  • bit-depth of the output of the interpolation filter is maintained to 14-bit accuracy, regardless of the source bit-depth, before the averaging of the two prediction signals.
  • the actual averaging process is done implicitly with the bit-depth reduction process as:
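The expression itself does not appear in the text above; a hedged reconstruction of the HEVC-style combination, with 14-bit intermediate predictions reduced to the output bit depth, is sketched below (names and the clipping helper are illustrative).

    #include <algorithm>
    #include <cstdint>

    // Add the two intermediate predictions, round, shift back to the output
    // bit depth, and clip to the valid sample range.
    static uint16_t biAverage(int predL0, int predL1, int bitDepth) {
        const int shift  = 15 - bitDepth;          // 14-bit inputs -> bitDepth output
        const int offset = 1 << (shift - 1);       // rounding offset
        int v = (predL0 + predL1 + offset) >> shift;
        return static_cast<uint16_t>(std::min((1 << bitDepth) - 1, std::max(0, v)));
    }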
  • bi-linear interpolation instead of regular 8-tap HEVC interpolation is used for both bilateral matching and template matching.
  • the matching cost is a bit different at different steps.
  • when a candidate is selected from the candidate set at the CU level, the matching cost is the SAD of bilateral matching or template matching.
  • the matching cost C of bilateral matching at sub-CU level search is calculated as follows:
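The expression is not reproduced in the text; in the JEM description it is commonly given (assumed here) as

\[ C = \mathrm{SAD} + w \left( \lvert MV_x - MV_x^{s} \rvert + \lvert MV_y - MV_y^{s} \rvert \right) \]

where w is a weighting factor, empirically set to 4 in the JEM.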
  • MV and MVs indicate the current MV and the starting MV, respectively.
  • SAD is still used as the matching cost of template matching at sub-CU level search.
  • the MV is derived by using luma samples only. The derived motion will be used for both luma and chroma for MC inter prediction. After the MV is decided, final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.
  • the motion vectors and reference indices of coding units that are coded with any inter-coding mode are reconstructed or inferred without any pixel level operations on any coding unit within that frame.
  • the differential coding of a motion vector using an appropriately scaled version of an already reconstructed motion vector of a spatial or temporally co-located or interpolated neighbor as well as the process of inheriting a reconstructed motion vector through a merge process are computationally simple and hence the dependent reconstruction or inheritance process does not pose any major decoder side design complexity issue.
  • MVD motion vector delta
  • when refined MVs are not used, the coding gains suffer significantly. This is because the RDO process decides DMVR/PMMVD to be superior to the other inter-coding modes, but in the absence of the decoder-side refinement, the MVD coding bits increase significantly (when compared to the no DMVR/PMMVD case) and the starting points for the refinements end up being inferior. Hence there is a significant compression loss by not using any refined MVs.
  • the proposed method and embodiments disclosed herein determine the availability of refined motion vectors of spatially neighboring coding units in such a way that a set of coding units or sub-coding units within a coded tree block can configure their data pre-fetch in a concurrent manner in a given stage of a regular pipeline and also perform their refinement process in a concurrent manner in the next stage of that regular pipeline.
  • the concept of a lag between the top CTB row and current CTB row is utilized in determining such availability.
  • the concept of a concurrency set is introduced to normatively partition some coding units, if necessary, into sub-coding-units to meet the concurrency requirements of the pipeline.
  • the proposed approach provides a higher coding gain while ensuring that the dependency does not overly constrain the hardware implementation of the refinement process. Also, by pre-fetching around an unrefined motion vector and using a normative padding process to access samples that go outside the normative amount of pre-fetched samples, the pipeline latency is further reduced to make even left or top-right CTB refined MVs to be used for refinement of current CTB CUs. The process is also extended to finer granularities than CTB level.
  • since decoder side motion vector refinement/derivation is a normative aspect of a coding system, the encoder will also have to perform the same error surface technique in order to not have any drift between the encoder’s reconstruction and the decoder’s reconstruction.
  • all aspects of all embodiments are applicable to both encoding and decoding systems.
  • each processing stage should be preceded by a data pre-fetch stage. Both the data fetch stage and processing stage should be able to use the entire time of the pipeline slot.
  • Fig. 17 shows such a regular pipeline at CTB level with DMA on the left and DMVR + MC on the right, corresponding to the data pre-fetch and processing stages.
  • CN+1 CTB can use the refined MVs of the Top and Top left CTBs for Inter MVP and as starting MVs for decoder-side MV refinement.
  • N can be selected as per the design constraints of the video coding system.
  • a concurrency set is defined to be a set of pixels in a CTB that correspond to one partition of recursive quad-tree partition of the CTB. For instance, concurrency set can be chosen as 64x64 or 32x32 for a CTB size of 128x128.
  • a given coding unit that spans across more than one concurrency set is force partitioned for decoder-side motion vector refinement purposes into as many sub coding units (sub-CUs) as the number of concurrency sets that it spans. The dependency across these concurrency sets is assumed to be in a recursive z-scan order.
  • a concurrency set becomes an independent set of pixels, the processing for which can be performed concurrently to have a regular sub-CTB level pipeline that can have a data pre-fetch stage followed by a refinement and motion compensation stage in a manner similar to the CTB level pipeline in embodiment 1.
  • Fig. 20a-c show the pipeline of using a 1-level deep quad-tree split of a CTB and with lag of 2 CTBs between two consecutive CTB rows.
  • Concurrency set Z0 has all the neighbor concurrency sets available except bottom left
  • concurrency set Z1 has all top concurrency set neighbors available but left and bottom left concurrency set neighbors are not available
  • concurrency set Z2 has all concurrency set neighbors except Top right and bottom left concurrency sets
  • concurrency set Z3 has only the Top and top left concurrency sets available. This is summarized in Table 3.2.
  • Table 3.2 Refined spatial neighbor concurrency set availability status for a 1-level deep quad-tree split with a lag of 2 CTBs between consecutive CTB rows
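Since the table itself is not reproduced here, the availability rules just listed can be captured in a small sketch (the enum and function names are illustrative assumptions, not part of the disclosure).

    enum class Zpos { Z0, Z1, Z2, Z3 };               // z-scan quadrants of the CTB
    enum Neighbor { TopLeft, Top, TopRight, Left, BottomLeft };

    // Returns true when the refined MVs of the given spatially neighbouring
    // concurrency set may be used by the current concurrency set, for a
    // 1-level quad-tree split with a 2-CTB lag between consecutive CTB rows.
    static bool refinedNeighborAvailable(Zpos z, Neighbor n) {
        switch (z) {
            case Zpos::Z0: return n != BottomLeft;
            case Zpos::Z1: return n == TopLeft || n == Top || n == TopRight;
            case Zpos::Z2: return n != TopRight && n != BottomLeft;
            case Zpos::Z3: return n == Top || n == TopLeft;
        }
        return false;
    }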
  • the forced partitioning of a CTB into concurrency sets for performing the decoder-side motion vector refinement in a manner that is independent of the actual partitioning of the CTB makes more final refined motion vectors available, in conjunction with the concept of a configurable CTB lag between consecutive CTB rows. This helps improve the coding gains relative to embodiment 1 while still allowing for a regular pipeline with a data pre-fetch stage that precedes the decoder-side motion vector refinement and motion compensated prediction stage.
  • a method for determining the availability of the refined motion vectors of a spatially neighboring set, comprising at least one coding unit or sub-coding unit, for a current set, comprising at least one coding unit or sub coding unit may comprise the steps of:
  • Fig. 22 shows different pattern directions, where a-f depict different patterns (in black) for different pixel situations; the details are shown in Fig. 22.
  • Fig. 23 illustrates an error surface; the details are shown in Fig. 23.
  • The terms T, B, L, R and C in Figure 23 refer to the relative directions Top, Bottom, Left, Right and Centre, respectively.
  • an example of error surface cost combination is shown in Table 4.
  • the search range for decoder-side motion vector refinement increases the worst-case external memory accesses and also the internal memory buffers.
  • some prior art methods do not bring in any additional samples based on the search range, but only use the samples required for performing motion compensation using the merge mode motion vectors. Additional samples required for refinement are obtained purely through motion compensated interpolation that employs padding, replacing an unavailable sample with the last available sample before it. It is also possible to arrive at a trade-off between external memory bandwidth and the coding efficiency reduction introduced by padding, by fetching one or more lines of samples beyond just the samples required for motion compensation using the merge mode motion vectors without refinement, but still fewer than what are required for covering the entire refinement range.
  • the pre-fetch for a given CTB (or a sub-partition of a CTB at the first level) is performed using the unrefined motion vectors of coding units in the causal neighbor CTBs if the refined neighbor CTB motion vectors are not available at the time of pre-fetch.
  • the refined motion vector of a coding unit in the neighbor CTB can be used as the starting point for performing the refinement for a coding unit within the current CTB that merges with that coding unit in the neighbor CTB. Any unavailable samples relative to the pre-fetched data can be accessed or interpolated through padding.
  • the use of padded samples obviates the need for pre-fetch to be performed only after the coding unit in the neighbor CTB completes its refinement, thus reducing the latency of the dependency.
  • this method works reasonably well when the refinement search range iterations are low in number or when the refinement process exits early.
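A sketch of such padded access into the pre-fetched window is given below; the window layout and sample type are assumptions for illustration.

    #include <algorithm>
    #include <vector>

    struct PrefetchWindow {
        int x0, y0, width, height;      // window fetched around the unrefined MV
        std::vector<int> samples;       // row-major, width * height luma samples

        // Padded access: positions outside the window repeat the nearest
        // boundary sample instead of triggering another external memory fetch.
        int at(int x, int y) const {
            int cx = std::min(width  - 1, std::max(0, x - x0));
            int cy = std::min(height - 1, std::max(0, y - y0));
            return samples[cy * width + cx];
        }
    };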
  • the reduction of the pipeline latency implies that refined motion vectors of CTBs that are just one pipeline slot ahead of the current CTB can be used as the starting points for refinement in coding units within the current CTB. For example, even with a lag of 1 CTB between CTB rows, the top CTB’s refined MVs can be used for refinement of coding units within the current CTB.
  • the left CTB’s refined MVs can be used for refinement of coding units within the current CTB.
  • all other neighbor refined MVs can be employed to bring back the coding gains.
  • the availabilities are summarized in Table 5 below.
  • when each CTB is quad-tree split at the first depth of splitting, it is possible to have a pipeline of pre-fetch followed by refinement that is at the granularity of a quarter of a CTB (QCTB) .
  • QCTB a pipeline of pre-fetch followed by refinement that is at the granularity of a quarter of a CTB
  • given the z-scan order of encoding the 4 QCTB coding units, more refined MVs can be tapped for refinement while still ensuring that the pre-fetch followed by refinement pipeline at the QCTB level can work.
  • Table 6 summarizes the MVs of neighbor used by each of the QCTBs within a CTB.
  • Table 6 Use of neighbor’s refined and unrefined MVs for pre-fetch and refinement based on pipeline lag value of neighbor CU’s QCTB.
  • This embodiment enables even the left, top-right, and bottom-left neighbor’s refined MV to be used if the neighbor CU is in a neighbor CTB (or QCTB) and not within the current CTB (or QCTB) .
  • This improves the coding gains while still ensuring a regular pipeline where pre-fetch is performed at CTB (or QCTB) level and refinement of all coding units within the current CTB (or QCTB) can be performed in parallel.
  • Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol.
  • Computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
  • a computer program product may include a computer-readable medium.
  • Such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • if instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL) , or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies are included in the definition of medium.
  • DSL digital subscriber line
  • computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media.
  • Disk and disc, as used herein, include compact disc (CD) , laser disc, optical disc, digital versatile disc (DVD) , floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs) , general purpose microprocessors, application specific integrated circuits (ASICs) , field programmable logic arrays (FPGAs) , or other equivalent integrated or discrete logic circuitry.
  • DSPs digital signal processors
  • ASICs application specific integrated circuits
  • FPGAs field programmable logic arrays
  • accordingly, the term “processor” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
  • the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
  • the techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set) .
  • IC integrated circuit
  • a set of ICs e.g., a chip set
  • Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.


Abstract

A method of decoding video is provided, comprising: determining that refined motion vectors of a spatially neighboring set to be available when the refined motion vectors have been computed, wherein the availability of the refined motion vectors of the spatially neighboring set comprising at least one coding unit or sub-coding unit for a current set comprising at least one coding unit or sub coding unit; and partitioning a coding unit normatively into as many sub-coding units as the number of concurrency sets that the coding unit spans so that the data pre-fetch and decoder-side motion vector refinement and motion compensation for each sub-coding unit can happen independent of the other sub coding-units, but in a concurrent manner with other coding or sub-coding units that are part of a current concurrency set.

Description

V REFINEMENT OF VIDEO MOTION VECTORS IN ADJACENT VIDEO DATA
PRIORITY CLAIM AND INCORPORATION BY REFERENCE
The current application claims the priority and benefit of Indian patent application number IN201831024668 filed on 02 July 2018 and Indian patent application number IN201831034993 filed on 17 September 2018.
Reference is made to documents JVET-K0041-v2, JVET-M0148-v2 and JVET-L0173-v2 of the Joint Video Experts Team (JVET) , the contents of which are incorporated by reference, and of which the skilled person will be aware.
TECHNICAL FIELD
Embodiments of the present application (disclosure) generally relate to the field of video coding, such as HEVC, and more particularly to a method for concurrent processing of Coding units within a Coded Tree Block with decoder side motion vector refinement/derivation.
BACKGROUND
The amount of video data needed to depict even a relatively short video can be substantial, which may result in difficulties when the data is to be streamed or otherwise communicated across a communications network with limited bandwidth capacity. Thus, video data is generally compressed before being communicated across modern day telecommunications networks. The size of a video could also be an issue when the video is stored on a storage device because memory resources may be limited. Video compression devices often use software and/or hardware at the source to code the video data prior to transmission or storage, thereby decreasing the quantity of data needed to represent digital video images. The compressed data is then received at the destination by a video decompression device that decodes the video data. With  limited network resources and ever increasing demands of higher video quality, improved compression and decompression techniques that improve compression ratio with little to no sacrifice in image quality are desirable.
SUMMARY
Embodiments of the present application (or the present disclosure) provide apparatuses and methods for encoding and decoding. Particular embodiments are outlined in the attached independent claims, with other embodiments in the dependent claims.
In a first manner, a decoding method is provided, comprising: determining that refined motion vectors of a spatially neighboring set to be available when the refined motion vectors have been computed, wherein the availability of the refined motion vectors of the spatially neighboring set comprising at least one coding unit or sub-coding unit for a current set comprising at least one coding unit or sub coding unit.
Further, the first method may comprise: partitioning a coding unit normatively into as many sub-coding units as the number of concurrency sets that the coding unit spans so that the data pre-fetch and decoder-side motion vector refinement and motion compensation for each sub-coding unit can happen independently of the other sub-coding units, but in a concurrent manner with other coding or sub-coding units that are part of a current concurrency set.
The method may be performed in a data pre-fetch stage of a current set, wherein the refined motion vectors have been computed in a pipeline stage of the current set, the pipeline stage being a stage ahead of the data pre-fetch stage.
In a second manner, a method is provided for using the unrefined motion vectors of a neighbor coding unit (CU) that falls in a pipeline slot (also described herein as a processing stage or cycle) preceding the current coding unit’s concurrency set in order to perform pre-fetch, and for using the refined motion vectors of a neighbor CU that does not fall within the same concurrency set as the center for starting the motion vector refinement. Further, the method comprises using padded samples whenever the refinement requires samples that are not pre-fetched, based on a configurable additional sample fetch range around the pre-fetch center.
The above manners may be combined in a decoding apparatus.
In a third manner, a decoding apparatus is provided, which comprises modules/units/components/circuits to perform at least a part of the steps of the method in the first manner.
In a fourth manner, a decoding apparatus is provided, which comprises a memory storing instructions; and a processor coupled to the memory, the processor configured to execute the instructions stored in the memory to cause the processor to perform the method of the first manner.
In a fifth manner, a computer-readable storage medium is provided, having a program recorded thereon, where the program causes a computer to execute the method of the first manner.
In a sixth manner, a computer program is provided, which is configured to cause a computer to execute the method of the first manner.
For the purpose of clarity, any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
It will be understood that one aspect of the disclosure thus relates to an improvement in decoder side motion vector refinement. Decoder side motion vector refinement affects concurrent processing of coding units when the refined motion vector of a coding unit becomes a predictor for another coding unit or a starting point for decoder-side refinement of a coding unit; in other words, when there is such a dependency it can stall the pipeline.
In embodiments of the disclosure, the disclosed method determines the availability of refined motion vectors of coding units in such a way that ideally all coding units within a coded tree block can configure their data pre-fetch in a concurrent manner in a given stage of a regular (e.g. CTB-level) pipeline. By making one or more motion vectors of neighboring units available, the corresponding one or more motion vectors of neighboring units are already available (i.e. completed) from an earlier stage in the processing cycle. Thus the said available refined motion vectors can be used in obtaining a refined motion vector for another neighboring unit at a later stage. Embodiments of the disclosure additionally perform their refinement process in a concurrent manner in a subsequent (e.g. next) stage of the CTB-level pipeline. The concept of a lag between the top CTB row and the current CTB row (such as adjacent rows) is utilized in determining the availability. Also, the concept of a concurrency set is introduced to normatively partition some coding units, if necessary, into sub-coding-units to meet the concurrency requirements of the pipeline.
Compared to not using any refined motion vector from the current picture, the approach disclosed herein provides a higher coding gain while ensuring that the dependency does not overly constrain the hardware implementation of the refinement process. In addition, by pre-fetching around an unrefined motion vector and using a normative padding process to access samples that go outside the normative amount of pre-fetched samples, the pipeline latency is further reduced to make even left or top-right CTB refined MVs to be used for refinement of current CTB CUs. The process is also extended to finer granularities than CTB level.
These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the  accompanying drawings and detailed description, wherein like reference numerals represent like parts.
FIG. 1A is a block diagram illustrating an example coding system that may implement embodiments of the invention.
FIG. 1B is a block diagram illustrating another example coding system that may implement embodiments of the invention.
FIG. 2 is a block diagram illustrating an example video encoder that may implement embodiments of the invention.
FIG. 3 is a block diagram illustrating an example of a video decoder that may implement embodiments of the invention.
FIG. 4 is a schematic diagram of a network device.
FIG. 5 is a simplified block diagram of an apparatus 500 that may be used as either or both of the source device 12 and the destination device 14 from FIG. 1A according to an exemplary embodiment.
FIG. 6 is a graphical illustration for MCP based on a translational motion model.
FIG. 7 is a graphical illustration for an overview block diagram of the HEVC inter-picture prediction.
FIG. 8 is a graphical illustration for the relationship between spatial MVP candidates and spatially neighboring blocks.
FIG. 9 is a graphical illustration for the derivation process flow for the two spatial candidates A and B.
FIG. 10 is a graphical illustration for CTU partitioning.
FIG. 11 is a graphical illustration for sub-CUs.
FIG. 12 is a graphical illustration for the relationship between CU and sub-CUs.
FIG. 13 is a graphical illustration for the bilateral matching to derive motion information of the current CU.
FIG. 14 is a graphical illustration for template matching to derive motion information of the current CU.
FIG. 15 is a graphical illustration for the motion associated to the block passing through a 4×4 block.
FIG. 16 is a graphical illustration for MV0′ and MV1′ .
FIG. 17 is a graphical illustration for a regular pipeline at CTB level for lag 0.
FIG. 18 is a graphical illustration for the relationship between top row and current row.
FIG. 19 is a graphical illustration for another relationship between top row and current row.
FIG. 20a is a graphical illustration for another relationship between top row and current row.
FIG. 20b is an enlargement of the top half of FIG. 20a.
FIG. 20c is an enlargement of the bottom half of FIG. 20a.
FIG. 21 is a graphical illustration for a regular pipeline at CTB level for lag 2.
FIG. 22 is a graphical illustration for pattern.
FIG. 23 is a graphical illustration for ErrorSurface.
DETAILED DESCRIPTION
It should be understood at the outset that although an illustrative implementation of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
FIG. 1A is a block diagram illustrating an example coding system 10 that may utilize bidirectional prediction techniques. As shown in FIG. 1A, the coding system 10 includes a source device 12 that provides encoded video data to be decoded at a later time by a destination device 14. In particular, the source device 12 may provide the video data to destination device 14 via a computer-readable medium 16. Source device 12 and destination device 14 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called “smart” phones, so-called “smart” pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming device, or the like. In some cases, source device 12 and destination device 14 may be equipped for wireless communication.
Destination device 14 may receive the encoded video data to be decoded via computer-readable medium 16. Computer-readable medium 16 may comprise any type of medium or device capable of moving the encoded video data from source device 12 to destination device 14. In one example, computer-readable medium 16 may comprise a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real-time. The encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to destination device 14. The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.
In some examples, encoded data may be output from output interface 22 to a storage device. Similarly, encoded data may be accessed from the storage device by input interface. The storage device may include any of a variety of distributed or locally accessed data storage media such as a hard drive, Blu-ray discs, digital video disks (DVD) s, Compact Disc Read-Only Memories (CD-ROMs) , flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data. In a further example, the storage device may correspond to a file server or another intermediate storage device that may store the encoded video generated by source device 12. Destination device 14 may access stored video data from the storage device via streaming or download. The file server may be any type of server capable of storing encoded video data and transmitting that encoded video data to the destination device 14. Example file servers include a web server (e.g., for a website) , a file transfer protocol (FTP) server, network attached storage (NAS) devices, or a local disk drive. Destination device 14 may access the encoded video data through any standard data connection, including an Internet connection. This may include a wireless channel (e.g., a Wi-Fi connection) , a wired connection (e.g., digital subscriber line (DSL) , cable modem, etc. ) , or a combination of both that is suitable for accessing encoded video data stored on a file server. The transmission of encoded video data from the storage device may be a streaming transmission, a download transmission, or a combination thereof.
The techniques of this disclosure are not necessarily limited to wireless applications or settings. The techniques may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions, such as dynamic adaptive streaming over HTTP (DASH) , digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications. In some examples, coding system 10 may be configured to support  one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
In the example of FIG. 1A, source device 12 includes video source 18, video encoder 20, and output interface 22. Destination device 14 includes input interface 28, video decoder 30, and display device 32. In accordance with this disclosure, video encoder 20 of source device 12 and/or the video decoder 30 of the destination device 14 may be configured to apply the techniques for bidirectional prediction. In other examples, a source device and a destination device may include other components or arrangements. For example, source device 12 may receive video data from an external video source, such as an external camera. Likewise, destination device 14 may interface with an external display device, rather than including an integrated display device.
The illustrated coding system 10 of FIG. 1A is merely one example. Techniques for bidirectional prediction may be performed by any digital video encoding and/or decoding device. Although the techniques of this disclosure generally are performed by a video coding device, the techniques may also be performed by a video encoder/decoder, typically referred to as a “CODEC. ” Moreover, the techniques of this disclosure may also be performed by a video preprocessor. The video encoder and/or the decoder may be a graphics processing unit (GPU) or a similar device.
Source device 12 and destination device 14 are merely examples of such coding devices in which source device 12 generates coded video data for transmission to destination device 14. In some examples, source device 12 and destination device 14 may operate in a substantially symmetrical manner such that each of the source and  destination devices  12, 14 includes video encoding and decoding components. Hence, coding system 10 may support one-way or two-way video transmission between  video devices  12, 14, e.g., for video streaming, video playback, video broadcasting, or video telephony.
Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed interface to receive video from a video content provider. As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video.
In some cases, when video source 18 is a video camera, source device 12 and destination device 14 may form so-called camera phones or video phones. As mentioned above, however, the techniques described in this disclosure may be applicable to video coding in general, and may be applied to wireless and/or wired applications. In each case, the captured, pre-captured, or computer-generated video may be encoded by video encoder 20. The encoded video information may then be output by output interface 22 onto a computer-readable medium 16.
Computer-readable medium 16 may include transient media, such as a wireless broadcast or wired network transmission, or storage media (that is, non-transitory storage media) , such as a hard disk, flash drive, compact disc, digital video disc, Blu-ray disc, or other computer-readable media. In some examples, a network server (not shown) may receive encoded video data from source device 12 and provide the encoded video data to destination device 14, e.g., via network transmission. Similarly, a computing device of a medium production facility, such as a disc stamping facility, may receive encoded video data from source device 12 and produce a disc containing the encoded video data. Therefore, computer-readable medium 16 may be understood to include one or more computer-readable media of various forms, in various examples.
Input interface 28 of destination device 14 receives information from computer-readable medium 16. The information of computer-readable medium 16 may include syntax information defined by video encoder 20, which is also used by video decoder 30, that includes syntax elements that describe characteristics and/or processing of blocks and other coded units, e.g., groups of pictures (GOPs). Display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.
Video encoder 20 and video decoder 30 may operate according to a video coding standard, such as the High Efficiency Video Coding (HEVC) standard presently under development, and may conform to the HEVC Test Model (HM) . Alternatively, video encoder 20 and video decoder 30 may operate according to other proprietary or industry standards, such as the International Telecommunications Union Telecommunication Standardization Sector (ITU-T) H. 264 standard, alternatively referred to as Motion Picture Expert Group (MPEG) -4, Part 10, Advanced Video Coding (AVC) , H. 265/HEVC, or extensions of such standards. The techniques of this disclosure, however, are not limited to any particular coding standard. Other examples of video coding standards include MPEG-2 and ITU-T H. 263. Although not shown in FIG. 1A, in some aspects, video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate multiplexer-demultiplexer (MUX-DEMUX) units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, MUX-DEMUX units may conform to the ITU H. 223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP) .
It will thus be understood that terms used in this disclosure may, but not necessarily, have the particular technical meaning and/or definition provided in the HEVC standard. For example, in the HEVC standard, a coding tree unit (CTU) is the basic processing unit, which conceptually corresponds in structure to the macroblock units used in several previous video standards. HEVC initially divides the picture into CTUs, which are then divided for each luma/chroma component into coding tree blocks (CTBs). A CTB can be 64×64, 32×32, or 16×16, with a larger pixel block size usually increasing the coding efficiency. CTBs are then divided into one or more coding units (CUs) of the same or smaller size, so that the CTU size is also the largest coding unit size. The arrangement of CUs in a CTB is known as a quadtree since each subdivision results in four smaller regions. CUs are then divided into prediction units (PUs) and/or transform units (TUs) of either intra-picture or inter-picture prediction type, which can vary in size from 64×64 to 4×4.
Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs) , application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device. A device including video encoder 20 and/or video decoder 30 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone.
Fig. 1B is an illustrative diagram of an example video coding system 40 including encoder 20 of fig. 2 and/or decoder 30 of fig. 3 according to an exemplary embodiment. The system 40 can implement techniques of this present application. In the illustrated implementation, video coding system 40 may  include imaging device (s) 41, video encoder 20, video decoder 30 (and/or a video coder implemented via logic circuitry 54 of processing unit (s) 46) , an antenna 42, one or more processor (s) 43, one or more memory store (s) 44, and/or a display device 45.
As illustrated, imaging device (s) 41, antenna 42, processing unit (s) 46, logic circuitry 54, video encoder 20, video decoder 30, processor (s) 43, memory store (s) 44, and/or display device 45 may be capable of communication with one another. As discussed, although illustrated with both video encoder 20 and video decoder 30, video coding system 40 may include only video encoder 20 or only video decoder 30 in various examples.
As shown, in some examples, video coding system 40 may include antenna 42. Antenna 42 may be configured to transmit or receive an encoded bitstream of video data, for example. Further, in some examples, video coding system 40 may include display device 45. Display device 45 may be configured to present video data. As shown, in some examples, logic circuitry 54 may be implemented via processing unit(s) 46. Processing unit(s) 46 may include application-specific integrated circuit (ASIC) logic, graphics processor(s), general purpose processor(s), or the like. Video coding system 40 also may include optional processor(s) 43, which may similarly include application-specific integrated circuit (ASIC) logic, graphics processor(s), general purpose processor(s), or the like. In some examples, logic circuitry 54 may be implemented via hardware, video coding dedicated hardware, or the like, and processor(s) 43 may implement general purpose software, operating systems, or the like. In addition, memory store(s) 44 may be any type of memory such as volatile memory (e.g., Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), etc.) or non-volatile memory (e.g., flash memory, etc.), and so forth. In a non-limiting example, memory store(s) 44 may be implemented by cache memory. In some examples, logic circuitry 54 may access memory store(s) 44 (for implementation of an image buffer, for example). In other examples, logic circuitry 47 and/or processing unit(s) 46 may include memory stores (e.g., cache or the like) for the implementation of an image buffer or the like.
In some examples, video encoder 20 implemented via logic circuitry may include an image buffer (e.g., via either processing unit (s) 46 or memory store (s) 44) ) and a graphics processing unit (e.g., via processing unit (s) 46) . The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video encoder 20 as implemented via logic circuitry 47 to embody the various modules as discussed with respect to FIG. 2 and/or any other encoder system or subsystem described herein. The logic circuitry may be configured to perform the various operations as discussed herein.
Video decoder 30 may be implemented in a similar manner via logic circuitry 47 to embody the various modules as discussed with respect to decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein. In some examples, video decoder 30 as implemented via logic circuitry may include an image buffer (e.g., via either processing unit(s) 46 or memory store(s) 44) and a graphics processing unit (e.g., via processing unit(s) 46). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video decoder 30 as implemented via logic circuitry 47 to embody the various modules as discussed with respect to FIG. 3 and/or any other decoder system or subsystem described herein.
In some examples, antenna 42 of video coding system 40 may be configured to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include data, indicators, index values, mode selection data, or the like associated with encoding a video frame as discussed herein, such as data associated with the coding partition (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining the coding partition). Video coding system 40 may also include video decoder 30 coupled to antenna 42 and configured to decode the encoded bitstream. The display device 45 is configured to present video frames.
FIG. 2 is a block diagram illustrating an example of video encoder 20 that may implement the techniques of the present application. Video encoder 20 may perform intra-and inter-coding of video blocks within video slices. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame or picture. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames or pictures of a video sequence. Intra-mode (I mode) may refer to any of several spatial based coding modes. Inter-modes, such as uni-directional prediction (P mode) or bi-prediction (B mode) , may refer to any of several temporal-based coding modes.
As shown in FIG. 2, video encoder 20 receives a current video block within a video frame to be encoded. In the example of FIG. 2, video encoder 20 includes mode select unit 40, reference frame memory 64, summer 50, transform processing unit 52, quantization unit 54, and entropy coding unit 56. Mode select unit 40, in turn, includes motion compensation unit 44, motion estimation unit 42, intra-prediction unit 46, and partition unit 48. For video block reconstruction, video encoder 20 also includes inverse quantization unit 58, inverse transform unit 60, and summer 62. A deblocking filter (not shown in FIG. 2) may also be included to filter block boundaries to remove blockiness artifacts from reconstructed video. If desired, the deblocking filter would typically filter the output of summer 62. Additional filters (in loop or post loop) may also be used in addition to the deblocking filter. Such filters are not shown for brevity, but if desired, may filter the output of summer 50 (as an in-loop filter) .
During the encoding process, video encoder 20 receives a video frame or slice to be coded. The frame or slice may be divided into multiple video blocks. Motion estimation unit 42 and motion compensation unit 44 perform inter-predictive coding of the received video block relative to one or more blocks in one or more reference frames to provide temporal prediction. Intra-prediction unit 46 may alternatively perform intra-predictive coding of the received video block relative to one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial prediction. Video encoder 20 may perform multiple coding passes, e.g., to select an appropriate coding mode for each block of video data.
Moreover, partition unit 48 may partition blocks of video data into sub-blocks, based on evaluation of previous partitioning schemes in previous coding passes. For example, partition unit 48 may initially partition a frame or slice into largest coding units (LCUs) , and partition each of the LCUs into sub-coding units (sub-CUs) based on rate-distortion analysis (e.g., rate-distortion optimization) . Mode select unit 40 may further produce a quadtree data structure indicative of partitioning of a LCU into sub-CUs. Leaf-node CUs of the quadtree may include one or more prediction units (PUs) and one or more transform units (TUs) .
The present disclosure uses the term “block” to refer to any of a CU, PU, or TU, in the context of HEVC, or similar data structures in the context of other standards (e.g., macroblocks and sub-blocks thereof in H. 264/AVC) . A CU includes a coding node, PUs, and TUs associated with the coding node. A size of the CU corresponds to a size of the coding node and is square in shape. The size of the CU may range from 8×8 pixels up to the size of the treeblock with a maximum of 64×64 pixels or greater. Each CU may contain one or more PUs and one or more TUs. Syntax data associated with a CU may describe, for example, partitioning of the CU into one or more PUs. Partitioning modes may differ between whether the CU is skip or direct mode encoded, intra-prediction  mode encoded, or inter-prediction mode encoded. PUs may be partitioned to be non-square in shape. Syntax data associated with a CU may also describe, for example, partitioning of the CU into one or more TUs according to a quadtree. In an embodiment, a CU, PU, or TU can be square or non-square (e.g., rectangular) in shape.
Mode select unit 40 may select one of the coding modes, intra or inter, e.g., based on error results, and provides the resulting intra-or inter-coded block to summer 50 to generate residual block data and to summer 62 to reconstruct the encoded block for use as a reference frame. Mode select unit 40 also provides syntax elements, such as motion vectors, intra-mode indicators, partition information, and other such syntax information, to entropy coding unit 56.
Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation, performed by motion estimation unit 42, is the process of generating motion vectors, which estimate motion for video blocks. A motion vector, for example, may indicate the displacement of a PU of a video block within a current video frame or picture relative to a predictive block within a reference frame (or other coded unit) relative to the current block being coded within the current frame (or other coded unit) . A predictive block is a block that is found to closely match the block to be coded, in terms of pixel difference, which may be determined by sum of absolute difference (SAD) , sum of square difference (SSD) , or other difference metrics. In some examples, video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in reference frame memory 64. For example, video encoder 20 may interpolate values of one-quarter pixel positions, one-eighth pixel positions, or other fractional pixel positions of the reference picture. Therefore, motion estimation unit 42 may perform a motion search relative to the full pixel positions and  fractional pixel positions and output a motion vector with fractional pixel precision.
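As a rough illustration of the block-matching cost metrics mentioned above, the following sketch computes SAD and SSD between the block to be coded and a candidate predictive block. It assumes blocks are given as equally sized lists of pixel rows; the function names are illustrative and not part of any standard.

```python
def sad(block, pred):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(b - p) for row_b, row_p in zip(block, pred)
               for b, p in zip(row_b, row_p))

def ssd(block, pred):
    """Sum of squared differences between two equally sized pixel blocks."""
    return sum((b - p) ** 2 for row_b, row_p in zip(block, pred)
               for b, p in zip(row_b, row_p))

# A motion search would evaluate sad(current_block, candidate_block) for each
# candidate position and keep the position with the lowest cost.
```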
Motion estimation unit 42 calculates a motion vector for a PU of a video block in an inter-coded slice by comparing the position of the PU to the position of a predictive block of a reference picture. The reference picture may be selected from a first reference picture list (List 0) or a second reference picture list (List 1) , each of which identify one or more reference pictures stored in reference frame memory 64. Motion estimation unit 42 sends the calculated motion vector to entropy encoding unit 56 and motion compensation unit 44.
Motion compensation, performed by motion compensation unit 44, may involve fetching or generating the predictive block based on the motion vector determined by motion estimation unit 42. Again, motion estimation unit 42 and motion compensation unit 44 may be functionally integrated, in some examples. Upon receiving the motion vector for the PU of the current video block, motion compensation unit 44 may locate the predictive block to which the motion vector points in one of the reference picture lists. Summer 50 forms a residual video block by subtracting pixel values of the predictive block from the pixel values of the current video block being coded, forming pixel difference values, as discussed below. In general, motion estimation unit 42 performs motion estimation relative to luma components, and motion compensation unit 44 uses motion vectors calculated based on the luma components for both chroma components and luma components. Mode select unit 40 may also generate syntax elements associated with the video blocks and the video slice for use by video decoder 30 in decoding the video blocks of the video slice.
Intra-prediction unit 46 may intra-predict a current block, as an alternative to the inter-prediction performed by motion estimation unit 42 and motion compensation unit 44, as described above. In particular, intra-prediction unit 46 may determine an intra-prediction mode to use to encode a current block. In some examples, intra-prediction unit 46 may encode a current block using  various intra-prediction modes, e.g., during separate encoding passes, and intra-prediction unit 46 (or mode select unit 40, in some examples) may select an appropriate intra-prediction mode to use from the tested modes.
For example, intra-prediction unit 46 may calculate rate-distortion values using a rate-distortion analysis for the various tested intra-prediction modes, and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines an amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as a bitrate (that is, a number of bits) used to produce the encoded block. Intra-prediction unit 46 may calculate ratios from the distortions and rates for the various encoded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block.
In addition, intra-prediction unit 46 may be configured to code depth blocks of a depth map using a depth modeling mode (DMM) . Mode select unit 40 may determine whether an available DMM mode produces better coding results than an intra-prediction mode and the other DMM modes, e.g., using rate-distortion optimization (RDO) . Data for a texture image corresponding to a depth map may be stored in reference frame memory 64. Motion estimation unit 42 and motion compensation unit 44 may also be configured to inter-predict depth blocks of a depth map.
After selecting an intra-prediction mode for a block (e.g., a conventional intra-prediction mode or one of the DMM modes) , intra-prediction unit 46 may provide information indicative of the selected intra-prediction mode for the block to entropy coding unit 56. Entropy coding unit 56 may encode the information indicating the selected intra-prediction mode. Video encoder 20 may include in the transmitted bitstream configuration data, which may include a plurality of intra-prediction mode index tables and a plurality of modified intra-prediction mode index tables (also referred to as codeword mapping tables) ,  definitions of encoding contexts for various blocks, and indications of a most probable intra-prediction mode, an intra-prediction mode index table, and a modified intra-prediction mode index table to use for each of the contexts.
Video encoder 20 forms a residual video block by subtracting the prediction data from mode select unit 40 from the original video block being coded. Summer 50 represents the component or components that perform this subtraction operation.
Transform processing unit 52 applies a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising residual transform coefficient values. Transform processing unit 52 may perform other transforms which are conceptually similar to DCT. Wavelet transforms, integer transforms, sub-band transforms or other types of transforms could also be used.
Transform processing unit 52 applies the transform to the residual block, producing a block of residual transform coefficients. The transform may convert the residual information from a pixel value domain to a transform domain, such as a frequency domain. Transform processing unit 52 may send the resulting transform coefficients to quantization unit 54. Quantization unit 54 quantizes the transform coefficients to further reduce bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. The degree of quantization may be modified by adjusting a quantization parameter. In some examples, quantization unit 54 may then perform a scan of the matrix including the quantized transform coefficients. Alternatively, entropy encoding unit 56 may perform the scan.
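The following minimal sketch illustrates uniform scalar quantization and its inverse, controlled by a quantization step size. It is a simplified stand-in for the actual HEVC quantizer, which uses integer arithmetic, scaling lists, and rounding offsets; the QP-to-step mapping shown is an approximation included only for illustration.

```python
def qp_to_step(qp):
    # The quantization step size roughly doubles for every increase of 6 in QP.
    return 2 ** ((qp - 4) / 6.0)

def quantize(coeffs, qstep):
    # Coarser steps discard more precision and therefore lower the bit rate.
    return [int(round(c / qstep)) for c in coeffs]

def dequantize(levels, qstep):
    # Decoder-side reconstruction simply rescales the transmitted levels.
    return [level * qstep for level in levels]
```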
Following quantization, entropy coding unit 56 entropy codes the quantized transform coefficients. For example, entropy coding unit 56 may perform context adaptive variable length coding (CAVLC) , context adaptive binary arithmetic coding (CABAC) , syntax-based context-adaptive binary arithmetic coding (SBAC) , probability interval partitioning entropy (PIPE)  coding or another entropy coding technique. In the case of context-based entropy coding, context may be based on neighboring blocks. Following the entropy coding by entropy coding unit 56, the encoded bitstream may be transmitted to another device (e.g., video decoder 30) or archived for later transmission or retrieval.
Inverse quantization unit 58 and inverse transform unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block. Motion compensation unit 44 may calculate a reference block by adding the residual block to a predictive block of one of the frames of reference frame memory 64. Motion compensation unit 44 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation. Summer 62 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 44 to produce a reconstructed video block for storage in reference frame memory 64. The reconstructed video block may be used by motion estimation unit 42 and motion compensation unit 44 as a reference block to inter-code a block in a subsequent video frame.
Other structural variations of the video encoder 20 can be used to encode the video stream. For example, a non-transform based encoder 20 can quantize the residual signal directly without the transform processing unit 52 for certain blocks or frames. In another implementation, an encoder 20 can have the quantization unit 54 and the inverse quantization unit 58 combined into a single unit.
FIG. 3 is a block diagram illustrating an example of video decoder 30 that may implement the techniques of this present application. In the example of FIG. 3, video decoder 30 includes an entropy decoding unit 70, motion compensation unit 72, intra-prediction unit 74, inverse quantization unit 76, inverse transformation unit 78, reference frame memory 82, and summer 80.  Video decoder 30 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 20 (FIG. 2) . Motion compensation unit 72 may generate prediction data based on motion vectors received from entropy decoding unit 70, while intra-prediction unit 74 may generate prediction data based on intra-prediction mode indicators received from entropy decoding unit 70.
During the decoding process, video decoder 30 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements from video encoder 20. Entropy decoding unit 70 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors or intra-prediction mode indicators, and other syntax elements. Entropy decoding unit 70 forwards the motion vectors and other syntax elements to motion compensation unit 72. Video decoder 30 may receive the syntax elements at the video slice level and/or the video block level.
When the video slice is coded as an intra-coded (I) slice, intra prediction unit 74 may generate prediction data for a video block of the current video slice based on a signaled intra prediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is coded as an inter-coded (i.e., B, P, or GPB) slice, motion compensation unit 72 produces predictive blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 70. The predictive blocks may be produced from one of the reference pictures within one of the reference picture lists. Video decoder 30 may construct the reference frame lists, List 0 and List 1, using default construction techniques based on reference pictures stored in reference frame memory 82.
Motion compensation unit 72 determines prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and uses the prediction information to produce the predictive blocks for the current video block being decoded. For example, motion  compensation unit 72 uses some of the received syntax elements to determine a prediction mode (e.g., intra-or inter-prediction) used to code the video blocks of the video slice, an inter-prediction slice type (e.g., B slice, P slice, or GPB slice) , construction information for one or more of the reference picture lists for the slice, motion vectors for each inter-encoded video block of the slice, inter-prediction status for each inter-coded video block of the slice, and other information to decode the video blocks in the current video slice.
Motion compensation unit 72 may also perform interpolation based on interpolation filters. Motion compensation unit 72 may use interpolation filters as used by video encoder 20 during encoding of the video blocks to calculate interpolated values for sub-integer pixels of reference blocks. In this case, motion compensation unit 72 may determine the interpolation filters used by video encoder 20 from the received syntax elements and use the interpolation filters to produce predictive blocks.
Data for a texture image corresponding to a depth map may be stored in reference frame memory 82. Motion compensation unit 72 may also be configured to inter-predict depth blocks of a depth map.
As will be appreciated by those in the art, the coding system 10 of FIG. 1A is suitable for implementing various video coding or compression techniques. Some video compression techniques, such as inter prediction, intra prediction, and loop filters, have demonstrated to be effective. Therefore, the video compression techniques have been adopted into various video coding standards, such as H. 264/AVC and H. 265/HEVC.
Various coding tools such as adaptive motion vector prediction (AMVP) and merge mode (MERGE) are used to predict motion vectors (MVs) and enhance inter prediction efficiency and, therefore, the overall video compression efficiency.
The MVs noted above are utilized in bi-prediction. In a bi-prediction operation, two prediction blocks are formed. One prediction block is formed  using a MV of list0 (referred to herein as MV0) . Another prediction block is formed using a MV of list1 (referred to herein as MV1) . The two prediction blocks are then combined (e.g., averaged) in order to form a single prediction signal (e.g., a prediction block or a predictor block) .
Other variations of the video decoder 30 can be used to decode the compressed bitstream. For example, the decoder 30 can produce the output video stream without the loop filtering unit. For example, a non-transform based decoder 30 can inverse-quantize the residual signal directly without the inverse-transform processing unit 78 for certain blocks or frames. In another implementation, the video decoder 30 can have the inverse-quantization unit 76 and the inverse-transform processing unit 78 combined into a single unit.
FIG. 4 is a schematic diagram of a network device 400 (e.g., a coding device) according to an embodiment of the disclosure. The network device 400 is suitable for implementing the disclosed embodiments as described herein. In an embodiment, the network device 400 may be a decoder such as video decoder 30 of FIG. 1A or an encoder such as video encoder 20 of FIG. 1A. In an embodiment, the network device 400 may be one or more components of the video decoder 30 of FIG. 1A or the video encoder 20 of FIG. 1A as described above.
The network device 400 comprises ingress ports 410 and receiver units (Rx) 420 for receiving data; a processor, logic unit, or central processing unit (CPU) 430 to process the data; transmitter units (Tx) 440 and egress ports 450 for transmitting the data; and a memory 460 for storing the data. The network device 400 may also comprise optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress ports 410, the receiver units 420, the transmitter units 440, and the egress ports 450 for egress or ingress of optical or electrical signals.
The processor 430 is implemented by hardware and software. The processor 430 may be implemented as one or more CPU chips, cores (e.g., as a  multi-core processor) , FPGAs, ASICs, and DSPs. The processor 430 is in communication with the ingress ports 410, receiver units 420, transmitter units 440, egress ports 450, and memory 460. The processor 430 comprises a coding module 470. The coding module 470 implements the disclosed embodiments described above. For instance, the coding module 470 implements, processes, prepares, or provides the various coding operations. The inclusion of the coding module 470 therefore provides a substantial improvement to the functionality of the network device 400 and effects a transformation of the network device 400 to a different state. Alternatively, the coding module 470 is implemented as instructions stored in the memory 460 and executed by the processor 430.
The memory 460 comprises one or more disks, tape drives, and solid-state drives and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory 460 may be volatile and/or non-volatile and may be read-only memory (ROM) , random access memory (RAM) , ternary content-addressable memory (TCAM) , and/or static random-access memory (SRAM) .
Fig. 5 is a simplified block diagram of an apparatus 500 that may be used as either or both of the source device 12 and the destination device 14 from Fig. 1A according to an exemplary embodiment. The apparatus 500 can implement techniques of this present application. The apparatus 500 can be in the form of a computing system including multiple computing devices, or in the form of a single computing device, for example, a mobile phone, a tablet computer, a laptop computer, a notebook computer, a desktop computer, and the like.
A processor 502 in the apparatus 500 can be a central processing unit. Alternatively, the processor 502 can be any other type of device, or multiple devices, capable of manipulating or processing information now-existing or hereafter developed. Although the disclosed implementations can be practiced with a single processor as shown, e.g., the processor 502, advantages in speed and efficiency can be achieved using more than one processor.
A memory 504 in the apparatus 500 can be a read-only memory (ROM) device or a random access memory (RAM) device in an implementation. Any other suitable type of storage device can be used as the memory 504. The memory 504 can include code and data 506 that is accessed by the processor 502 using a bus 512. The memory 504 can further include an operating system 508 and application programs 510, the application programs 510 including at least one program that permits the processor 502 to perform the methods described here. For example, the application programs 510 can include applications 1 through N, which further include a video coding application that performs the methods described here. The apparatus 500 can also include additional memory in the form of a secondary storage 514, which can, for example, be a memory card used with a mobile computing device. Because the video communication sessions may contain a significant amount of information, they can be stored in whole or in part in the secondary storage 514 and loaded into the memory 504 as needed for processing.
The apparatus 500 can also include one or more output devices, such as a display 518. The display 518 may be, in one example, a touch sensitive display that combines a display with a touch sensitive element that is operable to sense touch inputs. The display 518 can be coupled to the processor 502 via the bus 512. Other output devices that permit a user to program or otherwise use the apparatus 500 can be provided in addition to or as an alternative to the display 518. When the output device is or includes a display, the display can be implemented in various ways, including by a liquid crystal display (LCD) , a cathode-ray tube (CRT) display, a plasma display or light emitting diode (LED) display, such as an organic LED (OLED) display.
The apparatus 500 can also include or be in communication with an image-sensing device 520, for example a camera, or any other image-sensing  device 520 now existing or hereafter developed that can sense an image such as the image of a user operating the apparatus 500. The image-sensing device 520 can be positioned such that it is directed toward the user operating the apparatus 500. In an example, the position and optical axis of the image-sensing device 520 can be configured such that the field of vision includes an area that is directly adjacent to the display 518 and from which the display 518 is visible.
The apparatus 500 can also include or be in communication with a sound-sensing device 522, for example a microphone, or any other sound-sensing device now existing or hereafter developed that can sense sounds near the apparatus 500. The sound-sensing device 522 can be positioned such that it is directed toward the user operating the apparatus 500 and can be configured to receive sounds, for example, speech or other utterances, made by the user while the user operates the apparatus 500.
Although FIG. 5 depicts the processor 502 and the memory 504 of the apparatus 500 as being integrated into a single unit, other configurations can be utilized. The operations of the processor 502 can be distributed across multiple machines (each machine having one or more processors) that can be coupled directly or across a local area or other network. The memory 504 can be distributed across multiple machines such as a network-based memory or memory in multiple machines performing the operations of the apparatus 500. Although depicted here as a single bus, the bus 512 of the apparatus 500 can be composed of multiple buses. Further, the secondary storage 514 can be directly coupled to the other components of the apparatus 500 or can be accessed via a network and can comprise a single integrated unit such as a memory card or multiple units such as multiple memory cards. The apparatus 500 can thus be implemented in a wide variety of configurations.
The present disclosure relates to inter-picture prediction. Inter-picture prediction makes use of the temporal correlation between pictures in order to derive a motion-compensated prediction (MCP) for a block of image samples.
For block-based MCP, a video picture is divided into rectangular blocks. Assuming homogeneous motion inside one block and that moving objects are larger than one block, for each block, a corresponding block in a previously decoded picture can be found that serves as a predictor. The general concept of MCP based on a translational motion model is illustrated in Fig. 6. Using a translational motion model, the position of the block in a previously decoded picture is indicated by a motion vector (Δx, Δy) , where Δx specifies the horizontal and Δy specifies the vertical displacement relative to the position of the current block. The motion vector (Δx, Δy) could be of fractional sample accuracy to more accurately capture the movement of the underlying object. Interpolation is applied on the reference pictures to derive the prediction signal when the corresponding motion vector has fractional sample accuracy. The previously decoded picture is referred to as the reference picture and indicated by a reference index Δt to a reference picture list. These translational motion model parameters, i.e. motion vectors and reference indices, are further referred to as motion data. Two kinds of inter-picture prediction are allowed in modern video coding standards, namely uni-prediction and bi-prediction.
In case of bi-prediction, two sets of motion data, i.e., (Δx₀, Δy₀, Δt₀) and (Δx₁, Δy₁, Δt₁), are used to generate two MCPs (possibly from different pictures), which are then combined to get the final MCP. By default, this is done by averaging, but in the case of weighted prediction, different weights can be applied to the two MCPs, e.g., to compensate for scene fade-outs. The reference pictures that can be used in bi-prediction are stored in two separate lists, namely list 0 and list 1. In order to limit the memory bandwidth in slices allowing bi-prediction, the HEVC standard restricts PUs with 4×8 and 8×4 luma prediction blocks to use uni-prediction only. Motion data is derived at the encoder using a motion estimation process. Motion estimation is not specified within video standards, so different encoders can utilize different complexity-quality tradeoffs in their implementations.
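A minimal sketch of how the two MCPs could be combined, assuming equal weights for plain averaging and explicit weights to model weighted prediction; the integer rounding used by the actual standard is omitted.

```python
def combine_mcp(mcp0, mcp1, w0=0.5, w1=0.5):
    """Combine two motion-compensated predictions sample by sample.

    With the default weights this is plain averaging; unequal weights model
    weighted prediction, e.g., to compensate for scene fade-outs.
    """
    return [[w0 * a + w1 * b for a, b in zip(row0, row1)]
            for row0, row1 in zip(mcp0, mcp1)]
```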
An overview block diagram of the HEVC inter-picture prediction is shown in Fig. 7. The motion data of a block is correlated with the neighboring blocks. To exploit this correlation, motion data is not directly coded in the bitstream but predictively coded based on neighboring motion data. In HEVC, two concepts are used for that. The predictive coding of the motion vectors was improved in HEVC by introducing a new tool called advanced motion vector prediction (AMVP) where the best predictor for each motion block is signaled to the decoder. In addition, a new technique called inter-prediction block merging derives all motion data of a block from the neighboring blocks replacing the direct and skip modes in H. 264/AVC.
Different kinds of inter prediction methods are implemented in the motion data coding module. Generally, the methods are called as inter prediction modes. Several inter prediction modes are introduced as following.
One of the inter prediction modes is AMVP (Advanced Motion Vector Prediction). As in previous video coding standards, the HEVC motion vectors are coded in terms of horizontal (x) and vertical (y) components as a difference to a so-called motion vector predictor (MVP). The calculation of both motion vector difference (MVD) components is shown in Eqs. (1.1) and (1.2).
MVD_X = Δx − MVP_X    (1.1)
MVD_Y = Δy − MVP_Y    (1.2)
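A hedged sketch of the MVD computation in Eqs. (1.1) and (1.2); representing a motion vector as a simple (Δx, Δy) tuple is an assumption made for illustration.

```python
def motion_vector_difference(mv, mvp):
    """Return (MVD_X, MVD_Y) for a motion vector and its predictor, per Eqs. (1.1)-(1.2)."""
    return mv[0] - mvp[0], mv[1] - mvp[1]

# Example: mv = (5, -3) and mvp = (4, -1) give an MVD of (1, -2).
```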
Motion vectors of the current block are usually correlated with the motion vectors of neighboring blocks in the current picture or in the earlier coded pictures. This is because neighboring blocks are likely to correspond to the same moving object with similar motion and the motion of the object is not likely to change abruptly over time. Consequently, using the motion vectors in neighboring blocks as predictors reduces the size of the signaled motion vector difference. The MVPs are usually derived from already decoded motion vectors from spatially neighboring blocks or from temporally neighboring blocks in the co-located picture. In some cases, the zero motion vector can also be used as MVP. In H.264/AVC, this is done by taking a component-wise median of three spatially neighboring motion vectors. Using this approach, no signaling of the predictor is required. Temporal MVPs from a co-located picture are only considered in the so-called temporal direct mode of H.264/AVC. The H.264/AVC direct modes are also used to derive motion data other than the motion vectors.
In HEVC, the approach of implicitly deriving the MVP was replaced by a technique known as motion vector competition, which explicitly signals which MVP from a list of MVPs is used for motion vector derivation. The variable coding quadtree block structure in HEVC can result in one block having several neighboring blocks with motion vectors as potential MVP candidates. The initial design of AMVP includes five MVPs from three different classes of predictors: three motion vectors from spatial neighbors, the median of the three spatial predictors, and a scaled motion vector from a co-located, temporally neighboring block. Furthermore, the list of predictors was modified by reordering to place the most probable motion predictor in the first position and by removing redundant candidates to assure minimal signaling overhead. Then, significant simplifications of the AMVP design were developed, such as removing the median predictor, reducing the number of candidates in the list from five to two, fixing the candidate order in the list and reducing the number of redundancy checks. The final design of the AMVP candidate list construction includes the following two MVP candidates: a. up to two spatial candidate MVPs that are derived from five spatial neighboring blocks; b. one temporal candidate MVP derived from two temporal, co-located blocks when both spatial candidate MVPs are not available or they are identical; c. zero motion vectors when the spatial, the temporal or both candidates are not available.
As already mentioned, two spatial MVP candidates A and B are derived from five spatially neighboring blocks which are shown in the right part of Fig. 8. The locations of the spatial candidate blocks are the same for both AMVP and inter-prediction block merging. The derivation process flow for the two spatial candidates A and B is depicted in Fig. 9. For candidate A, motion data from the two blocks A0 and A1 at the bottom left corner is taken into account in a two-pass approach. In the first pass, it is checked whether any of the candidate blocks contain a reference index that is equal to the reference index of the current block. The first motion vector found will be taken as candidate A. When all reference indices from A0 and A1 are pointing to a different reference picture than the reference index of the current block, the associated motion vector cannot be used as is. Therefore, in a second pass, the motion vectors need to be scaled according to the temporal distances between the candidate reference picture and the current reference picture. Eq. (1.3) shows how the candidate motion vector mv_cand is scaled according to a scale factor. ScaleFactor is calculated based on the temporal distance, td, between the current picture and the reference picture of the candidate block and the temporal distance, tb, between the current picture and the reference picture of the current block. The temporal distance is expressed in terms of the difference between the picture order count (POC) values which define the display order of the pictures. The scaling operation is basically the same scheme that is used for the temporal direct mode in H.264/AVC. This factoring allows pre-computation of ScaleFactor at slice level since it only depends on the reference picture list structure signaled in the slice header. Note that the MV scaling is only performed when the current reference picture and the candidate reference picture are both short-term reference pictures. Parameter td is defined as the POC difference between the co-located picture and the reference picture of the co-located candidate block.
mv = sign(mv_cand · ScaleFactor) · ((|mv_cand · ScaleFactor| + 2^7) >> 8)    (1.3)
ScaleFactor = clip(−2^12, 2^12 − 1, (tb · tx + 2^5) >> 6)    (1.4)
tx = (2^14 + (|td| >> 1)) / td    (1.5)
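A hedged sketch of the scaling in Eqs. (1.3)–(1.5), applied to one motion vector component. The helper names are illustrative assumptions, and corner cases of the specification's integer arithmetic (e.g., the division behaviour for negative td) are simplified.

```python
def clip(lo, hi, value):
    return max(lo, min(hi, value))

def sign(value):
    return -1 if value < 0 else 1

def scale_mv_component(mv_cand, tb, td):
    """Scale one candidate motion vector component following Eqs. (1.3)-(1.5).

    tb: POC distance between the current picture and the current reference.
    td: POC distance associated with the candidate block and its reference.
    """
    tx = (2 ** 14 + (abs(td) >> 1)) // td                                # Eq. (1.5)
    scale_factor = clip(-2 ** 12, 2 ** 12 - 1, (tb * tx + 2 ** 5) >> 6)  # Eq. (1.4)
    product = mv_cand * scale_factor
    return sign(product) * ((abs(product) + 2 ** 7) >> 8)                # Eq. (1.3)
```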
For candidate B, the candidates B0 to B2 are checked sequentially in the same way as A0 and A1 were checked in the first pass. The second pass, however, is only performed when blocks A0 and A1 do not contain any motion information, i.e. are not available or coded using intra-picture prediction. Then, candidate A is set equal to the non-scaled candidate B, if found, and candidate B is set equal to a second, non-scaled or scaled variant of candidate B. The second pass searches for non-scaled as well as for scaled MVs derived from candidates B0 to B2. Overall, this design allows A0 and A1 to be processed independently from B0, B1, and B2. The derivation of B only needs to be aware of the availability of A0 and A1 in order to search for a scaled or an additional non-scaled MV derived from B0 to B2. This dependency is acceptable given that it significantly reduces the complex motion vector scaling operations for candidate B. Reducing the number of motion vector scaling operations represents a significant complexity reduction in the motion vector predictor derivation process.
In HEVC, the block to the bottom right and at the center of the current block have been determined to be the most suitable to provide a good temporal motion vector predictor (TMVP) . These candidates are illustrated in the left part of Fig. 8 where C0 represents the bottom right neighbor and C1 represents the center block. Here again, motion data of C0 is considered first and, if not available, motion data from the co-located candidate block at the center is used to derive the temporal MVP candidate C. The motion data of C0 is also considered as not being available when the associated PU belongs to a CTU beyond the current CTU row. This minimizes the memory bandwidth requirements to store the co-located motion data. In contrast to the spatial MVP  candidates, where the motion vectors may refer to the same reference picture, motion vector scaling is mandatory for the TMVP. Hence, the same scaling operation as for the spatial MVPs is used.
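A minimal sketch of the temporal candidate selection described above; the parameter names and the boolean flag indicating whether C0 lies in the current CTU row are assumptions for illustration.

```python
def temporal_mvp_candidate(c0_motion, c1_motion, c0_in_current_ctu_row):
    """Select the temporal candidate C: prefer the bottom-right block C0 and
    fall back to the centre block C1. C0 is treated as unavailable when its
    PU lies beyond the current CTU row, which limits how much co-located
    motion data has to be kept available."""
    if c0_motion is not None and c0_in_current_ctu_row:
        return c0_motion
    return c1_motion  # may itself be None when no temporal candidate exists
```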
While the temporal direct mode in H. 264/AVC always refers to the first reference picture in the second reference picture list, list 1, and is only allowed in bi-predictive slices, HEVC offers the possibility to indicate for each picture which reference picture is considered as the co-located picture. This is done by signaling in the slice header the co-located reference picture list and reference picture index as well as requiring that these syntax elements in all slices in a picture should specify the same reference picture.
Since the temporal MVP candidate introduces additional dependencies, it might be desirable to disable its usage for error robustness reasons. In H.264/AVC there is the possibility to disable the temporal direct mode for bi-predictive slices in the slice header (direct_spatial_mv_pred_flag). HEVC syntax extends this signaling by allowing the TMVP to be disabled at the sequence level or at the picture level (sps/slice_temporal_mvp_enabled_flag). Although the flag is signaled in the slice header, it is a requirement of bitstream conformance that its value shall be the same for all slices in one picture. Since the signaling of the picture-level flag depends on the SPS flag, signaling it in the PPS would introduce a parsing dependency between SPS and PPS. Another advantage of this slice header signaling is that if only the value of this flag is to be changed and no other parameter in the PPS, there is no need to transmit a second PPS.
In general, motion data signaling in HEVC is similar to that in H.264/AVC. An inter-picture prediction syntax element, inter_pred_idc, signals whether reference list 0, list 1, or both are used. For each MCP obtained from one reference picture list, the corresponding reference picture (Δt) is signaled by an index to the reference picture list, ref_idx_l0/1, and the MV (Δx, Δy) is represented by an index to the MVP, mvp_l0/1_flag, and its MVD. A newly introduced flag in the slice header, mvd_l1_zero_flag, indicates whether the MVD for the second reference picture list is equal to zero and therefore not signaled in the bitstream. When the motion vector is fully reconstructed, a final clipping operation assures that the values of each component of the final motion vector will always be in the range of −2^15 to 2^15 − 1, inclusive.
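A minimal sketch of motion vector reconstruction with the final clipping step described above, assuming the predictor and the MVD are given as integer component pairs.

```python
def reconstruct_mv(mvp, mvd):
    """Add the signaled MVD to the predictor and clip each component to the
    range [-2**15, 2**15 - 1] as described above."""
    lo, hi = -2 ** 15, 2 ** 15 - 1
    return tuple(max(lo, min(hi, p + d)) for p, d in zip(mvp, mvd))
```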
Another one of the inter prediction modes is Inter-picture Prediction Block Merging. The AMVP list only contains motion vectors for one reference list while a merge candidate contains all motion data including the information whether one or two reference picture lists are used as well as a reference index and a motion vector for each list. Overall, the merge candidate list is constructed based on the following candidates: a. up to four spatial merge candidates that are derived from five spatial neighboring blocks; b. one temporal merge candidate derived from two temporal, co-located blocks; c. additional merge candidates including combined bi-predictive candidates and zero motion vector candidates.
The first candidates in the merge candidate list are the spatial neighbors. Up to four candidates are inserted in the merge list by sequentially checking A1, B1, B0, A0 and B2, in that order, according to the right part of Fig. 8.
Instead of just checking whether a neighboring block is available and contains motion information, some additional redundancy checks are performed before taking all the motion data of the neighboring block as a merge candidate. These redundancy checks can be divided into two categories for two different purposes: a. avoid having candidates with redundant motion data in the list; b. prevent merging two partitions that could be expressed by other means which would create redundant syntax.
When N is the number of spatial merge candidates, a complete redundancy check would consist of
N · (N − 1) / 2
motion data comparisons. In case of the five potential spatial merge candidates, ten motion data comparisons would be needed to assure that all candidates in the merge list have different motion  data. During the development of HEVC, the checks for redundant motion data have been reduced to a subset in a way that the coding efficiency is kept while the comparison logic is significantly reduced. In the final design, no more than two comparisons are performed per candidate resulting in five overall comparisons. Given the order of {A1, B1, B0, A0, B2} , B0 only checks B1, A0 only A1 and B2 only A1 and B1. In an embodiment of the partitioning redundancy check, the bottom PU of a 2N×N partitioning is merged with the top one by choosing candidate B1. This would result in one CU with two PUs having the same motion data which could be equally signaled as a 2N×2N CU. Overall, this check applies for all second PUs of the rectangular and asymmetric partitions 2N×N, 2N×nU, 2N×nD, N×2N, nR×2N and nL×2N. It is noted that for the spatial merge candidates, only the redundancy checks are performed and the motion data is copied from the candidate blocks as it is. Hence, no motion vector scaling is needed here.
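A hedged sketch of the spatial merge candidate selection with the reduced redundancy checks described above. Representing motion data as comparable dictionary entries keyed by the position labels is a simplifying assumption, and availability checks beyond a None entry are omitted.

```python
# Reduced redundancy checks: which earlier positions each candidate is
# compared against before being added (five comparisons in total).
PRUNING_CHECKS = {"A1": [], "B1": ["A1"], "B0": ["B1"],
                  "A0": ["A1"], "B2": ["A1", "B1"]}

def spatial_merge_candidates(motion_data, max_spatial=4):
    """Build the spatial part of the merge candidate list.

    motion_data maps the position labels A1, B1, B0, A0, B2 to motion data
    (None when the block is unavailable or intra coded).
    """
    merge_list = []
    for pos in ("A1", "B1", "B0", "A0", "B2"):
        cand = motion_data.get(pos)
        if cand is None:
            continue
        # Skip a candidate whose motion data duplicates a checked position.
        if any(cand == motion_data.get(other) for other in PRUNING_CHECKS[pos]):
            continue
        merge_list.append(cand)
        if len(merge_list) == max_spatial:
            break  # B2 is only reached when fewer than four candidates were added
    return merge_list
```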
The derivation of the motion vectors for the temporal merge candidate is the same as for the TMVP. Since a merge candidate comprises all motion data and the TMVP is only one motion vector, the derivation of the whole motion data only depends on the slice type. For bi-predictive slices, a TMVP is derived for each reference picture list. Depending on the availability of the TMVP for each list, the prediction type is set to bi-prediction or to the list for which the TMVP is available. All associated reference picture indices are set equal to zero. Consequently for uni-predictive slices, only the TMVP for list 0 is derived together with the reference picture index equal to zero.
When at least one TMVP is available and the temporal merge candidate is added to the list, no redundancy check is performed. This makes the merge list construction independent of the co-located picture which improves error resilience. Consider the case where the temporal merge candidate would be redundant and therefore not included in the merge candidate list. In the event of a lost co-located picture, the decoder could not derive the temporal candidates  and hence not check whether it would be redundant. The indexing of all subsequent candidates would be affected by this.
For parsing robustness reasons, the length of the merge candidate list is fixed. After the spatial and the temporal merge candidates have been added, it can happen that the list has not yet reached the fixed length. In order to compensate for the coding efficiency loss that comes along with the non-length-adaptive list index signaling, additional candidates are generated. Depending on the slice type, up to two kinds of candidates are used to fully populate the list: a. Combined bi-predictive candidates; b. Zero motion vector candidates.
In bi-predictive slices, additional candidates can be generated based on the existing ones by combining the reference picture list 0 motion data of one candidate with the list 1 motion data of another one. This is done by copying Δx0, Δy0, Δt0 from one candidate, e.g. the first one, and Δx1, Δy1, Δt1 from another, e.g. the second one. The different combinations are predefined and given in Table 1.1.
Table 1.1
Figure PCTCN2019094218-appb-000003
When the list is still not full after adding the combined bi-predictive candidates, or for uni-predictive slices, zero motion vector candidates are calculated to complete the list. All zero motion vector candidates have one zero displacement motion vector for uni-predictive slices and two for bi-predictive slices. The reference indices are set equal to zero and are incremented by one for each additional candidate until the maximum number of reference indices is reached. If that is the case and there are still additional candidates missing, a reference index equal to zero is used to create these. For all the additional  candidates, no redundancy checks are performed as it turned out that omitting these checks will not introduce a coding efficiency loss.
For each PU coded in inter-picture prediction mode, a so-called merge_flag indicates that block merging is used to derive the motion data. The merge_idx further determines the candidate in the merge list that provides all the motion data needed for the MCP. Besides this PU-level signaling, the number of candidates in the merge list is signaled in the slice header. Since the default value is five, it is represented as a difference to five (five_minus_max_num_merge_cand) . That way, using five candidates is signaled with a short codeword (value 0), whereas using only one candidate is signaled with a longer codeword (value 4). Regarding the impact on the merge candidate list construction process, the overall process remains the same although it terminates after the list contains the maximum number of merge candidates. In the initial design, the maximum value for the merge index coding was given by the number of available spatial and temporal candidates in the list. When e.g. only two candidates are available, the index can be efficiently coded as a flag. But, in order to parse the merge index, the whole merge candidate list has to be constructed to know the actual number of candidates. Assuming unavailable neighboring blocks due to transmission errors, it would no longer be possible to parse the merge index.
A crucial application of the block merging concept in HEVC is its combination with a skip mode. In previous video coding standards, the skip mode was used to indicate for a block that the motion data is inferred instead of explicitly signaled and that the prediction residual is zero, i.e. no transform coefficients are transmitted. In HEVC, at the beginning of each CU in an inter-picture prediction slice, a skip_flag is signaled that implies the following: a. the CU only contains one PU (2N×2N partition type) ; b. the merge mode is used to derive the motion data (merge_flag equal to 1) ; c. no residual data is present in the bitstream.
A parallel merge estimation level was introduced in HEVC that indicates the region in which merge candidate lists can be independently derived by checking whether a candidate block is located in that merge estimation region (MER) . A candidate block that is in the same MER is not included in the merge candidate list. Hence, its motion data does not need to be available at the time of the list construction. When this level is e.g. 32, all prediction units in a 32×32 area can construct the merge candidate list in parallel, since all merge candidates that are in the same 32×32 MER are not inserted in the list. As shown in Fig. 10, there is a CTU partitioning with seven CUs and ten PUs. All potential merge candidates for the first PU0 are available because they are outside the first 32×32 MER. For the second MER, the merge candidate lists of PUs 2-6 cannot include motion data from these PUs when the merge estimation inside that MER should be independent. Therefore, when looking at PU5 for example, no merge candidates are available and hence none are inserted in the merge candidate list. In that case, the merge list of PU5 consists only of the temporal candidate (if available) and zero MV candidates. In order to enable an encoder to trade off parallelism and coding efficiency, the parallel merge estimation level is adaptive and signaled as log2_parallel_merge_level_minus2 in the picture parameter set.
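A minimal Python sketch of the MER membership test implied by the above follows; the coordinate convention and the example values are illustrative assumptions, and log2_parallel_merge_level corresponds to the signaled log2_parallel_merge_level_minus2 plus 2.

```python
# A candidate block lying in the same merge estimation region (MER) as the
# current PU is excluded from the merge list, so all PUs inside one MER can
# build their lists in parallel.
def in_same_mer(cur_x, cur_y, cand_x, cand_y, log2_parallel_merge_level):
    return ((cur_x >> log2_parallel_merge_level) == (cand_x >> log2_parallel_merge_level)
            and (cur_y >> log2_parallel_merge_level) == (cand_y >> log2_parallel_merge_level))

# Example: with a 32x32 MER (log2 level = 5), a candidate at (30, 40) lies in
# the same MER as a PU at (10, 35) and is therefore not inserted in the list.
print(in_same_mer(10, 35, 30, 40, 5))  # True
```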
Another one of the inter prediction modes is sub-CU based motion vector prediction. During the development of the new video coding technique with QTBT, each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. The alternative temporal motion vector prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture. In the spatial-temporal motion vector prediction (STMVP) method, the motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighboring motion vectors.
To preserve more accurate motion field for sub-CU motion prediction, the motion compression for the reference frames is currently disabled.
Sub-CU based motion vector prediction includes alternative temporal motion vector prediction, spatial-temporal motion vector prediction, combination with the merge mode, and pattern matched motion vector derivation, which are introduced in the following:
In the alternative temporal motion vector prediction (ATMVP) method, the temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in Fig. 11, the sub-CUs are square N×N blocks (N is set to 4 by default) .
ATMVP predicts the motion vectors of the sub-CUs within a CU in two steps. The first step is to identify the corresponding block in a reference picture with a so-called temporal vector. The reference picture is called the motion source picture. The second step is to split the current CU into sub-CUs and obtain the motion vectors as well as the reference indices of each sub-CU from the block corresponding to each sub-CU, as shown in Figure 6.
In the first step, a reference picture and the corresponding block is determined by the motion information of the spatial neighboring blocks of the current CU. To avoid the repetitive scanning process of neighboring blocks, the first merge candidate in the merge candidate list of the current CU is used. The first available motion vector as well as its associated reference index are set to be the temporal vector and the index to the motion source picture. This way, in ATMVP, the corresponding block may be more accurately identified, compared with TMVP, wherein the corresponding block (sometimes called collocated block) is always in a bottom-right or center position relative to the current CU.
In the second step, a corresponding block of the sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinate of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as TMVP of HEVC, wherein motion scaling and other procedures apply. For example, the decoder checks whether the low-delay condition (i.e. the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) is fulfilled and possibly uses motion vector MVx (the motion vector corresponding to reference picture list X) to predict motion vector MVy (with X being equal to 0 or 1 and Y being equal to 1-X) for each sub-CU.
In the spatial-temporal motion vector prediction method, the motion vectors of the sub-CUs are derived recursively, following raster scan order. As shown in Fig. 12, consider an 8×8 CU which contains four 4×4 sub-CUs A, B, C, and D. The neighboring 4×4 blocks in the current frame are labelled as a, b, c, and d.
The motion derivation for sub-CU A starts by identifying its two spatial neighbors. The first neighbor is the N×N block above sub-CU A (block c) . If this block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c) . The second neighbor is a block to the left of sub-CU A (block b) . If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b) . The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame for a given list. Next, the temporal motion vector predictor (TMVP) of sub-block A is derived by following the same procedure of TMVP derivation as specified in HEVC. The motion information of the collocated block at location D is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
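The final averaging step above can be sketched as follows in Python; the (x, y) motion vector representation and the surrounding scaling/derivation steps are assumed to have been performed already, so only the per-list averaging is shown.

```python
# STMVP final step: average the available (already scaled) motion vectors of
# the above neighbor, the left neighbor and the TMVP for one reference list.
def stmvp_average(mv_list):
    """mv_list: list of scaled (x, y) motion vectors for one reference list
    (length 0..3). Returns the averaged MV, or None if no MV is available."""
    if not mv_list:
        return None
    n = len(mv_list)
    return (sum(mv[0] for mv in mv_list) // n,
            sum(mv[1] for mv in mv_list) // n)

# e.g. above neighbor, left neighbor and TMVP all available for list 0:
print(stmvp_average([(4, -8), (6, -6), (2, -10)]))  # (4, -8)
```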
In the combination with the merge mode, the sub-CU modes are enabled as additional merge candidates and there is no additional syntax element required to signal the modes. Two additional merge candidates are added to the merge candidate list of each CU to represent the ATMVP mode and the STMVP mode. Up to seven merge candidates are used if the sequence parameter set indicates that ATMVP and STMVP are enabled. The encoding logic of the additional merge candidates is the same as for the merge candidates in HM, which means that for each CU in a P or B slice, two more RD checks are needed for the two additional merge candidates.
Pattern matched motion vector derivation (PMMVD) mode is based on Frame-Rate Up Conversion (FRUC) techniques. With this mode, motion information of a block is not signaled but derived at decoder side.
A FRUC flag is signaled for a CU when its merge flag is true. When the FRUC flag is false, a merge index is signaled and the regular merge mode is used. When the FRUC flag is true, an additional FRUC mode flag is signaled to indicate which method (bilateral matching or template matching) is to be used to derive motion information for the block.
At the encoder side, the decision on whether to use FRUC merge mode for a CU is based on RD cost selection, as done for a normal merge candidate. That is, the two matching modes (bilateral matching and template matching) are both checked for a CU by using RD cost selection. The one leading to the minimal cost is further compared to other CU modes. If a FRUC matching mode is the most efficient one, the FRUC flag is set to true for the CU and the related matching mode is used.
The motion derivation process in FRUC merge mode has two steps. A CU-level motion search is performed first, followed by a sub-CU level motion refinement. At the CU level, an initial motion vector is derived for the whole CU based on bilateral matching or template matching. First, a list of MV candidates is generated and the candidate which leads to the minimum matching cost is selected as the starting point for further CU level refinement. Then a local search based on bilateral matching or template matching around the starting point is performed and the MV that results in the minimum matching cost is taken as the MV for the whole CU. Subsequently, the motion information is further refined at the sub-CU level with the derived CU motion vectors as the starting points.
For example, the following derivation process is performed for a W×H CU motion information derivation. At the first stage, the MV for the whole W×H CU is derived. At the second stage, the CU is further split into M×M sub-CUs. The value of M is calculated as in Eq. (1.8) , where D is a predefined splitting depth which is set to 3 by default in the JEM. Then the MV for each sub-CU is derived.
M = max {4, min {W/2^D, H/2^D} }      (1.8)
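A one-line Python illustration of Eq. (1.8) is given below; the function name is an assumption and the equation itself is reconstructed from the surrounding description.

```python
# Sub-CU size for the second FRUC stage: a W x H CU is split into M x M
# sub-CUs, with splitting depth D (3 by default).
def fruc_sub_cu_size(w, h, d=3):
    return max(4, min(w >> d, h >> d))

print(fruc_sub_cu_size(64, 32))    # 4   (min(8, 4) = 4)
print(fruc_sub_cu_size(128, 128))  # 16
```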
As shown in the Fig. 13, the bilateral matching is used to derive motion information of the current CU by finding the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures. Under the assumption of continuous motion trajectory, the motion vectors MV0 and MV1 pointing to the two reference blocks shall be proportional to the temporal distances, i.e., TD0 and TD1, between the current picture and the two reference pictures. When the current picture is temporally between the two reference pictures and the temporal distance from the current picture to the two reference pictures is the same, the bilateral matching becomes mirror based bi-directional MV.
In the bilateral matching merge mode, bi-prediction is always applied since the motion information of a CU is derived based on the closest match  between two blocks along the motion trajectory of the current CU in two different reference pictures. There is no such limitation for the template matching merge mode. In the template matching merge mode, the encoder can choose among uni-prediction from list0, uni-prediction from list1 or bi-prediction for a CU. The selection is based on a template matching cost as follows:
If costBi <= factor *min (cost0, cost1)
bi-prediction is used;
Otherwise, if cost0 <= cost1
uni-prediction from list0 is used;
Otherwise,
uni-prediction from list1 is used;
where cost0 is the SAD of list0 template matching, cost1 is the SAD of list1 template matching and costBi is the SAD of bi-prediction template matching. The value of factor is equal to 1.25, which means that the selection process is biased toward bi-prediction. The inter prediction direction selection is only applied to the CU-level template matching process.
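The selection rule above transcribes directly into the following Python sketch; the function name and example cost values are illustrative.

```python
# Template-matching prediction-direction selection with factor = 1.25,
# biasing the decision toward bi-prediction.
def select_prediction_direction(cost0, cost1, cost_bi, factor=1.25):
    if cost_bi <= factor * min(cost0, cost1):
        return "bi-prediction"
    if cost0 <= cost1:
        return "uni-prediction from list0"
    return "uni-prediction from list1"

print(select_prediction_direction(cost0=100, cost1=120, cost_bi=124))  # bi-prediction
print(select_prediction_direction(cost0=100, cost1=120, cost_bi=130))  # uni-prediction from list0
```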
As shown in Fig. 14, template matching is used to derive motion information of the current CU by finding the closest match between a template (top and/or left neighbouring blocks of the current CU) in the current picture and a block (same size as the template) in a reference picture. Besides the aforementioned FRUC merge mode, template matching is also applied to the AMVP mode. With the template matching method, a new candidate is derived. If the newly derived candidate by template matching is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list and then the list size is set to two (meaning the second existing AMVP candidate is removed) . When applied to the AMVP mode, only the CU level search is applied.
The MV candidate set at CU level consists of: a. Original AMVP candidates if the current CU is in AMVP mode; b. all merge candidates; c. several MVs in the interpolated MV field; d. top and left neighbouring motion vectors.
It is noted that the interpolated MV field mentioned above is generated before coding a picture for the whole picture based on unilateral ME. Then the motion field may be used later as CU level or sub-CU level MV candidates. First, the motion field of each reference picture in both reference lists is traversed at the 4×4 block level. For each 4×4 block, if the motion associated with the block passes through a 4×4 block in the current picture, as shown in Fig. 15, and the block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current picture according to the temporal distances TD0 and TD1 (the same way as that of MV scaling of TMVP in HEVC) and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4×4 block, the block's motion is marked as unavailable in the interpolated motion field.
When using bilateral matching, each valid MV of a merge candidate is used as an input to generate a MV pair with the assumption of bilateral matching. For example, one valid MV of a merge candidate is (MVa, refa) at reference list A. Then the reference picture refb of its paired bilateral MV is found in the other reference list B so that refa and refb are temporally at different sides of the current picture. If such a refb is not available in reference list B, refb is determined as a reference which is different from refa and its temporal distance to the current picture is the minimal one in list B. After refb is determined, MVb is derived by scaling MVa based on the temporal distance between the current picture and refa, refb.
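A simplified Python sketch of the MV pair derivation described above follows. It scales MVa by the ratio of signed POC distances; the exact fixed-point scaling and clipping of the reference software are not reproduced, and the function name and values are assumptions.

```python
# Derive the paired bilateral MV: MVb is MVa scaled according to the POC
# distances between the current picture and refa/refb. Signed distances keep
# the mirrored direction when refa and refb lie on opposite temporal sides.
def derive_bilateral_pair(mva, poc_cur, poc_refa, poc_refb):
    td_a = poc_cur - poc_refa
    td_b = poc_cur - poc_refb
    scale = td_b / td_a
    return (round(mva[0] * scale), round(mva[1] * scale))

# MVa = (8, -4) pointing to a reference 2 pictures in the past; refb lies
# 2 pictures in the future, so MVb is the mirrored vector (-8, 4).
print(derive_bilateral_pair((8, -4), poc_cur=10, poc_refa=8, poc_refb=12))
```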
Four MVs from the interpolated MV field are also added to the CU level candidate list. More specifically, the interpolated MVs at the position (0, 0) , (W/2, 0) , (0, H/2) and (W/2, H/2) of the current CU are added.
When FRUC is applied in AMVP mode, the original AMVP candidates are also added to CU level MV candidate set.
At the CU level, up to 15 MVs for AMVP CUs and up to 13 MVs for merge CUs are added to the candidate list.
The MV candidate set at sub-CU level consists of: a. an MV determined from a CU-level search; b. top, left, top-left and top-right neighbouring MVs; c. scaled versions of collocated MVs from reference pictures; d. up to 4 ATMVP candidates; e. up to 4 STMVP candidates.
The scaled MVs from reference pictures are derived as follows. All the reference pictures in both lists are traversed. The MVs at a collocated position of the sub-CU in a reference picture are scaled to the reference of the starting CU-level MV.
ATMVP and STMVP candidates are limited to the first four.
At the sub-CU level, up to 17 MVs are added to the candidate list.
Motion vectors can be refined by different methods in combination with the different inter prediction modes.
MV refinement is a pattern based MV search with the criterion of bilateral matching cost or template matching cost. In the current development, two search patterns are supported: an unrestricted center-biased diamond search (UCBDS) for MV refinement at the CU level and an adaptive cross search for MV refinement at the sub-CU level. For both CU and sub-CU level MV refinement, the MV is directly searched at quarter luma sample MV accuracy, and this is followed by one-eighth luma sample MV refinement. The search range of MV refinement for the CU and sub-CU steps is set equal to 8 luma samples.
In bi-prediction operation, for the prediction of one block region, two prediction blocks, formed using a MV of list0 and a MV of list1, respectively, are combined to form a single prediction signal. In the decoder-side motion vector refinement (DMVR) method, the two motion vectors of the bi-prediction are further refined by a bilateral template matching process. The bilateral template matching is applied in the decoder to perform a distortion-based search between a bilateral template and the reconstruction samples in the reference pictures in order to obtain a refined MV without transmission of additional motion information.
In DMVR, a bilateral template is generated as the weighted combination (i.e. average) of the two prediction blocks, from the initial MV0 of list0 and MV1 of list1, respectively, as shown in Fig. 16. The template matching operation consists of calculating cost measures between the generated template and the sample region (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that yields the minimum template cost is considered as the updated MV of that list to replace the original one. In the current development, nine MV candidates are searched for each list. The nine MV candidates include the original MV and 8 surrounding MVs with one luma sample offset to the original MV in either the horizontal or vertical direction, or both. Finally, the two new MVs, i.e., MV0′ and MV1′ as shown in Fig. 16, are used for generating the final bi-prediction results. A sum of absolute differences (SAD) is used as the cost measure.
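The nine-candidate search for one reference list can be sketched as follows in Python. The helpers fetch_block() and sad() are assumed to exist in the surrounding decoder, and the offsets are expressed in integer luma samples for readability; a real decoder would operate in fractional-pel MV units.

```python
# DMVR candidate search for one reference list: the original MV and its 8
# one-luma-sample neighbours are evaluated with SAD against the bilateral
# template, and the MV with the lowest cost replaces the original one.
def dmvr_refine_one_list(template, ref_pic, mv, fetch_block, sad):
    """template: bilateral template (average of the two initial predictions).
    fetch_block(ref_pic, mv) is assumed to return a prediction block of the
    template's size located at mv."""
    best_mv, best_cost = None, None
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):          # 9 candidates, including the original MV
            cand_mv = (mv[0] + dx, mv[1] + dy)
            cost = sad(template, fetch_block(ref_pic, cand_mv))
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = cand_mv, cost
    return best_mv
```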
DMVR is applied for the merge mode of bi-prediction with one MV from a reference picture in the past and another from a reference picture in the future, without the transmission of additional syntax elements.
The usage of the TMVP, in AMVP as well as in the merge mode, requires the storage of the motion data (including motion vectors, reference indices and coding modes) in co-located reference pictures. Considering the granularity of motion representation, the memory size needed for storing motion data could be significant. HEVC employs motion data storage reduction (MDSR) to reduce the size of the motion data buffer and the associated memory access bandwidth by sub-sampling motion data in the reference pictures. While H.264/AVC stores this information on a 4×4 block basis, HEVC uses a 16×16 block where, in the case of sub-sampling a 4×4 grid, the information of the top-left 4×4 block is stored. Due to this sub-sampling, MDSR impacts the quality of the temporal prediction.
Furthermore, there is a tight correlation between the position of the MV used in the co-located picture and the position of the MV stored by MDSR. During the standardization process of HEVC, it turned out that storing the motion data of the top-left block inside the 16×16 area together with the bottom-right and center TMVP candidates provides the best tradeoff between coding efficiency and memory bandwidth reduction.
In HEVC, motion vector accuracy is one-quarter pel (one-quarter luma sample and one-eighth chroma sample for 4:2:0 video) . In the current development, the accuracy for the internal motion vector storage and the merge candidate increases to 1/16 pel. The higher motion vector accuracy (1/16 pel) is used in motion compensation inter prediction for the CU coded with skip/merge mode. For the CU coded with normal AMVP mode, either the integer-pel or quarter-pel motion is used.
When a motion vector points to a fractional sample position, motion compensated interpolation is needed. For the luma interpolation filtering, an 8-tap separable DCT-based interpolation filter is used for the half-sample (2/4) position and a 7-tap separable DCT-based interpolation filter is used for the quarter-sample (1/4 and 3/4) positions, as shown in Table 1.2.
Table 1.2
Position Filter coefficients
1/4 {-1, 4, -10, 58, 17, -5, 1}
2/4 {-1, 4, -11, 40, 40, -11, 4, -1}
3/4 {1, -5, 17, 58, -10, 4, -1}
Similarly, a 4-tap separable DCT-based interpolation filter is used for the chroma interpolation filter, as shown in Table 1.3.
Table 1.3
Position Filter coefficients
1/8 {-2, 58, 10, -2}
2/8 {-4, 54, 16, -2}
3/8 {-6, 46, 28, -4}
4/8 {-4, 36, 36, -4}
5/8 {-4, 28, 46, -6}
6/8 {-2, 16, 54, -4}
7/8 {-2, 10, 58, -2}
For the vertical interpolation for 4:2:2 and the horizontal and vertical interpolation for 4:4:4 chroma channels, the odd positions in Table 1.3 are not used, resulting in 1/4th chroma interpolation.
For the bi-directional prediction, the bit-depth of the output of the interpolation filter is maintained to 14-bit accuracy, regardless of the source bit-depth, before the averaging of the two prediction signals. The actual averaging process is done implicitly with the bit-depth reduction process as: 
predSamples [x, y] = (predSamplesL0 [x, y] + predSamplesL1 [x, y] + offset) >> shift      (1.9)
shift = 15 − BitDepth      (1.10)
offset = 1 << (shift − 1)      (1.11)
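A short numeric illustration of Eqs. (1.9)-(1.11) in Python follows; the function name and sample values are chosen for illustration only.

```python
# Bi-prediction averaging: two 14-bit intermediate prediction samples are
# averaged and shifted back to the output bit depth.
def bi_average(pred_l0, pred_l1, bit_depth):
    shift = 15 - bit_depth
    offset = 1 << (shift - 1)
    return (pred_l0 + pred_l1 + offset) >> shift

# For 8-bit video (shift = 7, offset = 64): two 14-bit samples of 8192 and
# 8200 average to (8192 + 8200 + 64) >> 7 = 128.
print(bi_average(8192, 8200, bit_depth=8))  # 128
```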
To reduce complexity, bi-linear interpolation instead of regular 8-tap HEVC interpolation is used for both bilateral matching and template matching. 
The calculation of matching cost is a bit different at different steps. When selecting the candidate from the candidate set at the CU level, the matching cost is the SAD of bilateral matching or template matching. After the starting MV is determined, the matching cost C of bilateral matching at sub-CU level search is calculated as follows:
C = SAD + w · ( |MVx − MVsx| + |MVy − MVsy| )
where w is a weighting factor which is empirically set to 4, and MV and MVs indicate the current MV and the starting MV, respectively. SAD is still used as the matching cost of template matching at the sub-CU level search.
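The regularised cost above transcribes into the following one-line Python helper; the function name and example values are illustrative.

```python
# Sub-CU level bilateral matching cost: SAD regularised by the distance of the
# tested MV from the starting MV, weighted by w = 4. MV components are in the
# same (fractional-pel) units as the MVs themselves.
def bilateral_matching_cost(sad_value, mv, mv_start, w=4):
    return sad_value + w * (abs(mv[0] - mv_start[0]) + abs(mv[1] - mv_start[1]))

print(bilateral_matching_cost(1000, mv=(18, -6), mv_start=(16, -4)))  # 1016
```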
In FRUC mode, the MV is derived by using luma samples only. The derived motion will be used for both luma and chroma for MC inter prediction. After the MV is decided, the final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.
In the absence of any decoder side motion vector derivation, the motion vectors and reference indices of coding units that are coded with any inter-coding mode are reconstructed or inferred without any pixel level operations on any coding unit within that frame. The differential coding of a motion vector using an appropriately scaled version of an already reconstructed motion vector of a spatial or temporally co-located or interpolated neighbor as well as the process of inheriting a reconstructed motion vector through a merge process are computationally simple and hence the dependent reconstruction or inheritance process does not pose any major decoder side design complexity issue.
The decoder side motion vector refinement or pattern matched motion vector derivation schemes proposed till now allow the refined motion vector (s) of a spatially neighboring coding unit to be employed as motion vector predictor (s) in the differential coding of the motion vector (s) of a current coding unit. This results in significant coding gains, either through the reduction in motion vector delta (MVD) coding bits enabled by the spatially neighboring CU's refined MVs when the current CU is INTER/AFFINE-INTER, or through an improved starting point for the current CU's refinement when it is a CU that employs DMVR/PMMVD. However, this significantly impacts the concurrency in processing of the coding units of the current frame, as the motion compensation, or refinement followed by motion compensation, for a given coding unit cannot start until the final motion vector of the spatially neighboring CU on which the current CU depends is available. Even one refinement based dependency in the chain will stall the processing pipeline till that dependency is resolved. Given that motion based partitioning can have a wide range of granularities (from a complete coded tree block being one CU to as small as a 4x4 CU) , the sequential dependency results in reduced parallelism that will affect the timing within which the tasks of fetching the required reference data and performing decoder-side MV refinement and/or motion compensated prediction need to be completed. This will result in a significant over-design (e.g. higher clock, wider execution units, wider buses, etc. ) to handle the worst-case timing and significant under-utilization of the designed execution units in the average cases.
On the other hand, when the dependency issue is resolved by forcing all CUs in the current access unit to not use the refined motion vector of any coding unit as a predictor or starting point for refinement, the coding gains suffer significantly. This is because the RDO process decides DMVR/PMMVD to be superior to the other inter-coding modes. But in the absence of the decoder-side refinement, the MVD coding bits increase significantly (when compared to the no DMVR/PMMVD case) and also the starting points for the refinements end up being inferior. Hence there is a significant compression loss by not using any refined MVs.
Hence there is a need for a method that arrives at a suitable trade-off between the coding loss and the complexity increase by allowing refined motion vectors within the current access unit to be used as a predictor or as a starting point for other refinements.
The proposed method and embodiments disclosed herein determines the availability of refined motion vectors of spatially neighboring coding units in such a way that a set of coding units or sub-coding units within a coded tree block can configure their data pre-fetch in a concurrent manner in a given stage  of a regular pipeline and also perform their refinement process in a concurrent manner in the next stage of that regular pipeline. Specifically, the concept of a lag between the top CTB row and current CTB row is utilized in determining such availability. Also, the concept of a concurrency set is introduced to normatively partition some coding units, if necessary, into sub-coding-units to meet the concurrency requirements of the pipeline. Compared to not using any refined motion vector from the current picture, the proposed approach provides a higher coding gain while ensuring that the dependency does not overly constrain the hardware implementation of the refinement process. Also, by pre-fetching around an unrefined motion vector and using a normative padding process to access samples that go outside the normative amount of pre-fetched samples, the pipeline latency is further reduced to make even left or top-right CTB refined MVs to be used for refinement of current CTB CUs. The process is also extended to finer granularities than CTB level.
Given that decoder side motion vector refinement/derivation is a normative aspect of a coding system, the encoder will also have to perform the same error surface technique in order to not have any drift between the encoder’s reconstruction and the decoder’s reconstruction. Hence, all aspects of all embodiments are applicable to both encoding and decoding systems.
All embodiments of the present invention are applicable to both PMMVD and DMVR methods.
Embodiment 1
Considering a regular CTB level pipeline design, each processing stage should be preceded by a data pre-fetch stage. Both the data fetch stage and the processing stage should be able to use the entire time of a pipeline slot. Fig. 17 shows such a regular pipeline at the CTB level with DMA on the left and DMVR+MC on the right, corresponding to the data pre-fetch and processing stages.
When both the top and the current CTB rows start processing at the same time, no spatially refined motion vectors from the neighbors are available. Fig. 18 shows the CTB pipeline across rows when both rows start processing at the same time.
From Fig. 18, it is clear that for none of the CTBs in the current row at the DMA stage has the top-row DMVR+MC stage been completed. This is similar to not using any refined motion vector from the current access unit for AMVP or as a starting motion vector for decoder-side motion refinement.
By introducing a lag of N CTBs (N > 0) between the current CTB row and its top neighbor CTB row, the current CTB gets the refined motion vectors of some of the top CTB neighbors. Fig. 19 shows the processing pipeline with N=2. In HEVC and AVC, intra prediction depends on the completion of the top-right neighbor CTB; a lag value of N=2 now brings a similar dependency for inter prediction.
From Fig. 19, during the DMA or data pre-fetch stage of CN+1, both the TN and TN+1 DMVR+MC stages are completed; hence the CN+1 CTB can use the refined MVs of the top and top-left CTBs for inter MVP and as starting MVs for decoder-side MV refinement.
It should be noted that even with the concept of a lag between consecutive CTB rows, any coding unit that is a spatial neighbor of a current CU within the current CTB is still considered unavailable. Also, given that a data pre-fetch stage has been introduced as a pipeline stage, the left and bottom-left neighbor CTBs are always considered unavailable. Hence, this scheme is more beneficial when the maximum CTB size is smaller. This is summarized in Table 3.1.
Table 3.1 Final refined MV availability status of spatial neighbor CTBs with lags 0, 2, and 3

Neighbor CTB    LAG 0           LAG 2           LAG 3
Left            Not available   Not available   Not available
Bottom-Left     Not available   Not available   Not available
Top             Not available   Available       Available
Top-Left        Not available   Available       Available
Top-Right       Not available   Not available   Available
By using the method disclosed in embodiment 1, improved coding gains are obtained due to the increased availability of spatially refined motion vectors introduced by the CTB lag of N. "N" can be selected as per the design constraints of the video coding system.
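A minimal Python sketch of the availability rule of embodiment 1 is given below. It assumes the simple pipeline model of Figs. 18-19 (one CTB per pipeline slot, a DMA stage followed by a DMVR+MC stage); coordinates are CTB-grid positions and the function name is illustrative.

```python
# With a lag of N CTBs between the current CTB row and the row above, the
# refined MVs of a neighbor CTB are treated as available only if that
# neighbor's DMVR+MC stage completes before the data pre-fetch (DMA) stage of
# the current CTB.
def refined_mvs_available(cur_col, cur_row, nb_col, nb_row, lag_n):
    if nb_row == cur_row:
        # left / same-row neighbors: refinement not finished before pre-fetch
        return False
    if nb_row == cur_row - 1:
        # the top row is lag_n slots ahead, so its DMVR+MC stage has completed
        # for columns up to cur_col + lag_n - 2 (one slot is taken by the
        # top-row CTB currently being refined)
        return nb_col <= cur_col + lag_n - 2
    return nb_row < cur_row - 1  # rows further above finished long ago

# With lag N = 2 the top and top-left CTBs are available but the top-right is
# not; with N = 3 the top-right CTB also becomes available.
print(refined_mvs_available(5, 3, 5, 2, lag_n=2))  # True  (top)
print(refined_mvs_available(5, 3, 6, 2, lag_n=2))  # False (top-right)
print(refined_mvs_available(5, 3, 6, 2, lag_n=3))  # True  (top-right with N=3)
```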
Embodiment 2
In embodiment 1, all coding units at a CTB level were considered as part of a concurrency set such that their data pre-fetches can happen concurrently during a pipeline stage while their refinement and motion compensation processing can happen concurrently during the following pipeline stage. In this embodiment, the concept of a concurrency set that can exist at a sub-CTB level is introduced in order to further improve the coding gains by being able to use the final refined motion vectors in more cases.
For instance, there are cases where a larger CTB is force partitioned into quad-tree partitioned coding units that are coded in a recursive z-scan order. 
As in PMMVD, it is also possible to force partition a given coding unit into sub-coding units. A concurrency set is defined to be a set of pixels in a CTB that correspond to one partition of recursive quad-tree partition of the CTB. For instance, concurrency set can be chosen as 64x64 or 32x32 for a CTB size of 128x128. A given coding unit that spans across more than one concurrency set is force partitioned for decoder-side motion vector refinement purposes into as many sub coding units (sub-CUs) as the number of concurrency sets that it spans.  The dependency across these concurrency sets is assumed to be in a recursive z-scan order. Thus independent of the actual partitioning of the CTB into coding units, a concurrency set becomes an independent set of pixels, the processing for which can be performed concurrently to have a regular sub-CTB level pipeline that can have a data pre-fetch stage followed by a refinement and motion compensation stage in a manner similar to the CTB level pipeline in embodiment 1.
Fig. 20a-c show the pipeline of using a 1-level deep quad-tree split of a CTB and with lag of 2 CTBs between two consecutive CTB rows.
From the above figures, the improvements in the availability of final refined spatial neighbor motion vectors can be seen. Concurrency set Z0 has all the neighbor concurrency sets available except the bottom-left one; concurrency set Z1 has all top concurrency set neighbors available, but the left and bottom-left concurrency set neighbors are not available; concurrency set Z2 has all concurrency set neighbors available except the top-right and bottom-left concurrency sets; and concurrency set Z3 has only the top and top-left concurrency sets available. This is summarized in Table 3.2.
Table 3.2 Refined spatial neighbor concurrency set availability status for 1-level deep quad-trees split with a lag of 2 CTBs between consecutive CTB rows
Concurrency set   Left            Top-Left     Top          Top-Right       Bottom-Left
Z0                Available       Available    Available    Available       Not available
Z1                Not available   Available    Available    Available       Not available
Z2                Available       Available    Available    Not available   Not available
Z3                Not available   Available    Available    Not available   Not available
By using the method of embodiment 2, the forced partitioning of a CTB into concurrency sets for performing the decoder-side motion vector refinement in a manner that is independent of the actual partitioning of the CTB makes more final refined motion vectors available, in conjunction with the concept of a configurable CTB lag between consecutive CTB rows. This helps improve the coding gains relative to embodiment 1 while still allowing for a regular pipeline that allows for a data pre-fetch stage that precedes the decoder-side motion vector refinement and motion compensated prediction stage.
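The force-partitioning of embodiment 2 can be sketched as follows in Python; the (x, y, w, h) rectangle representation, the function name and the example geometry are assumptions for illustration.

```python
# A coding unit that spans more than one concurrency set (e.g. the 64x64
# quadrants of a 128x128 CTB) is split, for refinement purposes only, into
# one sub-CU per concurrency set that it overlaps.
def force_partition_by_concurrency_set(cu, cs_size):
    x, y, w, h = cu
    sub_cus = []
    cs_y = (y // cs_size) * cs_size
    while cs_y < y + h:
        cs_x = (x // cs_size) * cs_size
        while cs_x < x + w:
            # intersection of the CU with this concurrency set
            sx, sy = max(x, cs_x), max(y, cs_y)
            ex, ey = min(x + w, cs_x + cs_size), min(y + h, cs_y + cs_size)
            sub_cus.append((sx, sy, ex - sx, ey - sy))
            cs_x += cs_size
        cs_y += cs_size
    return sub_cus

# A 128x64 CU covering the top half of a 128x128 CTB with 64x64 concurrency
# sets is split into two 64x64 sub-CUs refined in separate pipeline slots.
print(force_partition_by_concurrency_set((0, 0, 128, 64), cs_size=64))
# [(0, 0, 64, 64), (64, 0, 64, 64)]
```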
Thus, both embodiments 1 and 2 in the present disclosure may be summarized as a method for determining the availability of the refined motion vectors of a spatially neighboring set, comprising at least one coding unit or sub-coding unit, for a current set, comprising at least one coding unit or sub-coding unit, which may comprise the steps of:
determining that the spatially neighboring set’s refined motion vectors to be available when those refined motion vectors have been computed in a pipeline stage ahead of the data pre-fetch stage of the current set; and
partitioning a coding unit normatively into as many sub-coding units as the number of concurrency sets that the coding unit spans so that the data pre-fetch and decoder-side motion vector refinement and motion compensation for each sub-coding unit can happen independent of the other sub coding-units, but in a concurrent manner with other coding or sub-coding units that are part of a current concurrency set.
Fig. 22 describes different pattern directions, where a-f describe different patterns (in black) for different pixel situations; the details are shown in Fig. 22.
Fig. 23 describes the error surface, and the details are shown in Fig. 23. The terms T, B, L, R and C in Fig. 23 refer to the relative directions Top, Bottom, Left, Right and Centre, respectively. Moreover, an example of error surface cost combination is shown in Table 4.
Table 4
Figure PCTCN2019094218-appb-000008
Figure PCTCN2019094218-appb-000009
Embodiment 3
The search range for decoder-side motion vector refinement increases the worst-case external memory accesses and also the internal memory buffers. To counter this, some prior art methods do not bring any additional samples that are based on the search range, but only use the samples required for performing motion compensation using the merge mode motion vectors. Additional samples required for refinement are obtained purely through motion compensated interpolation that employs padding for the unavailable sample with the last available sample before it. It is also possible to arrive at a trade-off between external memory bandwidth and padding introduced coding efficiency reduction by fetching one or more lines of samples beyond just the samples required for motion compensation using the merge mode motion vectors without refinement, but still less than what are required for covering the entire refinement range.
In this embodiment, with such padded motion compensation, the pre-fetch for a given CTB (or a sub-partition of a CTB at the first level) is performed using the unrefined motion vectors of coding units in the causal neighbor CTBs if the refined neighbor CTB motion vectors are not available at the time of pre-fetch. The refined motion vector of a coding unit in the neighbor CTB can be used as the starting point for performing the refinement for a coding unit within the current CTB that merges with that coding unit in the neighbor CTB. Any unavailable samples relative to the pre-fetched data can be accessed or interpolated through padding. Thus, the use of padded samples obviates the need for the pre-fetch to be performed only after the coding unit in the neighbor CTB completes its refinement, thus reducing the latency of the dependency. This method works reasonably well when the refinement search range iterations are low in number or when the refinement process exits early. The reduction of the pipeline latency implies that refined motion vectors of CTBs that are just one pipeline slot ahead of the current CTB can be used as the starting points for refinement in coding units within the current CTB. For example, even with a lag of 1 CTB between CTB rows, the top CTB's refined MVs can be used for refinement of coding units within the current CTB. Also, the left CTB's refined MVs can be used for refinement of coding units within the current CTB. Thus, barring the refined MVs of coding units within the current CTB, all other neighbor refined MVs can be employed to bring back the coding gains. The availabilities are summarized in Table 5 below.
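The padded sample access of embodiment 3 can be sketched as follows in Python; the window/array layout, clamping-based padding and the function name are assumptions, showing only the idea that refinement requests outside the pre-fetched window never trigger a new fetch.

```python
# Reference samples are pre-fetched around the unrefined MV (plus an optional
# configurable margin); any sample the refinement needs outside that window is
# replaced by the nearest pre-fetched sample (clamping), so pre-fetch does not
# have to wait for the neighbor's refinement to finish.
def padded_sample(prefetched, win_x, win_y, win_w, win_h, x, y):
    """prefetched: 2-D list covering [win_x, win_x+win_w) x [win_y, win_y+win_h);
    (x, y) is the sample position requested by the refinement."""
    cx = min(max(x, win_x), win_x + win_w - 1)
    cy = min(max(y, win_y), win_y + win_h - 1)
    return prefetched[cy - win_y][cx - win_x]

# A request one sample to the left of the pre-fetched window returns the
# left-most available sample of that row instead of triggering a new fetch.
window = [[10, 11, 12], [20, 21, 22]]
print(padded_sample(window, win_x=100, win_y=40, win_w=3, win_h=2, x=99, y=41))  # 20
```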
Table 5: Use of neighbor’s refined and unrefined MVs for pre-fetch and refinement based on neighbor CTB’s pipeline lag when CTB-row lag=2
Figure PCTCN2019094218-appb-000010
Similar to embodiment 2, when each CTB is quad-tree split at the first depth of splitting, it is possible to have a pipeline of pre-fetch followed by refinement that is at the granularity of a quarter of a CTB (QCTB) . In this case, with the pre-fetch based on unrefined neighbor QCTB MVs, with neighbor QCTB refined MVs as starting points for refinement, with the use of padding to obtain unavailable samples, and given the z-scan order of encoding the 4 QCTB coding units, more refined MVs can be tapped for refinement while still ensuring that the pre-fetch followed by refinement pipeline at the QCTB level can work. Table 6 below summarizes the MVs of the neighbors used by each of the QCTBs within a CTB.
Table 6: Use of neighbor’s refined and unrefined MVs for pre-fetch and refinement based on pipeline lag value of neighbor CU’s QCTB.
Figure PCTCN2019094218-appb-000011
This embodiment enables even the left, top-right, and bottom-left neighbor’s refined MV to be used if the neighbor CU is in a neighbor CTB (or QCTB) and not within the current CTB (or QCTB) . This improves the coding gains while still ensuring a regular pipeline where pre-fetch is performed at CTB (or QCTB) level and refinement of all coding units within the current CTB (or QCTB) can be performed in parallel.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal  or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL) , or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes compact disc (CD) , laser disc, optical disc, digital versatile disc (DVD) , floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs) , general purpose microprocessors, application specific integrated circuits (ASICs) , field programmable logic arrays (FPGAs) , or other equivalent integrated or discrete logic circuitry. Accordingly,  the term “processor, ” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set) . Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.

Claims (17)

  1. A method of decoding for video data, such as in DMVR or PMMVD, implemented by a decoding unit or an encoding unit, comprising:
    - determining, for a current set which comprises at least one coding unit (CU) , or sub coding unit, that a refined motion vector of a spatially neighboring set is available when the refined motion vectors have been obtained, wherein the spatially neighboring set comprises at least one coding unit (CU) or sub-coding unit; and -providing the said refined motion vector to a current processing stage of a processing pipeline for the current set.
  2. The method according to claim 1, wherein
    a coding unit is partitioned into as many sub-coding units as the number of concurrency set (s) that the coding unit (CU) spans, so that the data pre-fetch and decoder-side motion vector refinement and motion compensation for each sub-coding unit happens independently of the other sub coding-units, but in a concurrent manner with other coding or sub-coding units that are part of a current concurrency set.
  3. The method according to claim 1 or 2, wherein the method is performed in a data pre-fetch stage of a current set;
    wherein the refined motion vectors have been computed in a pipeline stage of the current set, wherein the pipeline stage is a stage ahead of the data pre-fetch stage.
  4. The method according to any preceding claim, comprising
    - if a refined motion vector of a spatially neighboring set is not available at the time of pre-fetch of the current set, using unrefined motion vectors of coding units in causal neighbor CTBs; and /or
    - using the refined motion vector of a coding unit in the neighbour CTB that does not fall within the same concurrency set, such as may merge with that coding unit, as start of refinement motion vector center;
    and /or
    using padded samples whenever the refinement requires samples that are not pre-fetched based on a configurable additional sample fetch range around the pre-fetch center.
  5. The method according to any preceding claim, wherein the refined motion vector is obtained by computing the refined motion vector in an earlier processing stage that is prior to the current processing stage, wherein the earlier processing stage is prior to the current processing stage by a number of processing stages of the processing pipeline.
  6. The method according to any preceding claim, wherein the number of processing stages earlier or ahead is determined according to a configured lag.
  7. The method according to claim 6, wherein the configured lag is an integer, N, N is greater than or equal to 0; preferably N=1; even more preferably N=2 or more.
  8. The method according to any of claims 1-7, wherein the current processing stage is a pre-fetch stage e.g. DMA of the current set or a motion vector refinement and motion compensation processing stage e.g. DMVR+MC of the current set; and the earlier processing stage is a motion vector refinement and motion compensation processing stage e.g. DMVR+MC of the spatially neighboring set which obtains the refined motion vector of the spatially neighboring set.
  9. The method according to any preceding claim, wherein a concurrency set comprises a set of independent pixels for which a processing stage can be performed concurrently in terms of data pre-fetch and/or motion vector refinement and motion compensation.
  10. The method according to claim 9, wherein the concurrency set comprises the current set or the spatially neighboring set.
  11. The method according to any preceding claim, wherein the spatially neighboring set comprises a coding unit or sub-coding unit belonging to a first CTB that is spatially adjacent in decoding order, such as an upper row CTB, to the coding unit or sub-coding unit of the current set of a second CTB, such as the following row CTB.
  12. The method according to any preceding claim, wherein the CTB size is selected to be as small as possible, e.g. in reverse order of preference: 64×64, 32×32, or 16×16.
  13. The method according to any of claims 2-12, wherein the concurrency set comprises all coding units (CU) of a CTB.
  14. The method according to any preceding claim, wherein an error surface as defined herein is used in a cost function.
  15. A method for using unrefined motion vector (s) for video data of a neighbor coding unit (CU) that falls in a preceding pipeline slot to the current coding unit’s concurrency set to perform pre-fetch, and use the refined motion vectors of a neighbor CU that does not fall within the same concurrency set as start of refinement motion vector center and using padded samples whenever the refinement requires  samples that are not pre-fetched based on a configurable additional sample fetch range around the pre-fetch center.
  16. A decoding or encoding apparatus, wherein the decoding or encoding apparatus is configured to perform the method of any of claims 1-15.
  17. A decoding or encoding apparatus for video data, such as in DMVR or PMMVD, wherein the decoding or encoding apparatus is configured to:
    - pre-fetch data for a spatially neighboring set which comprises at least one coding unit (CU) or sub-coding unit; and
    - compute, using the pre-fetched data, a refined motion vector for the spatially neighboring set;
    - determine, for a current set which comprises at least one coding unit (CU) , or sub coding unit, that the refined motion vector of the spatially neighboring set is available, wherein the determining is performed before or in a data pre-fetch stage of the current set;
    - wherein if a refined motion vector is not available, using unrefined motion vector (s) for video data of a neighbor coding unit (CU) that falls in a preceding pipeline slot to that of the current coding unit’s concurrency set;
    - compute a refined motion vector for the current set
    - using the refined motion vector of the spatially neighboring set which is available, or
    - using the refined motion vectors of a neighbor CU that does not fall within the same concurrency set as start of refinement motion vector center, and using padded samples whenever the refinement samples are not pre-fetched based on a configurable additional sample fetch range around the pre-fetch center.
PCT/CN2019/094218 2018-07-02 2019-07-01 V refinement of video motion vectors in adjacent video data WO2020007261A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IN201831024668 2018-07-02
IN201831024668 2018-07-02
IN201831034993 2018-09-17
IN201831034993 2018-09-17

Publications (2)

Publication Number Publication Date
WO2020007261A1 WO2020007261A1 (en) 2020-01-09
WO2020007261A9 true WO2020007261A9 (en) 2020-02-20

Family

ID=69059858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/094218 WO2020007261A1 (en) 2018-07-02 2019-07-01 V refinement of video motion vectors in adjacent video data

Country Status (1)

Country Link
WO (1) WO2020007261A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022228420A1 (en) * 2021-04-27 2022-11-03 Beijing Bytedance Network Technology Co., Ltd. Method, device, and medium for video processing
WO2023051645A1 (en) * 2021-09-29 2023-04-06 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and medium for video processing
WO2023072287A1 (en) * 2021-10-29 2023-05-04 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and medium for video processing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080031333A1 (en) * 2006-08-02 2008-02-07 Xinghai Billy Li Motion compensation module and methods for use therewith
US8218636B2 (en) * 2006-11-21 2012-07-10 Vixs Systems, Inc. Motion refinement engine with a plurality of cost calculation methods for use in video encoding and methods for use therewith
TW201246920A (en) * 2011-05-12 2012-11-16 Sunplus Technology Co Ltd Apparatus for refining motion vector

Also Published As

Publication number Publication date
WO2020007261A1 (en) 2020-01-09

Similar Documents

Publication Publication Date Title
AU2020207821B2 (en) Motion vector derivation in video coding
US11297340B2 (en) Low-complexity design for FRUC
US10805630B2 (en) Gradient based matching for motion search and derivation
US10701366B2 (en) Deriving motion vector information at a video decoder
US10595035B2 (en) Constraining motion vector information derived by decoder-side motion vector derivation
US10757442B2 (en) Partial reconstruction based template matching for motion vector derivation
US20230188722A1 (en) Apparatus and method for conditional decoder-side motion vector refinement in video coding
WO2020007261A9 (en) V refinement of video motion vectors in adjacent video data
WO2020007291A1 (en) A video encoder, a video decoder and corresponding methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19831137

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19831137

Country of ref document: EP

Kind code of ref document: A1