WO2019006363A1 - Local illumination compensation using generalized bi-prediction - Google Patents

Local illumination compensation using generalized bi-prediction

Info

Publication number
WO2019006363A1
Authority
WO
WIPO (PCT)
Prior art keywords
coding unit
illumination compensation
current coding
prediction
sub
Prior art date
Application number
PCT/US2018/040393
Other languages
English (en)
Inventor
Yan Zhang
Xiaoyu XIU
Yuwen He
Yan Ye
Original Assignee
Vid Scale, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vid Scale, Inc. filed Critical Vid Scale, Inc.
Publication of WO2019006363A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/577Motion compensation with bidirectional frame interpolation, i.e. using B-pictures

Definitions

  • The explosion of video data and the demand for video data transmission have been growing in the media industry. Video coding is becoming more demanding and challenging.
  • A video coding system, such as a block-based hybrid video coding system, may be used.
  • MCP motion compensated prediction
  • the bi-prediction signal may be formed by combining two uni-prediction signals (e.g., using a weight value equal to 0.5).
  • Motion compensation may be affected by illumination changes between a current block and one or more reference blocks.
  • a decoder may use illumination compensation to address illumination changes that affect the motion compensation.
  • One or more prediction modes may be indicative of continuous motion changes between the current block and one or more reference blocks.
  • a decoder may identify a prediction mode when predicting the current block. Based on the prediction mode associated with the current block, the decoder may determine whether to parse an illumination compensation indication. The illumination compensation indication may indicate whether to enable an illumination compensation process for the current block. If the prediction mode is indicative of the continuous motion changes between the current block and one or more of the reference blocks, the decoder may bypass parsing the illumination compensation indication.
  • the decoder may disable the illumination compensation process on the current block based on the determination to bypass parsing the illumination compensation indication for the current block.
  • a bi-prediction mode may be indicative of continuous motion changes between the current block and one or more reference blocks.
  • a bi-lateral mode may be indicative of continuous motion changes between the current block and one or more reference blocks.
  • a frame rate up conversion (FRUC) bi-lateral mode may be indicative of continuous motion changes between the current block and one or more reference blocks.
  • the decoder may determine to parse the illumination compensation indication. The decoder may determine whether to enable the illumination compensation process based on the parsed illumination compensation indication. If the parsed illumination compensation indication indicates that the illumination compensation process is to be enabled, the decoder may perform motion compensation with the illumination compensation process. If the parsed illumination compensation indication indicates that the illumination compensation process is to be disabled, the decoder may perform motion compensation without the illumination compensation process.
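  • As an illustration of the parsing logic above, a minimal decoder-side sketch follows (Python; the mode name and the parse_flag helper are hypothetical placeholders, not normative bitstream syntax):

        def decode_lic_flag(bitstream, prediction_mode):
            """Decide whether to parse the illumination compensation (LIC)
            indication for the current block, per the logic above."""
            # Modes indicative of continuous motion changes between the
            # current block and its reference blocks.
            continuous_motion_modes = {"FRUC_BILATERAL"}
            if prediction_mode in continuous_motion_modes:
                # Bypass parsing: no LIC indication is parsed, LIC is disabled.
                return False
            # Otherwise the LIC indication is signaled and parsed.
            return bitstream.parse_flag("lic_flag")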
  • the illumination compensation process may include local illumination compensation.
  • the decoder may compensate illumination changes by deriving weights and offsets. If the local illumination compensation is disabled (e.g., when the decoder bypasses parsing the illumination compensation indication), the decoder may perform a generalized bi-directional prediction on the current block.
  • the processor may determine to disable the illumination compensation process for one or more sub-coding units of the current block when the one or more sub-coding units is bi-prediction coded.
  • the decoder may perform motion compensation on the one or more sub-coding units without the illumination compensation process if the decoder determines to disable the illumination compensation process on the one or more sub-coding units.
  • the processor may determine whether to disable the illumination compensation process for the current block by determining whether at least one sub-coding unit of the current block is bi-prediction coded.
  • the decoder may perform motion compensation on the current block without the illumination compensation process as long as at least one sub-coding unit of the current block is bi-prediction coded.
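  • A minimal sketch of this sub-coding-unit rule (Python; is_bi_predicted is a hypothetical attribute):

        def lic_enabled_for_cu(sub_cus):
            """LIC is disabled for the current block when at least one
            of its sub-coding units is bi-prediction coded."""
            return not any(sub_cu.is_bi_predicted for sub_cu in sub_cus)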
  • FIG. 1A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented
  • FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to an embodiment
  • WTRU wireless transmit/receive unit
  • FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment
  • RAN radio access network
  • CN core network
  • FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A according to an embodiment
  • FIG. 2 illustrates an example diagram of a video encoding system.
  • FIG. 3 illustrates an example diagram of a video decoding system.
  • FIG. 4 illustrates an example of templates in a current picture and corresponding reference pictures.
  • FIG. 5 illustrates an example position of spatial merge candidates.
  • FIG. 6 illustrates an example advance temporal motion vector prediction (ATMVP) motion prediction for a coding unit (CU).
  • ATMVP advance temporal motion vector prediction
  • FIG. 7 illustrates an example spatial temporal motion vector prediction (STMVP) motion prediction for a CU.
  • STMVP spatial temporal motion vector prediction
  • FIG. 8 illustrates an example bilateral matching mode in frame rate up conversion (FRUC).
  • FRUC frame rate up conversion
  • FIG. 9 illustrates an example template matching mode in FRUC.
  • FIG. 10 illustrates an example illumination change over time.
  • FIG. 11 illustrates an example local illumination compensation (LIC) flag signaling for an explicit inter prediction mode.
  • LIC local illumination compensation
  • FIG. 12 illustrates an example LIC flag decoding for an explicit inter prediction mode.
  • FIG. 13 illustrates an example LIC flag derivation for ATMVP and/or STMVP coding unit (CU).
  • FIG. 14 illustrates an example where the LIC process may be performed on boundary sub-CU blocks.
  • FIG. 15 illustrates an example where the LIC process may be performed on inner sub-CU blocks.
  • FIG. 16 illustrates an example weighted LIC for inner sub-CU blocks.
  • FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented.
  • the communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
  • the communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth.
  • the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal FDMA
  • SC-FDMA single-carrier FDMA
  • ZT UW DTS-s OFDM zero-tail unique-word DFT-Spread OFDM
  • UW-OFDM unique word OFDM
  • FBMC filter bank multicarrier
  • the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
  • WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • UE user equipment
  • PDA personal digital assistant
  • HMD head-mounted display
  • a vehicle, a drone
  • the communications systems 100 may also include a base station 114a and/or a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112.
  • the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
  • BSC base station controller
  • RNC radio network controller
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum.
  • a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell.
  • MIMO multiple-input multiple output
  • beamforming may be used to transmit and/or receive signals in desired spatial directions.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • RAT radio access technology
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
  • E-UTRA Evolved UMTS Terrestrial Radio Access
  • LTE Long Term Evolution
  • LTE-A LTE-Advanced
  • LTE-A Pro LTE-Advanced Pro
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
  • DC dual connectivity
  • the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • IEEE 802.11 i.e., Wireless Fidelity (WiFi)
  • IEEE 802.16 i.e., Worldwide interoperability for Microwave Access (WiMAX)
  • CDMA2000, CDMA2000 1X, CDMA2000 EV-DO Code Division Multiple Access 2000
  • IS-95 Interim Standard 95
  • IS-856 Interim Standard 856
  • GSM Global System for Mobile communications
  • the base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
  • WLAN wireless local area network
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
  • the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR, etc.) to establish a picocell or femtocell.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the CN 106/115.
  • the RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like.
  • QoS quality of service
  • the CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT.
  • the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
  • the CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • POTS plain old telephone service
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP), and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
  • Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
  • the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • FIG. 1B is a system diagram illustrating an example WTRU 102.
  • the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.
  • GPS global positioning system
  • the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122.
  • the WTRU 102 may have multi-mode capabilities.
  • the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11 , for example.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • SIM subscriber identity module
  • SD secure digital
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
  • the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
  • location information e.g., longitude and latitude
  • the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like.
  • FM frequency modulated
  • the peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
  • the WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous.
  • the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118).
  • the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)).
  • FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment.
  • the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the CN 106.
  • the RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment.
  • the eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the eNode-Bs 160a, 160b, 160c may implement MIMO technology.
  • the eNode-B 160a for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 1C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
  • the CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (or PGW) 166. While each of the foregoing elements is depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • the MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node.
  • the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
  • the MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
  • the SGW 164 may be connected to each of the eNode Bs 160a, 160b, 160c in the RAN 104 via the S1 interface.
  • the SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c.
  • the SGW 164 may perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
  • the SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • packet-switched networks such as the Internet 110
  • the CN 106 may facilitate communications with other networks.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108.
  • IMS IP multimedia subsystem
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • Although the WTRU is described in FIGS. 1A-1D as a wireless terminal, it is contemplated that in certain representative embodiments such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
  • the other network 112 may be a WLAN.
  • a WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP.
  • the AP may have an access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic in to and/or out of the BSS.
  • Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs.
  • Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations.
  • Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA.
  • the traffic between STAs within a BSS may be considered and/or referred to as peer-to- peer traffic.
  • the peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS).
  • the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS).
  • a WLAN using an independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other.
  • the IBSS mode of communication may sometimes be referred to herein as an "ad-hoc" mode of communication.
  • the AP may transmit a beacon on a fixed channel, such as a primary channel.
  • the primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling.
  • the primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP.
  • Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems.
  • the STAs (e.g., every STA), including the AP, may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off.
  • One STA (e.g., only one station) may transmit at any given time in a given BSS.
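  • A minimal CSMA/CA sketch under these assumptions (Python; channel_busy and send are hypothetical helpers, and the slot and backoff parameters are illustrative, not normative 802.11 values):

        import random
        import time

        def csma_ca_transmit(sta, frame, slot_time=9e-6, cw=15):
            """Sense the primary channel and back off while it is busy,
            so that one STA transmits at a given time in the BSS."""
            backoff = random.randint(0, cw)
            while True:
                if not sta.channel_busy():
                    if backoff == 0:
                        sta.send(frame)   # channel idle, backoff expired
                        return
                    backoff -= 1          # count down only while idle
                time.sleep(slot_time)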
  • High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
  • VHT STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels.
  • the 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels.
  • a 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration.
  • the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams. Inverse Fast Fourier Transform (IFFT) processing and time domain processing may be done on each stream separately.
  • IFFT inverse Fast Fourier Transform
  • the streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA.
  • the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
  • MAC Medium Access Control
  • Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah.
  • the channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n, and 802.11ac.
  • 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum.
  • 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum.
  • 802.11ah may support Meter Type Control/Machine-Type Communications (MTC), such as MTC devices in a macro coverage area.
  • MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths.
  • the MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
  • WLAN systems which may support multiple channels and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel.
  • the primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS.
  • the bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs operating in a BSS, which supports the smallest bandwidth operating mode. In the example of 802.11ah, the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes.
  • STAs e.g., MTC type devices
  • Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, the entire available frequency bands may be considered busy even though a majority of the frequency bands remains idle and may be available.
  • NAV Network Allocation Vector
  • In the United States, the available frequency bands which may be used by 802.11ah are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz, depending on the country code.
  • FIG. 1D is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment.
  • the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 113 may also be in communication with the CN 115.
  • the RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 1 13 may include any number of gNBs while remaining consistent with an embodiment.
  • the gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the gNBs 180a, 180b, 180c may implement MIMO technology.
  • gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c.
  • the gNB 180a may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • the gNBs 180a, 180b, 180c may implement carrier aggregation technology.
  • the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum.
  • the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology.
  • WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
  • CoMP Coordinated Multi-Point
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology.
  • the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum.
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
  • TTIs subframe or transmission time intervals
  • the gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c).
  • WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band. In a non-standalone configuration, WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c.
  • WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously.
  • eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
  • Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b, and the like. As shown in FIG. 1D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
  • UPF User Plane Function
  • AMF Access and Mobility Management Function
  • the CN 115 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements is depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • SMF Session Management Function
  • the AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node.
  • the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different PDU sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like.
  • Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by the WTRUs 102a, 102b, 102c.
  • different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced massive mobile broadband (eMBB) access, services for machine type communication (MTC) access, and/or the like.
  • URLLC ultra-reliable low latency
  • eMBB enhanced massive mobile broadband
  • MTC machine type communication
  • the AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
  • radio technologies such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
  • the SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface.
  • the SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface.
  • the SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b.
  • the SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing downlink data notifications, and the like.
  • a PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
  • the UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering downlink packets, providing mobility anchoring, and the like.
  • the CN 115 may facilitate communications with other networks.
  • the CN 115 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 115 and the PSTN 108.
  • IP gateway e.g., an IP multimedia subsystem (IMS) server
  • IMS IP multimedia subsystem
  • the CN 115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
  • DN local Data Network
  • one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown).
  • the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein.
  • the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
  • the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
  • the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
  • the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
  • the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
  • the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
  • RF circuitry e.g., which may include one or more antennas
  • FIG. 2 is an example diagram of a video encoding system.
  • the input video signal 702 may be processed block by block. In high efficiency video coding (HEVC), extended block sizes (e.g., called a coding unit or CU) may be used to efficiently compress high resolution (e.g., 1080p and beyond) video signals.
  • HEVC high efficiency video coding
  • a CU may be up to 64 x 64 pixels.
  • a CU may be partitioned into prediction units (PUs), for which separate prediction methods may be applied.
  • For each input video block (e.g., MB or CU), spatial prediction 760 and/or temporal motion prediction 762 may be performed.
  • Spatial prediction may use pixels from the already coded neighboring blocks in the same video picture and/or slice to predict the current video block. Spatial prediction may reduce spatial redundancy inherent in the video signal.
  • Temporal prediction which may be referred to as inter prediction or motion compensated prediction, may use pixels from the already coded video pictures to predict the current video block.
  • Temporal prediction may reduce temporal redundancy inherent in the video signal.
  • The temporal prediction signal for a given video block may be signaled by one or more motion vectors, which indicate the amount and the direction of motion between the current block and its reference block. If multiple reference pictures are supported (e.g., for video coding standards such as H.264/AVC or HEVC), a reference picture index may additionally be sent for one or more video blocks. The reference index may be used to identify from which reference picture in the reference picture store 764 the temporal prediction signal comes.
  • the mode decision block 780 in the encoder may choose the prediction mode, for example, based on the rate-distortion optimization method.
  • the rate and the distortion may be used as factors that decide the cost of a certain prediction mode.
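  • As one illustrative convention (not quoted from this application), such a cost may be written as the Lagrangian rate-distortion cost J = D + λ * R, where D is the distortion computed from the reconstruction error, R is the number of bits used to represent the current CU, and λ is a multiplier trading off the two; the mode with the smallest J is selected.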
  • For intra prediction, the intra mode index may be recorded, while for inter prediction the 2D vectors, which contain the amount of horizontal and vertical shift in pixels (e.g., with fractional precision), may be stored.
  • Prediction errors may be calculated in Intra-Picture Prediction for the intra prediction method and/or in Motion Compensation for the inter prediction method. These prediction errors may go through Transform and Scaling & Quantization to become the coefficients, for example, to further de-correlate redundant information before entropy encoding.
  • the number of bits for representing the current CU (e.g., the rate) may be known.
  • these coefficients may further go through the block of Scaling & Inverse Transform to compute the reconstruction error.
  • the reconstruction error may be used for the computation of distortion.
  • the cost for each of the potentially promising prediction modes may be calculated and compared.
  • the prediction mode which has the smallest cost may be selected for the current CU.
  • the reconstructed error may be added to the prediction block to acquire the reconstructed block.
  • additional filters, like those in the block of Deblocking & SAO Filter, which may be designed by the block of Filter Control Analysis, may be applied to the reconstructed slice/picture before it is buffered in the Decoded Picture Buffer to serve as a reference for future encoding purposes.
  • some control information from General Control Information and filter control information from Filter Control Analysis may be entropy encoded by the Header Formatting & CABAC block to arrive at the desired encoding bitstream.
  • the prediction block may be subtracted from the current video block 716.
  • the prediction residual may be de-correlated using transform 704 and quantization 706 to achieve the target bit-rate.
  • the quantized residual coefficients may be inverse quantized 710 and inverse transformed 712 to form the reconstructed residual, which may be added back to the prediction block 726 to form the reconstructed video block.
  • In-loop filtering such as de-blocking filter and/or adaptive loop filters may be applied 766 on the reconstructed video block.
  • the reconstructed block may be put in the reference picture store 764 and used to code future video blocks.
  • coding mode (e.g., inter or intra), prediction mode information (e.g., motion information), and quantized residual coefficients may be sent to the entropy coding unit 708 to be compressed and/or packed to form the bit-stream.
  • FIG. 3 is an example diagram of a video decoding system.
  • the video bit-stream 202 may be unpacked and entropy decoded at entropy decoding unit 208.
  • the coding mode and/or prediction information may be sent to the spatial prediction unit 260 (e.g., if intra coded) or the temporal prediction unit 262 (e.g., if inter coded) to form the prediction block.
  • the residual transform coefficients may be sent to inverse quantization unit 210 and inverse transform unit 212 to reconstruct the residual block.
  • the prediction block and the residual block may be added at 226.
  • the reconstructed block may go through in-loop filtering.
  • the reconstructed block may be stored in reference picture store 264.
  • the reconstructed video in reference picture store may be sent out to drive a display device, as well as used to predict future video blocks.
  • the bi-prediction signal may be formed by combining two uni-prediction signals (e.g., using a weight value equal to 0.5). If illumination changes from one reference picture to another, one or more prediction techniques to compensate for illumination variation over time may be provided. One or more global and/or local weights and offset values may be applied to the sample values in reference pictures.
  • the one or more prediction techniques to compensate for illumination variation over time may include a local illumination compensation (LIC) process.
  • LIC may be used to address local illumination changes, for example, when local illumination changes are non-linear.
  • a pair of weight and offset may be applied to a reference block to obtain a prediction block, for example, using Eq. 1: P[x] = α * P r [x + v] + β.
  • Parameter P r [x + v] may indicate the reference block.
  • the reference block may be indicated (e.g., pointed to) by motion vector v.
  • Parameters [α, β] may be the corresponding pair of weight and offset for the reference block.
  • Parameter P[x] may be the final prediction result block.
  • the pair of weight and offset may be estimated by, for example, the least linear mean square error (LLMSE) method.
  • the LLMSE may utilize the template of the current block and the template of the reference block.
  • the reference block may be designated by the motion vector of the current block.
  • Parameter N may represent the number of samples in the template of the current block.
  • Parameter P c [xi] may indicate (e.g., be) the ith sample of the current block's template, and parameter P r [xi] may be the ith sample of the reference template to which the corresponding motion vector points.
  • An estimation process of pair of the weight and offset may be applied on one or more reference blocks.
  • the estimation process of the pair of weight and offset may be applied (e.g., applied once) on each of the two reference blocks.
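  • A minimal floating-point sketch of this template-based estimation follows (Python/NumPy; a least-squares illustration, not the fixed-point arithmetic a codec implementation would use):

        import numpy as np

        def derive_lic_params(cur_template, ref_template):
            """Least-squares estimate of the weight/offset pair (alpha, beta)
            minimizing sum_i (P_c[x_i] - (alpha * P_r[x_i] + beta))^2 over
            the N template samples."""
            pc = np.asarray(cur_template, dtype=np.float64).ravel()
            pr = np.asarray(ref_template, dtype=np.float64).ravel()
            n = pc.size
            sx, sy = pr.sum(), pc.sum()
            sxx, sxy = np.dot(pr, pr), np.dot(pr, pc)
            denom = n * sxx - sx * sx
            if denom == 0:
                return 1.0, 0.0   # flat template: fall back to identity mapping
            alpha = (n * sxy - sx * sy) / denom
            beta = (sy - alpha * sx) / n
            return alpha, beta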
  • FIG. 4 illustrates an example of templates in a current picture and corresponding reference pictures.
  • the current picture 402 may include a current block 408.
  • the current block 408 may be associated with a template Tc.
  • the reference picture 404 may include a reference block 410.
  • the reference picture 404 may be a reference picture in layer 0 (L0).
  • the reference block 410 (e.g., a corresponding prediction block of the current block 408) may be associated with a template T0.
  • the reference picture 406 may include a reference block 412.
  • the reference picture 406 may be a reference picture in layer 1 (L1).
  • the reference block 412 (e.g., a corresponding prediction block of the current block 408) may be associated with a template T1. Using motion vectors v0 and v1, the templates T0 and T1 may be fetched, respectively. By minimizing an illumination difference between the pair of templates Tc and T0, a first corresponding pair of weight and offset may be derived. By minimizing an illumination difference between the pair of templates Tc and T1 (e.g., separately from the illumination difference between the pair of templates Tc and T0), the second corresponding pair of weight and offset may be derived.
  • One or more reference blocks may be combined. For example, multiple pairs of prediction blocks may be combined. As shown in FIG. 4, the prediction block 410 and the prediction block 412 from the two different directions may be combined.
  • the LIC bi-directional prediction may be performed based on, for example, Eq. 4: P[x] = ((α0 * P0[x + v0] + β0) + (α1 * P1[x + v1] + β1)) / 2.
  • Parameters [α0, β0] and [α1, β1] may be the two pairs of weight and offset.
  • Parameters v0 and v1 may be the corresponding motion vectors for reference blocks 410 and 412, respectively.
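  • A sketch combining the two compensated predictions per the form of Eq. 4 above (Python/NumPy; ref0 and ref1 are assumed to be the motion-compensated reference blocks already fetched with v0 and v1):

        def lic_bi_prediction(ref0, ref1, params0, params1):
            """Average the two illumination-compensated uni-predictions."""
            a0, b0 = params0          # (alpha0, beta0) from the L0 template
            a1, b1 = params1          # (alpha1, beta1) from the L1 template
            p0 = a0 * ref0 + b0       # compensated L0 prediction
            p1 = a1 * ref1 + b1       # compensated L1 prediction
            return (p0 + p1) / 2.0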
  • the one or more prediction techniques to compensate for illumination variation over time may include a generalized bi-directional (GBi) prediction process.
  • a weighted bi-prediction technique e.g., a generalized bi-directional (GBi) prediction process
  • the GBi prediction process may be used to compute a prediction signal of a block.
  • the prediction signal of the block may be a weighted average of two motion-compensated prediction blocks.
  • the prediction signal of the block may be calculated using block-level adaptive weights.
  • the weight values may be dynamically configured. The weight values may change and may not be fixed to 0.5 as in certain bi-prediction.
  • the prediction process of GBi may be performed in accordance with Eq. 5: P[x] = (1 - w) * P0[x + v0] + w * P1[x + v1].
  • Parameter P[x] may denote the prediction of a current-block sample x located at a picture position x.
  • Each Pi[x + vi], with i ∈ {0, 1}, may be the motion-compensated prediction of x associated with a motion vector (MV) vi from a reference picture in reference list Li.
  • (1 - w) and w may represent the weight values applied to P0[x + v0] and P1[x + v1], respectively.
  • One or more different sets (e.g., three different sets, W1, W2, and W3) of candidate weights may be defined.
  • a parameter may indicate the number of weight sets that are allowed to be chosen on the slice level (e.g., a picture level), starting from W1. If more than one weight set is available, the encoder may choose (e.g., with flexibility) among different weight sets from slice to slice, for example, based on the observation of the illumination changes from the current slice to a reference slice.
  • an index at the leaf node of a quad-tree plus binary tree (QTBT) structure may be used to indicate the entry position where w is located in the set (e.g., a position in W1, W2, or W3).
  • the index may be binanzed and/or encoded.
  • The GBi mode may be extended to temporal prediction techniques that support bi-prediction, including one or more of merge modes, advanced temporal motion vector prediction (ATMVP), spatial temporal motion vector prediction (STMVP), and/or frame-rate up conversion (FRUC).
  • a merge mode may be performed. Motion information from spatial and/or temporal neighboring blocks may be derived. The motion information may not be searched using motion estimation. The technique of deriving the motion information from a current block's spatial and/or temporal neighbors may be referred to as a merge mode.
  • One or more merge candidates may be chosen to form a merge candidate list.
  • Spatial merge candidate(s) may have higher priorities than temporal merge candidate(s).
  • FIG. 5 illustrates an example position of the spatial merge candidates.
  • the availability of the spatial merge candidates may be checked in an order (e.g., the alphabetical order A, B, C, D, and/or E shown in FIG. 5). If one of the spatial neighboring blocks is out of boundary or is intra predicted, that spatial neighboring block's corresponding motion may be considered unavailable.
  • the redundant candidate may be removed from the merge candidate list.
  • the right bottom position outside of the co-located prediction unit (PU) of the reference picture may be used if the right bottom position outside of the co-located PU is available. If the right bottom position outside of the co-located PU is not available, the center position may be used.
  • The order of checking the merge candidates may be predefined. If the order of checking the merge candidates is predefined, each of the merge candidates may be indexed (e.g., by the encoder). The index may be signaled to the decoder. By decoding the merge candidate index on the decoder side, the decoder may learn from which block the prediction information may be copied (see the merge-list sketch below). A PU may be reconstructed using the prediction information.
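The list construction described above can be sketched as follows; the Neighbor fields, the checking order, and the maximum list size are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    motion: tuple          # hypothetical motion info, e.g., (mv_x, mv_y, ref_idx)
    is_intra: bool = False

def build_merge_candidate_list(spatial_neighbors, temporal_candidate,
                               max_candidates=5):
    """Check spatial candidates in the pre-defined order (A, B, C, D, E),
    skip out-of-boundary (None) or intra-coded neighbors, prune redundant
    candidates, then append the temporal candidate (TMVP) if room remains."""
    merge_list = []
    for nb in spatial_neighbors:
        if nb is None or nb.is_intra:
            continue                      # motion considered unavailable
        if all(nb.motion != c.motion for c in merge_list):
            merge_list.append(nb)         # prune redundant candidates
        if len(merge_list) == max_candidates:
            return merge_list
    if temporal_candidate is not None and len(merge_list) < max_candidates:
        merge_list.append(temporal_candidate)
    return merge_list
```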
  • Sub-CU based motion vector prediction may be performed.
  • Sub-CUs may be used for motion vector prediction (e.g., having a set of motion information for each of the sub-CUs in a CU).
  • Different sub-CU level motion vector predictions may be provided.
  • ATMVP and STMVP may be provided as sub-CU level motion vector predictions.
  • a CU may be divided into sub-CU blocks (e.g., 4x4 small sub-CU blocks).
  • the motion vectors (e.g., motion vectors for each sub-CU block) may be fetched from the corresponding reference picture.
  • the sub-CU based motion vector prediction may include ATMVP and/or STMVP.
  • FIG. 6 illustrates an example ATMVP motion prediction for a CU.
  • the motion vector for a sub-CU block may be derived as shown in FIG. 6.
  • the motion vector from a first merge candidate in the merge candidate list may be set to be a temporal motion vector 610.
  • The corresponding reference picture 604 that the temporal motion vector 610 refers to may be defined as the motion source picture for a current CU 606 of the current picture 602.
  • the collocated CU 608 in the motion source picture 604 may be identified.
  • the collocated CU 608 may be split into one or more 4x4 sub-CUs.
  • the sub-CUs may not be aligned with the grid of motion source picture 604.
  • a sub-CU 614 may be aligned by mapping the center position of the 4x4 sub-CUs of the collocated CU 608 to the center position of a 4x4 grid 612 which is covered by the sub-CU 614.
  • a mapped sub-CU may be referred to as a corresponding sub-CU block, and the motion vector for the corresponding sub-CU block may be used to derive the motion vector of a current sub-CU block.
  • MV0 and MV1 may be used to predict the motion vector of the current sub-CU block.
  • the current sub-CU block motion vector may be derived.
  • The temporal distance between the motion source picture 604 and the first reference slice (e.g., the first merge candidate in the merge candidate list) that the motion vector of the motion source picture's sub-CU block points to may be denoted as D1.
  • The temporal distance between the current slice (e.g., current picture 602) and the first reference slice from the reference list (e.g., the first merge candidate in the merge candidate list), which may be decided by the corresponding sub-CU block's motion vector, may be denoted as D2.
  • The motion vector for the current sub-CU block Vc may be predicted using, for example, Eq. 6.
  • Vc = (D2 / D1) · Vr (Eq. 6), where Vr may denote the motion vector of the corresponding sub-CU block in the motion source picture.
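A sketch of the temporal scaling in Eq. 6 follows, where d1 and d2 are the signed temporal (e.g., POC) distances defined above.

```python
def scale_temporal_mv(vr, d1, d2):
    """Eq. 6 sketch: Vc = (D2 / D1) * Vr, scaling the corresponding sub-CU
    block's motion vector by the ratio of temporal distances."""
    return (vr[0] * d2 / d1, vr[1] * d2 / d1)

# Example: the current picture's reference is half as far away as the motion
# source picture's reference, so the predicted motion vector is halved.
assert scale_temporal_mv((8.0, -4.0), d1=2, d2=1) == (4.0, -2.0)
```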
  • the corresponding sub-CU blocks may belong to different CUs. Some of the sub-CU blocks may be uni-predicted, and some sub-CU blocks may be bi-predicted.
  • The motion vector of a sub-CU block may be estimated by averaging one or more of the available motion vectors of the corresponding sub-CU block.
  • FIG. 7 illustrates an example STMVP motion prediction for a CU.
  • the sub-CU motion vector may be derived.
  • A, B, C, and D may be four 4x4 sub-CU blocks, and a, b, c, d may be the corresponding spatial neighbors.
  • the first motion vector of sub-CU block A may be derived.
  • The availability of spatial neighbor sub-CU block c may be determined. If the spatial neighbor sub-CU block c is available and/or is inter predicted, the motion vector of neighbor sub-CU block c may be adopted as the first spatial motion vector. If the neighbor sub-CU block c is not available and/or is intra predicted, the spatial sub-CU neighbor d may be tested.
  • the second spatial motion vector may be taken from the left spatial neighbors in a similar way as deriving the first spatial motion vector of sub-CU block A.
  • Sub-CU availability may be checked from the top sub-CU block b to the bottom sub-CU block a.
  • the temporal motion vector may be determined (e.g., derived) from the co-located sub-CU block which may be located at the position D (e.g., that may be to the bottom right of the current sub-CU).
  • The three motion vectors may be scaled in the same way as for ATMVP based on the temporal distances. After scaling, the motion vectors may be averaged to serve as the spatial temporal motion vector predictor of sub-CU A. If the two spatial motion vectors and the one temporal motion vector after scaling are given by v1, v2, and vD, the final motion vector vP may be computed as shown in Eq. 7, e.g., vP = (v1 + v2 + vD) / 3.
  • the spatial and/or temporal neighbors may be uni-directional or bi-directional predicted.
  • the STMVP predicted CU may be a mixture of uni-directional and/or bi-directional predicted sub-CUs.
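The averaging in Eq. 7 can be sketched as below; the per-component average of already-scaled vectors is an assumption consistent with the description above.

```python
def stmvp_predictor(v_above, v_left, v_temporal):
    """Eq. 7 sketch: average the two scaled spatial motion vectors and the
    scaled temporal motion vector to form the sub-CU predictor vP."""
    return tuple((a + b + c) / 3.0
                 for a, b, c in zip(v_above, v_left, v_temporal))

assert stmvp_predictor((3.0, 0.0), (0.0, 3.0), (0.0, 0.0)) == (1.0, 1.0)
```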
  • FRUC may be performed.
  • FRUC may be used as an inter prediction technique (e.g., as a motion compensation prediction mode).
  • One or more motion vectors (e.g., used for prediction) from FRUC may not be signaled to a decoder (e.g., from an encoder).
  • the one or more motion vectors from FRUC may be derived (e.g., at the decoder side).
  • FRUC may be considered and/or used as a merge mode (e.g., a special merge mode).
  • FRUC may be used in different modes (e.g., FRUC modes).
  • FRUC may be used in one or more of a bilateral mode (e.g., an FRUC bilateral matching mode) or a template matching mode (e.g., an FRUC template matching mode).
  • a motion compensation prediction mode may be identified as a bilateral mode (e.g., a FRUC bilateral matching mode).
  • the bilateral mode may be indicative of (e.g., be characterized by) a continuous motion change between a current picture and one or more reference pictures.
  • FIG. 8 illustrates an example bilateral matching mode.
  • a current picture 806 may be predicted using a reference picture 802 and a reference picture 810.
  • the current picture 806 may be a temporal distance away from the reference picture 802 (e.g., temporal distance 814) and the reference picture 810 (e.g., temporal distance 816).
  • the current picture 806 may include a current block 808.
  • the reference picture 810 may include a reference block 812.
  • the reference picture 802 may include a reference block 804.
  • the reference block 804 and the reference block 812 may correspond to the current block 808.
  • the current block 808 may be predicted using the reference block 804 and the reference block 812.
  • the bilateral mode may be indicative of (e.g., characterized by) a continuous motion change (e.g., a continuous motion trajectory).
  • The motion vectors for a bilateral mode (e.g., a FRUC bilateral matching mode) may be derived in pairs.
  • a first motion vector may be used (e.g., taken) from a given direction.
  • a second motion vector (e.g., the motion vector in the other direction) may be derived, for example, based on the first motion vector and/or respective temporal distances between the current block and the reference blocks.
  • the second motion vector may be derived every time when the decoder takes a first motion vector in a given direction.
  • the motion change from the current block 808 to the corresponding block 812 and the motion change from the current block 808 to the corresponding block 804 may be continuous.
  • Motion vector 818 may indicate the motion change from the current block 808 to the corresponding block 804.
  • Motion vector 820 may indicate the motion change from the current block 808 to the corresponding block 812.
  • the motion changes between the current block and one or more reference blocks may be continuous.
  • the motion vector 818 versus the motion vector 820 may be proportional to the temporal distance 814 that corresponds to the motion vector 818 versus the temporal distance 816 that corresponds to the motion vector 820.
  • the motion vector 818 may be used to derive the motion vector 820 so that the motion vector 818 versus the motion vector 820 is proportional to the temporal distance 814 versus the temporal distance 816.
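The proportional derivation just described can be sketched as follows; signed POC differences are assumed for the temporal distances.

```python
def mirror_bilateral_mv(mv0, td0, td1):
    """Derive the second motion vector on the continuous motion trajectory so
    that mv0 : mv1 equals td0 : td1 (cf. motion vectors 818/820 and temporal
    distances 814/816). td0 and td1 are signed POC differences, so a reference
    on the opposite side of the current picture yields the opposite sign."""
    return (mv0[0] * td1 / td0, mv0[1] * td1 / td0)

# Symmetric case: equal distances on opposite sides mirror the vector.
assert mirror_bilateral_mv((6.0, -2.0), td0=1, td1=-1) == (-6.0, 2.0)
```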
  • A distortion may be estimated, and/or a selected motion vector may be identified (e.g., from a merge candidate list). For example, one or more motion vectors (e.g., the two motion vectors 818 and 820) may be used for motion compensation, and/or the distortion between two reference blocks 804 and 812 may be estimated. Based on the estimated distortion, one or more CU level motion vectors may be identified or selected from the merge candidate list. When the one or more CU level motion vectors (e.g., the best CU level motion vector) is identified/selected, a CU may be refined (e.g., at the CU level). In some instances, the CU may be divided into one or more sub-CU blocks, and/or the motion vector for each sub-CU block at the sub-CU level may be defined.
  • the CU level motion vectors in the merge candidate list may be examined to select a best motion vector.
  • the example in FIG. 8 may illustrate bilateral matching mode in FRUC.
  • FIG. 9 illustrates an example template matching mode.
  • The way of finding the motion vectors (e.g., the best motion vectors) in the template matching mode may differ from the bilateral matching mode.
  • a current picture 902 may include a current block 906.
  • the associated template of the current block 906 may be the template 908.
  • the reference picture 904 may include a reference block.
  • the reference block may be associated with a template 910.
  • the template 908 of the current block 906 and the template 910 of the reference block may be compared for difference.
  • the motion vector which leads to a reduced (e.g., the minimum) distortion may be selected as the motion vector (e.g., best motion vector) on the CU level.
  • the selected motion vector may be refined at the CU and/or sub-CU level to form a motion field (e.g., the final motion field) for the template matching FRUC CU.
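The template comparison and selection can be sketched as follows; SAD is one possible distortion measure, and fetch_ref_template is a hypothetical accessor into the reference picture.

```python
import numpy as np

def best_template_mv(cur_template, candidate_mvs, fetch_ref_template):
    """Evaluate each candidate motion vector by the distortion (SAD here)
    between the current block's template (908) and the reference template
    (910) it points to, keeping the minimum-cost vector."""
    best_mv, best_cost = None, float("inf")
    for mv in candidate_mvs:
        ref_template = np.asarray(fetch_ref_template(mv), dtype=np.int64)
        cost = np.abs(np.asarray(cur_template, dtype=np.int64)
                      - ref_template).sum()
        if cost < best_cost:
            best_mv, best_cost = mv, cost
    return best_mv, best_cost
```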
  • The CU may be bi-predicted.
  • The CU that uses the FRUC bilateral matching mode may be determined (e.g., ensured) to be bi-predicted: once the motion vector (e.g., the best motion vector) in one direction is taken, the motion vector(s) in the other direction(s) may be derived based on corresponding temporal distances.
  • When the FRUC template matching mode is used, the CU may be uni-predicted and/or bi-predicted.
  • the syntax table for an illumination compensation indication may be used to determine whether to encode the illumination compensation indication.
  • The syntax table may be shared across an encoder and/or decoder. For one or more of, but not limited to, the following cases, the illumination compensation indication (e.g., the LIC flag) may not be encoded and/or decoded. If an indication (e.g., the slice level LIC flag) indicates that LIC is not used in the slice, the LIC flag may not be coded for a CU in the slice.
  • a CU that is coded using an intra mode and/or regular affine motion prediction mode may not code the LIC flag (e.g., since LIC does not apply to such a CU).
  • the signaling of the LIC flag may be skipped (e.g., bypassed).
  • the LIC flag of the current block may be derived from the current block's spatial and/or temporal neighboring block(s).
  • Table 1 illustrates an example syntax table for the illumination compensation indication (e.g., the LIC flag).
  • illumination changes may impact a prediction (e.g., motion-compensated prediction of a current picture/block).
  • FIG. 10 illustrates an example of illumination changes over time.
  • Illumination may fade over time. Illumination compensation may not be needed, for example, when illumination does not fade over time.
  • One or more prediction modes (e.g., motion compensation prediction modes), such as a bi-prediction mode (e.g., a bi-lateral mode), may be indicative of continuous motion changes between a current block and one or more reference blocks.
  • a sample of an object may travel over a period of time from t-3 to t.
  • a sample may be used to refer to a luma and/or chroma sample at a given location, for example, at a point on an object.
  • The intensity value of the sample may change from vt-3 to vt over the period of time (e.g., along the sample's motion trajectory).
  • The sample may be predicted using the sample's prediction value(s) from t-3 to t-1.
  • The prediction value(s) may be bounded within vt-3 and vt-1.
  • The prediction value(s) within vt-3 and vt-1 may include illumination changes that are greater than at least one of a minimum illumination change, a minimal illumination change, a threshold illumination change, or a selected illumination change.
  • The illumination change between time t-3 and time t may be indicated by (e.g., be) the difference between vt-3 and vt.
  • The difference between vt-3 and vt may be equal to or greater than a threshold value. If the difference between vt-3 and vt is less than a threshold value, it may indicate that the illumination is substantially the same from time t-3 to time t.
  • Different techniques/tools may be used for illumination compensation.
  • the techniques may include one or more LIC, GBi prediction, and other techniques described herein.
  • Different techniques/tools may differ in a way in which weights and offsets that are used for compensation are acquired.
  • Weights and offsets may be derived based on the templates of the current block and/or one or more reference blocks (e.g., a corresponding reference block). Information (e.g., additional information for illumination compensation) may not be transmitted to the decoder (e.g., if the weights and offsets are derived).
  • weights may be signaled. The signaling of the weights for GBi prediction may be explicit.
  • the LIC may be used to compensate illumination changes.
  • the LIC may assume a correlation (e.g., a strong correlation) between the template of the current block and the template of the reference block.
  • a strong correlation between the template of the current block and the template of the reference block may not be guaranteed.
  • Such prediction modes may include a bi-prediction mode.
  • LIC weights and/or offsets may be estimated separately for each prediction.
  • LIC weights and offsets may not be jointly estimated for each prediction.
  • LIC may be applied to one or more (e.g., each) of the prediction blocks. Pixel values may be bit-wise shifted downward from a high precision to a low precision (e.g., after the LIC is applied on each of the prediction blocks).
  • For GBi prediction, weights (e.g., a weight set) may be pre-defined. The weights may not depend on a correlation (e.g., an assumption of a strong correlation) between the template of the current block and the template of the reference block. The weights (e.g., pre-defined weight values) may be signaled explicitly.
  • the GBi prediction may be used with the bi-prediction.
  • Derivation of weights and offsets may be performed when using a template-based technique such as LIC.
  • LIC may be combined with some prediction techniques (e.g., some advanced prediction techniques).
  • The advanced prediction techniques may include ATMVP, STMVP, and FRUC.
  • the CU may be divided into sub-CU blocks.
  • One or more of the sub-CU blocks may be assigned with a different motion vector.
  • each of the sub-CU blocks may be assigned with a motion vector (e.g., a unique motion vector) that is different from a motion vector that is assigned to a different sub-CU block.
  • a derivation operation (e.g., the derivation of weights and offsets) may be performed on one or more (e.g., each) of the sub-CU blocks.
  • the percentage of CUs that are coded with sub-CU block coding modes (e.g., the advanced prediction techniques) may be high.
  • A derivation procedure that is similar to or the same as the derivation procedure that the encoder performs may be performed (e.g., at the decoder). GBi may be used to deal with illumination changes for bi-directional prediction.
  • Illumination compensation may be performed.
  • the illumination compensation may be based on GBi, for example, in bi-prediction blocks (e.g., to resolve local illumination change issues).
  • Other illumination compensation techniques may be disabled (e.g., if some prediction modes are used).
  • The LIC process may be disabled (e.g., switched off) for a bi-directional prediction. If the LIC process is disabled on a current CU, prediction (e.g., motion compensation) may be performed using GBi.
  • an illumination compensation process may be disabled for a certain motion compensation prediction mode (e.g., a bi-prediction mode).
  • the illumination compensation process may include an LIC process.
  • a determination may be made as to whether to disable the illumination compensation process based on the illumination compensation indication (e.g., an LIC flag).
  • the illumination compensation indication may include an LIC flag.
  • Whether to parse the illumination compensation indication may be determined.
  • the decoder may determine whether to parse the illumination compensation indication based on whether a certain prediction mode is used.
  • the decoder may determine to bypass parsing the illumination compensation indication (e.g., skipping the LIC flag) if a bi-prediction mode is used.
  • the decoder may determine to disable the illumination compensation process if parsing of the illumination compensation indication is bypassed.
  • the illumination compensation indication may be signaled for a certain prediction mode. Whether to disable the illumination compensation process may be determined based on a parsed illumination compensation indication.
  • The illumination compensation process may be switched off on the encoder side, for example, for a bi-prediction mode (e.g., FRUC bi-lateral mode).
  • the decoder may determine to bypass parsing the illumination compensation indication if the prediction mode is indicative of continuous motion changes between the current block and one or more reference blocks. If the decoder determines to bypass parsing the illumination compensation indication, the decoder may determine to disable the illumination compensation process.
  • The prediction modes for which the signaling of the illumination compensation indication is bypassed may be related to one or more of an explicit inter prediction mode, a regular merge mode, an ATMVP and/or STMVP merge mode, or an FRUC merge mode.
  • the signaling of the illumination compensation indication may be bypassed for certain explicit inter prediction modes (e.g., an explicit bi-prediction mode).
  • The LIC flag may be a CU level flag that may be signaled for an inter prediction mode (e.g., explicit inter prediction mode).
  • FIG. 11 illustrates an example LIC flag signaling for an explicit inter prediction mode.
  • the encoder may determine the current CU is a candidate for prediction using an explicit inter prediction mode at 1104.
  • the encoder may encode information related to the current CU (e.g., other CU information) at 1106.
  • the encoder may determine whether the current CU is to be bi-predicted at 1108. If the current CU is to be bi-predicted, the encoder may bypass signaling an illumination compensation indication and/or encode residual at 1114.
  • If the current CU is not to be bi-predicted, the encoder may encode the illumination compensation indication at 1110 and/or encode residual at 1112.
  • the encoder may finish encoding the current CU at 1116.
  • the encoder may signal the current CU in the bitstream at 1118.
  • the LIC flag may not be received and/or decoded when the current CU is coded using bi-prediction and using explicit inter prediction mode.
  • the LIC flag may be decoded when the prediction information of the current CU is parsed.
  • the prediction information may indicate the prediction mode and/or the prediction direction information.
  • The parsing of the LIC flag may be bypassed based on the prediction mode and/or prediction direction information. For example, whether the current CU is explicitly bi-directionally predicted may be determined (e.g., as shown in FIG. 12). If the current CU is explicitly bi-directionally predicted, the parsing of the LIC flag may be skipped.
  • FIG. 12 illustrates an example LIC flag decoding for an explicit inter prediction mode.
  • As shown in FIG. 12, an explicit inter prediction mode may be identified in the bitstream for a current CU at 1204.
  • Information related to the current CU (e.g., other CU information) may be parsed.
  • Prediction direction information may be parsed at 1208.
  • Whether the current CU is bi-predicted may be determined at 1210. If it is determined that the current CU is bi-predicted, parsing of an illumination compensation indication may be bypassed, and residual may be parsed at 1216. If it is determined that the current CU is not bi-predicted, the illumination compensation indication may be parsed at 1212, and residual may be parsed at 1214.
  • Prediction (e.g., motion compensation) may be performed based on the parsed information.
  • When bi-prediction is used, LIC may be disabled. For bi-prediction, the LIC flag may not be decoded. For example, if bi-prediction is used, the LIC flag may be set to 0, and the weight and offset derivation process may be skipped (e.g., and the weight and/or offset values may be set to default). A minimal parsing-rule sketch is provided below.
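The sketch below combines the FIG. 12 rule with the earlier syntax conditions (slice-level gating, intra/affine CUs). read_bit is a hypothetical bitstream accessor; the exact condition set in the actual syntax table may differ.

```python
def maybe_parse_lic_flag(read_bit, slice_lic_enabled,
                         is_intra_or_affine, is_explicit_bi_pred):
    """Decoder-side sketch: return the LIC flag for the current CU. When
    parsing is bypassed, the flag is inferred to be 0 (LIC disabled) and the
    weight/offset derivation is skipped."""
    if not slice_lic_enabled or is_intra_or_affine:
        return False               # LIC flag not coded for these CUs
    if is_explicit_bi_pred:
        return False               # bypass parsing for explicit bi-prediction
    return read_bit() == 1         # otherwise the flag is in the bitstream
```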
  • the signaling of the illumination compensation indication may be bypassed for certain regular merge modes.
  • Regular merge modes may include one or more of, but not limited to, spatial merge candidate, temporal motion vector predictor (TMVP), and/or the default merge mode where no sub-block level LIC process may be involved.
  • The LIC flag may not be signaled for a regular merge mode. For example, whether LIC is disabled may be determined based on the prediction direction information and/or the LIC flag associated with a merge candidate. The prediction direction information and/or the LIC flag may be inherited from the merge candidate.
  • the location of a merge block (e.g., merge targeting block) may be known.
  • One or more (e.g., all) the prediction information of the current block may be obtained.
  • The decision on whether the current block is coded in uni-prediction or bi-prediction may be made during the parsing process for blocks coded using a regular merge mode. If a CU is determined to be bi-predicted, the LIC flag may be set to a value that indicates that the LIC process may be disabled, for example, even if the LIC flag that is inherited from the merge candidate is set to true.
  • the weight and offset derivation process in LIC may be skipped. If a CU is determined to be uni-predicted and/or the LIC flag is set to true, the LIC process may be performed on the CU.
  • a determination as to whether the prediction mode (e.g., MC prediction mode) for the current CU is a regular merge mode may be made.
  • Whether to disable the illumination compensation process for at least one sub-CU of the current CU may be determined based on whether the at least one sub-CU of the current CU is bi-prediction coded. For example, it may be determined to disable the illumination compensation process for a sub-CU of the current CU if the sub-CU is bi-prediction coded. If it is determined to disable the illumination compensation process on the at least one sub-CU of the current CU, prediction (e.g., motion compensation) may be performed on the at least one sub-CU without the illumination compensation process.
  • The signaling of the illumination compensation indication may be bypassed for certain ATMVP and/or STMVP merge modes.
  • the LIC flag may be derived from the first merge candidate in the merge candidate list.
  • whether the CU is uni- or bi-predicted may be determined or defined using different approaches.
  • two motion vectors may be provided for the CU.
  • a motion field may be constructed.
  • An ATMVP and/or STMVP CU may not have two motion vectors.
  • the motion field may be constructed in a refined manner.
  • the ATMVP or STMVP CU may be divided into one or more (e.g., 4x4) sub-CU blocks.
  • A (e.g., each) sub-CU block of the 4x4 sub-CU blocks may be assigned with one or two motion vectors depending on whether the current sub-CU block is uni-predicted or bi-predicted. For example, one motion vector may be assigned to the current sub-CU block of the 4x4 sub-CU blocks if the current sub-CU block is uni-predicted. If the current sub-CU block is bi-predicted, two motion vectors may be assigned to the current sub-CU block of the 4x4 sub-CU blocks.
  • the illumination compensation process may be disabled for the sub-CU blocks that are coded by bi-prediction, and the LIC operation or process may be enabled for the sub-CU blocks which are coded by uni-prediction.
  • The LIC flag may be set for a (e.g., each) sub-CU block, for example, when the CU level LIC flag indicates that LIC is to be applied. For example, the LIC flag may be set to true for the uni-predicted sub-CU blocks. The LIC flag may be set to false for the bi-predicted sub-CU blocks.
  • a determination may be made as to whether the prediction mode for the current CU (e.g., MC prediction mode) is an ATMVP and/or STMVP mode. Whether to disable the illumination compensation process for a sub-CU of the current CU may be determined based on whether the sub-CU is bi-prediction coded. For example, the illumination compensation process may be disabled for the sub-CU if the sub-CU is bi-prediction coded. If it is determined to disable the illumination compensation process on the sub-CU, prediction (e.g., motion compensation) may be performed on the sub-CU without the illumination compensation process.
  • Whether a sub-CU block of the sub-CU blocks included in the current CU is bi-predicted may be determined. If one (e.g., at least one) of the sub-CU blocks is bi-predicted, the checking of the remaining sub-CU blocks may be skipped, and/or the CU level illumination compensation indication may be set to a value that indicates the LIC process may be disabled. Motion compensation may be performed on the current CU without LIC based on the determination to disable the LIC process, for example, by setting the CU level illumination compensation indication to a value that indicates the LIC process may be disabled.
  • FIG. 13 illustrates an example LIC flag derivation for an ATMVP and/or STMVP CU.
  • ATMVP and/or STMVP prediction information may be received.
  • An LIC flag may be derived (e.g., from a neighboring merge candidate) at 1302.
  • CU-level motion vectors may be derived at 1304.
  • It may be determined whether a CU-level illumination compensation indication has a value that indicates the LIC process may be enabled.
  • A variable i may be set to one at 1308. It may be determined whether the variable i is less than or equal to the number of sub-CU blocks in the current CU at 1310. If the variable i is less than or equal to the number of sub-CU blocks in the current CU, whether the current sub-CU block is bi-prediction coded may be determined at 1322. If the current sub-CU block is bi-prediction coded, the illumination compensation indication may be set to a value that indicates the LIC process may be disabled at 1326.
  • If the current sub-CU block is not bi-prediction coded, the variable i may be increased by one at 1324.
  • the variable i may be checked again to determine whether the variable i is still less than or equal to the number of sub-CU blocks, and if so, it may be determined whether a next sub-CU block is bi-prediction coded.
  • If the variable i is greater than the number of sub-CU blocks in the current CU at 1310, or the CU-level illumination compensation indication is set to a value that indicates the LIC process may be disabled at 1326, the variable i may be set to one at 1312, and references for sub-CUi may be fetched at 1314.
  • a determination may be made as to whether the CU-level illumination compensation indication has a value that indicates LIC process may be enabled at 1316. If the CU-level illumination compensation indication has a value that indicates LIC process may be enabled, the LIC process may be performed for the sub-CUi at 1318.
  • Whether variable i is still less than or equal to the number of sub-CU blocks may be determined. If variable i is still less than or equal to the number of sub-CU blocks, the variable may be increased by one at 1328, and references for sub-CUi+1 may be fetched. If variable i is greater than the number of sub-CU blocks, reconstruction of the CU may be completed. If the CU-level illumination compensation indication has a value that indicates the LIC process may be disabled at 1316, the LIC process may be disabled (e.g., skipped or bypassed) for the sub-CUi. Whether variable i is still less than or equal to the number of sub-CU blocks may be determined at 1320. The derivation process may be performed on individual sub-blocks for the LIC process for ATMVP and/or STMVP. When the LIC for ATMVP and/or STMVP is disabled, fetching of the reference templates from two prediction directions may be skipped.
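The CU-level derivation of FIG. 13 can be condensed into a short sketch; the flattened per-sub-CU boolean list is an assumption for illustration.

```python
def derive_atmvp_lic_flag(inherited_lic_flag, sub_cu_is_bi_pred):
    """FIG. 13 sketch: start from the LIC flag inherited from the first merge
    candidate; if any sub-CU block is bi-predicted, set the CU-level
    indication to 'disabled' and skip checking the remaining sub-CU blocks."""
    if not inherited_lic_flag:
        return False
    for is_bi in sub_cu_is_bi_pred:   # sub-CU blocks i = 1..N
        if is_bi:
            return False              # disable LIC; stop checking early
    return True

assert derive_atmvp_lic_flag(True, [False, True, False]) is False
```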
  • the signaling of the illumination compensation indication may be bypassed for certain FRUC merge modes (e.g., a FRUC bi-lateral mode).
  • The CU structure of an FRUC merge candidate may resemble the CU structure of a merge candidate or other prediction modes (e.g., an ATMVP or STMVP merge candidate).
  • An illumination compensation indication (e.g., an LIC flag) may or may not be signaled depending on the FRUC merge mode that is used. An FRUC merge mode may include a bi-lateral mode and/or a template mode.
  • Whether to disable the illumination compensation process may be based on whether the current CU is coded in a bi-lateral mode (e.g., FRUC bi-lateral mode). It may be identified that the bi-lateral mode is used for the current CU, for example, based on an FRUC flag and/or a FRUC mode. Signaling an illumination compensation indication may be skipped for the current CU based on a prediction mode (e.g., FRUC bi-lateral mode). A bit(s) for one or more (e.g., each) FRUC bi-lateral encoded CU may be saved.
  • an FRUC enabling indication (e.g., an FRUC flag) may be decoded for the current CU.
  • An FRUC type (e.g., an FRUC mode) may be decoded for the current CU, which may indicate whether the bi-lateral mode or the template matching mode is used.
  • the decoder may determine whether to bypass parsing the illumination compensation indication for the current CU based on whether the bi-lateral mode is used for the current CU.
  • If the bi-lateral mode is used for the current CU, the decoder may bypass parsing the LIC flag, and/or the decoder may set the LIC flag for the current CU to be false.
  • the decoder may perform prediction (e.g., motion compensation) on the current CU based on the determination whether to parse the illumination compensation indication for the current CU.
  • the decoder may determine to disable the illumination compensation process on the current CU if the decoder determines to bypass parsing the illumination compensation indication for the current CU.
  • The decoder may determine to parse the illumination compensation indication (e.g., the LIC flag) for the current CU if a different prediction mode (e.g., a prediction mode indicative of discontinuous motion changes or illumination changes between a current block and one or more reference blocks) is used for the current CU.
  • The decoder may determine whether to disable the illumination compensation process for the current CU based on the parsed illumination compensation indication. If the decoder determines that the parsed illumination compensation indication indicates that the illumination compensation process is enabled, the decoder may perform the prediction (e.g., motion compensation) with the illumination compensation process. If the decoder determines that the parsed illumination compensation indication indicates that the illumination compensation process is disabled, the decoder may perform the prediction (e.g., motion compensation) without the illumination compensation process.
  • The LIC flag signaling and/or decoding process for a bi-lateral mode may resemble that for inter bi-prediction (e.g., explicit inter prediction).
  • the approaches shown in FIG. 11 and FIG. 12 may be used to determine whether the motion compensation mode for the current CU is a bi-lateral mode. For example, if the current FRUC mode is bilateral matching, an encoder may determine that the prediction mode is a bilateral mode at 1108 and skip signaling the LIC flag. If the current FRUC mode is bilateral matching, a decoder may determine that the prediction mode is a bi-lateral mode at 1210 and bypass parsing the LIC flag.
  • the illumination compensation indication may be signaled.
  • the CU that is encoded with FRUC template mode may be uni-predicted or bi-predicted.
  • Motion vectors for multiple directions may be derived. For example, the derivation of the motion vectors for each direction may be performed separately.
  • An encoder or a decoder may examine whether a (e.g., single) template is available for a reference list. The motion derivation process may be optimized separately (e.g., for each direction). On the decoder side, the motion vectors may not be derived until a CU decoding process is finished.
  • The decoder may not know whether the current CU is uni-predicted or bi-predicted in the parsing stage.
  • the encoder may signal the illumination compensation indication for FRUC template mode.
  • the decoder may parse the illumination compensation indication when FRUC template mode is used.
  • The illumination compensation process (e.g., the LIC process) may be disabled (e.g., switched off) for bi-prediction.
  • the LIC flag may be ignored when motion compensation is performed for bi-prediction (e.g., at the decoder side).
  • a (e.g., each) sub-CU block may be examined to determine if the sub-CU block is uni-predicted or bi-predicted. If the sub-CU block is bi- predicted, the LIC process may be disabled (e.g., skipped) for that sub-CU block.
  • The encoder may be configured to form uni-predicted sub-CU block(s) (e.g., and not form bi-predicted sub-CU blocks). Bi-predicted sub-CU blocks may not appear on the decoder side, and the LIC process for bi-prediction may be (e.g., automatically) switched off.
  • The encoder and/or decoder may check the LIC flag to decide whether or not to test bi-prediction in the process of FRUC template motion estimation. If the LIC flag is false, uni-prediction and/or bi-prediction may be tested. If the illumination compensation indication has a value that indicates that the LIC process may be enabled, the motion estimation on a (e.g., one) reference list may be performed, or the better-performing reference list may be chosen to form a uni-predicted sub-CU block.
  • the LIC signaling may indicate whether the LIC process is disabled (e.g., switched off), for example, for bi-directional prediction.
  • the LIC flag may not be included in the bitstream when the CU is coded using one or more of the inter prediction mode (e.g., explicit bi-prediction mode or FRUC bi-lateral mode).
  • Table 2 illustrates an example of conditions used to avoid coding the LIC flag when one or more of the explicit bi-prediction mode or FRUC bi-lateral mode are used.
  • the LIC flag syntax table within the decoding process may be shown in Table 2.
  • Table 3 illustrates an example syntax table of the LIC flag.
  • Table 3 Exemplary syntax table of LIC flag.
  • The LIC flag may indicate whether the LIC process may be enabled for a current CU. If LIC_Flag[ x0 ][ y0 ] is equal to 1, the LIC process may be performed for the current CU. If LIC_Flag[ x0 ][ y0 ] is equal to 0, the LIC process may be disabled (e.g., skipped) for the current CU.
  • The array indices x0, y0 may specify the location (x0, y0) of the top-left luma sample of the considered coding block relative to the top-left luma sample of the picture.
  • If the LIC flag is not present, the LIC process may be skipped for the current CU.
  • The inferred value of LIC_Flag[ x0 ][ y0 ] may be equal to 0.
  • The distortion for each of the motion vectors may be calculated as a quantity criterion to decide which one among them is the best. If the LIC flag for the current distortion computation is set to true, the LIC process may be performed on one or more (e.g., each) sub-CU blocks to determine a distortion.
  • DC value(s) may be removed from the current block and/or the reference block.
  • The DC value may be removed from the current block and/or the reference block to estimate distortion for LIC (e.g., without performing the LIC process on each of the sub-CU block(s)).
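The sketch below illustrates this idea as a mean-removed SAD; pairing the DC removal with the LIC flag in this way is an assumption based on the description above.

```python
import numpy as np

def fruc_search_distortion(cur_block, ref_block, lic_flag):
    """Distortion sketch for FRUC motion estimation: when the LIC flag is set,
    remove the DC (mean) value from both blocks so the match approximates an
    illumination-compensated comparison without running the full LIC
    derivation for every candidate."""
    cur = np.asarray(cur_block, dtype=np.float64)
    ref = np.asarray(ref_block, dtype=np.float64)
    if lic_flag:
        cur = cur - cur.mean()
        ref = ref - ref.mean()
    return np.abs(cur - ref).sum()
```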
  • a mismatch may occur between FRUC motion estimation and motion compensation stage.
  • The LIC process may not be executed, and in the motion estimation stage, the DC value(s) may be removed to simulate the illumination change environment.
  • the LIC flag may be checked before the distortion is computed. When the LIC flag is true, the DC value may be kept for the computation of distortion.
  • FRUC bi-lateral mode may form a bi-directional predicted candidate after motion estimation.
  • FRUC motion estimation process described herein may be applied to FRUC bi-lateral mode.
  • An illumination compensation process at sub-CU level may be performed based on a distance between a sub-CU block and a corresponding template of the current CU.
  • the current block and the current block's template may or may not be correlated (e.g., highly correlated).
  • the current CU/sub-CU block and the current block's template may not be correlated.
  • the LIC process may be disabled.
  • the correlation between the current CU/sub-CU block and the template may be assumed.
  • the correlation assumption may fail.
  • the LIC process may be performed on the sub-CU blocks that are close to at least one of the templates of the current CU.
  • the LIC process may be skipped (e.g., or the LIC process may be disabled).
  • FIG. 14 illustrates an example where the LIC process may be performed on boundary sub-CU blocks. As shown in FIG. 14, the LIC process may be performed on boundary sub-CU blocks (e.g., the shaded blocks located at the boundaries of the CU). LIC process may not be performed on one or more other sub-CU blocks that may be farther from the templates.
  • the LIC parameters derived for the boundary sub-CU blocks may be reused.
  • The derived LIC parameters for the boundary sub-CU blocks may be applied on the inner sub-CU blocks (e.g., located farther away from the templates) whose LIC process may have been skipped.
  • a spatial distance between a sub-CU block and a template may indicate how closely the sub-CU and the template are correlated.
  • A spatial distance threshold T may be defined to determine whether the LIC process may be performed. If the current spatial distance t between the sub-CU block and the template is smaller than the threshold T (e.g., t < T), the LIC process may be performed on the sub-CU block.
  • Otherwise, the LIC process may be skipped on the sub-CU block.
  • the spatial distance threshold T may be a fixed value.
  • the spatial distance threshold T may be a dynamic value. If the spatial distance threshold T is a dynamic value, the threshold T may be determined based on the block size of the current CU and/or based on local statistics (e.g., the sample variance of the current block).
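The threshold test can be sketched over a grid of sub-CU blocks as follows; measuring the distance in sub-CU units to the nearer of the above/left templates is an assumption for illustration.

```python
def sub_cus_with_lic(num_rows, num_cols, threshold):
    """Distance-threshold sketch: a sub-CU block at row r, column c (in sub-CU
    units) is r blocks from the above template and c blocks from the left
    template; LIC is performed only where the smaller distance t satisfies
    t < T."""
    return {(r, c)
            for r in range(num_rows)
            for c in range(num_cols)
            if min(r, c) < threshold}

# With T = 1 only the top row and left column are kept, i.e., the boundary
# sub-CU blocks of FIG. 14.
assert sub_cus_with_lic(3, 3, 1) == {(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)}
```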
  • the spatial distance between the sub-CU block and the template may indicate how closely the sub-CU and template are correlated.
  • Sub-CU LIC process may use a template(s) that is spatially closer to the current sub-CU block for estimating weight(s) and/or offset(s).
  • the template that is relatively further away from the sub-CU block may not be used.
  • FIG. 15 illustrates an example where the LIC process may be performed on inner sub-CU blocks. As shown in FIG. 15, illumination compensation process may be performed on inner sub-CU blocks having closer template(s).
  • the sub-CU block that is marked with A in FIG. 15 may be an inner sub-CU block.
  • a spatial distance(s) may be defined as the number of sub-CU blocks that lies in between a sub-CU block (e.g., the current sub-CU block A) and a corresponding template.
  • The spatial distances from the current sub-CU block to the current sub-CU's corresponding above and left templates may be denoted by a spatial distance D1 and a spatial distance D2, respectively.
  • The above template may be spatially closer to the sub-CU block A (e.g., D1 < D2). If D1 < D2, the above template may be used to estimate the LIC weight(s) and offset(s). When the above template is closer to the current sub-CU block, the above template may be used.
  • the correlation between a sub-CU block and a template(s) may be determined based on the spatial distances of the sub-CU block to both templates.
  • the template that is closer may be used in the LIC parameter estimation process.
  • more than one template may be used in a weighted LIC parameter estimation process.
  • One or more weights may be applied to one or more different templates based on the respective spatial distance to the sub-CU block(s), which may be located inside the CU.
  • FIG. 16 illustrates an example weighted LIC for inner sub-CU blocks. As shown in FIG. 16, the sub-CU block that is marked with A may be an inner sub-CU block.
  • a spatial distance(s) may be defined as the number of sub-CU blocks that lies in between a sub-CU block (e.g., the current sub-CU block A) and a corresponding template.
  • The current sub-CU block A's corresponding above and left templates may be denoted by spatial distance D1 and spatial distance D2, respectively.
  • The above and left templates (e.g., template samples) may be weighted by weights w1 and w2, respectively (e.g., based on the spatial distances D1 and D2). The weighted template samples may be used as input(s) to estimate the weight(s) and/or offset(s). Incorporating the weights w1 and w2 into Eq. 2 and/or Eq. 3, the mathematical expression for estimating the weight α and offset β using the weighted template samples may be provided, for example, as shown in Eq. 8 and/or Eq. 9. For example, it may be assumed that there are m samples from the above template, and the total number of template samples is l.
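A weighted least-squares sketch in the spirit of Eq. 8/9 (which are not reproduced in this text) follows; the assignment of the per-sample weights, e.g., giving the closer template the larger of w1/w2, is an assumption.

```python
import numpy as np

def weighted_lic_params(samples_cur, samples_ref, weights):
    """Estimate (alpha, beta) minimizing
    sum_i w_i * (y_i - alpha * x_i - beta)^2 over the template samples,
    where w_i carries the template weight (w1 or w2) of sample i."""
    x = np.asarray(samples_ref, dtype=np.float64)
    y = np.asarray(samples_cur, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    sw, swx, swy = w.sum(), (w * x).sum(), (w * y).sum()
    swxx, swxy = (w * x * x).sum(), (w * x * y).sum()
    denom = sw * swxx - swx * swx
    if denom == 0:                       # degenerate: offset-only fallback
        return 1.0, (swy - swx) / sw
    alpha = (sw * swxy - swx * swy) / denom
    beta = (swy - alpha * swx) / sw
    return alpha, beta
```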
  • the number of possible weights may increase.
  • The granularity of the weights may become high, and neighboring weight values may be similar or the same if one or more (e.g., all) weights are sorted in ascending or descending order.
  • Bit shift operations may be used (e.g., to replace multiplication and/or division).
  • the weights may be quantized to a set of predefined values which may be applied to the template samples through the bit shift operation. The quantized weight values may be carefully chosen. The process of weighted LIC parameter estimation may be accelerated with minor precision loss.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Based on the prediction mode used for the current block, a decoder may determine whether to parse an illumination compensation indication for the current block. The illumination compensation indication may indicate whether to enable an illumination compensation process for the current block. If the prediction mode is indicative of continuous motion changes between the current block and one or more of the reference blocks, the decoder may bypass parsing the illumination compensation indication. The decoder may disable the illumination compensation process on the current block based on the determination to bypass parsing the illumination compensation indication for the current block.
PCT/US2018/040393 2017-06-30 2018-06-29 Compensation d'éclairage local à l'aide d'une bi-prédiction généralisée WO2019006363A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762527320P 2017-06-30 2017-06-30
US62/527,320 2017-06-30

Publications (1)

Publication Number Publication Date
WO2019006363A1 true WO2019006363A1 (fr) 2019-01-03

Family

ID=63143353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/040393 WO2019006363A1 (fr) 2017-06-30 2018-06-29 Compensation d'éclairage local à l'aide d'une bi-prédiction généralisée

Country Status (1)

Country Link
WO (1) WO2019006363A1 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020147804A1 (fr) * 2019-01-17 2020-07-23 Beijing Bytedance Network Technology Co., Ltd. Utilisation d'une prédiction de candidat virtuel et d'une prédiction pondérée dans un traitement vidéo
WO2020151765A1 (fr) * 2019-01-27 2020-07-30 Beijing Bytedance Network Technology Co., Ltd. Interpolation pour bi-prédiction avec pondération au niveau des unités de codage (cu)
WO2020192717A1 (fr) * 2019-03-26 2020-10-01 Beijing Bytedance Network Technology Co., Ltd. Dérivation de paramètres pour prédiction inter
US10939128B2 (en) 2019-02-24 2021-03-02 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation for intra prediction
US10979717B2 (en) 2018-11-06 2021-04-13 Beijing Bytedance Network Technology Co., Ltd. Simplified parameter derivation for intra prediction
US11057642B2 (en) 2018-12-07 2021-07-06 Beijing Bytedance Network Technology Co., Ltd. Context-based intra prediction
CN113302918A (zh) * 2019-01-15 2021-08-24 北京字节跳动网络技术有限公司 视频编解码中的加权预测
US11115655B2 (en) 2019-02-22 2021-09-07 Beijing Bytedance Network Technology Co., Ltd. Neighboring sample selection for intra prediction
US20210360275A1 (en) * 2019-02-01 2021-11-18 Huawei Technologies Co., Ltd. Inter prediction method and apparatus
US20220021894A1 (en) * 2019-04-09 2022-01-20 Beijing Dajia Internet Information Technology Co., Ltd. Methods and apparatuses for signaling of merge modes in video coding
CN114128271A (zh) * 2019-06-12 2022-03-01 交互数字Vc控股公司 用于视频编码和解码的照明补偿
US11284069B2 (en) 2018-10-23 2022-03-22 Beijing Bytedance Network Technology Co., Ltd. Harmonized local illumination compensation and modified inter prediction coding
US11405607B2 (en) 2018-10-23 2022-08-02 Beijing Bytedance Network Technology Co., Ltd. Harmonization between local illumination compensation and inter prediction coding
US11438581B2 (en) 2019-03-24 2022-09-06 Beijing Bytedance Network Technology Co., Ltd. Conditions in parameter derivation for intra prediction
WO2023183027A1 (fr) * 2022-03-25 2023-09-28 Tencent America LLC Procédé et appareil de contrainte adaptative sur une bi-prédiction pour des conditions hors limite
US11902507B2 (en) 2018-12-01 2024-02-13 Beijing Bytedance Network Technology Co., Ltd Parameter derivation for intra prediction

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150326881A1 (en) * 2012-12-26 2015-11-12 Sharp Kabushiki Kaisha Image decoding device
WO2015192372A1 (fr) * 2014-06-20 2015-12-23 Mediatek Singapore Pte. Ltd. Procédé simplifié pour la compensation d'éclairage dans le codage vidéo 3d et multivues
US20160366416A1 (en) * 2015-06-09 2016-12-15 Qualcomm Incorporated Systems and methods of determining illumination compensation status for video coding
WO2018064492A1 (fr) * 2016-09-30 2018-04-05 Qualcomm Incorporated Améliorations apportées à un mode de codage à conversion ascendante de fréquence d'images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150326881A1 (en) * 2012-12-26 2015-11-12 Sharp Kabushiki Kaisha Image decoding device
WO2015192372A1 (fr) * 2014-06-20 2015-12-23 Mediatek Singapore Pte. Ltd. Procédé simplifié pour la compensation d'éclairage dans le codage vidéo 3d et multivues
US20160366416A1 (en) * 2015-06-09 2016-12-15 Qualcomm Incorporated Systems and methods of determining illumination compensation status for video coding
WO2018064492A1 (fr) * 2016-09-30 2018-04-05 Qualcomm Incorporated Améliorations apportées à un mode de codage à conversion ascendante de fréquence d'images

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
C-C CHEN ET AL: "Generalized bi-prediction for inter coding", 3. JVET MEETING; 26-5-2016 - 1-6-2016; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://PHENIX.INT-EVRY.FR/JVET/,, no. JVET-C0047-v2, 28 May 2016 (2016-05-28), XP030150143 *
CHEN J ET AL: "Algorithm description of Joint Exploration Test Model 3 (JEM3)", 3. JVET MEETING; 26-5-2016 - 1-6-2016; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://PHENIX.INT-EVRY.FR/JVET/,, no. JVET-C1001, 2 July 2016 (2016-07-02), XP030150223 *
IKAI (SHARP) T: "3D-CE5.h related: Removal of parsing dependency for illumination compensation", 4. JCT-3V MEETING; 20-4-2013 - 26-4-2013; INCHEON; (THE JOINT COLLABORATIVE TEAM ON 3D VIDEO CODING EXTENSION DEVELOPMENT OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://PHENIX.INT-EVRY.FR/JCT2/,, no. JCT3V-D0060, 12 April 2013 (2013-04-12), XP030130724 *
XIU X ET AL: "Description of SDR, HDR and 360° video coding technology proposal by InterDigital Communications and Dolby Laboratories", 10. JVET MEETING; 10-4-2018 - 20-4-2018; SAN DIEGO; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://PHENIX.INT-EVRY.FR/JVET/,, no. JVET-J0015, 3 April 2018 (2018-04-03), XP030151174 *
ZHANG K ET AL: "3D-CE5.h related: Removal of parsing dependency for illumination compensation", 4. JCT-3V MEETING; 20-4-2013 - 26-4-2013; INCHEON; (THE JOINT COLLABORATIVE TEAM ON 3D VIDEO CODING EXTENSION DEVELOPMENT OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://PHENIX.INT-EVRY.FR/JCT2/,, no. JCT3V-D0152, 13 April 2013 (2013-04-13), XP030130816 *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11284069B2 (en) 2018-10-23 2022-03-22 Beijing Bytedance Network Technology Co., Ltd. Harmonized local illumination compensation and modified inter prediction coding
US11405607B2 (en) 2018-10-23 2022-08-02 Beijing Bytedance Network Technology Co., Ltd. Harmonization between local illumination compensation and inter prediction coding
US11470307B2 (en) 2018-10-23 2022-10-11 Beijing Bytedance Network Technology Co., Ltd. Harmonized local illumination compensation and intra block copy coding
US11758124B2 (en) 2018-10-23 2023-09-12 Beijing Bytedance Network Technology Co., Ltd Harmonized local illumination compensation and modified inter coding tools
US11659162B2 (en) 2018-10-23 2023-05-23 Beijing Bytedance Network Technology Co., Ltd Video processing using local illumination compensation
US11019344B2 (en) 2018-11-06 2021-05-25 Beijing Bytedance Network Technology Co., Ltd. Position dependent intra prediction
US10979717B2 (en) 2018-11-06 2021-04-13 Beijing Bytedance Network Technology Co., Ltd. Simplified parameter derivation for intra prediction
US10999581B2 (en) 2018-11-06 2021-05-04 Beijing Bytedance Network Technology Co., Ltd. Position based intra prediction
US11025915B2 (en) 2018-11-06 2021-06-01 Beijing Bytedance Network Technology Co., Ltd. Complexity reduction in parameter derivation intra prediction
US11438598B2 (en) 2018-11-06 2022-09-06 Beijing Bytedance Network Technology Co., Ltd. Simplified parameter derivation for intra prediction
US11930185B2 (en) 2018-11-06 2024-03-12 Beijing Bytedance Network Technology Co., Ltd. Multi-parameters based intra prediction
US11902507B2 (en) 2018-12-01 2024-02-13 Beijing Bytedance Network Technology Co., Ltd Parameter derivation for intra prediction
US11595687B2 (en) 2018-12-07 2023-02-28 Beijing Bytedance Network Technology Co., Ltd. Context-based intra prediction
US11057642B2 (en) 2018-12-07 2021-07-06 Beijing Bytedance Network Technology Co., Ltd. Context-based intra prediction
CN113302918A (zh) * 2019-01-15 2021-08-24 北京字节跳动网络技术有限公司 视频编解码中的加权预测
US11509927B2 (en) 2019-01-15 2022-11-22 Beijing Bytedance Network Technology Co., Ltd. Weighted prediction in video coding
US11483550B2 (en) 2019-01-17 2022-10-25 Beijing Bytedance Network Technology Co., Ltd. Use of virtual candidate prediction and weighted prediction in video processing
WO2020147805A1 (fr) * 2019-01-17 2020-07-23 Beijing Bytedance Network Technology Co., Ltd. Filtrage de déblocage à l'aide d'une prédiction de mouvement
CN113316933A (zh) * 2019-01-17 2021-08-27 北京字节跳动网络技术有限公司 使用运动预测进行去方块滤波
WO2020147804A1 (fr) * 2019-01-17 2020-07-23 Beijing Bytedance Network Technology Co., Ltd. Utilisation d'une prédiction de candidat virtuel et d'une prédiction pondérée dans un traitement vidéo
CN113302916A (zh) * 2019-01-27 2021-08-24 北京字节跳动网络技术有限公司 具有cu级别权重的双向预测的插值
CN113302916B (zh) * 2019-01-27 2024-04-12 北京字节跳动网络技术有限公司 具有cu级别权重的双向预测的插值
WO2020151765A1 (fr) * 2019-01-27 2020-07-30 Beijing Bytedance Network Technology Co., Ltd. Interpolation pour bi-prédiction avec pondération au niveau des unités de codage (cu)
WO2020151764A1 (fr) * 2019-01-27 2020-07-30 Beijing Bytedance Network Technology Co., Ltd. Procédé amélioré de compensation d'éclairage local
US20210360275A1 (en) * 2019-02-01 2021-11-18 Huawei Technologies Co., Ltd. Inter prediction method and apparatus
US11115655B2 (en) 2019-02-22 2021-09-07 Beijing Bytedance Network Technology Co., Ltd. Neighboring sample selection for intra prediction
US10939128B2 (en) 2019-02-24 2021-03-02 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation for intra prediction
US11729405B2 (en) 2019-02-24 2023-08-15 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation for intra prediction
US11438581B2 (en) 2019-03-24 2022-09-06 Beijing Bytedance Network Technology Co., Ltd. Conditions in parameter derivation for intra prediction
CN113632474B (zh) * 2019-03-26 2022-12-09 北京字节跳动网络技术有限公司 用于帧间预测的参数推导
WO2020192717A1 (fr) * 2019-03-26 2020-10-01 Beijing Bytedance Network Technology Co., Ltd. Dérivation de paramètres pour prédiction inter
CN113632474A (zh) * 2019-03-26 2021-11-09 北京字节跳动网络技术有限公司 用于帧间预测的参数推导
US20220021894A1 (en) * 2019-04-09 2022-01-20 Beijing Dajia Internet Information Technology Co., Ltd. Methods and apparatuses for signaling of merge modes in video coding
CN114128271A (zh) * 2019-06-12 2022-03-01 交互数字Vc控股公司 用于视频编码和解码的照明补偿
WO2023183027A1 (fr) * 2022-03-25 2023-09-28 Tencent America LLC Procédé et appareil de contrainte adaptative sur une bi-prédiction pour des conditions hors limite

Similar Documents

Publication Publication Date Title
US11962759B2 (en) Motion compensated bi-prediction based on local illumination compensation
US20220368943A1 (en) Motion-compensation prediction based on bi-directional optical flow
US11570470B2 (en) Complexity reduction of overlapped block motion compensation
US11575933B2 (en) Bi-directional optical flow method with simplified gradient derivation
US11425418B2 (en) Overlapped block motion compensation
EP3857886B1 (fr) Bi-prédiction pour codage vidéo
WO2019006363A1 (fr) Compensation d'éclairage local à l'aide d'une bi-prédiction généralisée
WO2019089933A1 (fr) Dérivation de mouvement de sous-bloc et affinement de vecteur de mouvement côté décodeur pour mode de fusion
US20220070441A1 (en) Combined inter and intra prediction
US20220394298A1 (en) Transform coding for inter-predicted video data
US20220385897A1 (en) Adaptive interpolation filter for motion compensation
US20220116656A1 (en) Improved intra planar prediction using merge mode motion vector candidates
WO2023194556A1 (fr) Mode intra implicite pour prédiction inter-fusion/intra combinée et prédiction intra/inter de mode de partitionnement géométrique
WO2023194558A1 (fr) Prédiction améliorée de vecteur de mouvement basée sur un sous-bloc (sbtmvp)
WO2023118048A1 (fr) Génération de liste de modes le plus probable avec dérivation de mode intra basé sur un modèle et dérivation de mode intra côté décodeur
WO2023118301A1 (fr) Résolution de vecteur de mouvement adaptative (amvr) au moyen d'une carte de profondeur ou d'une carte de mouvement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18752315

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18752315

Country of ref document: EP

Kind code of ref document: A1