WO2021134706A1 - Method and apparatus for loop filtering - Google Patents
Method and apparatus for loop filtering
- Publication number
- WO2021134706A1 (PCT/CN2019/130954)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- alf
- component
- current block
- target filter
- chrominance component
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/70—Characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/186—Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
Definitions
- the present invention relates to the technical field of digital video coding, and more specifically, to a method and device for loop filtering.
- The video coding compression process includes block division, prediction, transformation, quantization, and entropy coding, which together form a hybrid video coding framework.
- With the development of video technology, video coding and decoding standards have gradually taken shape.
- Mainstream video coding and decoding standards include the international video coding standards H.264/MPEG-AVC and H.265/MPEG-HEVC, the domestic audio and video coding standard AVS2, and the H.266/VVC international standard and AVS3 domestic standard that are under development.
- The loop filter includes the deblocking filter (DBF), the sample adaptive offset filter (Sample Adaptive Offset, SAO), and the adaptive loop filter (Adaptive Loop Filter, ALF). Among them, the filtering process still has room for improvement.
- DBF: deblocking filter
- SAO: Sample Adaptive Offset
- ALF: Adaptive Loop Filter
- the present invention provides a method and device for loop filtering. Compared with the prior art, the complexity of loop filtering can be reduced and the filtering effect can be improved.
- a method for loop filtering including:
- Encoding is performed according to the filtered chrominance component of the current block, and the total number of the multiple cross-component ALF filters is encoded as a syntax element, wherein the bitstream of one frame of image contains only one syntax element for indicating the total number of the multiple cross-component ALF filters.
- a method of loop filtering including:
- the total number of cross-component ALF filters and the index of the target filter are decoded from the code stream.
- The target filter is the ALF filter used for the chrominance component of the current block; the code stream of one frame of image contains only one syntax element for indicating the total number of cross-component ALF filters.
- a device for loop filtering including: a memory for storing codes;
- the processor is configured to execute the code stored in the memory to perform the following operations:
- Encoding is performed according to the filtered chrominance component of the current block, and the total number of the multiple cross-component ALF filters is encoded as a syntax element, wherein the bitstream of one frame of image contains only one syntax element for indicating the total number of the multiple cross-component ALF filters.
- a loop filtering device including:
- Memory used to store code
- the processor is configured to execute the code stored in the memory to perform the following operations:
- the total number of cross-component ALF filters and the index of the target filter are decoded from the code stream.
- The target filter is the ALF filter used for the chrominance component of the current block; the code stream of one frame of image contains only one syntax element for indicating the total number of cross-component ALF filters.
- The technical solutions of the embodiments of the present application improve the coding and decoding performance by optimizing the coding method in the loop filtering process of coding and decoding.
- Fig. 1 is a structural diagram of a technical solution applying an embodiment of the present application.
- Fig. 2 is a schematic diagram of a video coding framework according to an embodiment of the present application.
- Fig. 3 is a schematic diagram of a video decoding framework according to an embodiment of the present application.
- Fig. 4 is a schematic diagram of a Wiener filter according to an embodiment of the present application.
- Fig. 5a is a schematic diagram of an ALF filter according to an embodiment of the present application.
- Fig. 5b is a schematic diagram of another ALF filter according to an embodiment of the present application.
- Fig. 6 is a schematic flowchart of a loop filtering method according to an embodiment of the present application.
- FIG. 7 is a schematic diagram of the shape of a CC-ALF filter according to an embodiment of the present application.
- FIG. 8 is a schematic flowchart of a loop filtering method according to another embodiment of the present application.
- FIG. 9 is a schematic flowchart of a loop filtering method according to another embodiment of the present application.
- FIG. 10 is a schematic block diagram of a loop filtering device according to another embodiment of the present application.
- FIG. 11 is a schematic block diagram of a loop filtering device according to another embodiment of the present application.
- the embodiments of the present application can be applied to standard or non-standard image or video encoders.
- For example, an encoder of the VVC standard.
- The sequence numbers of the processes do not imply an order of execution.
- The execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present application.
- Fig. 1 is a structural diagram of a technical solution applying an embodiment of the present application.
- the system 100 can receive the data 102 to be processed, process the data 102 to be processed, and generate processed data 108.
- the system 100 may receive the data to be encoded and encode the data to be encoded to generate encoded data, or the system 100 may receive the data to be decoded and decode the data to be decoded to generate decoded data.
- the components in the system 100 may be implemented by one or more processors.
- the processor may be a processor in a computing device or a processor in a mobile device (such as a drone).
- the processor may be any type of processor, which is not limited in the embodiment of the present invention.
- the processor may include an encoder, a decoder, or a codec, etc.
- One or more memories may also be included in the system 100.
- the memory can be used to store instructions and data, for example, computer-executable instructions that implement the technical solutions of the embodiments of the present invention, to-be-processed data 102, processed data 108, and so on.
- the memory may be any type of memory, which is not limited in the embodiment of the present invention.
- the data to be encoded may include text, images, graphic objects, animation sequences, audio, video, or any other data that needs to be encoded.
- The data to be encoded may include sensor data from sensors, such as vision sensors (for example, cameras, infrared sensors), microphones, near-field sensors (for example, ultrasonic sensors, radars), position sensors, temperature sensors, touch sensors, etc.
- the data to be encoded may include information from the user, for example, biological information, which may include facial features, fingerprint scans, retinal scans, voice recordings, DNA sampling, and the like.
- Fig. 2 is a schematic diagram of a video coding framework 2 according to an embodiment of the present application.
- each frame in the video to be coded is coded in sequence.
- the current coded frame mainly undergoes processing such as prediction (Prediction), transformation (Transform), quantization (Quantization), and entropy coding (Entropy Coding), and finally the bit stream of the current coded frame is output.
- The decoding process usually decodes the received code stream according to the inverse of the above process to recover the video frame information from before encoding.
- the video encoding framework 2 includes an encoding control module 201, which is used to perform decision-making control actions and parameter selection in the encoding process.
- The encoding control module 201 controls the parameters used in transformation, quantization, inverse quantization, and inverse transformation; controls the selection of intra-frame or inter-frame mode; and controls the parameters of motion estimation and filtering.
- The control parameters of the encoding control module 201 are also input to the entropy encoding module and encoded to form a part of the encoded bitstream.
- The frame to be coded undergoes division 202: it is first divided into slices, which are then divided into blocks.
- Specifically, the frame to be encoded is divided into a plurality of non-overlapping largest coding tree units (CTUs), and each CTU can be iteratively divided into a series of smaller coding units (Coding Unit, CU) by quad-tree, binary-tree, or ternary-tree partitioning.
- the CU may also include a prediction unit (Prediction Unit, PU) and a transformation unit (Transform Unit, TU) associated with it.
- PU is the basic unit of prediction
- TU is the basic unit of transformation and quantization.
- the PU and TU are respectively obtained by dividing into one or more blocks on the basis of the CU, where one PU includes multiple prediction blocks (PB) and related syntax elements.
- the PU and TU may be the same, or obtained by the CU through different division methods.
- at least two of the CU, PU, and TU are the same.
- CU, PU, and TU are not distinguished, and prediction, quantization, and transformation are all performed in units of CU.
- the CTU, CU, or other data units formed are all referred to as coding blocks in the following.
- the data unit for video encoding may be a frame, a slice, a coding tree unit, a coding unit, a coding block, or any group of the above.
- the size of the data unit can vary.
- a prediction process is performed to remove the spatial and temporal redundant information of the current frame to be encoded.
- predictive coding methods include intra-frame prediction and inter-frame prediction.
- Intra-frame prediction uses only the reconstructed information in the current frame to predict the current coding block
- inter-frame prediction uses the information in other previously reconstructed frames (also called reference frames) to predict the current coding block.
- Specifically, in the embodiment of the present application, the encoding control module 201 is used to decide whether to select intra-frame prediction or inter-frame prediction.
- The process of intra-frame prediction 203 includes: obtaining the reconstructed blocks of coded neighboring blocks around the current coding block as reference blocks; using a prediction mode method to calculate a predicted value based on the pixel values of the reference blocks, generating a prediction block; subtracting the corresponding pixel values of the current coding block and the prediction block to obtain the residual of the current coding block; and transforming 204, quantizing 205, and entropy coding 210 that residual to form the code stream of the current coding block. Further, after undergoing the above coding process, all coding blocks of the current frame to be coded form a part of the coded stream of the frame. In addition, the control and reference data generated in intra-frame prediction 203 are also encoded by entropy encoding 210 to form a part of the encoded bitstream.
- the transform 204 is used to remove the correlation of the residual of the image block, so as to improve the coding efficiency.
- The two-dimensional discrete cosine transform (DCT) and the two-dimensional discrete sine transform (DST) are usually used.
- The residual information is multiplied by an N×M transformation matrix and its transpose, and the transform coefficients of the current coding block are obtained after the multiplication.
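As a hedged illustration of the matrix form just described (an orthonormal floating-point DCT-II, not the integer transform matrices the standards actually use), the two-dimensional transform Y = T·X·Tᵀ can be sketched in pure Python:

```python
import math

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix T (n x n)."""
    t = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        t.append([c * math.cos(math.pi * (2 * m + 1) * k / (2 * n))
                  for m in range(n)])
    return t

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def dct2d(block):
    """Two-dimensional transform: Y = T * X * T^T, as described above."""
    t = dct_matrix(len(block))
    return matmul(matmul(t, block), transpose(t))
```

For a constant residual block, all the energy lands in the DC coefficient Y[0][0] and every other coefficient is zero, which is what makes the subsequent quantization and entropy coding efficient.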
- the quantization 205 is used to further improve the compression efficiency.
- The transform coefficients can be quantized to obtain the quantized coefficients, and the quantized coefficients are then entropy-encoded 210 to obtain the residual code stream of the current coding block, where the entropy coding method includes but is not limited to context adaptive binary arithmetic coding (Context Adaptive Binary Arithmetic Coding, CABAC).
- CABAC Context Adaptive Binary Arithmetic Coding
- The coded neighboring block in the intra prediction 203 process is a neighboring block that was coded before the current coding block; the residual generated during the coding of that neighboring block is transformed 204, quantized 205, inverse quantized 206, and inverse transformed 207, and the result is added to the prediction block of the neighboring block to obtain the reconstructed block.
- The inverse quantization 206 and the inverse transformation 207 are the inverse processes of the quantization 205 and the transformation 204, and are used to restore the residual data from before quantization and transformation.
- the inter prediction process includes motion estimation 208 and motion compensation 209. Specifically, the motion estimation is performed 208 according to the reference frame image in the reconstructed video frame, and the image block most similar to the current encoding block is searched for in one or more reference frame images according to a certain matching criterion as a matching block.
- The relative displacement between the matching block and the current coding block is the motion vector (Motion Vector, MV) of the current block to be coded.
- Motion Compensation is performed 209 on the frame to be coded based on the motion vector and the reference frame to obtain the prediction value of the frame to be coded.
- the original value of the pixel of the frame to be coded is subtracted from the corresponding predicted value to obtain the residual of the frame to be coded.
- the residual of the current frame to be encoded is transformed 204, quantized 205, and entropy encoding 210 to form a part of the encoded bitstream of the frame to be encoded.
- the control and reference data generated in the motion compensation 209 are also encoded by the entropy encoding 210 to form a part of the encoded bitstream.
- the reconstructed video frame is a video frame obtained after filtering 211.
- the filtering 211 is used to reduce compression distortion such as blocking effects and ringing effects generated in the encoding process.
- the reconstructed video frame is used to provide a reference frame for inter-frame prediction; in the decoding process, the reconstructed video frame is output as the final decoded video after post-processing.
- the filtering 211 includes at least one of the following filtering techniques: deblocking DB filtering, adaptive sample compensation offset SAO filtering, adaptive loop filtering ALF, cross-component ALF (Cross-Component ALF, CC-ALF).
- ALF is set after DB and/or SAO.
- the luminance component before ALF is used to filter the chrominance component after ALF.
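A minimal sketch of this cross-component idea (the function name, coefficient positions, and call shape here are illustrative assumptions, not the VVC CC-ALF syntax): a small filter applied to luma samples taken before luma ALF produces a correction that is added to the chroma sample after chroma ALF.

```python
def cc_alf_correction(luma_pre_alf, coeffs, xc, yc, offsets):
    """Illustrative cross-component correction term.

    luma_pre_alf: 2D list of luma samples *before* luma ALF, indexed [y][x]
    coeffs:       filter coefficients, one per tap
    (xc, yc):     luma position collocated with the chroma sample
    offsets:      (dx, dy) tap positions relative to (xc, yc) -- assumed shape
    """
    return sum(c * luma_pre_alf[yc + dy][xc + dx]
               for c, (dx, dy) in zip(coeffs, offsets))

# The corrected chroma sample would then be:
#   chroma_out = chroma_after_alf + cc_alf_correction(...)
```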
- the filter parameters in the process of filtering 211 are also transmitted to the entropy coding for coding, forming a part of the coded bitstream.
- Fig. 3 is a schematic diagram of a video decoding framework 3 according to an embodiment of the present application.
- video decoding executes operation steps corresponding to video encoding.
- the residual data undergoes inverse quantization 302 and inverse transformation 303 to obtain original residual data information.
- If intra-frame prediction is used, the reconstructed image blocks in the current frame are used to construct the prediction information according to the intra-frame prediction method; if inter-frame prediction is used, the reference block in the reconstructed image is determined according to the decoded motion compensation syntax to obtain the prediction information. The prediction information and the residual information are then superimposed and filtered 311 to obtain the reconstructed video frame, which undergoes post-processing 306 to produce the decoded video.
- the filter 311 may be the same as the filter 211 in FIG. 2 and includes at least one of the following: deblocking DB filter, adaptive sample compensation offset SAO filter, adaptive loop filter ALF, cross-component ALF (Cross-Component ALF, CC-ALF).
- the filter parameters and control parameters in the filter 311 can be obtained by entropy decoding the coded code stream, and filtering is performed based on the obtained filter parameters and control parameters respectively.
- the DB filter is used to process pixels on the boundary between the prediction unit PU and the transformation unit TU, and a low-pass filter obtained by training is used to perform nonlinear weighting of boundary pixels, thereby reducing blocking effects.
- SAO filtering takes the coding block in the frame image as a unit, classifies the pixel values in the coding block, and adds a compensation value to each class of pixels. Different coding blocks use different filtering forms, and the compensation values of the different pixel classes within a coding block differ, so that the reconstructed frame image is closer to the original frame image and ringing effects are avoided.
- ALF filtering is a Wiener filtering process.
- Filter coefficients are calculated and used for filtering, mainly to minimize the mean-square error (Mean-Square Error, MSE) between the reconstructed frame image and the original frame image.
- MSE: Mean-Square Error
- Suppose a pixel signal in the currently encoded original coding frame is X,
- the reconstructed pixel signal after encoding, DB filtering, and SAO filtering is Y,
- and the noise or distortion introduced into Y in this process is e.
- The reconstructed pixel signal is filtered by the filter coefficients f of the Wiener filter to form the ALF reconstructed signal, such that the mean-square error between the ALF reconstructed signal and the original pixel signal is minimized, and the resulting f is taken as the ALF filter coefficients.
- The calculation formula of f is as follows: f = argmin_f E[(X − fᵀY)²], whose solution satisfies the Wiener–Hopf (normal) equations E[YYᵀ]·f = E[YX].
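A minimal numerical sketch of this minimization (a toy two-tap least-squares solve in pure Python, not the VTM derivation of ALF coefficients): the normal equations (YᵀY)f = YᵀX yield the MSE-optimal taps.

```python
def wiener_2tap(Y, X):
    """Solve the 2-tap Wiener-Hopf normal equations (Y^T Y) f = Y^T X.

    Y: list of (y1, y2) pairs of reconstructed-signal taps
    X: list of original pixel values; f minimises sum((x - f.y)^2)
    """
    a = sum(y1 * y1 for y1, _ in Y)
    b = sum(y1 * y2 for y1, y2 in Y)
    d = sum(y2 * y2 for _, y2 in Y)
    p = sum(y1 * x for (y1, _), x in zip(Y, X))
    q = sum(y2 * x for (_, y2), x in zip(Y, X))
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)

# Toy check: two observations that bracket the original signal are best
# recombined with equal weights of 0.5 each.
orig = [float(v) for v in range(8)]
taps = [(v + 0.1, v - 0.1) for v in orig]
f = wiener_2tap(taps, orig)
```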
- A filter composed of a set of ALF filter coefficients is shown in Figures 5a and 5b: either 13 filter coefficients C0 to C12 distributed symmetrically, with a filter length L of 7; or 7 filter coefficients C0 to C6 distributed symmetrically, with a filter length L of 5.
- The filter shown in Figure 5a is also called a 7*7 filter and is suitable for the luminance component of the coded frame.
- The filter shown in Figure 5b is also called a 5*5 filter and is suitable for the chrominance component of the coded frame.
- The filter composed of ALF filter coefficients may also take other forms, for example a symmetric distribution with a filter length of 9; this is not limited here.
- A weighted average of the surrounding pixels is used to obtain the filtered result for the current point, that is, the corresponding pixel in the ALF reconstructed image frame.
- the pixel I (x, y) in the reconstructed image frame is the current pixel to be filtered
- (x, y) is the position coordinate of the current pixel to be filtered in the encoding frame
- The filter coefficient at the filter center corresponds to the current pixel to be filtered.
- the other filter coefficients in the filter correspond to the pixels around I(x, y) one by one.
- The filter coefficient values in the filter are the weight values.
- Each filter coefficient value in the filter is multiplied by the value of its corresponding pixel point.
- The weighted sum of these products is the filtered pixel value O(x, y) of the current pixel I(x, y) to be filtered.
- The specific calculation formula is as follows: O(x, y) = Σ_{(i,j)} w(i, j)·I(x+i, y+j),
- where w(i, j) represents any filter coefficient in the filter,
- (i, j) represents the position of the filter coefficient relative to the center point of the filter,
- and i and j are both integers greater than −L/2 and less than L/2, where L is the length of the filter.
- the filter coefficient C12 at the center of the filter is represented as w(0,0)
- the filter coefficient C6 above C12 is represented as w(0,1)
- The filter coefficient C11 to the right of C12 is expressed as w(1, 0).
- each pixel in the reconstructed image frame is filtered in turn to obtain the filtered ALF reconstructed image frame.
- The filter coefficients w(i, j) of the filter are values in the range [−1, 1).
- Each filter coefficient w(i, j) is enlarged by a factor of 128 and then rounded to obtain w'(i, j), where w'(i, j) is an integer in the range [−128, 128).
- Encoding and transmitting the amplified w'(i, j) makes hardware encoding and decoding easy to implement, and filtering with the amplified w'(i, j) gives the calculation formula of O(x, y) as follows: O(x, y) = ( Σ_{(i,j)} w'(i, j)·I(x+i, y+j) + 64 ) >> 7, where adding 64 and right-shifting by 7 bits performs a rounded division by the scale factor 128.
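The integer filtering step can be sketched as follows (a simplified flat tap list rather than the diamond layout; the +64 offset and 7-bit shift implement a rounded division by the 128 scale factor described above):

```python
def alf_filter_fixed_point(pixels, coeffs):
    """Apply ALF taps scaled by 128, then divide back with rounding.

    pixels: reconstructed sample values I(x+i, y+j), one per tap
    coeffs: integer taps w'(i, j) in [-128, 128)
    """
    acc = sum(w * p for w, p in zip(coeffs, pixels))
    return (acc + 64) >> 7   # +64 rounds, >>7 undoes the 128x scaling

# Identity filter: only the centre tap is 128, i.e. a weight of 1.0.
assert alf_filter_fixed_point([10, 200, 30], [0, 128, 0]) == 200
```

Note that Python's `>>` floors toward minus infinity for negative accumulators; a bit-exact implementation would follow the standard's rounding rule, but the sketch shows the fixed-point structure.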
- In nonlinear ALF, the filter coefficients are no longer used directly as weights in a plain weighted average of multiple pixels to obtain the filtered result.
- Nonlinear parameter factors are introduced to optimize the filtering effect.
- Filtering I(x, y) with the nonlinear ALF filter gives the calculation formula of O'(x, y) as follows: O'(x, y) = I(x, y) + Σ_{(i,j)≠(0,0)} w(i, j)·K( I(x+i, y+j) − I(x, y), k(i, j) ), where K(d, b) = min(b, max(−b, d)).
- The filter coefficients w(i, j) of the filter are values in the range [−1, 1).
- k(i, j) represents the ALF correction clip parameter of the loop filter (hereinafter also referred to as the correction parameter or clip parameter); each filter coefficient w(i, j) corresponds to one clip parameter.
- For the luminance component, the clip parameter is selected from {1024, 181, 32, 6}.
- For the chrominance component, the clip parameter is selected from {1024, 161, 25, 4}. The index corresponding to each clip parameter, i.e. the clip index parameter, is written into the code stream: if the clip parameter is 1024, clip index 0 is written; similarly, if it is 181, index 1 is written.
- Hence the clip index parameters for both the luma classification and the chroma classification of the coded frame are integers between 0 and 3.
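A sketch of the nonlinear filtering with clipping (illustrative flat tap lists rather than the diamond layout): K(d, b) = min(b, max(−b, d)) clamps each neighbour difference to [−b, b] before it is weighted.

```python
def clip_k(d, b):
    """K(d, b) = min(b, max(-b, d)): clamp neighbour difference d to [-b, b]."""
    return min(b, max(-b, d))

def alf_nonlinear(center, neighbours, weights, clips):
    """Nonlinear ALF:
    O'(x,y) = I(x,y) + sum w(i,j) * K(I(x+i,y+j) - I(x,y), k(i,j)).

    neighbours, weights, clips cover the non-centre taps only.
    """
    return center + sum(
        w * clip_k(n - center, k)
        for n, w, k in zip(neighbours, weights, clips)
    )
```

With a small clip value such as 6, an outlier neighbour can shift the filtered sample by at most w·6, which limits smearing across strong edges.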
- the encoding frame of the luminance Y component can correspond to 25 sets of filters at most, and the encoding frame of the chrominance UV component corresponds to a set of filters.
- the pixel category may be a category corresponding to the luminance Y component, but the embodiment of the present application is not limited to this, and the pixel category may also be a category corresponding to other components or all components.
- the following takes the classification and division and ALF filtering of the coded frame of the luminance Y component as an example for description.
- The reconstructed image frame after DB filtering and SAO filtering is divided into a plurality of 4*4 pixel blocks, and these 4*4 blocks are then classified.
- Each 4*4 block can be classified according to the Laplace direction: C = 5D + Â,
- where D is the Laplace direction and Â is the result of the fine classification performed after the direction D classification; Â can be obtained in many ways, and here it is only taken as the result of the sub-classification.
- The calculation method of direction D is as follows. First, calculate the Laplacian gradients of the current 4*4 block in different directions: V_{k,l} = |2R(k,l) − R(k,l−1) − R(k,l+1)|, H_{k,l} = |2R(k,l) − R(k−1,l) − R(k+1,l)|, D1_{k,l} = |2R(k,l) − R(k−1,l−1) − R(k+1,l+1)|, D2_{k,l} = |2R(k,l) − R(k−1,l+1) − R(k+1,l−1)|; g_v, g_h, g_d1, and g_d2 are then the sums of V_{k,l}, H_{k,l}, D1_{k,l}, and D2_{k,l} over the pixels of the block.
- i and j are the coordinates of the upper-left pixel of the current 4*4 block.
- R(k, l) represents the reconstructed pixel value at the (k, l) position in the 4*4 block.
- V_{k,l} represents the vertical Laplacian gradient of the pixel at the (k, l) coordinate in the 4*4 block.
- H_{k,l} represents the horizontal Laplacian gradient of the pixel at the (k, l) coordinate in the 4*4 block.
- D1_{k,l} represents the 135-degree Laplacian gradient of the pixel at the (k, l) coordinate in the 4*4 block.
- D2_{k,l} represents the 45-degree Laplacian gradient of the pixel at the (k, l) coordinate in the 4*4 block.
- the calculated g v represents the Laplacian gradient of the current 4*4 block in the vertical direction.
- g h represents the Laplacian gradient of the current 4*4 block in the horizontal direction.
- g d1 represents the Laplacian gradient of the current 4*4 block in the direction of 135 degrees.
- g d2 represents the Laplacian gradient of the current 4*4 block in the direction of 45 degrees.
- R h, v represents the ratio of the Laplacian gradient in the horizontal and vertical directions.
- R d0, d1 represent the ratio of the Laplacian gradient in the 45 and 135 directions.
- t1 and t2 represent preset thresholds.
- the value range of C is an integer between 0-24.
- at most 4*4 blocks in one frame of image are divided into 25 categories.
- each type of 4*4 block has a set of ALF filter coefficients, where N is an integer between 1-25.
- the number of categories may also be any number other than 25, which is not limited in the embodiment of the present application.
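The classification steps above can be sketched in code. This is a simplified illustration, not the exact VTM algorithm: the summation window, the gradient sub-sampling, the activity quantization step, and the thresholds `t1`/`t2` are placeholder assumptions.

```python
import numpy as np

def classify_4x4_block(R, i, j, t1=2.0, t2=4.5):
    """Assign a class index C in 0..24 to the 4*4 block with top-left (i, j).

    Simplified sketch of the Laplacian-based ALF classification; the window,
    sub-sampling, activity quantization and thresholds are placeholders.
    """
    g_v = g_h = g_d1 = g_d2 = 0.0
    for k in range(i, i + 4):
        for l in range(j, j + 4):
            c = 2.0 * R[k, l]
            g_v  += abs(c - R[k, l - 1] - R[k, l + 1])          # vertical
            g_h  += abs(c - R[k - 1, l] - R[k + 1, l])          # horizontal
            g_d1 += abs(c - R[k - 1, l - 1] - R[k + 1, l + 1])  # 135 degrees
            g_d2 += abs(c - R[k - 1, l + 1] - R[k + 1, l - 1])  # 45 degrees

    # Direction D in 0..4 from the gradient ratios R_{h,v} and R_{d1,d2}.
    hv_hi, hv_lo = max(g_h, g_v), min(g_h, g_v)
    d_hi, d_lo = max(g_d1, g_d2), min(g_d1, g_d2)
    if hv_hi <= t1 * hv_lo and d_hi <= t1 * d_lo:
        D = 0                              # no dominant direction
    elif hv_hi * d_lo >= d_hi * hv_lo:     # horizontal/vertical dominates
        D = 1 if hv_hi <= t2 * hv_lo else 2
    else:                                  # diagonal dominates
        D = 3 if d_hi <= t2 * d_lo else 4

    # Quantized activity A in 0..4 (placeholder quantization step of 256).
    A = min(4, int((g_v + g_h) // 256))
    return 5 * D + A                       # class index C = 5D + A
```

A flat block has all gradients zero and lands in class 0; a strongly textured block with no dominant direction gets D = 0 but a high activity term.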
- ALF filtering can be divided into frame-based ALF, block-based ALF and quad-tree-based ALF.
- frame-based ALF uses one set of filter coefficients to filter the entire frame.
- block-based ALF divides the coded frame into image blocks of equal size and decides, for each image block, whether to perform ALF filtering on it.
- quadtree-based ALF divides the coded frame into image blocks of different sizes according to a quadtree partition and decides, for each block, whether to perform ALF filtering.
- frame-based ALF is computationally simple but its filtering effect is poor, while quadtree-based ALF is computationally complex. Therefore, in some standards or technologies, such as the latest VVC standard under study, the reference software VTM uses block-based ALF.
- take the block-based ALF in VTM as an example below.
- a coded frame has a frame-level ALF filter flag and a block-level ALF filter flag.
- the block level may be a CTU, a CU, or an image block in other division modes, which is not limited in the embodiment of the present application.
- the CTU level ALF filter flag bit is used as an example for illustration below.
- if the frame-level ALF filter flag indicates that ALF filtering is not performed, the CTU-level ALF filter flags in the coded frame are not signaled.
- if the frame-level ALF filter flag indicates that ALF filtering is performed, each CTU-level ALF filter flag in the coded frame indicates whether the corresponding CTU performs ALF filtering.
- assume the coded frame includes Z CTUs.
- the N groups of ALF filter coefficients of the coded frame are calculated as follows: enumerate the combinations of whether each of the Z CTUs undergoes ALF filtering, and compute the coefficients for each combination.
- in a given combination, the i-th group of ALF coefficients is calculated using the i-th category of pixels in the CTUs that undergo ALF filtering, while the i-th category of pixels in the CTUs that do not undergo ALF filtering are excluded from the calculation. It should be understood that the N groups of ALF filter coefficients obtained under different combinations may differ from each other.
- the frame-level ALF flag of the coded frame is marked as performing ALF filtering, and each CTU-level ALF flag in the CTU data indicates in turn whether to perform ALF filtering. For example, a flag value of 0 means ALF filtering is not performed, and a value of 1 means it is performed.
- if the coded frame is not subjected to ALF filtering, its frame-level ALF flag is marked as not performing ALF filtering, and in this case the CTU-level ALF flags are not signaled.
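The flag hierarchy described above can be illustrated schematically. This is a hedged sketch: a real codec entropy-codes these syntax elements rather than writing raw bits, and the list-based `bitstream` here is purely illustrative.

```python
def write_alf_flags(bitstream, frame_alf_on, ctu_alf_flags):
    """Append the frame-level ALF flag and, only when it indicates that ALF
    is performed, one flag per CTU (1 = filter this CTU, 0 = skip it).

    Illustrative only: a real encoder entropy-codes these flags.
    """
    bitstream.append(1 if frame_alf_on else 0)
    if frame_alf_on:
        # CTU-level flags are signaled only when the frame-level flag is on.
        bitstream.extend(1 if f else 0 for f in ctu_alf_flags)
    return bitstream
```

When the frame-level flag is off, the per-CTU flags are absent from the stream entirely, matching the description above.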
- the ALF in the embodiments of the present application is not only applicable to the VVC standard, but also applicable to other technical solutions or standards using block-based ALF.
- Cross-Component ALF (CC-ALF)
- CC-ALF is used to adjust the chrominance component by using the value of the luminance component to improve the quality of the chrominance component.
- the current block includes a luminance component and a chrominance component, where the chrominance component includes a first chrominance component (for example, Cb in FIG. 6) and a second chrominance component (for example, Cr in FIG. 6).
- the luminance component is filtered through SAO and ALF in sequence.
- the first chrominance component is filtered through SAO and ALF in sequence.
- the second chrominance component is filtered through SAO and ALF in sequence.
- a CC-ALF filter is also used to perform CC-ALF on the chrominance components.
- the shape of the CC-ALF filter may be as shown in FIG. 7.
- the CC-ALF filter uses a 3x4 diamond shape with 8 coefficients in total.
- the position marked 2 is the current pixel of the first or second chrominance component; a weighted average of the 7 surrounding points is used to obtain the filtered result for the pixel at the position marked 2.
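As a sketch, the weighted-average operation over the 3x4 diamond can be written as follows. The tap layout `DIAMOND_OFFSETS` is an assumed arrangement (the authoritative shape is FIG. 7), and 4:4:4 sampling is assumed so that chroma and luma share coordinates.

```python
# Assumed 3x4 diamond: offsets (dy, dx) of the 8 luma taps relative to the
# luma sample co-located with the current chroma pixel (the position marked 2).
DIAMOND_OFFSETS = [(-1, 0),
                   (0, -1), (0, 0), (0, 1),
                   (1, -1), (1, 0), (1, 1),
                   (2, 0)]

def cc_alf_refine(luma, alf_chroma, coeffs, y, x):
    """Return the CC-ALF output for the chroma sample at (y, x): the
    ALF-filtered chroma value plus a correction computed as a weighted sum
    of the luma samples under the diamond (sketch; 4:4:4 sampling assumed)."""
    correction = sum(c * luma[y + dy][x + dx]
                     for c, (dy, dx) in zip(coeffs, DIAMOND_OFFSETS))
    return alf_chroma[y][x] + correction
```

With all-zero coefficients the chroma sample passes through unchanged, which is why CC-ALF acts as an optional refinement on top of the ALF chroma output.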
- the first chrominance component and the second chrominance component may select their target filters, which can be the same or different, from the same set of CC-ALF filters, or
- they may select their target filters from different sets of CC-ALF filters.
- the total number of CC-ALF filters used in the current image needs to be written into the bitstream, where this total may include the total number of CC-ALF filters for the first chrominance component and/or the total number of CC-ALF filters for the second chrominance component.
- when the total number of CC-ALF filters for the first chrominance component is the same as that for the second chrominance component, or when the two chrominance components select their target filters from the same set of CC-ALF filters, a single filter-count value can be used for indication.
- the index of the target filter selected by the current block is also encoded into the code stream.
- the indexes of the target filters selected by the first chrominance component and the second chrominance component may be the same or different.
- the indexes of the target filters of the two chrominance components may each be encoded into the code stream, or
- only one index may be encoded into the bitstream, the index indicating the target filter for both chrominance components.
- for the first chrominance component, a target filter of the first chrominance component of the current block is determined from multiple CC-ALF filters; the target filter coefficient of the first chrominance component is determined according to the luminance component that has not undergone ALF (for example, after SAO and without ALF) and the ALF-filtered first chrominance component of the current block.
- the first chrominance component is filtered according to the target filter and the target filter coefficient of the first chrominance component.
- the filtering result of the first chrominance component is determined.
- for the second chrominance component, a target filter of the second chrominance component of the current block is determined from multiple CC-ALF filters; the target filter coefficient of the second chrominance component is determined according to the luminance component that has not undergone ALF (for example, after SAO and without ALF) and the ALF-filtered second chrominance component of the current block.
- the second chrominance component is filtered according to the target filter and the target filter coefficient of the second chrominance component.
- the filtering result of the second chrominance component is determined.
- the total number of the multiple CC-ALF filters is encoded into the code stream as a syntax element, and the indexes of the target filters selected by the first chrominance component and the second chrominance component of the current block are encoded into the code stream as syntax elements.
- regarding the syntax element used to indicate the total number of the multiple CC-ALF filters, there is only one such syntax element in the bitstream of one frame of image.
- the syntax element used to indicate the total number of the multiple CC-ALF filters is located in the adaptation parameter set (Adaptation parameter set syntax) of the image.
- the syntax element used to indicate the total number of the plurality of cross-component ALF filters does not exist in the image header and/or the slice header.
- a truncated binary code may be used to encode the syntax element indicating the total number of the multiple CC-ALF filters.
- a truncated binary code may be used to encode the index of the target filter.
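Truncated binary coding assigns shorter codewords to the first few symbols when the alphabet size is not a power of two. A minimal sketch of the construction (the codec's exact binarization of these syntax elements may differ):

```python
def truncated_binary_encode(x, n):
    """Encode symbol x (0 <= x < n) with a truncated binary code.

    With k = floor(log2(n)) and u = 2**(k+1) - n, the first u symbols get
    k-bit codewords and the remaining n - u symbols get (k+1)-bit codewords.
    """
    assert 0 <= x < n
    k = n.bit_length() - 1        # floor(log2(n))
    u = (1 << (k + 1)) - n        # count of shorter (k-bit) codewords
    if x < u:
        return format(x, 'b').zfill(k) if k > 0 else ''
    # Longer codeword: shift the symbol up by u and use k + 1 bits.
    return format(x + u, 'b').zfill(k + 1)
```

For example, with a total of 5 filters, indexes 0-2 cost two bits each while indexes 3-4 cost three bits.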
- the target filter coefficient of the first chrominance component and the target filter coefficient of the second chrominance component of the current block are also encoded into the bitstream.
- after receiving the code stream, the decoder decodes the index of the target filter selected by the chrominance component of the current block and the total number of CC-ALF filters from the code stream, and determines the CC-ALF filter for the chrominance component of the current block according to the index and the total number.
- the decoding end also decodes the target filter coefficient of the chrominance component of the current block from the code stream, so as to filter the ALF chrominance component of the current block according to the target filter and the target filter coefficient.
- the technical solutions of the embodiments of the present application can be applied to both the encoding end and the decoding end.
- the following describes the technical solutions of the embodiments of the present application from the encoding end and the decoding end respectively.
- FIG. 8 shows a schematic flowchart of a method 200 for loop filtering according to an embodiment of the present application.
- the method 200 may be executed by the encoding end. For example, it can be executed by the system 100 shown in FIG. 1 when performing an encoding operation.
- S210 Determine a target filter of the chrominance component of the current block from a plurality of cross-component adaptive loop filtering ALF filters.
- S220 Determine the target filter coefficient of the chrominance component of the current block according to the ALF chrominance component of the current block and the non-ALF luminance component of the current block.
- S230 Filter the ALF chrominance component of the current block according to the target filter and the target filter coefficient.
- S240 Determine the filtered chrominance component of the current block according to the chrominance component filtered by the target filter coefficient and the ALF chrominance component of the current block.
- S250 Encode according to the filtered chrominance component of the current block, and encode the total number of the multiple cross-component ALF filters as a syntax element, wherein the code stream of one frame of image contains only one syntax element indicating the total number of the multiple cross-component ALF filters.
- the syntax element used to indicate the total number of the multiple cross-component ALF filters is located in the adaptation parameter set syntax of the image.
- the syntax element used to indicate the total number of the plurality of cross-component ALF filters does not exist in the image header and/or the slice header.
- the ALF chrominance component of the current block is specifically the chrominance component of the current block that has undergone adaptive sample compensation filtering SAO and ALF in sequence.
- the luminance component of the current block without ALF is specifically the luminance component of the current block after SAO and without ALF.
- the encoding the total number of the multiple cross-component ALF filters as a syntax element includes: encoding the total number of the multiple cross-component ALF filters by using a truncated binary code.
- the method further includes: encoding the index of the target filter as a syntax element.
- the encoding the index of the target filter as a syntax element includes: encoding the index of the target filter by using a truncated binary code.
- the method further includes: encoding the target filter coefficient of the chrominance component of the current block into a bitstream.
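Putting steps S210-S250 together, the encoder-side flow might look like the following sketch. All callables (`choose_filter`, `train_coeffs`, `apply_filter`) and the dictionary keys are hypothetical stand-ins for the operations described above, not names from any real codec API.

```python
def encode_chroma_loop_filter(block, filters, choose_filter, train_coeffs,
                              apply_filter):
    """Schematic of method 200; the helper callables are hypothetical."""
    target = choose_filter(filters, block)                             # S210
    coeffs = train_coeffs(block['alf_chroma'], block['pre_alf_luma'])  # S220
    refined = apply_filter(target, coeffs, block['alf_chroma'])        # S230/S240
    # S250: the filter count is signaled once per frame as a syntax element.
    syntax = {'num_cc_alf_filters': len(filters),
              'target_filter_index': filters.index(target)}
    return refined, syntax
```

The point of the sketch is the data flow: the coefficients are trained from the ALF chroma and the pre-ALF luma, and only the count and index become frame-level syntax.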
- FIG. 9 shows a schematic flowchart of a method 300 of loop filtering according to an embodiment of the present application.
- the method 300 may be executed by the decoding end, for example, by the system 100 shown in FIG. 1 when performing a decoding operation.
- S310 Decode the total number of cross-component ALF filters and the index of the target filter from the code stream, where the target filter is the ALF filter used by the chrominance component of the current block, and the code stream of one frame of image contains only one syntax element indicating the total number of cross-component ALF filters.
- S320 Decode the target filter coefficient of the chrominance component of the current block from the code stream, where the target filter coefficient is a coefficient in the target filter.
- S330 Perform cross-component filtering on the ALF chrominance component of the current block according to the target filter and the target filter coefficient.
- S340 Determine the filtered chrominance component of the current block according to the chrominance component filtered by the target filter coefficient and the ALF chrominance component of the current block.
- the syntax element used to indicate the total number of the cross-component ALF filters is located in the adaptation parameter set syntax of the image.
- the syntax element used to indicate the total number of the cross-component ALF filters does not exist in the image header and/or the slice header.
- the ALF chrominance component of the current block is specifically the chrominance component of the current block that has undergone adaptive sample compensation filtering SAO and ALF in sequence.
- the luminance component of the current block without ALF is specifically the luminance component of the current block after SAO and without ALF.
- the decoding of the total number of cross-component ALF filters and the index of the target filter from the code stream includes: decoding the total number of cross-component ALF filters and/or the index of the target filter by using a truncated binary code.
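The decoder-side inverse of that binarization can be sketched as follows, with the same caveat that the standard's exact parsing process may differ:

```python
def truncated_binary_decode(bits, pos, n):
    """Parse one truncated-binary-coded symbol from the bit string `bits`
    starting at index `pos`; return (value, new_pos). Sketch implementation
    for an alphabet of n symbols."""
    k = n.bit_length() - 1        # floor(log2(n))
    u = (1 << (k + 1)) - n        # count of k-bit codewords
    val = int(bits[pos:pos + k], 2) if k > 0 else 0
    if val < u:
        return val, pos + k       # short codeword: k bits
    # Long codeword: read one extra bit and undo the +u offset.
    val = (val << 1) | int(bits[pos + k])
    return val - u, pos + k + 1
```

With 5 filters, the bit pattern `110` parses back to index 3, mirroring the encoder-side example.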
- Fig. 10 is a schematic block diagram of another device 30 for loop filtering at the encoding end according to an embodiment of the present application.
- the device 30 for loop filtering is a device for loop filtering at the video encoding end.
- the loop filtering device 30 may correspond to the method 200 of loop filtering.
- the loop filtering device 30 includes: a processor 31 and a memory 32;
- the memory 32 may be used to store programs, and the processor 31 may be used to execute the programs stored in the memory to perform the following operations:
- Encoding is performed according to the filtered chrominance component of the current block, and the total number of the multiple cross-component ALF filters is encoded as a syntax element, wherein the bitstream of one frame of image contains only one syntax element for indicating the total number of the plurality of cross-component ALF filters.
- the syntax element used to indicate the total number of the plurality of cross-component ALF filters does not exist in the image header and/or the slice header.
- the ALF chrominance component of the current block is specifically the chrominance component of the current block that has undergone adaptive sample compensation filtering SAO and ALF in sequence.
- the luminance component of the current block without ALF is specifically the luminance component of the current block after SAO and without ALF.
- the encoding the total number of the multiple cross-component ALF filters as a syntax element includes:
- a truncated binary code is used to encode the total number of the multiple cross-component ALF filters.
- the processor is further configured to:
- the index of the target filter is coded as a syntax element.
- the encoding the index of the target filter as a syntax element includes: encoding the index of the target filter by using a truncated binary code.
- FIG. 11 is a schematic block diagram of a device 40 for loop filtering at the decoding end according to an embodiment of the present application.
- the device 40 for loop filtering may correspond to the method 300 for loop filtering.
- the loop filtering device 40 includes: a processor 41 and a memory 42;
- the memory 42 may be used to store programs, and the processor 41 may be used to execute the programs stored in the memory to perform the following operations:
- the total number of cross-component ALF filters and the index of the target filter are decoded from the code stream.
- the target filter is the ALF filter used by the chrominance component of the current block; the code stream of one frame of image contains only one syntax element for indicating the total number of cross-component ALF filters;
- the syntax element used to indicate the total number of the cross-component ALF filters is located in the adaptation parameter set syntax of the image.
- the ALF chrominance component of the current block is specifically the chrominance component of the current block that has undergone adaptive sample compensation filtering SAO and ALF in sequence.
- the luminance component of the current block without ALF is specifically the luminance component of the current block after SAO and without ALF.
- the decoding of the total number of cross-component ALF filters and the index of the target filter from the code stream includes:
- the truncated binary code is used to decode the total number of the cross-component ALF filters and/or the index of the target filter.
- the memory of the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
- the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
- the volatile memory may be random access memory (RAM), which is used as an external cache.
- the embodiment of the present application also provides a computer program including instructions; when the computer program is executed by a computer, the computer can execute the methods of the embodiments shown in FIG. 6 to FIG. 14.
- An embodiment of the present application also provides a chip that includes an input and output interface, at least one processor, at least one memory, and a bus.
- the at least one memory is used to store instructions, and the at least one processor is used to invoke the instructions in the at least one memory to execute the methods of the embodiments shown in FIG. 6 to FIG. 14.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (28)
- A loop filtering method, characterized by comprising: determining a target filter for the chrominance component of a current block from a plurality of cross-component adaptive loop filtering (ALF) filters; determining a target filter coefficient for the chrominance component of the current block according to the ALF-filtered chrominance component of the current block and the non-ALF-filtered luminance component of the current block; filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficient; determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficient and the ALF-filtered chrominance component of the current block; and performing encoding according to the filtered chrominance component of the current block, and encoding the total number of the plurality of cross-component ALF filters as a syntax element, wherein the code stream of one frame of image contains only one syntax element for indicating the total number of the plurality of cross-component ALF filters.
- The loop filtering method according to claim 1, characterized in that the syntax element for indicating the total number of the plurality of cross-component ALF filters is located in the adaptation parameter set (Adaptation parameter set syntax) of the image.
- The loop filtering method according to claim 1 or 2, characterized in that the syntax element for indicating the total number of the plurality of cross-component ALF filters is not present in the image header and/or the slice header.
- The method according to claim 1, characterized in that the ALF-filtered chrominance component of the current block is specifically the chrominance component of the current block after sequentially undergoing sample adaptive offset (SAO) filtering and ALF.
- The method according to claim 1, characterized in that the non-ALF-filtered luminance component of the current block is specifically the luminance component of the current block after SAO but without ALF.
- The method according to claim 1, characterized in that encoding the total number of the plurality of cross-component ALF filters as a syntax element comprises: encoding the total number of the plurality of cross-component ALF filters with a truncated binary code.
- The method according to claim 1, characterized in that the method further comprises: encoding the index of the target filter as a syntax element.
- The method according to claim 7, characterized in that encoding the index of the target filter as a syntax element comprises: encoding the index of the target filter with a truncated binary code.
- A loop filtering method, characterized by comprising: decoding, from a code stream, the total number of cross-component ALF filters and the index of a target filter, the target filter being the ALF filter used for the chrominance component of a current block, wherein the code stream of one frame of image contains only one syntax element for indicating the total number of cross-component ALF filters; decoding, from the code stream, a target filter coefficient for the chrominance component of the current block, the target filter coefficient being a coefficient in the target filter; filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficient; and determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficient and the ALF-filtered chrominance component of the current block.
- The loop filtering method according to claim 9, characterized in that the syntax element for indicating the total number of the cross-component ALF filters is located in the adaptation parameter set (Adaptation parameter set syntax) of the image.
- The loop filtering method according to claim 9 or 10, characterized in that the syntax element for indicating the total number of the cross-component ALF filters is not present in the image header and/or the slice header.
- The method according to claim 9, characterized in that the ALF-filtered chrominance component of the current block is specifically the chrominance component of the current block after sequentially undergoing sample adaptive offset (SAO) filtering and ALF.
- The method according to claim 9, characterized in that the non-ALF-filtered luminance component of the current block is specifically the luminance component of the current block after SAO but without ALF.
- The method according to claim 9, characterized in that decoding the total number of cross-component ALF filters and the index of the target filter from the code stream comprises: decoding the total number of cross-component ALF filters and/or the index of the target filter with a truncated binary code.
- A loop filtering apparatus, characterized by comprising: a memory for storing code; and a processor for executing the code stored in the memory to perform the following operations: determining a target filter for the chrominance component of a current block from a plurality of cross-component adaptive loop filtering (ALF) filters; determining a target filter coefficient for the chrominance component of the current block according to the ALF-filtered chrominance component of the current block and the non-ALF-filtered luminance component of the current block; filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficient; determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficient and the ALF-filtered chrominance component of the current block; and performing encoding according to the filtered chrominance component of the current block, and encoding the total number of the plurality of cross-component ALF filters as a syntax element, wherein the code stream of one frame of image contains only one syntax element for indicating the total number of the plurality of cross-component ALF filters.
- The loop filtering apparatus according to claim 15, characterized in that the syntax element for indicating the total number of the plurality of cross-component ALF filters is located in the adaptation parameter set (Adaptation parameter set syntax) of the image.
- The loop filtering apparatus according to claim 15 or 16, characterized in that the syntax element for indicating the total number of the plurality of cross-component ALF filters is not present in the image header and/or the slice header.
- The apparatus according to claim 15, characterized in that the ALF-filtered chrominance component of the current block is specifically the chrominance component of the current block after sequentially undergoing sample adaptive offset (SAO) filtering and ALF.
- The apparatus according to claim 15, characterized in that the non-ALF-filtered luminance component of the current block is specifically the luminance component of the current block after SAO but without ALF.
- The apparatus according to claim 15, characterized in that encoding the total number of the plurality of cross-component ALF filters as a syntax element comprises: encoding the total number of the plurality of cross-component ALF filters with a truncated binary code.
- The apparatus according to claim 15, characterized in that the processor is further configured to: encode the index of the target filter as a syntax element.
- The apparatus according to claim 21, characterized in that encoding the index of the target filter as a syntax element comprises: encoding the index of the target filter with a truncated binary code.
- A loop filtering apparatus, characterized by comprising: a memory for storing code; and a processor for executing the code stored in the memory to perform the following operations: decoding, from a code stream, the total number of cross-component ALF filters and the index of a target filter, the target filter being the ALF filter used for the chrominance component of a current block, wherein the code stream of one frame of image contains only one syntax element for indicating the total number of cross-component ALF filters; decoding, from the code stream, a target filter coefficient for the chrominance component of the current block, the target filter coefficient being a coefficient in the target filter; filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficient; and determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficient and the ALF-filtered chrominance component of the current block.
- The loop filtering apparatus according to claim 23, characterized in that the syntax element for indicating the total number of the cross-component ALF filters is located in the adaptation parameter set (Adaptation parameter set syntax) of the image.
- The loop filtering apparatus according to claim 23 or 24, characterized in that the syntax element for indicating the total number of the cross-component ALF filters is not present in the image header and/or the slice header.
- The apparatus according to claim 23, characterized in that the ALF-filtered chrominance component of the current block is specifically the chrominance component of the current block after sequentially undergoing sample adaptive offset (SAO) filtering and ALF.
- The apparatus according to claim 23, characterized in that the non-ALF-filtered luminance component of the current block is specifically the luminance component of the current block after SAO but without ALF.
- The apparatus according to claim 23, characterized in that decoding the total number of cross-component ALF filters and the index of the target filter from the code stream comprises: decoding the total number of cross-component ALF filters and/or the index of the target filter with a truncated binary code.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
- CN202311663799.8A CN117596413A (zh) | 2019-12-31 | 2019-12-31 | Video processing method and device |
- JP2022537322A JP2023515742A (ja) | 2019-12-31 | 2019-12-31 | In-loop filtering method, computer-readable storage medium, and program |
- CN202311663855.8A CN117596414A (zh) | 2019-12-31 | 2019-12-31 | Video processing method and device |
- PCT/CN2019/130954 WO2021134706A1 (zh) | 2019-12-31 | 2019-12-31 | Method and device for loop filtering |
EP19958181.0A EP4087243A4 (en) | 2019-12-31 | 2019-12-31 | LOOP FILTRATION METHOD AND APPARATUS |
- CN201980051177.5A CN112544081B (zh) | 2019-12-31 | 2019-12-31 | Method and device for loop filtering |
- KR1020227022560A KR20220101743A (ko) | 2019-12-31 | 2019-12-31 | Loop filtering method and non-transitory computer storage medium |
US17/853,906 US12041231B2 (en) | 2019-12-31 | 2022-06-29 | In-loop filtering method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
- PCT/CN2019/130954 WO2021134706A1 (zh) | 2019-12-31 | 2019-12-31 | Method and device for loop filtering |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/853,906 Continuation US12041231B2 (en) | 2019-12-31 | 2022-06-29 | In-loop filtering method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021134706A1 true WO2021134706A1 (zh) | 2021-07-08 |
Family
ID=75013407
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/130954 WO2021134706A1 (zh) | 2019-12-31 | 2019-12-31 | 环路滤波的方法与装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US12041231B2 (zh) |
EP (1) | EP4087243A4 (zh) |
JP (1) | JP2023515742A (zh) |
KR (1) | KR20220101743A (zh) |
CN (3) | CN117596414A (zh) |
WO (1) | WO2021134706A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113489977B (zh) * | 2021-07-02 | 2022-12-06 | Zhejiang Dahua Technology Co., Ltd. | Loop filtering method, video/image encoding and decoding method, and related device |
- CN114205582B (zh) * | 2021-05-28 | 2023-03-24 | Tencent Technology (Shenzhen) Co., Ltd. | Loop filtering method, apparatus and device for video encoding and decoding |
- CN114222118B (zh) * | 2021-12-17 | 2023-12-12 | Beijing Dajia Internet Information Technology Co., Ltd. | Encoding method and device, decoding method and device |
- WO2023245544A1 (zh) * | 2022-06-23 | 2023-12-28 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Encoding/decoding method, bitstream, encoder, decoder, and storage medium |
WO2024016981A1 (en) * | 2022-07-20 | 2024-01-25 | Mediatek Inc. | Method and apparatus for adaptive loop filter with chroma classifier for video coding |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120051438A1 (en) * | 2010-09-01 | 2012-03-01 | Qualcomm Incorporated | Filter description signaling for multi-filter adaptive filtering |
CN104683819A (zh) * | 2015-01-31 | 2015-06-03 | 北京大学 | 一种自适应环路滤波方法及装置 |
CN104735450A (zh) * | 2015-02-26 | 2015-06-24 | 北京大学 | 一种在视频编解码中进行自适应环路滤波的方法及装置 |
US20180063527A1 (en) * | 2016-08-31 | 2018-03-01 | Qualcomm Incorporated | Cross-component filter |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180041778A1 (en) * | 2016-08-02 | 2018-02-08 | Qualcomm Incorporated | Geometry transformation-based adaptive loop filtering |
US10440396B2 (en) * | 2017-03-28 | 2019-10-08 | Qualcomm Incorporated | Filter information sharing among color components |
JP7343487B2 (ja) * | 2017-09-20 | 2023-09-12 | ヴィド スケール インコーポレイテッド | 360度ビデオ符号化におけるフェイス不連続の処理 |
CN116233456A (zh) * | 2018-03-30 | 2023-06-06 | 松下电器(美国)知识产权公司 | 编码装置、解码装置以及存储介质 |
US11197030B2 (en) * | 2019-08-08 | 2021-12-07 | Panasonic Intellectual Property Corporation Of America | System and method for video coding |
EP4024858A4 (en) * | 2019-08-29 | 2023-10-04 | LG Electronics Inc. | APPARATUS AND METHOD FOR IMAGE CODING BASED ON CROSS-COMPONENT ADAPTIVE LOOP FILTERING |
US20210076032A1 (en) * | 2019-09-09 | 2021-03-11 | Qualcomm Incorporated | Filter shapes for cross-component adaptive loop filter with different chroma formats in video coding |
CN118214864A (zh) * | 2019-09-11 | 2024-06-18 | 夏普株式会社 | 用于基于交叉分量相关性来减小视频编码中的重构误差的***和方法 |
US11451834B2 (en) * | 2019-09-16 | 2022-09-20 | Tencent America LLC | Method and apparatus for cross-component filtering |
US11202068B2 (en) * | 2019-09-16 | 2021-12-14 | Mediatek Inc. | Method and apparatus of constrained cross-component adaptive loop filtering for video coding |
US11343493B2 (en) * | 2019-09-23 | 2022-05-24 | Qualcomm Incorporated | Bit shifting for cross-component adaptive loop filtering for video coding |
WO2021070427A1 (en) * | 2019-10-09 | 2021-04-15 | Sharp Kabushiki Kaisha | Systems and methods for reducing a reconstruction error in video coding based on a cross-component correlation |
WO2021083257A1 (en) * | 2019-10-29 | 2021-05-06 | Beijing Bytedance Network Technology Co., Ltd. | Cross-component adaptive loop filter |
US11425405B2 (en) * | 2019-11-15 | 2022-08-23 | Qualcomm Incorporated | Cross-component adaptive loop filter in video coding |
US11265558B2 (en) * | 2019-11-22 | 2022-03-01 | Qualcomm Incorporated | Cross-component adaptive loop filter |
WO2021101345A1 (ko) * | 2019-11-22 | 2021-05-27 | 한국전자통신연구원 | 적응적 루프내 필터링 방법 및 장치 |
US11432016B2 (en) * | 2019-12-05 | 2022-08-30 | Hfi Innovation Inc. | Methods and apparatuses of syntax signaling constraint for cross-component adaptive loop filter in video coding system |
US20230023387A1 (en) * | 2019-12-17 | 2023-01-26 | Telefonaktiebolaget Lm Ericsson (Publ) | Low complexity image filter |
-
2019
- 2019-12-31 EP EP19958181.0A patent/EP4087243A4/en active Pending
- 2019-12-31 KR KR1020227022560A patent/KR20220101743A/ko active Search and Examination
- 2019-12-31 CN CN202311663855.8A patent/CN117596414A/zh active Pending
- 2019-12-31 CN CN202311663799.8A patent/CN117596413A/zh active Pending
- 2019-12-31 WO PCT/CN2019/130954 patent/WO2021134706A1/zh unknown
- 2019-12-31 JP JP2022537322A patent/JP2023515742A/ja active Pending
- 2019-12-31 CN CN201980051177.5A patent/CN112544081B/zh active Active
-
2022
- 2022-06-29 US US17/853,906 patent/US12041231B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120051438A1 (en) * | 2010-09-01 | 2012-03-01 | Qualcomm Incorporated | Filter description signaling for multi-filter adaptive filtering |
CN104683819A (zh) * | 2015-01-31 | 2015-06-03 | 北京大学 | 一种自适应环路滤波方法及装置 |
CN104735450A (zh) * | 2015-02-26 | 2015-06-24 | 北京大学 | 一种在视频编解码中进行自适应环路滤波的方法及装置 |
US20180063527A1 (en) * | 2016-08-31 | 2018-03-01 | Qualcomm Incorporated | Cross-component filter |
Non-Patent Citations (3)
Title |
---|
LUO LIDONG, WANG YONG-FANG;SHANG XI-WU;YANG PING;ZHANG ZHAO-YANG: "A low Complexity Adaptive Loop Filter Algorithm for Multi-View Video Coding", GUANGDIANZI-JIGUANG - JOURNAL OF OPTRONICS-LASER, TIANJIN DAXUE JIDIAN FENXIAO, TIANJIN, CN, vol. 25, no. 2, 1 February 2014 (2014-02-01), CN, pages 336 - 342, XP055827944, ISSN: 1005-0086, DOI: 10.16136/j.joel.2014.02.021 * |
MA, SIWEI, FALEI LUO, TIEJUN HUANG: "Kernel Technologies and Applications of AVS2 Video Coding Standard", DIANXIN KEXUE - TELECOMMUNICATIONS SCIENCE, RENMIN YOUDIAN CHUBANSHE, BEIJING, CN, vol. 33, no. 8, 20 August 2017 (2017-08-20), CN, pages 1 - 15, XP055709436, ISSN: 1000-0801, DOI: 10.11959/j.issn.1000-0801.2017245 * |
See also references of EP4087243A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN112544081B (zh) | 2023-12-22 |
CN117596413A (zh) | 2024-02-23 |
JP2023515742A (ja) | 2023-04-14 |
CN117596414A (zh) | 2024-02-23 |
CN112544081A (zh) | 2021-03-23 |
US12041231B2 (en) | 2024-07-16 |
EP4087243A4 (en) | 2023-09-06 |
KR20220101743A (ko) | 2022-07-19 |
EP4087243A1 (en) | 2022-11-09 |
US20220345699A1 (en) | 2022-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- TWI834773B (zh) | Method, device and computer-readable storage medium for encoding and decoding one or more portions of an image using an adaptive loop filter | |
- WO2021203394A1 (zh) | Method and device for loop filtering | |
- WO2021134706A1 (zh) | Method and device for loop filtering | |
- JP7308983B2 (ja) | Cross-component adaptive loop filter for chroma | |
- CN111742552B (zh) | Method and device for loop filtering | |
- WO2021163862A1 (zh) | Method and device for video encoding | |
- CN110337811A (zh) | Motion compensation method, device and computer system | |
- WO2022252222A1 (zh) | Encoding method and encoding device | |
- CN116456083A (zh) | Decoding prediction method, device and computer storage medium | |
- WO2021056220A1 (zh) | Method and device for video encoding and decoding | |
- CN113196762A (zh) | Image component prediction method, device and computer storage medium | |
- CN112640458A (zh) | Information processing method and device, equipment, and storage medium | |
- WO2024007116A1 (zh) | Decoding method, encoding method, decoder and encoder | |
- WO2023197189A1 (zh) | Encoding/decoding method and apparatus, encoding device, decoding device, and storage medium | |
- WO2023141970A1 (zh) | Decoding method, encoding method, decoder, encoder and encoding/decoding system | |
- WO2023197179A1 (zh) | Decoding method, encoding method, decoder and encoder | |
- WO2023197181A1 (zh) | Decoding method, encoding method, decoder and encoder | |
- WO2023019407A1 (zh) | Inter prediction method, encoder, decoder and storage medium | |
- WO2022016535A1 (zh) | Method and device for video encoding and decoding | |
- WO2023123398A1 (zh) | Filtering method, filtering device and electronic equipment | |
- WO2021134700A1 (zh) | Method and device for video encoding and decoding | |
- WO2020181541A1 (zh) | Loop filtering method and device, computer system, and movable device | |
- WO2019191888A1 (zh) | Loop filtering method, device and computer system | |
- WO2019157718A1 (zh) | Motion compensation method, device and computer system | |
WO2024081872A1 (en) | Method, apparatus, and medium for video processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19958181 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022537322 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20227022560 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019958181 Country of ref document: EP Effective date: 20220801 |