WO2020192034A1 - Filtering method and apparatus, and computer storage medium - Google Patents
Filtering method and apparatus, and computer storage medium
- Publication number
- WO2020192034A1 (PCT/CN2019/105799)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- filter
- filtering
- neural network
- parameters
- parameter
- Prior art date
Links
- 238000001914 filtration Methods 0.000 title claims abstract description 465
- 238000000034 method Methods 0.000 title claims abstract description 125
- 238000013528 artificial neural network Methods 0.000 claims abstract description 173
- 230000003044 adaptive effect Effects 0.000 claims description 105
- 238000012549 training Methods 0.000 claims description 103
- 230000015654 memory Effects 0.000 claims description 30
- 238000012805 post-processing Methods 0.000 claims description 28
- 238000007781 pre-processing Methods 0.000 claims description 26
- 238000012545 processing Methods 0.000 claims description 19
- 238000010586 diagram Methods 0.000 description 27
- 238000013527 convolutional neural network Methods 0.000 description 23
- 238000013139 quantization Methods 0.000 description 12
- 239000011159 matrix material Substances 0.000 description 8
- 238000004891 communication Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 5
- 230000006870 function Effects 0.000 description 5
- 230000001360 synchronised effect Effects 0.000 description 5
- 238000005192 partition Methods 0.000 description 4
- 230000009466 transformation Effects 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000004393 visual impairment Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Definitions
- the embodiments of the present application relate to the technical field of video coding and decoding, in particular to a filtering method and device, and a computer storage medium.
- image/video filtering is realized by using filters.
- the pre-processing filter pre-processes the original image to reduce the video resolution; because the resolution of the video to be encoded is then lower than that of the original video, fewer bits are needed to represent it, which improves overall coding efficiency;
- the post-processing filter processes the in-loop-filtered video before output, to restore the video resolution;
- the in-loop filter is used to improve the subjective and objective quality of the reconstructed image.
- pre-processing filters, in-loop filters, and post-processing filters can all be implemented with convolutional neural networks; filters based on convolutional neural networks fall into two categories, one trained offline and the other trained online.
- for offline-trained filters, all weight parameters of the neural network can be set at both the encoding end and the decoding end once training is complete; but because the weight coefficients are fixed, filter performance may degrade for some video content.
- for online-trained filters, the weight parameters of the network need to be retrained and updated frequently, so the weight coefficients must be transmitted in the bitstream; this is computationally expensive and complex, and the range of video content such filters can handle is relatively limited.
- the embodiments of the present application provide a filtering method and device, and a computer storage medium, which can improve the filtering performance of the filtering device and have a wide application range.
- the embodiment of the present application provides a filtering method, including:
- the pixel information to be filtered and the side information are input into a neural-network-based filter to output filtered pixels, where the filter is obtained by combining an online filtering part with an offline filtering part.
- the embodiment of the present application also provides a filtering method for encoding video, including:
- the first filter parameter is encoded and written into the video code stream.
- the embodiment of the present application also provides a filtering method for decoding a video bitstream, including:
- the adaptive filter is used to filter the input pixels to obtain filtered pixels.
- An embodiment of the present application provides a filtering device, including:
- the first acquiring part is configured to acquire pixel information to be filtered
- the first determining part is configured to determine side information
- the first filtering part is configured to input the pixel information to be filtered and the side information into a neural-network-based filter to output filtered pixels, wherein the filter is obtained by combining an online filtering part with an offline filtering part.
- An embodiment of the present application also provides a filtering device for encoding video, including:
- the second determining part is configured to determine filter parameters of the adaptive filter
- the second filtering part is configured to use the adaptive filter to filter the input pixels according to the filtering parameters and side information to obtain filtered pixels;
- the second determining part is further configured to determine a first filter parameter, where the first filter parameter is the part of the filter parameters that needs to be encoded;
- the second writing part is configured to encode the first filter parameter and write it into the video bitstream.
- An embodiment of the present application also provides a filtering device for decoding video, including:
- the third determining part is configured to parse the video bitstream and determine the first filter parameter of the adaptive filter, wherein the first filter parameter is a part of all the filter parameters of the adaptive filter; and to determine all filter parameters of the adaptive filter according to the first filter parameter;
- the third filtering part is configured to use the adaptive filter to filter the input pixels according to all the filtering parameters and side information to obtain filtered pixels.
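One way to read the decoder-side behavior above: only the transmitted subset of parameters (the "first filter parameter") is parsed from the bitstream and merged with the subset already fixed at both ends, yielding all parameters of the adaptive filter. The dictionary layout and parameter names below are hypothetical placeholders, not anything specified by the application:

```python
import numpy as np

# Hypothetical offline-trained parameters: fixed at both encoder and decoder
# after training, so they never need to be transmitted in the bitstream.
OFFLINE_PARAMS = {"w_offline": np.array([0.25, 0.5, 0.25])}

def derive_all_filter_params(first_filter_params):
    """Combine the parsed 'first filter parameter' (the transmitted subset)
    with the fixed offline-trained subset to recover all filter parameters."""
    all_params = dict(OFFLINE_PARAMS)
    all_params.update(first_filter_params)
    return all_params

# e.g. the bitstream carries only the online-trained coefficients:
params = derive_all_filter_params({"w_online": np.array([0.1, 0.8, 0.1])})
```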
- An embodiment of the present application also provides a filtering device, including:
- Memory used to store executable instructions
- the processor is configured to implement the filtering method provided in the embodiment of the present application when executing the executable instructions stored in the memory.
- the embodiment of the present application provides a computer storage medium that stores executable instructions for causing a processor to execute, to implement the filtering method provided in the embodiment of the present application.
- the filtering device acquires the pixel information to be filtered, determines the side information of the video frame to be filtered, and inputs the side information and the pixels to be filtered into the filter, outputting the filtered pixels. Because the filter is obtained from an online filtering part and an offline filtering part, the offline filtering part can be used during filtering, which suits a wide range of videos, while the parameters of the online filtering part can still be updated to avoid degradation of filtering performance. That is, the filtering performance of the filtering device is improved and its range of application is wide.
- FIG. 1 is a schematic structural diagram of a coding block diagram provided by an embodiment of the application
- FIG. 2 is a schematic structural diagram of a decoding block diagram provided by an embodiment of the application.
- FIG. 3 is an optional flowchart of a filtering method provided by an embodiment of the application.
- FIG. 4 is a schematic structural diagram of a block division matrix provided by an embodiment of the application.
- FIG. 5 is a schematic diagram 1 of a connection mode of a filter provided by an embodiment of the application.
- FIG. 6 is a second schematic diagram of a connection mode of a filter provided by an embodiment of the application.
- FIG. 7 is a third schematic diagram of a connection mode of a filter provided by an embodiment of the application.
- FIG. 8 is a fourth schematic diagram of a connection mode of a filter provided by an embodiment of the application.
- FIG. 9 is a fifth schematic diagram of a connection mode of a filter provided by an embodiment of the application.
- FIG. 10 is another optional flowchart of a filtering method provided by an embodiment of this application.
- FIG. 11 is a schematic diagram of an optional structure of a filtering device provided by an embodiment of the application.
- FIG. 12 is a schematic diagram of another optional structure of a filtering device provided by an embodiment of the application.
- the video to be coded includes the original video frame, and the original video frame includes the original image.
- a variety of processing is performed on the original image, such as prediction, transformation, quantization, reconstruction, and filtering. During these processes, the pixel values of the processed video image may shift relative to the original image, causing visual impairment or artifacts.
- because adjacent coding blocks may use different coding parameters, such as different coding unit (CU) partitions, quantization parameters (QP), prediction methods, or reference image frames, the size of the error introduced by each coding block and its distribution characteristics are independent of each other, and the discontinuity at the boundaries of adjacent coding blocks produces blocking artifacts.
- These distortions affect the subjective and objective quality of the reconstructed image block. If the reconstructed image block is used as a reference image for subsequently coded pixels, they may even affect the prediction accuracy of subsequent encoding/decoding and thus the number of bits in the video bitstream. Therefore, in a video codec system, pre-processing filters, post-processing filters, and in-loop filters (In-Loop Filter) are often added to improve the subjective and objective quality of reconstructed images.
- FIG. 1 is a schematic structural diagram of a traditional coding block diagram.
- the traditional coding block diagram 10 may include components such as a transform and quantization unit 101, an inverse transform and inverse quantization unit 102, a prediction unit 103, an in-loop filtering unit 104, an entropy coding unit 105, a pre-processing filtering unit 106, and a post-processing filtering unit 107; the prediction unit 103 further includes an intra prediction unit 1031 and an inter prediction unit 1032.
- for the input original video, coding tree units (CTU, Coding Tree Unit) can be obtained through preliminary division, and content-adaptive division of a CTU can continue to obtain CUs.
- a CU generally contains one or more coding blocks (CB, Coding Block).
- the in-loop filtering unit 104 removes the blocking artifacts, and the reconstructed residual block is then added to the decoded image buffer unit to generate a reconstructed reference image; the entropy coding unit 105 writes the coded data into the bitstream.
- the in-loop filtering unit 104 is a loop filter, also called an in-loop filter, which may include a de-blocking filter (DBF, De-Blocking Filter), a sample adaptive offset (SAO, Sample Adaptive Offset) filter, an adaptive loop filter (ALF, Adaptive Loop Filter), etc.
- the deblocking filter is used to implement deblocking filtering.
- in the next-generation video coding standard H.266/VVC (Versatile Video Coding), for all coding block boundaries in the original image, the boundary strength is first determined from the coding parameters, a deblocking filtering decision is then made based on the calculated block-boundary texture values, and finally the pixel information on both sides of the coding block boundary is corrected according to the boundary strength and the filtering decision.
- SAO technology, i.e., the sample adaptive offset filter, is also introduced; working in the pixel domain, it adds a negative value to pixels at peaks and a positive value to pixels at valleys for compensation.
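The peak/valley compensation idea can be illustrated with a minimal 1-D sketch. Real SAO classifies edge patterns per CTB and signals the offsets in the bitstream; the fixed `offset = 1` here is an assumption for illustration only.

```python
import numpy as np

def sao_edge_offset_1d(samples, offset=1):
    """Toy SAO-style edge offset: subtract the offset at local peaks and
    add it at local valleys, pulling extremes toward their neighbours."""
    out = samples.astype(np.int32)
    for i in range(1, len(samples) - 1):
        left, cur, right = samples[i - 1], samples[i], samples[i + 1]
        if cur > left and cur > right:    # local peak: compensate downward
            out[i] -= offset
        elif cur < left and cur < right:  # local valley: compensate upward
            out[i] += offset
    return out

smoothed = sao_edge_offset_1d(np.array([10, 20, 10, 5, 10]))
```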
- after the deblocking filter and the sample adaptive offset filter are executed, the adaptive loop filter may be used for further filtering; the adaptive loop filter computes the optimal filter in the mean-square sense from the pixel values of the original image and of the distorted image.
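Computing the mean-square-optimal filter from the original and distorted pixel values amounts to a least-squares (Wiener) fit. A toy 1-D version is sketched below, with an assumed 3-tap window rather than the 2-D diamond-shaped supports real ALF uses:

```python
import numpy as np

def wiener_filter_1d(distorted, original, taps=3):
    """Solve for the FIR filter minimizing mean-squared error between the
    filtered distorted signal and the original (least-squares / Wiener fit)."""
    half = taps // 2
    rows, targets = [], []
    for i in range(half, len(distorted) - half):
        rows.append(distorted[i - half:i + half + 1])  # local window
        targets.append(original[i])                    # desired output
    A = np.asarray(rows, dtype=np.float64)
    b = np.asarray(targets, dtype=np.float64)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

original = np.arange(20, dtype=np.float64)
distorted = original + np.tile([1.0, -1.0], 10)   # alternating "coding noise"
coeffs = wiener_filter_1d(distorted, original)
filtered = np.convolve(distorted, coeffs[::-1], mode="valid")  # apply the filter
```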
- the pre-processing filtering unit 106 is used to receive the input original video frame and perform pre-processing filtering on it to reduce the resolution of the video;
- the post-processing filtering unit 107 is used to receive the in-loop-filtered video frame and perform post-processing filtering on it to restore the resolution of the video. In this way, fewer bits can be used in the video encoding and decoding process to obtain the reconstructed video frame, which can improve overall coding and decoding efficiency.
- the input of the neural network adopted by both the pre-processing filter and the post-processing filter may be single or multiple; that is, a single image component or multiple image components are input to realize image reconstruction.
- Figure 2 is a schematic structural diagram of a traditional decoding block diagram.
- the traditional decoding block diagram 20 may include components such as an entropy coding unit 201, an inverse quantization and inverse transformation unit 202, a prediction unit 203, an in-loop filtering unit 204, and a post-processing filtering unit 205; the prediction unit 203 further includes an intra prediction unit 2031 and an inter prediction unit 2032.
- as can be seen from Figure 2, the video decoding process is the opposite, or inverse, of the video encoding process, and the post-processing-filtered image obtained in the video decoding process is determined as the reconstructed video frame.
- the decoding process does not involve the pre-processing filtering unit in the encoding process, only the post-processing filtering unit and the in-loop filtering unit.
- the pre-processing filtering unit, the post-processing filtering unit, and the in-loop filtering unit can all be regarded as a type of filter, and the filter in the embodiments of the present application may be a convolutional neural network (CNN, Convolutional Neural Networks) filter, or another filter built by deep learning; the embodiments of the present application do not specifically limit this.
- the convolutional neural network filter can not only replace all of the pre-processing filtering unit, post-processing filtering unit, and in-loop filtering unit in Figure 1, but can also replace any one or two of them, and can even be used in combination with any one or more of them. It should also be noted that each component shown in FIG. 1, such as the transform and quantization unit 101, inverse transform and inverse quantization unit 102, prediction unit 103, in-loop filtering unit 104, entropy coding unit 105, pre-processing filtering unit 106, and post-processing filtering unit 107, may be a virtual module or a hardware module. These units do not constitute a limitation on the coding block diagram, which may include more or fewer components than shown, or a combination of certain components, or a different arrangement of components.
- when the convolutional neural network filter is used as the in-loop filtering unit, it can be deployed directly at the encoding end and the decoding end after filter network training. Moreover, the convolutional neural network filter can also process side information and other auxiliary information together with the input image to be filtered; in this way, it makes full use of the relationship between the image and its side information, further improving the subjective and objective quality of the reconstructed video image in the encoding and decoding process.
- when the convolutional neural network filter is used as the post-processing filtering unit, it can be deployed directly at the decoding end after filter network training; when it is used as the pre-processing filtering unit, it can be deployed directly at the encoding end after filter network training.
- depending on the filter type, the filtering method in the embodiments of the present application can be applied to the encoding system and/or the decoding system.
- the in-loop filter of the embodiment of the present application must be deployed in the coding system and the decoding system simultaneously.
- filters based on a convolutional neural network can be divided into two categories, one is offline training, and the other is online training.
- for offline-trained filters, all weight parameters of the neural network can be set at both the encoding end and the decoding end once training is complete; but because the weight coefficients are fixed, filter performance may degrade for some video content.
- for online-trained filters, the weight parameters of the network need to be retrained and updated frequently, so the weight coefficients need to be transmitted in the bitstream; this is computationally expensive and complex, and the range of video content such filters can process is relatively limited.
- an embodiment of the present application provides a filtering method, which is applied to a filtering device.
- the filtering device can be set in the pre-processing filter and the in-loop filter in the encoder, or in the in-loop filter and the post-processing filter in the decoder; it can also be used in other filters employed in the prediction process, which is not specifically limited in the embodiments of the present application.
- the neural network-based filter is suitable for post-processing filtering, in-loop filtering, pre-processing filtering and prediction processes.
- specifically, when the neural-network-based filter is used for post-processing filtering, it is set at the decoding end; when it is used for in-loop filtering, it is set at both the decoding end and the encoding end; when it is used for pre-processing filtering, it is set at the encoding end.
- FIG. 3 is a schematic flowchart of an optional filtering method provided by an embodiment of this application.
- the filtering method may include:
- S101 Acquire pixel information to be filtered.
- S102 Determine side information.
- S103 Input the pixel information to be filtered and the side information into the neural-network-based filter to output the filtered pixels, where the filter is obtained by combining the online filtering part with the offline filtering part.
- the video frame to be filtered is generated during the video encoding process of the original image in the video to be encoded.
- the video to be encoded includes the original image frame, and the original image frame includes the original image.
- the video frame to be filtered includes multiple frames of images, and the filtering device filters the pixel information of each frame of image to be filtered when filtering.
- each frame of the video frame to be filtered has corresponding side information, that is, the side information corresponding to the pixel information of each frame of image to be filtered.
- side information represents the boundary information of each frame of image.
- during video encoding, the original image can be divided into CTUs, or the CTUs can be further divided into CUs; that is, the side information in the embodiments of this application can refer to CTU division information or CU division information. In this way, the filtering method of the embodiments of the present application can be applied not only to CU-level filtering but also to CTU-level filtering, which is not specifically limited in the embodiments of the present application.
- the original image is divided into coding unit CU to obtain CU division information.
- based on the CU partition information, fill the first value at each pixel position corresponding to a CU boundary and fill the second value at the other pixel positions to obtain a first matrix corresponding to the CU partition information, where the first value is different from the second value;
- the first matrix here is the side information of each frame of image.
- the first value can be a preset numeric value, a letter, etc., and the second value can also be a preset numeric value, a letter, etc., as long as the first value differs from the second value; for example, the first value can be set to 2 and the second value to 1, which is not limited in the embodiments of the application.
- the filtering device filters the to-be-filtered pixel information of the to-be-filtered video frame through the filter combined with the side information to obtain the filtered pixels, which can be understood as the final filtered image; the filter is obtained by combining the online filtering part and the offline filtering part.
- the filtering device can use CU information as auxiliary information to assist the filtering process of the video frame to be filtered, that is, in the process of video encoding the original image in the video to be encoded, the CU division can be fully utilized Information, fused with the video frame to be filtered and then guided filtering.
- specifically, the CU division information is converted into a coding unit map (CUmap, Coding Unit Map) represented by a two-dimensional matrix, the CUmap matrix, i.e., the first matrix in the embodiments of the present application. That is to say, taking the original image as an example, it can be divided into multiple CUs; each pixel position corresponding to the boundary of each CU is filled with the first value, and the other pixel positions are filled with the second value.
- a first matrix reflecting CU partition information can be constructed.
- FIG. 4 shows a schematic structural diagram of a block division matrix provided by an embodiment of the present application.
- suppose the CTU is divided into 9 CUs, the first value is set to 2, and the second value is set to 1; each pixel position corresponding to the boundary of each CU is then filled with 2, and the other pixel positions are filled with 1. That is to say, the pixel positions filled with 2 represent the boundaries of the CUs, so the CU division information, i.e., the side information of the image in the video frame to be filtered, can be determined.
- the CU division information may also be given at the image-component level, which is not limited in the embodiments of the present application.
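The first-matrix construction described above can be sketched as follows; the helper name and the rectangle-list representation of the CU partition are assumptions for illustration:

```python
import numpy as np

def build_cu_map(height, width, cu_rects, boundary_value=2, interior_value=1):
    """Build a CU partition map: pixels on a CU boundary get boundary_value,
    all other pixels get interior_value (2 and 1 follow the text's example).

    cu_rects: list of (top, left, h, w) rectangles assumed to tile the frame.
    """
    cu_map = np.full((height, width), interior_value, dtype=np.uint8)
    for top, left, h, w in cu_rects:
        cu_map[top, left:left + w] = boundary_value          # top edge
        cu_map[top + h - 1, left:left + w] = boundary_value  # bottom edge
        cu_map[top:top + h, left] = boundary_value           # left edge
        cu_map[top:top + h, left + w - 1] = boundary_value   # right edge
    return cu_map

# A 4x4 block kept as a single CU: only its border pixels are boundary pixels.
cu_map = build_cu_map(4, 4, [(0, 0, 4, 4)])
```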
- the filter is formed by cascading an online filtering model and an offline filtering model; or the filter is formed by cascading an online filtering model and an offline filtering model in which the offline filtering model has some online-trained parameters; or the filter is formed by an offline filtering model alone, in which there are some online-trained parameters.
- when the filter is an offline filtering model with some online-trained parameters, the process by which the filtering device filters the to-be-filtered pixel information of the video frame through the filter combined with the side information to obtain the filtered pixels can be: the filtering device adopts the offline filtering model, combined with the side information, to filter the video frame to be filtered and obtain the filtered image; that is, the pixel information to be filtered and the side information are input into the neural-network-based offline filtering model to output the filtered pixels.
- when the filter is formed by cascading the online filtering model and the offline filtering model, the process by which the filtering device filters the to-be-filtered pixel information of the video frame through the filter combined with the side information to obtain the filtered image, i.e., the filtered pixels, can be: the filtering device adopts the offline filtering model, combined with the side information, to filter the video frame to be filtered and obtain an intermediate filtered image; it then adopts the online filtering model, combined with the side information, to filter the intermediate image and obtain the filtered image.
- alternatively, the filtering device adopts the online filtering model, combined with the side information, to filter the to-be-filtered video frame to obtain an intermediate filtered image; and then adopts the offline filtering model, combined with the side information, to filter the intermediate filtered image to obtain the filtered image. That is, the pixel information to be filtered and the side information are input into the neural-network-based offline filtering model to output intermediate filtered pixels, and the intermediate filtered pixels and the side information are input into the neural-network-based online filtering model to output the filtered pixels; or, the pixel information to be filtered and the side information are input into the neural-network-based online filtering model to output intermediate filtered pixels, and the intermediate filtered pixels and the side information are input into the neural-network-based offline filtering model to output the filtered pixels.
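A minimal sketch of the two cascade orders described above, assuming each filtering model is reduced to a toy stand-in function (the names `offline_model`/`online_model` and their arithmetic are illustrative placeholders for the trained networks, not the embodiment's actual models):

```python
import numpy as np

def offline_model(pixels, side):
    """Placeholder for the off-line trained NN (weights frozen)."""
    return pixels + 0.5 * side

def online_model(pixels, side):
    """Placeholder for the on-line trained NN (weights updatable)."""
    return 0.9 * pixels

def cascade_filter(pixels, side, first, second):
    """Cascade: the first model produces intermediate filtered pixels;
    those plus the same side information feed the second model."""
    intermediate = first(pixels, side)
    return second(intermediate, side)

to_filter = np.ones((2, 2))
side = np.ones((2, 2))
# offline -> online order, or online -> offline order
out_a = cascade_filter(to_filter, side, offline_model, online_model)
out_b = cascade_filter(to_filter, side, online_model, offline_model)
```

Note that the two orders generally give different outputs, which is why the embodiment distinguishes the two cascade sequences.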
- the filter may be formed by combining an online convolutional neural network and an offline convolutional neural network.
- that is, an offline model may be cascaded with an online model; or an offline model may be used alone with part of it trainable online; or an offline model (part of which can be trained online) may be cascaded with an online model, which is not limited in the embodiments of this application.
- the offline filtering model is a filter trained offline; the offline filtering model also includes offline training parameters.
- the online filtering model is a filter trained online; wherein, the online filtering model includes online training parameters.
- the offline filtering model refers to a filter trained offline.
- the convolutional neural network needs to be trained with a large number of pictures and tested on pictures held out from the training set. If the performance is satisfactory, it can be applied to filters such as in-loop/post-processing filters in video codec technology. All weight parameters (i.e., parameters) in the convolutional neural network can be set at the encoding end and the decoding end simultaneously after the training is completed.
- Online filtering models refer to filters trained online.
- Such a convolutional neural network is often trained based on random access segments (some video frames that have just been coded in the video sequence) to obtain updated parameters, and the updated online filtering model with the updated parameters is used to process subsequent frames of the same video sequence. This type of convolutional neural network is small in scale and can only be applied to the processing of a very narrow range of video sequences.
- the filter is formed by cascading online filtering model 1 (On-line trained NN) and offline filtering model 2 (Off-line trained NN), where either the cascade sequence shown in FIG. 5 or the cascade sequence shown in FIG. 6 can be used.
- the filtering device inputs the pixel information 4 to be filtered and its corresponding side information 3 (Side information) into the offline filtering model 2, and after offline filtering, the intermediate filtered pixel 5 is obtained; the intermediate filtered pixel 5 and the side information 3 are then input into the online filtering model 1 for online filtering, and the filtered pixel 6 is produced as the filtered output (Filtered output).
- the filtering device inputs the pixel information 4 to be filtered and its corresponding side information 3 into the online filtering model 1, and after online filtering, the intermediate filtered pixel 5 is obtained; the intermediate filtered pixel 5 and side information 3 Then input to the offline filtering model 2, perform offline filtering, and output the filtered pixel 6 after filtering.
- the filter is formed by cascading an online filtering model and an offline filtering model; wherein, there are some online training parameters in the offline filtering model.
- the filtering device inputs the pixel information 4 to be filtered and its corresponding side information 3 into the offline filtering model 2, and after offline-online hybrid filtering, the intermediate filtered pixel 5 is obtained; the intermediate filtered pixel 5 and the side information 3 are then input into the online filtering model 1 for online filtering, and the filtered pixel 6 is output.
- the filtering device inputs the pixel information 4 to be filtered and its corresponding side information 3 into the online filtering model 1, and after online filtering, the intermediate filtered pixel 5 is obtained; the intermediate filtered pixel 5 and side information 3 Then input to the offline filtering model 2, perform offline-online hybrid filtering, and output filtered pixel 6.
- the filter is formed by an offline filtering model, where there are some online training parameters in the offline filtering model.
- the filtering device inputs the pixel information 1 to be filtered and its corresponding side information 2 into the offline filtering model 3, after offline-online hybrid filtering, the filtered pixel 4 is output by filtering.
- the advantage of the offline filtering model is that it has outstanding performance and does not require additionally transmitting weight coefficients; its disadvantage is that it lacks adaptability to the sequence. The advantage of the online filtering model is that it is adaptive to the sequence, but it requires transmitting weight coefficients.
- the combination of the offline filtering part and the online filtering part can not only exploit the performance of the offline filtering model, but also use the online filtering model to further improve the objective quality of the video. That is to say, the filtering device of the embodiment of the present application can obtain a trade-off between generalization ability and sequence adaptability when filtering different videos, and can bring better coding performance at low complexity.
- the filtering device inputs the pixel information to be filtered and the side information into the neural-network-based filter to output the filtered pixels, then performs online training of the filter based on the filtered pixels to obtain online filtering parameters; after training the online part of the filter based on the online filtering parameters, the subsequent pixel information to be filtered and side information are input into the updated filter for filtering to obtain subsequent filtered pixels; and the online filtering parameters are written into the video code stream.
- the neural network is often trained based on random access segments (the pixel information of some video frames that have just been encoded in the video sequence), and after training it is immediately used for subsequent frames of the same sequence, that is, for the pixel information of subsequent frames.
- Such neural networks are small in scale and can only be applied to a very narrow range of video content.
- the weight parameters obtained by training (that is, online filtering parameters) need to be retrained and updated frequently. Therefore, the weight coefficients need to be transmitted in the code stream.
- the following takes the pixel information to be filtered of the video frame to be filtered as an example to describe the online filtering parameters.
- an optional implementation process of S103 may include S1031-S1036, as follows:
- these filters all have an offline filtering part and an online filtering part; the difference is that in some cases the online filtering part is implemented directly through the online filtering model, in others it is implemented by obtaining part of the parameters of the offline filtering model through online training, and in the last case it is implemented by combining the above two kinds of online filtering.
- the online filtering part is trained based on random access segments (some video frames that have just been filtered in the video sequence) to obtain updated parameters, and the updated online filtering part with the updated parameters is used to process the subsequent frames of the same video sequence.
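One hedged way to picture the online refit on a random-access segment: a closed-form least-squares fit of a single scalar weight stands in for the real network training, and all names here are illustrative assumptions:

```python
import numpy as np

def train_online_weight(recon_frames, original_frames):
    """Least-squares refit of a single online weight w minimising
    ||w * recon - original||^2 over the random-access segment."""
    x = np.concatenate([f.ravel() for f in recon_frames])
    y = np.concatenate([f.ravel() for f in original_frames])
    return float(x @ y / (x @ x))

# toy segment: originals are exactly 0.5x the reconstructions,
# so the refit should recover w = 0.5
recon = [np.full((4, 4), 2.0), np.full((4, 4), 4.0)]
orig = [0.5 * f for f in recon]
w = train_online_weight(recon, orig)
```

The fitted weight plays the role of the parameter update information transmitted to the decoder.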
- the video frame to be filtered may contain N frames of images, N is the total number of video frames to be filtered, and N is a positive integer greater than 1, and the value of N is determined by the number of frames to be filtered.
- when the filtering device filters the i-th frame image in the video frames to be filtered, the i-th frame image and the side information of the i-th frame image are input into the filter, and after online and offline filtering by the filter, the filtered image of the i-th frame is output, where i starts from 1; that is, the filtering device starts by filtering the first frame image. After filtering the i-th frame image, the filtering device continues to filter the (i+1)-th frame image, until the filtering of the (i+H)-th frame image is completed, obtaining the i-th to (i+H)-th filtered frame images, where H is greater than 1 and less than N-i.
- when the filtering device adopts a filter including an online filtering part and an offline filtering part to filter the video frames to be filtered, after filtering a run of frame images, the filtered frame images obtained so far (the i-th to (i+H)-th filtered frame images) are used as training samples, and the online part of the filter is trained again so that the training result is closest to the output of the (i+H)-th filtered frame image.
- then, the filtering device can use the updated online filtering part, combined with the existing offline filtering part and the side information of the (i+H+1)-th frame image, to filter the (i+H+1)-th frame image to obtain the (i+H+1)-th filtered frame image, and continue to filter the (i+H+2)-th frame image and so on, until the filtering of the N-th frame image is completed.
- that is, the filtering device can start to update the online filtering part of the filter after filtering the (i+H)-th frame, and the specific value of H can be decided based on actual needs and specific design, which is not limited in the embodiments of this application.
- the filtering of the (i+H+1)-th to N-th frame images may again trigger an update of the online filtering part as long as the N-th frame image has not yet been processed, and the updated online filtering part is used to continue filtering subsequent frames until the filtering of the N-th frame image is completed.
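The frame loop with a periodic online update, as described above, can be sketched as follows. The toy numeric "frames", the identity/doubling stand-ins, and the always-doubling retrain rule are illustrative assumptions, not the embodiment's actual models:

```python
def filter_sequence(frames, offline_part, online_part, retrain, H):
    """Filter N frames; after every H filtered frames, retrain the
    online part on the frames just filtered and keep going with the
    updated online part (the offline part stays fixed)."""
    filtered, window = [], []
    for frame in frames:
        out = online_part(offline_part(frame))
        filtered.append(out)
        window.append((frame, out))
        if len(window) == H:
            online_part = retrain(window)  # updated online filtering part
            window = []
    return filtered

frames = [1, 2, 3, 4, 5]
identity = lambda f: f
double = lambda f: 2 * f
# toy retrain rule: every update replaces the online part with `double`
result = filter_sequence(frames, identity, identity,
                         retrain=lambda window: double, H=2)
```

After the first H=2 frames the online part is updated, so frames 3 onward are processed by the new online part.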
- the filtering device may start to update the online filtering parameters of the online filtering part after filtering a fixed number of frames, or the number of filtered frames between updates of the online filtering part may differ each time, which is not limited in the embodiments of this application.
- the condition for stopping the update is that the filtering of the last frame of image is completed.
- the online filtering part may be an online filtering model; or part of the convolutional-neural-network layers in the offline model may update their parameters online to obtain parameter update information (that is, online filtering parameters); or it may be a combination of the previous two online filtering parts, which is not limited in the embodiment of the present application.
- the filter is formed by cascading online filtering model 1 (On-line trained NN) and offline filtering model 2 (Off-line trained NN), where either the cascade sequence shown in FIG. 5 or the cascade sequence shown in FIG. 6 can be used.
- the filtering device inputs the pixel information 4 to be filtered and its corresponding side information 3 (Side information) into the offline filtering model 2 and the online filtering model 1.
- the online filtering model 1 is trained to obtain parameter update information (that is, online filtering parameters); after the online part of the filter is trained based on the online filtering parameters, the subsequent pixel information to be filtered and the side information are input into the updated filter for filtering to obtain subsequent filtered pixels.
- the filter is formed by cascading an online filtering model and an offline filtering model; wherein, there are some online training parameters in the offline filtering model.
- when the filter is formed by cascading an online filtering model and an offline filtering model that has some online training parameters, either the cascade sequence shown in FIG. 7 or the cascade sequence shown in FIG. 8 may be used.
- the filtering device inputs the pixel information 4 to be filtered and its corresponding side information 3 (Side information) into the offline filtering model 2 (offline-online hybrid filtering) and the online filtering model 1 during the filtering process.
- the subsequent pixel information and side information to be filtered are input into the updated filter for filtering to obtain the subsequent filtered pixels.
- the filter is formed by an offline filtering model, where there are some online training parameters in the offline filtering model.
- the filtering device inputs the pixel information 1 to be filtered and its corresponding side information 2 into the offline filtering model 3.
- the filtered pixel 4 is output after filtering, and the filtering results of previous frames can also be used to train the online part of the offline filtering model 3 online; that is, the online part of the offline filtering model 3 is trained to obtain parameter update information (that is, online filtering parameters). After the online part of the filter is trained based on the online filtering parameters, the subsequent to-be-filtered pixel information and side information are input into the updated filter for filtering to obtain subsequent filtered pixels.
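A small sketch of applying parameter-update information only to the online-trainable subset of an otherwise frozen offline model. The parameter names and values are hypothetical:

```python
def update_online_subset(params, online_keys, updates):
    """Apply parameter-update information only to the online-trainable
    subset of an otherwise frozen off-line model; updates addressed to
    frozen parameters are ignored."""
    new_params = dict(params)  # leave the original model untouched
    for k in online_keys:
        if k in updates:
            new_params[k] = updates[k]
    return new_params

params = {"conv1.w": 0.3, "conv2.w": 0.7, "adapt.w": 1.0}
online_keys = {"adapt.w"}                    # only this layer trains online
updates = {"adapt.w": 1.25, "conv1.w": 9.9}  # conv1 update must be ignored
trained = update_online_subset(params, online_keys, updates)
```

Only the online-trainable entry changes; the frozen offline weights survive intact, which is the point of mixing offline and online parameters in one model.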
- the filtering device, for the video frame to be filtered, determines the side information of the video frame to be filtered, inputs the side information and the video frame to be filtered into the filter for filtering, and outputs the filtered image. Because the filter is obtained by combining the online filtering part with the offline filtering part, the offline filtering part can be used during filtering, which is suitable for filtering a wide range of videos; at the same time, the online filtering parameters of the online filtering part can be updated, and the updated online filtering part is used to filter the subsequent frames of the video to be filtered, which avoids degradation of filtering performance, improves the filtering performance of the filtering device, and gives a wide application range.
- the embodiment of the application also provides a filtering method for encoding video, applied at the encoding end, including: determining the filter parameters of the adaptive filter; filtering the input pixels with the adaptive filter according to the filter parameters and side information to obtain filtered pixels; determining the first filter parameter, where the first filter parameter is the part of the filter parameters that needs to be encoded; and encoding the first filter parameter and writing it into the video code stream.
- the first filter parameter is an online filter parameter, that is, parameter update information.
- the adaptive filter is a neural network filter.
- the adaptive filter is a cascade filter of the first neural network filter and the second neural network filter.
- determining the filter parameters of the adaptive filter includes: using offline training to determine the second filter parameters of the neural network filter, where the second filter parameters are all the parameters of the neural network filter; using online training to determine the third filter parameter of the neural network filter, where the third filter parameter is part of the parameters of the neural network filter; and using the third filter parameter to update the corresponding filter parameters in the second filter parameters, with the updated second filter parameters serving as all the filter parameters of the neural network filter.
- determining the first filter parameter includes: using the third filter parameter as the first filter parameter.
- determining the filter parameters of the adaptive filter includes: using offline training to determine all filter parameters of the first neural network filter; and using online training to determine the fourth filter parameter of the second neural network filter, where the fourth filter parameter is all the parameters of the second neural network filter.
- determining the first filter parameter includes: using the fourth filter parameter as the first filter parameter.
- determining the filter parameters of the adaptive filter includes: using offline training to determine the fifth filter parameter of the first neural network filter, where the fifth filter parameter is all the parameters of the first neural network filter; using online training to determine the sixth filter parameter of the first neural network filter, where the sixth filter parameter is part of the parameters of the first neural network filter; using the sixth filter parameter to update the corresponding filter parameters in the fifth filter parameter, with the updated fifth filter parameter serving as the filter parameters of the first neural network filter; and, in the process of encoding the video or image, using online training to determine the seventh filter parameter of the second neural network filter, where the seventh filter parameter is all the parameters of the second neural network filter.
- determining the first filter parameter includes: using the sixth filter parameter and the seventh filter parameter as the first filter parameter.
- offline training is a process of training the neural network filter using one or more images before starting to encode the video or image.
- Online training is a process of training the neural network filter using one or more images in the video sequence to be encoded in the process of encoding a video or image.
- the adaptive filter is a pre-processing filter used in encoding video, or an in-loop filter.
- the method for updating the filter parameters may be random-access-segment adaptive or sequence adaptive, which is not limited in the embodiments of this application.
- the filter parameters of the adaptive filter are determined; the input pixels are filtered with the adaptive filter according to the filter parameters and side information to obtain filtered pixels; the first filter parameter is determined, where the first filter parameter is the part of the filter parameters (the online filter parameters) that needs to be encoded; and the first filter parameter is encoded and written into the video code stream. Since the filter is obtained by combining the online filtering part with the offline filtering part, the offline filtering part can be used during filtering, which is suitable for filtering a wide range of videos; at the same time, some models of the online filtering part can be updated to avoid degradation of filtering performance, that is, the filtering performance of the filtering device is improved, and the application range is wide.
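To illustrate writing the first filter parameter (the online filter parameters) into the code stream, here is a hedged fixed-point packing sketch; the 16-bit Q8 format, the length prefix, and the function names are assumptions for illustration, not a format mandated by the embodiment:

```python
import struct

def encode_first_filter_params(params, frac_bits=8):
    """Quantise the online (first) filter parameters to 16-bit fixed
    point and pack them for the video code stream: a 2-byte count
    followed by one signed 16-bit value per parameter."""
    scale = 1 << frac_bits
    payload = struct.pack(">H", len(params))
    for p in params:
        payload += struct.pack(">h", int(round(p * scale)))
    return payload

def decode_first_filter_params(payload, frac_bits=8):
    """Inverse of the encoder-side packing (decoder side)."""
    scale = 1 << frac_bits
    (count,) = struct.unpack_from(">H", payload, 0)
    return [struct.unpack_from(">h", payload, 2 + 2 * i)[0] / scale
            for i in range(count)]

bits = encode_first_filter_params([0.5, -0.25, 1.0])
roundtrip = decode_first_filter_params(bits)
```

The round trip is lossless for values representable in the chosen fixed-point grid, which is the usual trade-off when transmitting weight coefficients in the bitstream.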
- the embodiment of the present application also provides a filtering method for decoding video, applied at the decoding end, including: parsing the video code stream and determining the first filter parameter of the adaptive filter, where the first filter parameter is part of all the filter parameters of the adaptive filter; determining all the filter parameters of the adaptive filter according to the first filter parameter; and filtering the input pixels with the adaptive filter according to all the filter parameters and side information to obtain filtered pixels.
- the first filter parameter is an online filter parameter, that is, parameter update information.
- the adaptive filter is a neural network filter.
- the adaptive filter is two or more cascaded neural network filters of different types.
- the adaptive filter is a cascade filter of the first neural network filter and the second neural network filter.
- determining all the filter parameters of the adaptive filter includes: determining the second filter parameters of the neural network filter, where the second filter parameters are all the parameters of the neural network filter; and using the first filter parameter to update the corresponding filter parameters in the second filter parameters, with the updated second filter parameters serving as all the filter parameters of the neural network filter.
- determining the second filter parameter of the neural network filter includes: using offline training to determine the second filter parameter of the neural network filter; or, before decoding the video stream, obtaining the second filter parameter .
- determining all filter parameters of the adaptive filter includes: determining all filter parameters of the first neural network filter; and using the first filter parameter as all filter parameters of the second neural network filter.
- determining all filter parameters of the first neural network filter includes: using offline training to determine all filter parameters of the first neural network filter; or, before decoding the video code stream, obtaining the first All filter parameters of neural network filter.
- determining all filter parameters of the adaptive filter includes: determining the fifth filter parameter of the first neural network filter, where the fifth filter parameter is all the parameters of the first neural network filter; using one part of the first filter parameter as the sixth filter parameter, where the sixth filter parameter is part of the parameters of the first neural network filter; using the sixth filter parameter to update the corresponding filter parameters in the fifth filter parameter, with the updated fifth filter parameter serving as all the filter parameters of the first neural network filter; and using the other part of the first filter parameter as all the parameters of the second neural network filter.
- determining the fifth filter parameter of the first neural network filter includes: using offline training to determine the fifth filter parameter; or, before decoding the video bitstream, obtaining the fifth filter parameter.
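The decoder-side split of the parsed first filter parameter into a sixth part (updating the offline fifth parameters of the first network) and a seventh part (all parameters of the second network) might be sketched as follows; the key names are hypothetical:

```python
def assemble_decoder_params(fifth_params, first_filter_params, sixth_keys):
    """Per the decoder-side scheme: one part of the parsed first filter
    parameter (the sixth) overwrites entries of the off-line fifth
    parameters of the first network; the remaining part (the seventh)
    becomes all parameters of the second network."""
    nn1 = dict(fifth_params)
    nn1.update({k: v for k, v in first_filter_params.items()
                if k in sixth_keys})
    nn2 = {k: v for k, v in first_filter_params.items()
           if k not in sixth_keys}
    return nn1, nn2

fifth = {"w0": 0.1, "w1": 0.2}
first = {"w1": 0.9, "v0": 0.5}  # w1 -> sixth part, v0 -> seventh part
nn1, nn2 = assemble_decoder_params(fifth, first, sixth_keys={"w1"})
```

The first network keeps its offline weights except where the sixth part overrides them, and the second network is built entirely from the seventh part.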
- offline training is a process of training the neural network filter using one or more images before parsing the video code stream.
- the adaptive filter is an in-loop filter or a post-processing filter used in decoding a video bitstream.
- the method for updating the filter parameters may be picture adaptive or sequence adaptive, which is not limited in the embodiments of this application.
- the first filter parameter of the adaptive filter is determined by parsing the video code stream, where the first filter parameter is part of all the filter parameters of the adaptive filter (the online filter parameters); all the filter parameters of the adaptive filter (online filter parameters and offline filter parameters) are determined according to the first filter parameter; and the input pixels are filtered with the adaptive filter according to all the filter parameters and side information to obtain the filtered pixels. The offline filtering part can thus be used during filtering, which is suitable for filtering a wide range of videos; at the same time, some models of the online filtering part can be updated to avoid degradation of filtering performance, that is, the filtering performance of the filtering device is improved, and the application range is wide.
- FIG. 11 is a schematic structural diagram of an optional filtering device provided by an embodiment of the application.
- the filtering device 1 may include: a first acquiring part 11, a first determining part 12, and a first filtering part 13, wherein:
- the first acquiring part 11 is configured to acquire pixel information to be filtered
- the first determining part 12 is configured to determine side information
- the first filtering part 13 is configured to input the pixel information to be filtered and the side information into a neural-network-based filter to output filtered pixels, wherein the filter is obtained by combining the online filtering part with the offline filtering part.
- the filter is formed by cascading an online filtering model and an offline filtering model; or,
- the filter is formed by cascading an online filtering model and an offline filtering model; wherein, there are some online training parameters in the offline filtering model.
- the filter is formed by an offline filtering model, where some online training parameters exist in the offline filtering model;
- the first filtering part 13 is further configured to input the pixel information to be filtered and the side information into the offline filtering model based on a neural network to output the filtered pixels.
- the first filtering part 13 is further configured to: input the pixel information to be filtered and the side information into the neural-network-based offline filtering model to output intermediate filtered pixels, and input the intermediate filtered pixels and the side information into the neural-network-based online filtering model to output the filtered pixels; or, input the pixel information to be filtered and the side information into the neural-network-based online filtering model to output intermediate filtered pixels, and input the intermediate filtered pixels and the side information into the neural-network-based offline filtering model to output the filtered pixels.
- the filtering device further includes a first training part 14 and a first writing part 15;
- the first training part 14 is configured to, after the pixel information to be filtered and the side information are input into the neural-network-based filter to output the filtered pixels, perform online training of the filter based on the filtered pixels to obtain online filtering parameters.
- the first filtering part 13 is also configured to, after training the online part of the filter based on the online filtering parameters, input the subsequent pixel information to be filtered and the side information into the updated filter for filtering to obtain subsequent Filtered pixels;
- the first writing part 15 is configured to write the online filtering parameters into a video code stream.
- the offline filtering model is an offline trained filter; wherein, the offline filtering model further includes offline training parameters.
- the online filtering model is a filter trained online; wherein, the online filtering model includes online training parameters.
- the neural network-based filter is suitable for post-processing filtering, in-loop filtering, pre-processing filtering and prediction processes.
- when the neural-network-based filter is used for post-processing filtering, it is set at the decoding end;
- when the neural-network-based filter is used for in-loop filtering, it is set at both the decoding end and the encoding end;
- when the neural-network-based filter is used for pre-processing filtering, it is set at the encoding end.
- An embodiment of the present application also provides a filtering device for encoding video, including:
- the second determining part 20 is configured to determine filter parameters of the adaptive filter
- the second filtering part 21 is configured to use the adaptive filter to filter the input pixels according to the filtering parameters and side information to obtain filtered pixels;
- the second determining part 20 is further configured to determine a first filter parameter, where the first filter parameter is a part of the filter parameter in the filter parameter that needs to be encoded;
- the second writing part 22 is configured to encode the first filter parameter and write the video bitstream.
- the adaptive filter is a neural network filter.
- the adaptive filter is a cascade filter of the first neural network filter and the second neural network filter.
- the second determining part 20 is further configured to: use offline training to determine the second filter parameters of the neural network filter, where the second filter parameters are all the parameters of the neural network filter; use online training to determine the third filter parameter of the neural network filter, where the third filter parameter is part of the parameters of the neural network filter; and use the third filter parameter to update the corresponding filter parameters in the second filter parameters, with the updated second filter parameters serving as all the filter parameters of the neural network filter.
- the second determining part 20 is further configured to use the third filter parameter as the first filter parameter.
- the second determining part 20 is further configured to: use offline training to determine all filter parameters of the first neural network filter; and use online training to determine the fourth filter parameter of the second neural network filter, where the fourth filter parameter is all the parameters of the second neural network filter.
- the second determining part 20 is further configured to use the fourth filter parameter as the first filter parameter.
- the second determining part 20 is further configured to: use offline training to determine the fifth filter parameter of the first neural network filter, where the fifth filter parameter is all the parameters of the first neural network filter; use online training to determine the sixth filter parameter of the first neural network filter, where the sixth filter parameter is part of the parameters of the first neural network filter; use the sixth filter parameter to update the corresponding filter parameters in the fifth filter parameter, with the updated fifth filter parameter serving as the filter parameters of the first neural network filter; and, in the process of encoding the video or image, use online training to determine the seventh filter parameter of the second neural network filter, where the seventh filter parameter is all the parameters of the second neural network filter.
- the second determining part 20 is further configured to use the sixth filter parameter and the seventh filter parameter as the first filter parameter.
- offline training is a process of training the neural network filter using one or more images before starting to encode the video or image.
- Online training is a process of training the neural network filter using one or more images in the video sequence to be encoded in the process of encoding a video or image.
- the adaptive filter is a pre-processing filter used in encoding video, or an in-loop filter.
- the filter parameters of the adaptive filter are determined; the input pixels are filtered with the adaptive filter according to the filter parameters and side information to obtain filtered pixels; the first filter parameter is determined, where the first filter parameter is the part of the filter parameters (the online filter parameters) that needs to be encoded; and the first filter parameter is encoded and written into the video code stream. Since the filter is obtained by combining the online filtering part with the offline filtering part, the offline filtering part can be used during filtering, which is suitable for filtering a wide range of videos; at the same time, some models of the online filtering part can be updated to avoid degradation of filtering performance, that is, the filtering performance of the filtering device is improved, and the application range is wide.
- An embodiment of the application also provides a filtering device for decoding a video bitstream, including:
- the third determining part 30 is configured to parse the video bitstream and determine the first filter parameter of the adaptive filter, where the first filter parameter is a subset of all the filter parameters of the adaptive filter; and to determine all the filter parameters of the adaptive filter according to the first filter parameter;
- the third filtering part 31 is configured to use the adaptive filter to filter the input pixels according to all the filtering parameters and side information to obtain filtered pixels.
- the adaptive filter is a neural network filter.
- the adaptive filter is two or more cascaded neural network filters of different types.
- the adaptive filter is a cascade filter of the first neural network filter and the second neural network filter.
- the third determining part 30 is further configured to determine a second filter parameter of the neural network filter, where the second filter parameter comprises all parameters of the neural network filter; to use the first filter parameter to update the corresponding filter parameters in the second filter parameter; and to use the updated second filter parameter as all the filter parameters of the neural network filter.
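The update rule above amounts to substituting the decoded online subset into the complete offline-trained set. A minimal sketch, with illustrative parameter names (the dictionary layout is an assumption, not the patent's representation):

```python
# Sketch: second_params is the complete, offline-trained parameter set;
# the decoded first_params (online subset) replaces the matching entries.

def update_filter_params(second_params, first_params):
    """Return the full parameter set with the online subset substituted in."""
    updated = dict(second_params)
    for name, value in first_params.items():
        if name not in updated:
            raise KeyError(f"unknown filter parameter: {name}")
        updated[name] = value
    return updated

second = {"conv1.w": 0.7, "conv1.b": 0.1, "conv2.w": -0.3}  # all params (offline)
first = {"conv1.b": 0.25}                                   # signalled subset
print(update_filter_params(second, first))
```

The decoder thus reconstructs all filter parameters while the bitstream only carries the small online part.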
- the third determining part 30 is further configured to use offline training to determine the second filter parameter of the neural network filter; or, before decoding the video bitstream, obtain the second filter parameter.
- the third determining part 30 is further configured to determine all the filter parameters of the first neural network filter; use the first filter parameters as all the filter parameters of the second neural network filter.
- the third determining part 30 is further configured to use offline training to determine all the filter parameters of the first neural network filter; or, before decoding the video bitstream, to obtain all the filter parameters of the first neural network filter.
- the third determining part 30 is further configured to determine a fifth filter parameter of the first neural network filter, where the fifth filter parameter comprises all parameters of the first neural network filter; to use part of the first filter parameter as a sixth filter parameter, where the sixth filter parameter comprises part of the parameters of the first neural network filter; to use the sixth filter parameter to update the corresponding filter parameters in the fifth filter parameter, and use the updated fifth filter parameter as all the filter parameters of the first neural network filter; and to use another part of the first filter parameter as all the parameters of the second neural network filter.
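For the cascaded case, the decoded first filter parameter carries two groups: a part that patches some parameters of the first (offline-trained) filter, and a part that supplies all parameters of the second (online) filter. A sketch under assumed names (the key-based partition is an illustrative choice, not the patent's syntax):

```python
# Sketch: partition the decoded parameters between the two cascaded filters.

def split_first_filter_param(first_params, fifth_params):
    """Route decoded parameters to whichever cascaded filter owns them."""
    sixth = {k: v for k, v in first_params.items() if k in fifth_params}
    seventh = {k: v for k, v in first_params.items() if k not in fifth_params}
    updated_fifth = {**fifth_params, **sixth}  # filter 1: offline set + online patch
    return updated_fifth, seventh              # filter 2: fully online

fifth = {"f1.w": 1.0, "f1.b": 0.0}                 # all params of filter 1 (offline)
decoded = {"f1.b": 0.2, "f2.w": 0.9, "f2.b": -0.1} # decoded first filter parameter
params1, params2 = split_first_filter_param(decoded, fifth)
print(params1, params2)
```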
- the third determining part 30 is further configured to use offline training to determine the fifth filter parameter; or, before decoding the video bitstream, obtain the fifth filter parameter.
- offline training is a process of training the neural network filter using one or more images before parsing the video bitstream.
- the adaptive filter is an in-loop filter or a post-processing filter used in decoding a video bitstream.
- by parsing the video bitstream, the first filter parameter of the adaptive filter is determined, where the first filter parameter is a subset (the online filter parameters) of all the filter parameters of the adaptive filter; according to the first filter parameter, all the filter parameters of the adaptive filter (both the online and the offline filter parameters) are determined; and according to all the filter parameters and side information, the adaptive filter is used to filter the input pixels to obtain filtered pixels.
- since the filter is obtained by combining an online filtering part with an offline filtering part, the offline filtering part can be used during filtering, which makes the filter suitable for a wide range of videos; at the same time, updating some models of the online filtering part avoids degradation of filtering performance. That is, the filtering performance of the filtering device is improved, and its application range is wide.
- a "unit" may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a module, or may be non-modular.
- the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be realized in the form of hardware or software function module.
- if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium.
- the technical solution of this embodiment, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and other media that can store program code.
- FIG. 12 is a schematic structural diagram of an optional filtering device according to an embodiment of the application.
- An embodiment of the application provides a filtering device, including:
- the memory 17 is used to store executable instructions
- the processor 16 is configured to implement the filtering method provided in the embodiment of the present application when executing the executable instructions stored in the memory 17.
- the various components in the terminal are coupled together through the communication bus 18.
- the communication bus 18 is used to implement connection and communication between these components.
- the communication bus 18 also includes a power bus, a control bus, and a status signal bus.
- various buses are marked as the communication bus 18 in FIG. 12.
- the embodiment of the present application provides a computer storage medium storing executable instructions which, when executed by a processor, implement the filtering method provided in the embodiment of the present application.
- the memory in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
- the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
- the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
- SRAM: static random access memory
- DRAM: dynamic random access memory
- SDRAM: synchronous dynamic random access memory
- DDR SDRAM: double data rate synchronous dynamic random access memory
- ESDRAM: enhanced synchronous dynamic random access memory
- SLDRAM: synchronous link dynamic random access memory
- DR RAM: direct Rambus random access memory
- the processor may be an integrated circuit chip with signal processing capabilities.
- the steps of the above method can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
- the aforementioned processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, or discrete hardware components.
- the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
- the software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
- the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
- the embodiments described herein can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
- the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), digital signal processors (Digital Signal Processor, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof.
- the technology described in this article can be implemented through modules (such as procedures, functions, etc.) that perform the functions described in this article.
- the software codes can be stored in the memory and executed by the processor.
- the memory can be implemented in the processor or external to the processor.
- the method of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform; of course, it can also be implemented by hardware, but in many cases the former is the better implementation.
- the technical solution of this application, in essence, or the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions to enable a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method described in each embodiment of the present application.
- the filtering device determines the side information of the video frame to be filtered for the pixel information to be filtered of the video frame to be filtered; inputs the side information and the pixel information to be filtered into the filter for filtering, and outputs the filtered pixels.
- the filter is obtained by combining the online filtering part and the offline filtering part, so that the offline filtering part can be used during filtering, which makes the filter suitable for a wide range of videos; at the same time, it also ensures that some models of the online filtering part are updated, avoiding degradation of filtering performance. That is, the filtering performance of the filtering device is improved, and the application range is wide.
Claims (43)
- A filtering method, comprising: obtaining pixel information to be filtered; determining side information; and inputting the pixel information to be filtered and the side information into a neural-network-based filter to output filtered pixels, wherein the filter is obtained by combining an online filtering part with an offline filtering part.
- The method according to claim 1, wherein the filter is formed by cascading an online filtering model with an offline filtering model; or the filter is formed by cascading an online filtering model with an offline filtering model, wherein some online-trained parameters exist in the offline filtering model.
- The method according to claim 1, wherein the filter is formed by an offline filtering model in which some online-trained parameters exist; and inputting the pixel information to be filtered and the side information into the neural-network-based filter to output filtered pixels comprises: inputting the pixel information to be filtered and the side information into the neural-network-based offline filtering model to output the filtered pixels.
- The method according to claim 2, wherein inputting the pixel information to be filtered and the side information into the neural-network-based filter to output filtered pixels comprises: inputting the pixel information to be filtered and the side information into the neural-network-based offline filtering model to output intermediate filtered pixels, and inputting the intermediate filtered pixels and the side information into the neural-network-based online filtering model to output the filtered pixels; or inputting the pixel information to be filtered and the side information into the neural-network-based online filtering model to output intermediate filtered pixels, and inputting the intermediate filtered pixels and the side information into the neural-network-based offline filtering model to output the filtered pixels.
- The method according to any one of claims 1 to 4, wherein after inputting the pixel information to be filtered and the side information into the neural-network-based filter to output filtered pixels, the method further comprises: performing online training on the filter based on the filtered pixels to obtain online filter parameters; after training the online part of the filter based on the online filter parameters, inputting subsequent pixel information to be filtered and the side information into the updated filter for filtering to obtain subsequent filtered pixels; and writing the online filter parameters into a video bitstream.
- The method according to any one of claims 1 to 4, wherein the offline filtering model is an offline-trained filter, the offline filtering model further including offline-trained parameters; and the online filtering model is an online-trained filter, the online filtering model including online-trained parameters.
- The method according to claim 6, wherein the neural-network-based filter is applicable to post-processing filtering, in-loop filtering, pre-processing filtering, and a prediction process.
- The method according to claim 7, wherein when the neural-network-based filter is applied to post-processing filtering, it is deployed at the decoding end; when the neural-network-based filter is applied to in-loop filtering, it is deployed at both the decoding end and the encoding end; and when the neural-network-based filter is applied to pre-processing filtering, it is deployed at the encoding end.
- A filtering method for encoding a video, comprising: determining filter parameters of an adaptive filter; filtering input pixels using the adaptive filter according to the filter parameters and side information to obtain filtered pixels; determining a first filter parameter, wherein the first filter parameter is the part of the filter parameters that needs to be encoded; and encoding the first filter parameter and writing it into a video bitstream.
- The method according to claim 9, wherein the adaptive filter is a neural network filter.
- The method according to claim 9, wherein the adaptive filter is a cascade filter of a first neural network filter and a second neural network filter.
- The method according to claim 9, wherein determining the filter parameters of the adaptive filter comprises: using offline training to determine a second filter parameter of the neural network filter, wherein the second filter parameter comprises all parameters of the neural network filter; using online training to determine a third filter parameter of the neural network filter, wherein the third filter parameter comprises part of the parameters of the neural network filter; and using the third filter parameter to update the corresponding filter parameters in the second filter parameter, and using the updated second filter parameter as all the filter parameters of the neural network filter.
- The method according to claim 12, wherein determining the first filter parameter comprises: using the third filter parameter as the first filter parameter.
- The method according to claim 11, wherein determining the filter parameters of the adaptive filter comprises: using offline training to determine all filter parameters of the first neural network filter; and using online training to determine a fourth filter parameter of the second neural network filter, wherein the fourth filter parameter comprises all parameters of the second neural network filter.
- The method according to claim 14, wherein determining the first filter parameter comprises: using the fourth filter parameter as the first filter parameter.
- The method according to claim 11, wherein determining the filter parameters of the adaptive filter comprises: using offline training to determine a fifth filter parameter of the first neural network filter, wherein the fifth filter parameter comprises all parameters of the first neural network filter; using online training to determine a sixth filter parameter of the first neural network filter, wherein the sixth filter parameter comprises part of the parameters of the first neural network filter; using the sixth filter parameter to update the corresponding filter parameters in the fifth filter parameter, and using the updated fifth filter parameter as the filter parameters of the first neural network filter; and, in the process of encoding the video or image, using online training to determine a seventh filter parameter of the second neural network filter, wherein the seventh filter parameter comprises all parameters of the second neural network filter.
- The method according to claim 16, wherein determining the first filter parameter comprises: using the sixth filter parameter and the seventh filter parameter as the first filter parameter.
- The method according to any one of claims 12 to 17, further comprising: the offline training is a process of training a neural network filter using one or more images before encoding of the video or image starts; and the online training is a process of training a neural network filter, during encoding of the video or image, using one or more images in the video sequence to be encoded.
- The method according to any one of claims 9 to 17, further comprising: the adaptive filter is a pre-processing filter or an in-loop filter used in encoding the video.
- A filtering method for decoding a video bitstream, comprising: parsing the video bitstream to determine a first filter parameter of an adaptive filter, wherein the first filter parameter is a subset of all filter parameters of the adaptive filter; determining all the filter parameters of the adaptive filter according to the first filter parameter; and filtering input pixels using the adaptive filter according to all the filter parameters and side information to obtain filtered pixels.
- The method according to claim 20, wherein the adaptive filter is a neural network filter.
- The method according to claim 20, wherein the adaptive filter is two or more cascaded neural network filters of different types.
- The method according to claim 22, wherein the adaptive filter is a cascade filter of a first neural network filter and a second neural network filter.
- The method according to claim 21, wherein determining all the filter parameters of the adaptive filter comprises: determining a second filter parameter of the neural network filter, wherein the second filter parameter comprises all parameters of the neural network filter; and using the first filter parameter to update the corresponding filter parameters in the second filter parameter, and using the updated second filter parameter as all the filter parameters of the neural network filter.
- The method according to claim 24, wherein determining the second filter parameter of the neural network filter comprises: using offline training to determine the second filter parameter of the neural network filter; or obtaining the second filter parameter before decoding the video bitstream.
- The method according to claim 23, wherein determining all the filter parameters of the adaptive filter comprises: determining all filter parameters of the first neural network filter; and using the first filter parameter as all the filter parameters of the second neural network filter.
- The method according to claim 26, wherein determining all the filter parameters of the first neural network filter comprises: using offline training to determine all the filter parameters of the first neural network filter; or obtaining all the filter parameters of the first neural network filter before decoding the video bitstream.
- The method according to claim 23, wherein determining all the filter parameters of the adaptive filter comprises: determining a fifth filter parameter of the first neural network filter, wherein the fifth filter parameter comprises all parameters of the first neural network filter; using part of the first filter parameter as a sixth filter parameter, wherein the sixth filter parameter comprises part of the parameters of the first neural network filter; using the sixth filter parameter to update the corresponding filter parameters in the fifth filter parameter, and using the updated fifth filter parameter as all the filter parameters of the first neural network filter; and using another part of the first filter parameter as all the parameters of the second neural network filter.
- The method according to claim 28, wherein determining the fifth filter parameter of the first neural network filter comprises: using offline training to determine the fifth filter parameter; or obtaining the fifth filter parameter before decoding the video bitstream.
- The method according to any one of claims 24 to 29, further comprising: the offline training is a process of training a neural network filter using one or more images before parsing the video bitstream.
- The method according to any one of claims 24 to 29, further comprising: the adaptive filter is an in-loop filter or a post-processing filter used in decoding the video bitstream.
- A filtering apparatus, comprising: a first obtaining part configured to obtain pixel information to be filtered; a first determining part configured to determine side information; and a first filtering part configured to input the pixel information to be filtered and the side information into a neural-network-based filter to output filtered pixels, wherein the filter is obtained by combining an online filtering part with an offline filtering part.
- The filtering apparatus according to claim 32, wherein the filter is formed by cascading an online filtering model with an offline filtering model; or the filter is formed by cascading an online filtering model with an offline filtering model, wherein some online-trained parameters exist in the offline filtering model.
- The filtering apparatus according to claim 32, wherein the filter is formed by an offline filtering model in which some online-trained parameters exist; and the first filtering part is further configured to input the pixel information to be filtered and the side information into the neural-network-based offline filtering model to output the filtered pixels.
- The filtering apparatus according to claim 33, wherein the first filtering part is further configured to input the pixel information to be filtered and the side information into the neural-network-based offline filtering model to output intermediate filtered pixels, and input the intermediate filtered pixels and the side information into the neural-network-based online filtering model to output the filtered pixels; or to input the pixel information to be filtered and the side information into the neural-network-based online filtering model to output intermediate filtered pixels, and input the intermediate filtered pixels and the side information into the neural-network-based offline filtering model to output the filtered pixels.
- The filtering apparatus according to any one of claims 32 to 35, wherein the filter further includes a training part and a writing part; the first training part is configured to, after the pixel information to be filtered and the side information are input into the neural-network-based filter to output filtered pixels, perform online training on the filter based on the filtered pixels to obtain online filter parameters; the first filtering part is further configured to, after the online part of the filter is trained based on the online filter parameters, input subsequent pixel information to be filtered and the side information into the updated filter for filtering to obtain subsequent filtered pixels; and the first writing part is configured to write the online filter parameters into a video bitstream.
- The filtering apparatus according to any one of claims 32 to 35, wherein the offline filtering model is an offline-trained filter, the offline filtering model further including offline-trained parameters; and the online filtering model is an online-trained filter, the online filtering model including online-trained parameters.
- The filtering apparatus according to claim 37, wherein the neural-network-based filter is applicable to post-processing filtering, in-loop filtering, pre-processing filtering, and a prediction process.
- The filtering apparatus according to claim 38, wherein when the neural-network-based filter is applied to post-processing filtering, it is deployed at the decoding end; when the neural-network-based filter is applied to in-loop filtering, it is deployed at both the decoding end and the encoding end; and when the neural-network-based filter is applied to pre-processing filtering, it is deployed at the encoding end.
- A filtering apparatus for encoding a video, comprising: a second determining part configured to determine filter parameters of an adaptive filter; a second filtering part configured to filter input pixels using the adaptive filter according to the filter parameters and side information to obtain filtered pixels; the second determining part being further configured to determine a first filter parameter, wherein the first filter parameter is the part of the filter parameters that needs to be encoded; and a second writing part configured to encode the first filter parameter and write it into a video bitstream.
- A filtering apparatus for decoding a video bitstream, comprising: a third determining part configured to parse the video bitstream and determine a first filter parameter of an adaptive filter, wherein the first filter parameter is a subset of all filter parameters of the adaptive filter, and to determine all the filter parameters of the adaptive filter according to the first filter parameter; and a third filtering part configured to filter input pixels using the adaptive filter according to all the filter parameters and side information to obtain filtered pixels.
- A filtering apparatus, comprising: a memory for storing executable instructions; and a processor configured to implement, when executing the executable instructions stored in the memory, the method according to any one of claims 1 to 8, the method according to any one of claims 9 to 19, or the method according to any one of claims 20 to 31.
- A computer storage medium storing executable instructions which, when executed by a processor, implement the method according to any one of claims 1 to 8, the method according to any one of claims 9 to 19, or the method according to any one of claims 20 to 31.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217032829A KR20210134397A (ko) | 2019-03-24 | 2019-09-12 | 필터링 방법 및 장치, 컴퓨터 저장 매체 |
EP19921828.0A EP3941066A4 (en) | 2019-03-24 | 2019-09-12 | FILTER METHOD AND APPARATUS AND COMPUTER STORAGE MEDIUM |
CN201980094265.3A CN113574897A (zh) | 2019-03-24 | 2019-09-12 | 滤波方法及装置、计算机存储介质 |
JP2021555871A JP2022525235A (ja) | 2019-03-24 | 2019-09-12 | フィルタリング方法及び装置、コンピュータ記憶媒体 |
US17/475,237 US11985313B2 (en) | 2019-03-24 | 2021-09-14 | Filtering method and apparatus, and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962822949P | 2019-03-24 | 2019-03-24 | |
US62/822,949 | 2019-03-24 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/475,237 Continuation US11985313B2 (en) | 2019-03-24 | 2021-09-14 | Filtering method and apparatus, and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020192034A1 true WO2020192034A1 (zh) | 2020-10-01 |
Family
ID=72609590
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/105799 WO2020192034A1 (zh) | 2019-03-24 | 2019-09-12 | 滤波方法及装置、计算机存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11985313B2 (zh) |
EP (1) | EP3941066A4 (zh) |
JP (1) | JP2022525235A (zh) |
KR (1) | KR20210134397A (zh) |
CN (1) | CN113574897A (zh) |
WO (1) | WO2020192034A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115956363A (zh) * | 2021-05-27 | 2023-04-11 | 腾讯美国有限责任公司 | 用于后滤波的内容自适应在线训练方法及装置 |
CN116320410A (zh) * | 2021-12-21 | 2023-06-23 | 腾讯科技(深圳)有限公司 | 一种数据处理方法、装置、设备以及可读存储介质 |
JP7522860B2 (ja) | 2021-04-12 | 2024-07-25 | テンセント・アメリカ・エルエルシー | ビデオストリームにおけるニューラル・ネットワーク・トポロジ、パラメータ、および処理情報をシグナリングするための技術 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021170901A1 (en) * | 2020-02-24 | 2021-09-02 | Nokia Technologies Oy | A method, an apparatus and a computer program product for video encoding and video decoding |
WO2024019343A1 (ko) * | 2022-07-20 | 2024-01-25 | 현대자동차주식회사 | 다양한 잡음 및 특성에 적응적인 비디오 인루프 필터 |
WO2024073145A1 (en) * | 2022-09-30 | 2024-04-04 | Beijing Dajia Internet Information Technology Co., Ltd. | Methods and devices for adaptive loop filtering and cross-component adaptive loop filter |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101841701A (zh) * | 2009-03-20 | 2010-09-22 | 华为技术有限公司 | 基于宏块对的编解码方法及装置 |
WO2017036370A1 (en) * | 2015-09-03 | 2017-03-09 | Mediatek Inc. | Method and apparatus of neural network based processing in video coding |
CN108184129A (zh) * | 2017-12-11 | 2018-06-19 | 北京大学 | 一种视频编解码方法、装置及用于图像滤波的神经网络 |
CN108932697A (zh) * | 2017-05-26 | 2018-12-04 | 杭州海康威视数字技术股份有限公司 | 一种失真图像的去失真方法、装置及电子设备 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8572184B1 (en) * | 2007-10-04 | 2013-10-29 | Bitdefender IPR Management Ltd. | Systems and methods for dynamically integrating heterogeneous anti-spam filters |
US9762906B2 (en) * | 2013-02-18 | 2017-09-12 | Mediatek Inc. | Method and apparatus for video decoding using multi-core processor |
GB2539845B (en) | 2015-02-19 | 2017-07-12 | Magic Pony Tech Ltd | Offline training of hierarchical algorithms |
US10531111B2 (en) * | 2015-11-06 | 2020-01-07 | Microsoft Technology Licensing, Llc | Flexible reference picture management for video encoding and decoding |
US11132619B1 (en) * | 2017-02-24 | 2021-09-28 | Cadence Design Systems, Inc. | Filtering in trainable networks |
US20180293486A1 (en) * | 2017-04-07 | 2018-10-11 | Tenstorrent Inc. | Conditional graph execution based on prior simplified graph execution |
MX2019014443A (es) * | 2017-05-31 | 2020-02-10 | Interdigital Vc Holdings Inc | Un metodo y un dispositivo para codificacion y decodificacion de imagenes. |
CN108134932B (zh) | 2018-01-11 | 2021-03-30 | 上海交通大学 | 基于卷积神经网络的视频编解码环路内滤波实现方法及*** |
-
2019
- 2019-09-12 EP EP19921828.0A patent/EP3941066A4/en active Pending
- 2019-09-12 JP JP2021555871A patent/JP2022525235A/ja active Pending
- 2019-09-12 CN CN201980094265.3A patent/CN113574897A/zh active Pending
- 2019-09-12 KR KR1020217032829A patent/KR20210134397A/ko active Search and Examination
- 2019-09-12 WO PCT/CN2019/105799 patent/WO2020192034A1/zh unknown
-
2021
- 2021-09-14 US US17/475,237 patent/US11985313B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101841701A (zh) * | 2009-03-20 | 2010-09-22 | 华为技术有限公司 | 基于宏块对的编解码方法及装置 |
WO2017036370A1 (en) * | 2015-09-03 | 2017-03-09 | Mediatek Inc. | Method and apparatus of neural network based processing in video coding |
CN108932697A (zh) * | 2017-05-26 | 2018-12-04 | 杭州海康威视数字技术股份有限公司 | 一种失真图像的去失真方法、装置及电子设备 |
CN108184129A (zh) * | 2017-12-11 | 2018-06-19 | 北京大学 | 一种视频编解码方法、装置及用于图像滤波的神经网络 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7522860B2 (ja) | 2021-04-12 | 2024-07-25 | テンセント・アメリカ・エルエルシー | ビデオストリームにおけるニューラル・ネットワーク・トポロジ、パラメータ、および処理情報をシグナリングするための技術 |
CN115956363A (zh) * | 2021-05-27 | 2023-04-11 | 腾讯美国有限责任公司 | 用于后滤波的内容自适应在线训练方法及装置 |
EP4128764A4 (en) * | 2021-05-27 | 2023-09-06 | Tencent America Llc | CONTENT-ADAPTIVE ONLINE TRAINING METHOD AND DEVICE FOR POST-FILTERING |
US11979565B2 (en) | 2021-05-27 | 2024-05-07 | Tencent America LLC | Content-adaptive online training method and apparatus for post-filtering |
CN116320410A (zh) * | 2021-12-21 | 2023-06-23 | 腾讯科技(深圳)有限公司 | 一种数据处理方法、装置、设备以及可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
EP3941066A1 (en) | 2022-01-19 |
KR20210134397A (ko) | 2021-11-09 |
JP2022525235A (ja) | 2022-05-11 |
US20220007015A1 (en) | 2022-01-06 |
EP3941066A4 (en) | 2022-06-22 |
US11985313B2 (en) | 2024-05-14 |
CN113574897A (zh) | 2021-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020192034A1 (zh) | 滤波方法及装置、计算机存储介质 | |
US11589041B2 (en) | Method and apparatus of neural network based processing in video coding | |
US20210409783A1 (en) | Loop filter implementation method and apparatus, and computer storage medium | |
US11627342B2 (en) | Loop filtering implementation method and apparatus, and computer storage medium | |
US20220021905A1 (en) | Filtering method and device, encoder and computer storage medium | |
WO2021134706A1 (zh) | 环路滤波的方法与装置 | |
CN111699686B (zh) | 用于视频编解码的分组神经网络的方法以及装置 | |
JP2024095842A (ja) | 画像予測方法、エンコーダー、デコーダー及び記憶媒体 | |
CN110619607B (zh) | 图像去噪和包含图像去噪的图像编解码方法及装置 | |
WO2021056216A1 (zh) | 预测值的确定方法、编码器、解码器以及计算机存储介质 | |
WO2020181554A1 (zh) | 预测值的确定方法、解码器以及计算机存储介质 | |
WO2022227082A1 (zh) | 块划分方法、编码器、解码器以及计算机存储介质 | |
KR20210139327A (ko) | 화상 예측 방법, 인코더, 디코더 및 저장 매체 | |
US20220007042A1 (en) | Colour component prediction method, encoder, decoder, and computer storage medium | |
WO2023050439A1 (zh) | 编解码方法、码流、编码器、解码器、存储介质和*** | |
EP4224852A1 (en) | Video encoding and decoding methods, encoder, decoder, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19921828 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021555871 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20217032829 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019921828 Country of ref document: EP Effective date: 20211015 |