WO2024120499A1 - Method, apparatus, and medium for visual data processing - Google Patents


Info

Publication number
WO2024120499A1
Authority
WO
WIPO (PCT)
Prior art keywords
wavelet
representation
subband
inverse
neural network
Prior art date
Application number
PCT/CN2023/137254
Other languages
French (fr)
Inventor
Yaojun Wu
Semih Esenlik
Zhaobin Zhang
Meng Wang
Kai Zhang
Li Zhang
Original Assignee
Douyin Vision Co., Ltd.
Bytedance Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Douyin Vision Co., Ltd. and Bytedance Inc.
Publication of WO2024120499A1 publication Critical patent/WO2024120499A1/en

Description

  • Embodiments of the present disclosure relate generally to visual data processing techniques, and more particularly, to wavelet transformation and non-linear transformation.
  • Image/video compression is an essential technique to reduce the costs of image/video transmission and storage in a lossless or lossy manner.
  • Image/video compression techniques can be divided into two branches, the classical video coding methods and the neural-network-based video compression methods.
  • Classical video coding schemes adopt transform-based solutions, in which researchers have exploited statistical dependency in the latent variables (e.g., wavelet coefficients) by carefully hand-engineering entropy codes modeling the dependencies in the quantized regime.
  • Neural network-based video compression comes in two flavors, neural network-based coding tools and end-to-end neural network-based video compression.
  • the former is embedded into existing classical video codecs as coding tools and only serves as part of the framework, while the latter is a separate framework developed based on neural networks without depending on classical video codecs. Coding efficiency of image/video coding is generally expected to be further improved.
  • Embodiments of the present disclosure provide a solution for visual data processing.
  • a method for visual data processing comprises: determining, for a conversion between a current visual unit of visual data and a bitstream of the visual data, at least one wavelet subband representation of the current visual unit based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and performing the conversion based on the at least one wavelet subband representation.
  • the method in accordance with the first aspect of the present disclosure combines the wavelet and the non-linear transformation, and the corresponding processing of the visual data. In this way, the performance of learned visual data processing such as neural network-based visual data processing can be improved.
  • an apparatus for visual data processing comprises a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to perform a method in accordance with the first aspect of the present disclosure.
  • a non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure.
  • the non-transitory computer-readable recording medium stores a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing.
  • the method comprises: determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and generating the bitstream based on the at least one wavelet subband representation.
  • a method for storing a bitstream of visual data comprises: determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; generating the bitstream based on the at least one wavelet subband representation; and storing the bitstream in a non-transitory computer-readable recording medium.
  • Fig. 1 illustrates a block diagram that illustrates an example visual data coding system, in accordance with some embodiments of the present disclosure
  • Fig. 2 illustrates a typical transform coding scheme
  • Fig. 3 illustrates an image from the Kodak dataset and different representations of the image
  • Fig. 4 illustrates a network architecture of an autoencoder implementing the hyperprior model
  • Fig. 5 illustrates a block diagram of a combined model
  • Fig. 6 illustrates an encoding process of the combined model
  • Fig. 7 illustrates a decoding process of the combined model
  • Fig. 8 illustrates an encoder and a decoder with wavelet-based transform
  • Fig. 9 illustrates an output of a forward wavelet-based transform
  • Fig. 10 illustrates partitioning of the output of a forward wavelet-based transform
  • Fig. 11 illustrates an example of the decoding process in accordance with embodiments of the present disclosure
  • Fig. 12 illustrates an example of the inverse channel network in accordance with embodiments of the present disclosure
  • Fig. 13 illustrates an example of the inverse non-linear transformation in accordance with embodiments of the present disclosure
  • Fig. 14 illustrates another example of the inverse non-linear transformation in accordance with embodiments of the present disclosure
  • Fig. 15 illustrates a flowchart of a method for visual data processing in accordance with embodiments of the present disclosure
  • Fig. 16 illustrates a block diagram of a computing device in which various embodiments of the present disclosure can be implemented.
  • references in the present disclosure to “one embodiment,” “an embodiment,” “an example embodiment,” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • the terms “first” and “second” etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
  • the term “and/or” includes any and all combinations of one or more of the listed terms.
  • visual data may refer to image data or video data.
  • visual data processing may refer to image processing or video processing.
  • visual data coding may refer to image coding or video coding.
  • coding visual data may refer to “encoding visual data” (for example, encoding visual data into a bitstream) and/or “decoding visual data” (for example, decoding visual data from a bitstream).
  • Fig. 1 is a block diagram that illustrates an example visual data coding system 100 that may utilize the techniques of this disclosure.
  • the visual data coding system 100 may include a source device 110 and a destination device 120.
  • the source device 110 can be also referred to as a data encoding device or a visual data encoding device
  • the destination device 120 can be also referred to as a data decoding device or a visual data decoding device.
  • the source device 110 can be configured to generate encoded visual data
  • the destination device 120 can be configured to decode the encoded visual data generated by the source device 110.
  • the source device 110 may include a data source 112, a data encoder 114, and an input/output (I/O) interface 116.
  • the data source 112 may include a source such as a data capture device.
  • examples of the data capture device include, but are not limited to, an interface to receive data from a data provider, a computer graphics system for generating data, and/or a combination thereof.
  • the data may comprise one or more pictures of a video or one or more images.
  • the data encoder 114 encodes the data from the data source 112 to generate a bitstream.
  • the bitstream may include a sequence of bits that form a coded representation of the data.
  • the bitstream may include coded pictures and associated data.
  • the coded picture is a coded representation of a picture.
  • the associated data may include sequence parameter sets, picture parameter sets, and other syntax structures.
  • the I/O interface 116 may include a modulator/demodulator and/or a transmitter.
  • the encoded data may be transmitted directly to destination device 120 via the I/O interface 116 through the network 130A.
  • the encoded data may also be stored onto a storage medium/server 130B for access by destination device 120.
  • the destination device 120 may include an I/O interface 126, a data decoder 124, and a display device 122.
  • the I/O interface 126 may include a receiver and/or a modem.
  • the I/O interface 126 may acquire encoded data from the source device 110 or the storage medium/server 130B.
  • the data decoder 124 may decode the encoded data.
  • the display device 122 may display the decoded data to a user.
  • the display device 122 may be integrated with the destination device 120, or may be external to the destination device 120 which is configured to interface with an external display device.
  • the data encoder 114 and the data decoder 124 may operate according to a data coding standard, such as a video coding standard or a still picture coding standard and other current and/or future standards.
  • a neural network-based image and video compression method is proposed, wherein a wavelet transform and non-linear transformation are combined to boost the coding efficiency.
  • the disclosure first targets the problem of processing subbands of the wavelet transformation, and then further boosts the performance through the non-linear transformation.
  • Neural networks were originally proposed with the interdisciplinary research of neuroscience and mathematics. They have shown strong capabilities in the context of non-linear transform and classification. Neural network-based image/video compression technology has made significant progress during the past half decade. It is reported that the latest neural network-based image compression algorithm achieves comparable R-D performance with Versatile Video Coding (VVC), the latest video coding standard developed by the Joint Video Experts Team (JVET) with experts from MPEG and VCEG. With the performance of neural image compression continually being improved, neural network-based video compression has become an actively developing research area. However, neural network-based video coding still remains in its infancy due to the inherent difficulty of the problem.
  • Image/video compression usually refers to the computing technology that compresses image/video into binary code to facilitate storage and transmission.
  • the binary codes may or may not support losslessly reconstructing the original image/video, termed lossless compression and lossy compression, respectively.
  • Most of the efforts are devoted to lossy compression since lossless reconstruction is not necessary in most scenarios.
  • the performance of image/video compression algorithms is evaluated from two aspects, i.e. compression ratio and reconstruction quality. Compression ratio is directly related to the number of binary codes: the fewer, the better. Reconstruction quality is measured by comparing the reconstructed image/video with the original image/video: the higher, the better.
  • Image/video compression techniques can be divided into two branches, the classical video coding methods and the neural-network-based video compression methods.
  • Classical video coding schemes adopt transform-based solutions, in which researchers have exploited statistical dependency in the latent variables (e.g., DCT or wavelet coefficients) by carefully hand-engineering entropy codes modeling the dependencies in the quantized regime.
  • Neural network-based video compression comes in two flavors, neural network-based coding tools and end-to-end neural network-based video compression. The former is embedded into existing classical video codecs as coding tools and only serves as part of the framework, while the latter is a separate framework developed based on neural networks without depending on classical video codecs.
  • Neural network-based image/video compression is not new since there were a number of researchers working on neural network-based image coding. But the network architectures were relatively shallow, and the performance was not satisfactory. Benefiting from the abundance of data and the support of powerful computing resources, neural network-based methods are better exploited in a variety of applications. At present, neural network-based image/video compression has shown promising improvements and confirmed its feasibility. Nevertheless, this technology is still far from mature and a lot of challenges need to be addressed.
  • Neural networks are also known as artificial neural networks (ANN).
  • One benefit of such deep networks is believed to be the capacity for processing data with multiple levels of abstraction and converting data into different kinds of representations. Note that these representations are not manually designed; instead, the deep network including the processing layers is learned from massive data using a general machine learning procedure. Deep learning eliminates the necessity of handcrafted representations, and thus is regarded as useful especially for processing natively unstructured data, such as acoustic and visual signals, whilst processing such data has been a longstanding difficulty in the artificial intelligence field.
  • the optimal method for lossless coding can reach the minimal coding rate −log2 p(x), where p(x) is the probability of symbol x.
  • a number of lossless coding methods were developed in literature and among them arithmetic coding is believed to be among the optimal ones.
  • arithmetic coding ensures that the coding rate is as close as possible to its theoretical limit −log2 p(x) without considering the rounding error. Therefore, the remaining problem is how to determine the probability, which is however very challenging for natural image/video due to the curse of dimensionality.
  • one way to model p (x) is to predict pixel probabilities one by one in a raster scan order based on previous observations, where x is an image.
  • p(x) = p(x_1) · p(x_2 | x_1) · … · p(x_i | x_1, …, x_{i−1}) · … · p(x_{m×n} | x_1, …, x_{m×n−1}), where the context may be limited to the k previous samples, i.e., p(x_i | x_{i−k}, …, x_{i−1}).
  • k is a pre-defined constant controlling the range of the context.
  • the condition may also take the sample values of other color components into consideration.
  • R sample is dependent on previously coded pixels (including R/G/B samples)
  • the current G sample may be coded according to previously coded pixels and the current R sample
  • the previously coded pixels and the current R and G samples may also be taken into consideration.
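  • As an illustration of this raster-scan, context-based probability modeling, the following Python sketch (not taken from the patent) accumulates −log2 p(x_i | context) over an image to estimate its code length; the toy conditional model is a hypothetical placeholder for the trained network a real codec would use.

```python
import numpy as np

def toy_conditional(context):
    """Probability table over 256 pixel values given the context (placeholder).

    A Laplace-smoothed histogram of the context stands in for the trained
    neural network that a real codec would use here.
    """
    hist = np.ones(256)
    for value in context:
        hist[value] += 1.0
    return hist / hist.sum()

def raster_scan_bits(image, k=16):
    """Code length estimate: -sum_i log2 p(x_i | x_{i-k}, ..., x_{i-1})."""
    flat = image.flatten()
    bits = 0.0
    for i, value in enumerate(flat):
        context = flat[max(0, i - k):i]
        bits += -np.log2(toy_conditional(context)[value])
    return bits

img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
print(f"estimated code length: {raster_scan_bits(img):.1f} bits")
```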
  • Neural networks were originally introduced for computer vision tasks and have been proven to be effective in regression and classification problems. Therefore, it has been proposed to use neural networks to estimate the probability of p(x_i) given its context x_1, x_2, …, x_{i−1}.
  • the pixel probability is proposed for binary images, i.e., x_i ∈ {−1, +1}.
  • the neural autoregressive distribution estimator (NADE) is designed for pixel probability modeling, where the estimator is a feed-forward network with a single hidden layer.
  • the feed-forward network also has connections skipping the hidden layer, and the parameters are also shared.
  • NADE is extended to a real-valued model RNADE, where the probability p(x_i | x_1, …, x_{i−1}) is modeled with a mixture of Gaussians.
  • Their feed-forward network also has a single hidden layer, but the hidden layer is rescaled to avoid saturation and uses the rectified linear unit (ReLU) instead of sigmoid.
  • Multi-dimensional long short-term memory (LSTM) is proposed, which works together with mixtures of conditional Gaussian scale mixtures for probability modeling.
  • LSTM is a special kind of recurrent neural networks (RNNs) and is proven to be good at modeling sequential data.
  • the spatial variant of LSTM is used for images.
  • Several different neural networks are studied, including RNNs and CNNs, namely PixelRNN and PixelCNN, respectively.
  • in PixelRNN, two variants of LSTM, called row LSTM and diagonal BiLSTM, are proposed, where the latter is specifically designed for images.
  • PixelRNN incorporates residual connections to help train deep neural networks with up to 12 layers.
  • in PixelCNN, masked convolutions are used to suit the shape of the context. Compared with previous works, PixelRNN and PixelCNN are more dedicated to natural images: they consider pixels as discrete values (e.g., 0, 1, ..., 255) and predict a multinomial distribution over the discrete values; they deal with color images in RGB color space; they work well on the large-scale image dataset ImageNet. Gated PixelCNN is proposed to improve the PixelCNN, and achieves comparable performance with PixelRNN but with much less complexity.
  • PixelCNN++ is proposed with the following improvements upon PixelCNN: a discretized logistic mixture likelihood is used rather than a 256-way multinomial distribution; down-sampling is used to capture structures at multiple resolutions; additional short-cut connections are introduced to speed up training; dropout is adopted for regularization; RGB is combined for one pixel.
  • PixelSNAIL is proposed, in which causal convolutions are combined with self-attention.
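  • For readers unfamiliar with the masked convolutions mentioned above, the PyTorch sketch below shows one common way such a layer can be built so that each output position depends only on pixels that precede it in raster-scan order; the class name, kernel size and channel counts are illustrative assumptions, not details taken from the original PixelCNN implementation or from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Convolution whose kernel is masked so each output only sees pixels that
    precede it in raster-scan order (type 'A' also hides the centre pixel)."""
    def __init__(self, *args, mask_type="A", **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.ones(kh, kw)
        mask[kh // 2, kw // 2 + (1 if mask_type == "B" else 0):] = 0  # centre row, right part
        mask[kh // 2 + 1:, :] = 0                                      # rows below the centre
        self.register_buffer("mask", mask[None, None])

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# usage: first layer of a context model over a single-channel input
layer = MaskedConv2d(1, 64, kernel_size=5, padding=2, mask_type="A")
out = layer(torch.randn(1, 1, 16, 16))   # shape: (1, 64, 16, 16)
```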
  • the additional condition can be image label information or high-level representations.
  • the auto-encoder is trained for dimensionality reduction and consists of two parts: encoding and decoding.
  • the encoding part converts the high-dimension input signal to low-dimension representations, typically with reduced spatial size but a greater number of channels.
  • the decoding part attempts to recover the high-dimension input from the low-dimension representation.
  • Auto-encoder enables automated learning of representations and eliminates the need of hand-crafted features, which is also believed to be one of the most important advantages of neural networks.
  • Fig. 2 illustrates a typical transform coding scheme 200.
  • the original image x is transformed by the analysis network g a to achieve the latent representation y.
  • the latent representation y is quantized and compressed into bits.
  • the number of bits R is used to measure the coding rate.
  • the quantized latent representation is then inversely transformed by a synthesis network g s to obtain the reconstructed image
  • the distortion is calculated in a perceptual space by transforming x and the reconstruction with the function g p .
  • the prototype auto-encoder for image compression is in Fig. 2, which can be regarded as a transform coding strategy.
  • the synthesis network will inversely transform the quantized latent representation back to obtain the reconstructed image
  • the framework is trained with the rate-distortion loss function, i.e., L = D + λ·R, where D is the distortion between x and the reconstruction, R is the rate calculated or estimated from the quantized representation, and λ is the Lagrange multiplier. It should be noted that D can be calculated in either pixel domain or perceptual domain. All existing research works follow this prototype and the difference might only be the network structure or loss function.
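  • The rate-distortion objective above can be sketched in a few lines of PyTorch-style Python; the loss form L = D + λ·R and the use of latent likelihoods to estimate R follow the description above, while the function name, arguments and default λ are illustrative assumptions rather than the patent's exact training setup.

```python
import torch

def rate_distortion_loss(x, x_hat, likelihoods, lam=0.01):
    """Joint rate-distortion cost L = D + lambda * R (illustrative sketch)."""
    distortion = torch.mean((x - x_hat) ** 2)             # D: pixel-domain MSE
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]     # batch * height * width
    rate = -torch.log2(likelihoods).sum() / num_pixels    # R: estimated bits per pixel
    return distortion + lam * rate
```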
  • RNNs and CNNs are the most widely used architectures.
  • a general framework is proposed for variable rate image compression using RNN. They use binary quantization to generate codes and do not consider rate during training.
  • the framework indeed provides a scalable coding functionality, where RNN with convolutional and deconvolution layers is reported to perform decently.
  • An improved version is proposed by upgrading the encoder with a neural network similar to PixelRNN to compress the binary codes. The performance is reportedly better than JPEG on the Kodak image dataset using the MS-SSIM evaluation metric. The RNN-based solution is further improved by introducing hidden-state priming.
  • an SSIM-weighted loss function is also designed, and spatially adaptive bitrates mechanism is enabled. They achieve better results than BPG on Kodak image dataset using MS-SSIM as evaluation metric.
  • a general framework for rate-distortion optimized image compression is proposed. They use multiary quantization to generate integer codes and consider the rate during training, i.e. the loss is the joint rate-distortion cost, where the distortion can be MSE or others. They add random uniform noise to simulate the quantization during training and use the differential entropy of the noisy codes as a proxy for the rate. They use generalized divisive normalization (GDN) as the network structure, which consists of a linear mapping followed by a nonlinear parametric normalization. An improved version of GDN is proposed, where they use 3 convolutional layers each followed by a down-sampling layer and a GDN layer as the forward transform.
  • the inverse transform is implemented with a subnet h s attempting to decode from the quantized side information to the standard deviation of the quantized latent, which will be further used during the arithmetic coding of the quantized latent.
  • their method is slightly worse than BPG in terms of PSNR.
  • the structures in the residue space are further exploited by introducing an autoregressive model to estimate both the standard deviation and the mean.
  • the Gaussian mixture model is proposed afterwards to further remove redundancy in the residue.
  • the reported performance is on par with VVC on the Kodak image set using PSNR as evaluation metric.
  • the encoder subnetwork (section 2.3.2) transforms the image vector x using a parametric analysis transform into a latent representation y, which is then quantized to form the quantized latent. Because the quantized latent is discrete-valued, it can be losslessly compressed using entropy coding techniques such as arithmetic coding and transmitted as a sequence of bits.
  • Fig. 3 illustrates example latent representations of an image, including an image 300 from the Kodak dataset, a visualization 310 of the latent representation y of the image 300, the standard deviations σ 320 of the latent, and the latents y 330 after a hyper prior network is introduced.
  • a hyper prior network includes a hyper encoder and decoder.
  • Fig. 4 is a schematic diagram 400 illustrating an example network architecture of an autoencoder implementing a hyperprior model.
  • the upper side shows an image autoencoder network, and the lower side corresponds to the hyperprior subnetwork.
  • the analysis and synthesis transforms are denoted as g a and g s .
  • Q represents quantization
  • AE, AD represent arithmetic encoder and arithmetic decoder, respectively.
  • the hyperprior model includes two subnetworks, hyper encoder (denoted with h a ) and hyper decoder (denoted with h s ) .
  • the hyper prior model generates a quantized hyper latent, which comprises information related to the probability distribution of the samples of the quantized latent. The quantized hyper latent is included in the bitstream and transmitted to the receiver (decoder) along with the quantized latent.
  • the upper side of the models is the encoder g a and decoder g s as discussed above.
  • the lower side is the additional hyper encoder h a and hyper decoder h s networks that are used to obtain the quantized hyper latent.
  • the encoder subjects the input image x to g a , yielding the responses y with spatially varying standard deviations.
  • the responses y are fed into h a , summarizing the distribution of standard deviations in z.
  • z is then quantized, compressed, and transmitted as side information.
  • the encoder uses the quantized vector, i.e. the quantized hyper latent, to estimate σ, the spatial distribution of standard deviations, and uses σ to compress and transmit the quantized image representation.
  • the decoder first recovers the quantized hyper latent from the compressed signal.
  • the decoder uses h s to obtain σ, which provides the decoder with the correct probability estimates to successfully recover the quantized latent as well.
  • the decoder then feeds the quantized latent into g s to obtain the reconstructed image.
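  • The encoder/decoder interplay just described can be summarized with the following Python sketch; g_a, g_s, h_a and h_s follow the notation of Fig. 4, while the entropy_coder interface and the torch.round quantization are hypothetical stand-ins for the actual quantization and arithmetic coding modules.

```python
import torch

def encode(x, g_a, h_a, h_s, entropy_coder):
    y = g_a(x)                          # analysis transform -> latent y
    z = h_a(y)                          # hyper encoder summarizes the scales
    z_hat = torch.round(z)              # quantized side information
    bits2 = entropy_coder.encode(z_hat, prior="factorized")
    sigma = h_s(z_hat)                  # spatial distribution of standard deviations
    y_hat = torch.round(y)              # quantized latent
    bits1 = entropy_coder.encode(y_hat, prior=("gaussian", sigma))
    return bits1, bits2

def decode(bits1, bits2, g_s, h_s, entropy_coder):
    z_hat = entropy_coder.decode(bits2, prior="factorized")
    sigma = h_s(z_hat)                  # same sigma as derived at the encoder
    y_hat = entropy_coder.decode(bits1, prior=("gaussian", sigma))
    return g_s(y_hat)                   # synthesis transform -> reconstructed image
```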
  • the spatial redundancies of the quantized latent are reduced.
  • the latents y 330 in Fig. 3 correspond to the quantized latent when the hyper encoder/decoder are used. Compared to the standard deviations ⁇ 320, the spatial redundancies are significantly reduced as the samples of the quantized latent are less correlated.
  • hyperprior model improves the modelling of the probability distribution of the quantized latent
  • additional improvement can be obtained by utilizing an autoregressive model that predicts quantized latents from their causal context (Context Model) .
  • auto-regressive means that the output of a process is later used as input to it.
  • the context model subnetwork generates one sample of a latent, which is later used as input to obtain the next sample.
  • Fig. 5 is a schematic diagram 500 illustrating an example combined model configured to jointly optimize a context model along with a hyperprior and the autoencoder.
  • the combined model jointly optimizes an autoregressive component that estimates the probability distributions of latents from their causal context (Context Model) along with a hyperprior and the underlying autoencoder.
  • Real-valued latent representations are quantized (Q) to create quantized latents and quantized hyper-latents which are compressed into a bitstream using an arithmetic encoder (AE) and decompressed by an arithmetic decoder (AD) .
  • the dashed region corresponds to the components that are executed by the receiver (e.g., a decoder) to recover an image from a compressed bitstream.
  • a joint architecture is used where both hyperprior model subnetwork (hyper encoder and hyper decoder) and a context model subnetwork are utilized.
  • the hyperprior and the context model are combined to learn a probabilistic model over quantized latents which is then used for entropy coding.
  • the outputs of context subnetwork and hyper decoder subnetwork are combined by the subnetwork called Entropy Parameters, which generates the mean ⁇ and scale (or variance) ⁇ parameters for a Gaussian probability model.
  • the gaussian probability model is then used to encode the samples of the quantized latents into bitstream with the help of the arithmetic encoder (AE) module.
  • AE arithmetic encoder
  • the gaussian probability model is utilized to obtain the quantized latents from the bitstream by arithmetic decoder (AD) module.
  • the latent samples are modeled as a gaussian distribution or gaussian mixture models (but not limited to these).
  • the context model and hyper prior are jointly used to estimate the probability distribution of the latent samples. Since a gaussian distribution can be defined by a mean and a variance (aka sigma or scale) , the joint model is used to estimate the mean and variance (denoted as ⁇ and ⁇ ) .
  • Fig. 5 corresponds to a state-of-the-art compression method. In this section and the next, the encoding and decoding processes will be described separately.
  • Fig. 6 illustrates an example encoding process 600.
  • the input image is first processed with an encoder subnetwork.
  • the encoder transforms the input image into a transformed representation called latent, denoted by y.
  • y is then input to a quantizer block, denoted by Q, to obtain the quantized latent. The quantized latent is then converted to a bitstream (bits1) using an arithmetic encoding module (denoted AE).
  • the arithmetic encoding block converts each sample of the quantized latent into the bitstream (bits1) one by one, in a sequential order.
  • the modules hyper encoder, context, hyper decoder, and entropy parameters subnetworks are used to estimate the probability distributions of the samples of the quantized latent
  • the latent y is input to hyper encoder, which outputs the hyper latent (denoted by z) .
  • the hyper latent is then quantized and a second bitstream (bits2) is generated using arithmetic encoding (AE) module.
  • the factorized entropy module generates the probability distribution that is used to encode the quantized hyper latent into the bitstream.
  • the quantized hyper latent includes information about the probability distribution of the quantized latent
  • the Entropy Parameters subnetwork generates the probability distribution estimations, that are used to encode the quantized latent
  • the information that is generated by the Entropy Parameters typically includes a mean μ and a scale (or variance) σ parameter, which are together used to obtain a gaussian probability distribution.
  • a gaussian distribution of a random variable x is defined as f(x) = (1 / (σ·√(2π))) · exp(−(x − μ)² / (2σ²)), wherein the parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ is its standard deviation (loosely also referred to as the variance or scale).
  • the mean and the variance need to be determined.
  • the entropy parameters module is used to estimate the mean and the variance values.
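  • As an illustration of how a mean and a scale yield the probability used by the arithmetic coder, the sketch below integrates the Gaussian density over the rounding interval of an integer latent value; this particular discretization is an assumption made for illustration, not a formula quoted from the patent.

```python
import math

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def quantized_latent_pmf(v, mu, sigma):
    # probability mass of the integer value v under a rounded Gaussian
    return gaussian_cdf(v + 0.5, mu, sigma) - gaussian_cdf(v - 0.5, mu, sigma)

p = quantized_latent_pmf(0, mu=0.3, sigma=1.2)
bits = -math.log2(p)   # contribution of this sample to the bitstream length
print(f"p = {p:.4f}, about {bits:.2f} bits")
```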
  • the subnetwork hyper decoder generates part of the information that is used by the entropy parameters subnetwork; the other part of the information is generated by the autoregressive module called context module.
  • the context module generates information about the probability distribution of a sample of the quantized latent, using the samples that are already encoded by the arithmetic encoding (AE) module.
  • the quantized latent is typically a matrix composed of many samples. The samples can be indicated using indices, such as a row and column index (and possibly a channel index), depending on the dimensions of the matrix.
  • the samples are encoded by AE one by one, typically using a raster scan order. In a raster scan order the rows of a matrix are processed from top to bottom, wherein the samples in a row are processed from left to right.
  • in such a scenario (wherein the raster scan order is used by the AE to encode the samples into the bitstream), the context module generates the information pertaining to a sample using the samples encoded before, in raster scan order.
  • the information generated by the context module and the hyper decoder are combined by the entropy parameters module to generate the probability distributions that are used to encode the quantized latent into bitstream (bits1) .
  • the first and the second bitstreams are transmitted to the decoder as a result of the encoding process.
  • The analysis transform that converts the input image into the latent representation is also called an encoder (or auto-encoder).
  • Fig. 7 illustrates an example decoding process 700.
  • the decoder first receives the first bitstream (bits1) and the second bitstream (bits2) that are generated by a corresponding encoder.
  • the bits2 is first decoded by the arithmetic decoding (AD) module by utilizing the probability distributions generated by the factorized entropy subnetwork.
  • the factorized entropy module typically generates the probability distributions using a predetermined template, for example using predetermined mean and variance values in the case of gaussian distribution.
  • the output of the arithmetic decoding process of the bits2 is the quantized hyper latent.
  • the AD process reverses the AE process that was applied in the encoder.
  • the processes of AE and AD are lossless, meaning that the quantized hyper latent that was generated by the encoder can be reconstructed at the decoder without any change.
  • after the quantized hyper latent is obtained, it is processed by the hyper decoder, whose output is fed to the entropy parameters module.
  • the three subnetworks, context, hyper decoder and entropy parameters that are employed in the decoder are identical to the ones in the encoder. Therefore the exact same probability distributions can be obtained in the decoder (as in encoder) , which is essential for reconstructing the quantized latent without any loss. As a result, the identical version of the quantized latent that was obtained in the encoder can be obtained in the decoder.
  • the arithmetic decoding module decodes the samples of the quantized latent one by one from the bitstream bits1.
  • The synthesis transform that converts the quantized latent into the reconstructed image is also called a decoder (or auto-decoder).
  • Fig. 8 shows an example diagram 800 of an encoder and a decoder with wavelet-based transform.
  • first, the input image is converted from an RGB color format to a YUV color format. This conversion process is optional, and can be missing in other implementations. If, however, such a conversion is applied to the input image, a back conversion (from YUV to RGB) is also applied before the output image is generated.
  • there are 2 additional post-processing modules (post-process 1 and 2) shown in Fig. 8. These modules are also optional, hence might be missing in other implementations.
  • the core of an encoder with wavelet-based transform is composed of a wavelet-based forward transform, a quantization module and an entropy coding module. After these 3 modules are applied to the input image, the bitstream is generated.
  • the core of the decoding process is composed of entropy decoding, a de-quantization process and an inverse wavelet-based transform operation. The decoding process converts the bitstream into the output image.
  • the encoding and decoding processes are depicted in Fig. 8.
  • Fig. 9 illustrates a diagram 900 of an output of a forward wavelet-based transform.
  • after the wavelet-based forward transform is applied to the input image, the image is split into its frequency components in the output of the transform.
  • the output of a 2-dimensional forward wavelet transform (depicted as iWave forward module in Fig. 8) might take the form depicted in Fig. 9.
  • the input of the transform is an image of a castle.
  • an output with 7 distinct regions is obtained. The number of distinct regions depends on the specific implementation of the transform and might differ from 7. Potential numbers of regions are 4, 7, 10, 13, ...
  • the input image is transformed into 7 regions with 3 small images and 4 even smaller images.
  • the transformation is based on the frequency components
  • the small image at the bottom right quarter comprises the high frequency components in both horizontal and vertical directions.
  • the smallest image at the top-left corner on the other hand comprises the lowest frequency components both in the vertical and horizontal directions.
  • the small image on the top-right quarter comprises the high frequency components in the horizontal direction and low frequency components in the vertical direction.
  • Fig. 10 illustrates a partitioning 1000 of the output of a forward wavelet-based transform.
  • Fig. 10 depicts a possible splitting of the latent representation after the 2D forward transform.
  • the latent representation are the samples (latent samples, or quantized latent samples) that are obtained after the 2D forward transform.
  • the latent samples are divided into 7 sections above, denoted as HH1, LH1, HL1, LL2, HL2, LH2 and HH2.
  • the HH1 describes that the section comprises high frequency components in the vertical direction, high frequency components in the horizontal direction and that the splitting depth is 1.
  • HL2 describes that the section comprises low frequency components in the vertical direction, high frequency components in the horizontal direction and that the splitting depth is 2.
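  • The seven-region layout named above can be reproduced with a simple two-level Haar decomposition; the numpy sketch below is illustrative only, since the actual forward transform (iWave forward in Fig. 8) may use a different, and possibly learned, wavelet.

```python
import numpy as np

def haar_step(img):
    a = img[0::2, 0::2].astype(np.float64)   # even rows, even columns
    b = img[0::2, 1::2].astype(np.float64)   # even rows, odd columns
    c = img[1::2, 0::2].astype(np.float64)   # odd rows, even columns
    d = img[1::2, 1::2].astype(np.float64)
    ll = (a + b + c + d) / 4.0   # low frequency in both directions
    hl = (a - b + c - d) / 4.0   # high frequency horizontally, low vertically
    lh = (a + b - c - d) / 4.0   # low frequency horizontally, high vertically
    hh = (a - b - c + d) / 4.0   # high frequency in both directions
    return ll, hl, lh, hh

image = np.random.rand(64, 64)
ll1, hl1, lh1, hh1 = haar_step(image)   # depth-1 split: 4 regions
ll2, hl2, lh2, hh2 = haar_step(ll1)     # split LL1 again: 7 regions in total
print(hh1.shape, hh2.shape)             # (32, 32) (16, 16)
```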
  • after the latent samples are obtained at the encoder by the forward wavelet transform, they are transmitted to the decoder by using entropy coding.
  • entropy decoding is applied to obtain the latent samples, which are then inverse transformed (by using iWave inverse module in Fig. 8) to obtain the reconstructed image.
  • neural image compression serves as the foundation of intra compression in neural network-based video compression, thus the development of neural network-based video compression technology comes later than that of neural network-based image compression, but it needs far more effort to solve the challenges due to its complexity.
  • starting from 2017, a few researchers have been working on neural network-based video compression schemes.
  • video compression needs efficient methods to remove inter-picture redundancy.
  • Inter-picture prediction is then a crucial step in these works.
  • Motion estimation and compensation is widely adopted but was not implemented by trained neural networks until recently.
  • Random access requires that decoding can be started from any point of the sequence; it typically divides the entire sequence into multiple individual segments, and each segment can be decoded independently.
  • the low-latency case aims at reducing decoding time; thereby usually only temporally previous frames can be used as reference frames to decode subsequent frames.
  • the very first method first splits the video sequence frames into blocks, and each block chooses one of two available modes, either intra coding or inter coding. If intra coding is selected, there is an associated auto-encoder to compress the block. If inter coding is selected, motion estimation and compensation are performed with traditional methods and a trained neural network is used for residue compression. The outputs of auto-encoders are directly quantized and coded by the Huffman method.
  • Another neural network-based video coding scheme is proposed with PixelMotionCNN.
  • the frames are compressed in the temporal order, and each frame is split into blocks which are compressed in the raster scan order. Each frame will firstly be extrapolated with the preceding two reconstructed frames.
  • the extrapolated frame along with the context of the current block are fed into the PixelMotionCNN to derive a latent representation. Then the residues are compressed by the variable rate image scheme. This scheme performs on par with H.264.
  • the real-sense end-to-end neural network-based video compression framework is proposed, in which all the modules are implemented with neural networks.
  • the scheme accepts current frame and the prior reconstructed frame as inputs and optical flow will be derived with a pre-trained neural network as the motion information.
  • the reference frame will be warped with the motion information, followed by a neural network generating the motion-compensated frame.
  • the residues and the motion information are compressed with two separate neural auto-encoders.
  • the whole framework is trained with a single rate-distortion loss function. It achieves better performance than H.264.
  • An advanced neural network-based video compression scheme is then proposed. It inherits and extends traditional video coding schemes with neural networks with the following major features: 1) using only one auto-encoder to compress motion information and residues; 2) motion compensation with multiple frames and multiple optical flows; 3) an on-line state is learned and propagated through the following frames over time. This scheme achieves better performance in MS-SSIM than HEVC reference software.
  • An extended end-to-end neural network-based video compression framework is proposed.
  • multiple frames are used as references. It is thereby able to provide more accurate prediction of current frame by using multiple reference frames and associated motion information.
  • motion field prediction is deployed to remove motion redundancy along temporal channel.
  • Postprocessing networks are also introduced in this work to remove reconstruction artifacts from previous processes. The performance is better than H.265 by a noticeable margin in terms of both PSNR and MS-SSIM.
  • the scale-space flow is then proposed to replace the commonly used optical flow by adding a scale parameter. It reportedly achieves better performance than H.264.
  • a multi-resolution representation is proposed for optical flows.
  • the motion estimation network produces multiple optical flows with different resolutions and lets the network learn which one to choose under the loss function.
  • the performance is slightly improved and better than H.265.
  • the earliest method involves a neural network-based video compression scheme with frame interpolation.
  • the key frames are first compressed with a neural image compressor and the remaining frames are compressed in a hierarchical order. They perform motion compensation in the perceptual domain, i.e. deriving the feature maps at multiple spatial scales of the original frame and using motion to warp the feature maps, which will be used for the image compressor.
  • the method is reportedly on par with H.264.
  • interpolation-based video compression is proposed later on, wherein the interpolation model combines motion information compression and image synthesis, and the same auto-encoder is used for image and residual.
  • a neural network-based video compression method based on variational auto-encoders with a deterministic encoder is proposed afterwards.
  • the model consists of an auto-encoder and an auto-regressive prior. Different from previous methods, this method accepts a group of pictures (GOP) as input and incorporates a 3D autoregressive prior by taking into account the temporal correlation while coding the latent representations. It provides comparable performance to H.265.
  • a grayscale digital image can be represented by x ∈ D^(m×n), where D is the set of values of a pixel, m is the image height and n is the image width. For example, D = {0, 1, ..., 255} is a common setting, and in this case |D| = 256 = 2^8, thus the pixel can be represented by an 8-bit integer.
  • An uncompressed grayscale digital image has 8 bits-per-pixel (bpp), while the compressed bits are definitely fewer.
  • a color image is typically represented in multiple channels to record the color information.
  • an image can be denoted by x ∈ D^(m×n×3) with three separate channels storing Red, Green and Blue information. Similar to the 8-bit grayscale image, an uncompressed 8-bit RGB image has 24 bpp.
  • Digital images/videos can be represented in different color spaces.
  • the neural network-based video compression schemes are mostly developed in RGB color space while the traditional codecs typically use YUV color space to represent the video sequences.
  • in the YUV color space, an image is decomposed into three channels, namely Y, Cb and Cr, where Y is the luminance component and Cb/Cr are the chroma components.
  • the benefit comes from the fact that Cb and Cr are typically down-sampled to achieve pre-compression, since the human vision system is less sensitive to the chroma components.
  • a color video sequence is composed of multiple color images, called frames, to record scenes at different timestamps.
  • the quality of the reconstructed image compared with the original image can be measured by the peak signal-to-noise ratio (PSNR): PSNR = 10 · log10( (max(D))² / MSE ), where MSE is the mean-squared-error between the original and the reconstructed image and max(D) is the maximal value in D, e.g., 255 for 8-bit images.
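  • A minimal numpy check of the PSNR definition above for 8-bit content (maximum value 255); details such as data-range handling are assumptions here and vary between metric implementations.

```python
import numpy as np

def psnr(x, x_hat, max_value=255.0):
    mse = np.mean((x.astype(np.float64) - x_hat.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

original = np.random.randint(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(original + np.random.normal(0, 5, original.shape), 0, 255)
print(f"PSNR: {psnr(original, noisy):.2f} dB")
```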
  • besides PSNR, the structural similarity (SSIM) and multi-scale SSIM (MS-SSIM) metrics are also commonly used to measure reconstruction quality.
  • the wavelet transform is also a powerful tool for multiresolution time-frequency analysis, which has been extensively implemented in traditional codecs, such as JPEG-2000. Its effectiveness has also been validated on other image processing tasks such as denoising, image enhancement and fusion. Also, in recent works, some wavelet-based learned image frameworks show great potential in lossy compression with additional support for lossless coding.
  • the target of embodiments is to discover the advantages of wavelet and non-linear transformation and combine them to improve the performance of the transformation module of the learned image codec, thereby improving the compression efficiency.
  • the core of embodiments governs the combination of the wavelet and non-linear transformation, and the corresponding processing of latent samples.
  • An image decoding method comprising the steps of:
  • Fig. 11 illustrates an example diagram 1100 of the decoding process.
  • the quantized latent samples are obtained based on a bitstream and by using a latent sample reconstruction module.
  • the latent sample reconstruction module might comprise multiple neural-network based subnetworks.
  • An example latent sample reconstruction module is depicted in Fig. 11, wherein the latent sample reconstruction module comprises Prediction Fusion Model, Context Model, Hyper Decoder and Hyper Scale Decoder, which are neural network based processing units.
  • the latent sample reconstruction module also comprises two entropy decoder units (Entropy Decoder 1 and 2) , which are responsible for converting the bitstream (series of ones and zeros) into symbols such as integer or floating point numbers.
  • the output of the Prediction Fusion Model is prediction samples (i.e. the samples that are expected to be as close to the quantized latent samples as possible)
  • the output of the Entropy decoder 2 is the residual samples (i.e. the samples that represent the difference between the prediction sample and quantized latent sample) . Therefore the units Prediction Fusion Model, Context Model and Hyper Decoder are responsible for obtaining the prediction samples, whereas the Hyper Scale Decoder unit is responsible for obtaining the probability parameters that are used in the decoding of Bitstream 2.
  • the latent sample reconstruction module depicted in Fig. 11 is an example, and the disclosure is applicable for any type of latent sample reconstruction module, a unit that is used to obtain quantized latent samples.
  • the quantized latent representation might be a tensor or a matrix comprising the quantized latent samples.
  • an inverse channel network is applied to adjust the channel numbers of the quantized latent representation.
  • the output of the inverse channel network is fed into inverse non-linear trans network to obtain reconstructed wavelet sub-bands.
  • reconstructed wavelet sub-bands are used in the wavelet inverse transform network to obtain the reconstructed image.
  • the reconstructed image might be displayed on a display device with or without further processing.
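  • The overall decoding path of Fig. 11 can be summarized with the following Python sketch, in which every argument is a hypothetical stand-in for the corresponding module described above rather than the actual network.

```python
def decode_image(bitstream1, bitstream2,
                 latent_sample_reconstruction,   # entropy decoding + prediction fusion
                 inverse_channel_network,        # adjusts the channel numbers
                 inverse_nonlinear_transform,    # produces the wavelet sub-bands
                 inverse_wavelet_transform):     # sub-bands -> reconstructed image
    quantized_latent = latent_sample_reconstruction(bitstream1, bitstream2)
    equal_sized_subbands = inverse_channel_network(quantized_latent)
    subband_groups = inverse_nonlinear_transform(equal_sized_subbands)
    return inverse_wavelet_transform(subband_groups)
```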
  • an inverse channel network might be used to adjust the channel number of the input; an example inverse channel network 1200 is shown in Fig. 12.
  • the inverse channel network may contain BatchNorm, ReLU and convolution layers.
  • the input of the inverse channel network is the quantized latent samples, which might be a tensor with 3 dimensions or 4 dimensions, wherein 2 of the dimensions might be spatial dimensions such as width or height.
  • a third dimension might be channel number, or the number of feature maps.
  • the output of the inverse channel network is a tensor which comprises the samples of equal-sized reconstructed sub-bands.
  • the inverse channel network may contain other activation operations.
  • leaky relu might be used inside the network.
  • sigmoid might be used inside the network.
  • the inverse channel network may contain more layers.
  • the BatchNorm may be removed or replaced with other Normalization operation.
  • the order of the operation may be different.
  • the stride of the convolution is restricted to 1 to ensure that the output spatial size is identical with the input.
  • the structure of the network might be a bottleneck; in this case the stride of the convolution can be any number.
  • the input feature might firstly be downsampled and then be upsampled to ensure that the output has the same spatial resolution.
  • the input feature might firstly be upsampled and then be downsampled to ensure that the output has the same spatial resolution.
  • the inverse channel network might comprise a fully connected neural network layer.
  • the inverse channel network might reduce the number of channels of the input, i.e. the number of channels of the output might be lower than that of the input.
  • the inverse channel network might increase the number of channels of the input, i.e. the number of channels of the output might be larger than that of the input.
  • the output of the inverse channel network is a tensor which comprises the samples of equal-sized reconstructed sub-bands. These sub-bands are then divided into more than one tensor, each tensor corresponding to a different sub-band.
  • the division operation is along the channel dimension; for example, the first N channels of the output might correspond to the first sub-band, and the following M channels might correspond to the second sub-band.
  • the function of the inverse channel network is to achieve allocation of the information comprised in the quantized latent samples to each sub-band.
  • the information that is comprised in the quantized latent samples is typically mixed; one of the channels (feature maps) might comprise information from multiple sub-bands. Therefore the function of the inverse channel network is to separate this information into the corresponding sub-bands, as sketched below.
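  • A minimal PyTorch sketch of such an inverse channel network: BatchNorm, ReLU and a stride-1 convolution adjust the channel count, and the result is split along the channel dimension so that the first N channels feed the first sub-band, the next M the second, and so on. The layer choices and channel counts are assumptions for illustration, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class InverseChannelNetwork(nn.Module):
    def __init__(self, in_channels=192, subband_channels=(16, 16, 16, 16)):
        super().__init__()
        self.subband_channels = list(subband_channels)
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            # stride 1 keeps the output spatial size identical to the input
            nn.Conv2d(in_channels, sum(subband_channels), kernel_size=3, padding=1),
        )

    def forward(self, y_hat):
        mixed = self.net(y_hat)
        # allocate the mixed information to the individual sub-bands
        return torch.split(mixed, self.subband_channels, dim=1)

subbands = InverseChannelNetwork()(torch.randn(1, 192, 16, 16))
print([s.shape for s in subbands])   # four tensors of shape (1, 16, 16, 16)
```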
  • the inverse channel network is not present.
  • the information comprised in the quantized latent samples might already be separated, i.e. the first N channels might comprise information related to the first sub-band only, the following M channels might comprise information related to the second sub-band only, etc.
  • Fig. 13 illustrates an example of the inverse non-linear transformation
  • in Fig. 13, RB Nx means residual blocks with N-times upsampling.
  • the input feature may be processed with 4 individual branches to obtain 4 groups of information (e.g. sub-band groups).
  • Each group might comprise one or more sub-bands with the same spatial resolution, whereas the spatial resolution (the size in spatial dimensions) might be different between different groups.
  • the input of the inverse non-linear transformation comprises equal-sized sub-bands, and its output comprises one or more groups of sub-bands.
  • the function of the inverse non-linear transformation is adjusting of the spatial dimensions.
  • the number of subband groups might be greater than one. One part of the input is adjusted to match the size of the first subband group, and a second part of the input is adjusted to match the size of the second subband group.
  • the inverse non-linear transformation might comprise at least two branches, at least one of which comprises a neural network layer that performs resizing.
  • a resizing layer increases or reduces the spatial size of the input.
  • the inverse non-linear transformation might comprise a single branch, which comprises a neural network layer that performs resizing.
  • the resizing layer might be an upsampling layer or a downsampling layer.
  • the said upsampling layer might be implemented using a deconvolution layer or a convolution layer.
  • the said downsampling layer might be implemented using a deconvolution layer or a convolution layer.
  • the output of each branch might be a group of subbands, each such group having a different spatial size.
  • when the inverse non-linear transformation comprises more than one branch, a different resizing ratio might be used for each branch to obtain different sub-bands.
  • the input feature might be split into 4 parts along the channel dimension, and the different parts might be used to reconstruct the respective sub-bands.
  • the channel number of all parts might be the same.
  • alternatively, each part might have a different channel number, depending on the importance of the subbands.
  • all of the above branches might use the same input feature as the input to obtain different sub-bands.
  • the resizing operation might be upscaling, or upsampling.
  • the resizing operation might be downscaling, or downsampling.
  • the resizing operation might be implemented using a neural network.
  • each branch may comprise convolution layer, deconvolution layer or activation functions.
  • the subband with the lowest resolution (smallest spatial size) might be obtained without application of resizing.
  • the inverse non-linear transformation may contain an activation operation.
  • leaky ReLU or ReLU (Rectified Linear Unit) might be used.
  • sigmoid operation or tanh (hyperbolic tangent function) might be used inside the network.
  • Embodiments of the disclosure are not limited to any specific activation function, or the presence of activation function.
  • the inverse non-linear transform network may contain convolution or deconvolution layers.
  • the inverse non-linear transformation may contain more than 1 layer.
  • the inverse non-linear transform network may contain normalization operations.
  • the order of the operation may be different.
  • the resizing operation might be implemented by a deconvolution layer or a convolution layer with a stride equal to N.
  • N is equal to 2.
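  • The branch structure described above can be sketched as follows in PyTorch, with each branch applying as many stride-2 deconvolutions as its target sub-band group requires; the branch count, channel numbers and activation are assumptions for illustration rather than the patent's exact configuration.

```python
import torch
import torch.nn as nn

class InverseNonLinearTransform(nn.Module):
    """Branch i upsamples its part of the input by a factor of 2**scales[i]."""
    def __init__(self, channels=16, scales=(0, 0, 1, 2), out_channels=(1, 3, 3, 3)):
        super().__init__()
        self.branches = nn.ModuleList()
        for scale, out_ch in zip(scales, out_channels):
            layers = []
            for _ in range(scale):
                # a deconvolution with stride N = 2 doubles the spatial size
                layers += [nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
                           nn.LeakyReLU(inplace=True)]
            layers.append(nn.Conv2d(channels, out_ch, 3, padding=1))
            self.branches.append(nn.Sequential(*layers))

    def forward(self, parts):
        # 'parts' are the chunks of the input feature split along the channel dimension
        return [branch(part) for branch, part in zip(self.branches, parts)]

parts = [torch.randn(1, 16, 8, 8) for _ in range(4)]
groups = InverseNonLinearTransform()(parts)
print([g.shape[-1] for g in groups])   # spatial widths: [8, 8, 16, 32]
```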
  • Wavelet transform network might be implemented as:
  • the parameters of the wavelet transform might be fixed parameters, the same as in the traditional wavelet transformation.
  • the parameters of the wavelet transformation can be learnable (i.e., neural network-based), so that the wavelet transformation can be jointly optimized with the whole network.
  • the wavelet transform (or inverse wavelet transform) takes the reconstructed subbands as input and applies the transformation to convert them into the reconstructed image.
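  • One way a wavelet transform can be made learnable is the lifting scheme with small neural predict/update filters, sketched below for the 1-D case; this is an assumption made for illustration (in the spirit of iWave-style learned transforms) and not the specific wavelet network of this disclosure. Perfect reconstruction holds for any choice of the predict and update filters, which is what allows them to be trained jointly with the rest of the codec.

```python
import torch
import torch.nn as nn

class LiftingStep(nn.Module):
    """One 1-D lifting step; perfect reconstruction holds for any predict/update."""
    def __init__(self, channels=1):
        super().__init__()
        self.predict = nn.Conv1d(channels, channels, 3, padding=1)
        self.update = nn.Conv1d(channels, channels, 3, padding=1)

    def forward(self, x):                     # x: (batch, channels, length), length even
        even, odd = x[..., 0::2], x[..., 1::2]
        high = odd - self.predict(even)       # detail (high-frequency) band
        low = even + self.update(high)        # approximation (low-frequency) band
        return low, high

    def inverse(self, low, high):
        even = low - self.update(high)
        odd = high + self.predict(even)
        out = torch.zeros(low.shape[0], low.shape[1], low.shape[2] * 2,
                          dtype=low.dtype, device=low.device)
        out[..., 0::2], out[..., 1::2] = even, odd
        return out

step = LiftingStep()
x = torch.randn(1, 1, 64)
low, high = step(x)
print(torch.allclose(step.inverse(low, high), x, atol=1e-6))   # True
```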
  • Another example implementation of the wavelet transform is described in section 2.3.7.
  • the names latent sample reconstruction module, inverse channel network, inverse non-linear trans network might be different.
  • the encoding process follows the inverse of the decoding process for obtaining the quantized latent samples. The difference is that, after the quantized latent samples are obtained, the samples are included in a bitstream using an entropy encoding method.
  • an image encoding method comprising the steps of:
  • the quantized latent samples might be obtained by a latent sample reconstruction module. Or they might be obtained by a quantization process.
  • the disclosure provides a combination method to the wavelet transformation and non-linear transformation to further boost the coding efficiency of the transformation operation in learned image compression.
  • An image or video decoding method comprising the steps of:
  • An image or video encoding method comprising the steps of:
  • the quantized latent samples might be obtained by a latent sample reconstruction module. Or they might be obtained by a quantization process.
  • Fig. 15 illustrates a flowchart of a method 1500 for visual data processing in accordance with embodiments of the present disclosure.
  • the method 1500 is implemented for a conversion between a current visual unit of visual data and a bitstream of the visual data.
  • At block 1510, at least one wavelet subband representation of the current visual unit is determined based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation.
  • a wavelet subband representation is associated with a subband of a wavelet of the current visual unit.
  • a subband of the wavelet represents a frequency subband or a frequency component of the wavelet.
  • the conversion is performed based on the at least one wavelet subband representation.
  • the method 1500 enables the combination of wavelet transform and the non-linear transformation, and the corresponding visual data processing. For example, the processing of latent samples can be combined.
  • the combination of wavelet and non-linear transformation can improve the performance of the transformation module of the learned image codec, thereby improving the compression efficiency.
  • the conversion comprises decoding the current visual unit from the bitstream.
  • determining the at least one wavelet subband representation comprises: determining a quantized sample of the current visual unit based on the bitstream and a neural network-based latent sample reconstruction module, the quantized sample being associated with a plurality of channels; determining an intermediate representation of the quantized sample, the intermediate representation being associated with at least one wavelet subband, a wavelet subband being associated with at least one channel of the plurality of channels; and determining the at least one wavelet subband representation based on the intermediate representation and the neural network-based inverse non-linear transformation.
  • the neural network-based inverse non-linear transformation may be performed by an inverse non-linear transformation network which may be a neural network.
  • the term “quantized sample” may be a quantized latent sample, which may be referred to as
  • intermediate representation may be denoted as and the wavelet subband representation may be denoted as
  • performing the conversion comprises: determining a reconstruction of the current visual unit based on the at least one wavelet subband representation and an inverse wavelet transformation.
  • the inverse wavelet transformation may be performed by an inverse wavelet module or inverse wavelet network.
  • the quantized latent samples are obtained based on a bitstream and by using a latent sample reconstruction module.
  • the latent sample reconstruction module might comprise multiple neural-network based subnetworks.
  • An example latent sample reconstruction module is depicted in Fig. 11, wherein the latent sample reconstruction module comprises Prediction Fusion Model, Context Model, Hyper Decoder and Hyper Scale Decoder, which are neural network-based processing units.
  • the latent sample reconstruction module also comprises two entropy decoder units (Entropy Decoder 1 and 2) , which are responsible for converting the bitstream (series of ones and zeros) into symbols such as integer or floating point numbers.
  • the output of the Prediction Fusion Model is the prediction samples (i.e., the samples that are expected to be as close to the quantized latent samples as possible) .
  • the output of Entropy Decoder 2 is the residual samples (i.e., the samples that represent the difference between the prediction sample and the quantized latent sample) . Therefore, the units Prediction Fusion Model, Context Model and Hyper Decoder are responsible for obtaining the prediction samples, whereas the Hyper Scale Decoder unit is responsible for obtaining the probability parameters that are used in the decoding of Bitstream 2.
  • the latent sample reconstruction module depicted in Fig. 11 is an example, and the disclosure is applicable for any type of latent sample reconstruction module, a unit that is used to obtain quantized latent samples.
  • the quantized latent representation might be a tensor or a matrix comprising the quantized latent samples.
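  • As a purely illustrative sketch (the names and tensor shapes below are hypothetical, not the actual module interfaces), the quantized latent samples may be recovered as the sum of the prediction samples and the entropy-decoded residual samples described above:

```python
import torch

def reconstruct_quantized_latent(prediction, residual):
    """Quantized latent samples = prediction samples + residual samples (sketch).

    In the example of Fig. 11, `prediction` would be produced by the Prediction
    Fusion Model (fed by the Context Model and the Hyper Decoder), and `residual`
    by Entropy Decoder 2, whose probability parameters come from the Hyper Scale
    Decoder. Both are assumed here to be tensors of shape (B, C, H, W).
    """
    return prediction + residual

prediction = torch.zeros(1, 192, 16, 16)                    # hypothetical channel number
residual = torch.randint(-2, 3, (1, 192, 16, 16)).float()   # toy entropy-decoded residual
y_hat = reconstruct_quantized_latent(prediction, residual)
print(y_hat.shape)  # torch.Size([1, 192, 16, 16])
```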
  • an inverse channel network is applied to adjust the channel numbers of the quantized latent representation.
  • the output of the inverse channel network is fed into the inverse non-linear transformation network to obtain reconstructed wavelet sub-bands.
  • the reconstructed wavelet sub-bands are used in the inverse wavelet transform network to obtain the reconstructed image.
  • the reconstructed image might be displayed on a display device with or without further processing.
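  • Putting the preceding bullets together, the following is a high-level sketch of the decoding path; the module names, the toy instantiation and the tensor shapes are illustrative placeholders rather than a normative interface.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Illustrative decoding path: bitstream -> quantized latents -> reconstructed image."""

    def __init__(self, latent_recon, inv_channel_net, inv_nonlinear_net, inv_wavelet):
        super().__init__()
        self.latent_recon = latent_recon            # latent sample reconstruction module
        self.inv_channel_net = inv_channel_net      # adjusts channel numbers
        self.inv_nonlinear_net = inv_nonlinear_net  # inverse non-linear transformation network
        self.inv_wavelet = inv_wavelet              # inverse wavelet transform network

    def forward(self, bitstream):
        y_hat = self.latent_recon(bitstream)        # quantized latent samples
        t = self.inv_channel_net(y_hat)             # intermediate representation
        subbands = self.inv_nonlinear_net(t)        # reconstructed wavelet sub-bands
        return self.inv_wavelet(subbands)           # reconstructed image

# Toy instantiation: every stage is an identity, and the "bitstream" is stood in for
# by an already-decoded tensor, just to show how the stages are chained.
dec = Decoder(nn.Identity(), nn.Identity(), nn.Identity(), nn.Identity())
print(dec(torch.randn(1, 4, 32, 32)).shape)  # torch.Size([1, 4, 32, 32])
```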
  • the intermediate representation of the quantized sample is determined based on an inverse channel network for adjusting a channel number of the quantized sample.
  • An example of the inverse channel network is shown in Fig. 12.
  • the inverse channel network comprises at least one of: a batch normalization unit such as BatchNorm, a rectified linear unit (ReLU) , or at least one convolutional layer.
  • the quantized sample comprises at least one of: a first dimension of a first spatial dimension of the current visual unit, a second dimension of a second spatial dimension of the current visual unit, a third dimension of the number of the plurality of channels, or a fourth dimension of the number of feature maps associated with the current visual unit.
  • an activation operation of the inverse channel network comprises at least one of: a leaky rectified linear unit (ReLU) , or a sigmoid operation.
  • an activation layer is absent from the inverse channel network.
  • a stride of a convolution of the inverse channel network is a first predefined value, such as 1.
  • the inverse channel network is a bottleneck network, and a stride of a convolution of the inverse channel network is a second value, which may be any number.
  • the bottleneck inverse channel network performs a downsampling operation and an upsampling operation.
  • the inverse channel network comprises a fully connected neural network layer.
  • the number of channels of the intermediate representation is less than or larger than the number of channels of the quantized sample.
  • the intermediate representation comprises samples of equal sized reconstructed wavelet subbands.
  • a first number of channels of the intermediate representation are associated with a first wavelet subband, and a second number of channels of the intermediate representation are associated with a second wavelet subband.
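  • One possible realization of the inverse channel network described in the bullets above is sketched below (hypothetical channel numbers): a batch normalization unit and a ReLU followed by a stride-1 convolution that maps the channels of the quantized latent representation to the number of channels expected for the reconstructed wavelet subbands. A bottleneck variant with a different stride, or a fully connected layer, could be substituted, as also noted above.

```python
import torch
import torch.nn as nn

class InverseChannelNet(nn.Module):
    """Adjusts the channel number of the quantized latent representation (sketch)."""

    def __init__(self, in_ch=192, out_ch=48):  # hypothetical channel numbers
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_ch),                              # batch normalization unit
            nn.ReLU(inplace=True),                              # rectified linear unit
            nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1),  # convolution with stride 1
        )

    def forward(self, y_hat):
        return self.net(y_hat)

y_hat = torch.randn(1, 192, 16, 16)   # quantized latent representation
t = InverseChannelNet()(y_hat)        # intermediate representation
print(t.shape)  # torch.Size([1, 48, 16, 16]) - fewer channels than the input
```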
  • a first network for the inverse non-linear transformation comprises a plurality of branches associated with a plurality of groups of wavelet subbands, a group of wavelet subbands being associated with a same spatial resolution.
  • the first network may also be referred to as an inverse non-linear transformation network.
  • An example of the inverse non-linear transformation network is shown in Fig. 13.
  • the at least one wavelet subband representation comprises a plurality of groups of subband representations corresponding to the plurality of groups of wavelet subbands.
  • the first network adjusts a spatial dimension of the intermediate representation, a first part of the intermediate representation is adjusted to match a first size of a first group in the plurality of groups of wavelet subbands, and a second part of the intermediate representation is adjusted to match a second size of a second group in the plurality of groups.
  • the plurality of branches comprises a first branch including a first neural network layer for resizing.
  • a first network for the inverse non-linear transformation comprises a single branch including a first neural network layer for resizing.
  • the first neural network layer increases or reduces a spatial size of the intermediate representation.
  • the first neural network layer comprises an upsampling layer or a downsampling layer.
  • the first neural network layer comprises a deconvolution layer or a convolution layer.
  • a first spatial size of a first group in the plurality of groups of wavelet subbands is different from a second spatial size of a second group in the plurality of groups of wavelet subbands.
  • a first resizing ratio is used for a first group in the plurality of groups of wavelet subbands, and a second resizing ratio different from the first resizing ratio is used for a second group in the plurality of groups of wavelet subbands.
  • the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and wherein determining the plurality of wavelet subband representations comprises: determining a plurality of partitioned representations of the intermediate representation; and determining the plurality of wavelet subband representations based on the plurality of partitioned representations.
  • the plurality of partitioned representations is associated with a same channel number.
  • a first partitioned representation of the plurality of partitioned representations is associated with a first channel number, and a second partitioned representation of the plurality of partitioned representations is associated with a second channel number different from the first channel number.
  • the first and second channel numbers are determined based on importance of wavelet subbands associated with the first and second partitioned representations.
  • the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and the plurality of wavelet subband representations is determined based on the intermediate representation.
  • a branch of the plurality of branches comprises at least one of: a convolution layer, a deconvolution layer, an activation operation, a rectified linear unit (ReLU) , a leaky ReLU, a sigmoid operation, a hyperbolic tangent operation, or a normalization operation.
  • a wavelet subband representation of a wavelet subband with a lowest resolution among the plurality of groups of wavelet subbands is determined without resizing.
  • a stride of the deconvolution layer or the convolution layer in the branch is a third number, such as 2.
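  • Combining the bullets above on branches, partitioning and resizing ratios, the sketch below (with a hypothetical channel split and subband grouping) partitions the intermediate representation along the channel dimension and resizes each partition with a different ratio; the group with the lowest resolution is produced without resizing, and the deconvolution strides of 2 and 4 are examples only.

```python
import torch
import torch.nn as nn

class MultiBranchInverseNonLinear(nn.Module):
    """Inverse non-linear transformation with one branch per subband group (sketch)."""

    def __init__(self, channels=(16, 16, 16), out_ch=4):
        super().__init__()
        self.channels = list(channels)  # channel split; may differ per subband importance
        self.branches = nn.ModuleList([
            nn.Conv2d(channels[0], out_ch, kernel_size=3, padding=1),          # lowest resolution: no resizing
            nn.ConvTranspose2d(channels[1], out_ch, kernel_size=2, stride=2),  # resizing ratio 2
            nn.ConvTranspose2d(channels[2], out_ch, kernel_size=4, stride=4),  # resizing ratio 4
        ])
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, t):
        parts = torch.split(t, self.channels, dim=1)   # partitioned representations
        return [self.act(b(p)) for b, p in zip(self.branches, parts)]

t = torch.randn(1, 48, 16, 16)                         # intermediate representation
groups = MultiBranchInverseNonLinear()(t)
print([tuple(g.shape) for g in groups])
# [(1, 4, 16, 16), (1, 4, 32, 32), (1, 4, 64, 64)]
```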
  • the conversion comprises encoding the current visual unit into the bitstream.
  • determining the at least one wavelet subband representation comprises: determining wavelet subband information of the current visual unit based on a wavelet transformation; and determining the at least one wavelet subband representation based on the wavelet subband information and the non-linear transformation.
  • performing the conversion comprises: determining a sample of the current visual unit based on the at least one wavelet subband representation and a channel network; determining a quantized sample of the current visual unit by quantizing the sample of the current visual unit; and determining the bitstream at least based on the quantized sample and an entropy coding module.
  • the wavelet transformation comprises at least one fixed parameter.
  • the wavelet transformation may include at least one fixed parameter, the same as in a traditional wavelet transformation.
  • the wavelet transformation comprises at least one learnable parameter.
  • the at least one learnable parameter is updated together with a further neural network for the conversion.
  • the parameters of the wavelet transformation may be learnable (for example, neural network based), so that the wavelet transformation may be jointly optimized with the whole network.
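  • On the encoding side, the following sketch mirrors the bullets above: wavelet transformation, non-linear transformation, channel network, quantization of the latent samples, and an entropy-coding stage. The stages and the dummy entropy encoder are placeholders chosen for illustration only.

```python
import torch
import torch.nn as nn

def encode(image, wavelet, nonlinear_net, channel_net, entropy_encoder):
    """Illustrative encoding path corresponding to the decoding process (sketch)."""
    subbands = wavelet(image)        # wavelet subband information
    rep = nonlinear_net(subbands)    # wavelet subband representation(s)
    y = channel_net(rep)             # latent samples
    y_hat = torch.round(y)           # quantized latent samples
    return entropy_encoder(y_hat)    # bitstream produced by an entropy coding module

# Toy instantiation: identity stages and a dummy "entropy encoder" that just packs bytes.
bitstream = encode(torch.randn(1, 1, 64, 64),
                   nn.Identity(), nn.Identity(), nn.Identity(),
                   lambda y: y.to(torch.int8).numpy().tobytes())
print(len(bitstream))  # number of bytes in the toy bitstream
```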
  • a non-transitory computer-readable recording medium stores a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing.
  • at least one wavelet subband representation of a current visual unit of the visual data is determined based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation.
  • a wavelet subband representation is associated with a subband of a wavelet of the current visual unit.
  • the bitstream is generated based on the at least one wavelet subband representation.
  • a method for storing a bitstream of a video is provided.
  • at least one wavelet subband representation of a current visual unit of the visual data is determined based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation.
  • a wavelet subband representation is associated with a subband of a wavelet of the current visual unit.
  • the bitstream is generated based on the at least one wavelet subband representation.
  • the bitstream is stored in a non-transitory computer-readable recording medium.
  • Clause 1 A method for visual data processing comprising: determining, for a conversion between a current visual unit of visual data and a bitstream of the visual data, at least one wavelet subband representation of the current visual unit based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and performing the conversion based on the at least one wavelet subband representation.
  • Clause 2 The method of clause 1, wherein the conversion comprises decoding the current visual unit from the bitstream.
  • Clause 3 The method of clause 2, wherein determining the at least one wavelet subband representation comprises: determining a quantized sample of the current visual unit based on the bitstream and a neural network-based latent sample reconstruction module, the quantized sample being associated with a plurality of channels; determining an intermediate representation of the quantized sample, the intermediate representation being associated with at least one wavelet subband, a wavelet subband being associated with at least one channel of the plurality of channels; and determining the at least one wavelet subband representation based on the intermediate representation and the neural network-based inverse non-linear transformation.
  • Clause 4 The method of clause 2 or clause 3, wherein performing the conversion comprises: determining a reconstruction of the current visual unit based on the at least one wavelet subband representation and an inverse wavelet transformation.
  • Clause 5 The method of clause 3, wherein the intermediate representation of the quantized sample is determined based on an inverse channel network for adjusting a channel number of the quantized sample.
  • Clause 6 The method of clause 5, wherein the inverse channel network comprises at least one of: a batch normalization unit, a rectified linear unit (ReLU) , or at least one convolutional layer.
  • Clause 7 The method of clause 5 or clause 6, wherein the quantized sample comprises at least one of: a first dimension of a first spatial dimension of the current visual unit, a second dimension of a second spatial dimension of the current visual unit, a third dimension of the number of the plurality of channels, or a fourth dimension of the number of feature maps associated with the current visual unit.
  • Clause 8 The method of any of clauses 5 to 7, wherein an activation operation of the inverse channel network comprises at least one of: a leaky rectified linear unit (ReLU) , or a sigmoid operation.
  • Clause 9 The method of any of clauses 5 to 8, wherein an activation layer is absent from the inverse channel network.
  • Clause 10 The method of any of clauses 5 to 9, wherein a stride of a convolution of the inverse channel network is a first predefined value.
  • Clause 11 The method of any of clauses 5 to 9, wherein the inverse channel network is bottleneck, and a stride of a convolution of the inverse channel network is a second value.
  • Clause 12 The method of clause 11, wherein the bottleneck inverse channel network performs a downsampling operation and an upsampling operation.
  • Clause 13 The method of any of clauses 5 to 12, wherein the inverse channel network comprises a fully connected neural network layer.
  • Clause 14 The method of any of clauses 5 to 13, wherein the number of channels of the intermediate representation is less than or larger than the number of channels of the quantized sample.
  • Clause 15 The method of any of clauses 3 to 7, wherein the intermediate representation comprises samples of equal sized reconstructed wavelet subbands.
  • Clause 16 The method of any of clause 3 to 15, wherein a first number of channels of the intermediate representation are associated with a first wavelet subband, and a second number of channels of the intermediate representation are associated with a second wavelet subband.
  • Clause 17 The method of any of clauses 3 to 16, wherein a first network for the inverse non-linear transformation comprises a plurality of branches associated with a plurality of groups of wavelet subbands, a group of wavelet subbands being associated with a same spatial resolution.
  • Clause 18 The method of clause 17, wherein the at least one wavelet subband representation comprises a plurality of groups of subband representations corresponding to the plurality of groups of wavelet subbands.
  • Clause 19 The method of clause 17 or clause 18, wherein the first network adjusts a spatial dimension of the intermediate representation, a first part of the intermediate representation is adjusted to match a first size of a first group in the plurality of groups of wavelet subbands, and a second part of the intermediate representation is adjusted to match a second size of a second group in the plurality of groups.
  • Clause 20 The method of any of clauses 17 to 19, wherein the plurality of branches comprises a first branch including a first neural network layer for resizing.
  • Clause 21 The method of any of clauses 3 to 16, wherein a first network for the inverse non-linear transformation comprises a single branch including a first neural network layer for resizing.
  • Clause 22 The method of clause 20 or clause 21, wherein the first neural network layer increases or reduces a spatial size of the intermediate representation.
  • Clause 23 The method of any of clauses 20 to 22, wherein the first neural network layer comprises an upsampling layer or a downsampling layer.
  • Clause 24 The method of any of clauses 20 to 23, wherein the first neural network layer comprises a deconvolution layer or a convolution layer.
  • Clause 25 The method of any of clauses 17 to 19, wherein a first spatial size of a first group in the plurality of groups of wavelet subbands is different from a second spatial size of a second group in the plurality of groups of wavelet subbands.
  • Clause 26 The method of any of clauses 17 to 19, wherein a first resizing ratio is used for a first group in the plurality of groups of wavelet subbands, and a second resizing ratio different from the first resizing ratio is used for a second group in the plurality of groups of wavelet subbands.
  • Clause 27 The method of any of clauses 17 to 19, wherein the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and wherein determining the plurality of wavelet subband representations comprises: determining a plurality of partitioned representations of the intermediate representation; and determining the plurality of wavelet subband representations based on the plurality of partitioned representations.
  • Clause 28 The method of clause 27, wherein the plurality of partitioned representations is associated with a same channel number.
  • Clause 29 The method of clause 27, wherein a first partitioned representation of the plurality of partitioned representations is associated with a first channel number, and a second partitioned representation of the plurality of partitioned representations is associated with a second channel number different from the first channel number.
  • Clause 30 The method of clause 29, wherein the first and second channel numbers are determined based on importance of wavelet subbands associated with the first and second partitioned representations.
  • Clause 31 The method of any of clauses 17 to 19, wherein the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and the plurality of wavelet subband representations is determined based on the intermediate representation.
  • Clause 32 The method of any of clauses 17 to 31, wherein a branch of the plurality of branches comprises at least one of: a convolution layer, a deconvolution layer, an activation operation, a rectified linear unit (ReLU) , a leaky ReLU, a sigmoid operation, a hyperbolic tangent operation, or a normalization operation.
  • Clause 33 The method of clause 32, wherein a wavelet subband representation of a wavelet subband with a lowest resolution among the plurality of groups of wavelet subbands is determined without resizing.
  • Clause 34 The method of clause 32, wherein a stride of the deconvolution layer or the convolution layer in the branch is a third number.
  • Clause 35 The method of clause 1, wherein the conversion comprises encoding the current visual unit into the bitstream.
  • Clause 36 The method of clause 35, wherein determining the at least one wavelet subband representation comprises: determining wavelet subband information of the current visual unit based on a wavelet transformation; and determining the at least one wavelet subband representation based on the wavelet subband information and the non-linear transformation.
  • Clause 37 The method of clause 35 or clause 36, wherein performing the conversion comprises: determining a sample of the current visual unit based on the at least one wavelet subband representation and a channel network; determining a quantized sample of the current visual unit by quantizing the sample of the current visual unit; and determining the bitstream at least based on the quantized sample and an entropy coding module.
  • Clause 38 The method of clause 36, wherein the wavelet transformation comprises at least one fixed parameter.
  • Clause 39 The method of clause 36, wherein the wavelet transformation comprises at least one learnable parameter, and the at least one learnable parameter is updated together with a further neural network for the conversion.
  • Clause 40 An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of clauses 1-39.
  • Clause 41 A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of clauses 1-39.
  • Clause 42 A non-transitory computer-readable recording medium storing a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises: determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and generating the bitstream based on the at least one wavelet subband representation.
  • Clause 43 A method for storing a bitstream of visual data comprising: determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; generating the bitstream based on the at least one wavelet subband representation; and storing the bitstream in a non-transitory computer-readable recording medium.
  • Fig. 16 illustrates a block diagram of a computing device 1600 in which various embodiments of the present disclosure can be implemented.
  • the computing device 1600 may be implemented as or included in the source device 110 (or the data encoder 114) or the destination device 120 (or the data decoder 124) .
  • computing device 1600 shown in Fig. 16 is merely for purpose of illustration, without suggesting any limitation to the functions and scopes of the embodiments of the present disclosure in any manner.
  • the computing device 1600 may be implemented as a general-purpose computing device.
  • the computing device 1600 may at least comprise one or more processors or processing units 1610, a memory 1620, a storage unit 1630, one or more communication units 1640, one or more input devices 1650, and one or more output devices 1660.
  • the computing device 1600 may be implemented as any user terminal or server terminal having the computing capability.
  • the server terminal may be a server, a large-scale computing device or the like that is provided by a service provider.
  • the user terminal may for example be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA) , audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof.
  • the computing device 1600 can support any type of interface to a user (such as “wearable” circuitry and the like) .
  • the processing unit 1610 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 1620. In a multi-processor system, multiple processing units execute computer executable instructions in parallel so as to improve the parallel processing capability of the computing device 1600.
  • the processing unit 1610 may also be referred to as a central processing unit (CPU) , a microprocessor, a controller or a microcontroller.
  • the computing device 1600 typically includes various computer storage media. Such media can be any media accessible by the computing device 1600, including, but not limited to, volatile and non-volatile media, or detachable and non-detachable media.
  • the memory 1620 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM) ) , a non-volatile memory (such as a Read-Only Memory (ROM) , Electrically Erasable Programmable Read-Only Memory (EEPROM) , or a flash memory) , or any combination thereof.
  • the storage unit 1630 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, flash memory drive, magnetic disk or other media, which can be used for storing information and/or data and can be accessed in the computing device 1600.
  • the computing device 1600 may further include additional detachable/non-detachable, volatile/non-volatile memory medium.
  • a magnetic disk drive for reading from and/or writing into a detachable and non-volatile magnetic disk
  • an optical disk drive for reading from and/or writing into a detachable non-volatile optical disk.
  • each drive may be connected to a bus (not shown) via one or more data medium interfaces.
  • the communication unit 1640 communicates with a further computing device via the communication medium.
  • the functions of the components in the computing device 1600 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 1600 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes.
  • the input device 1650 may be one or more of a variety of input devices, such as a mouse, keyboard, tracking ball, voice-input device, and the like.
  • the output device 1660 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like.
  • the computing device 1600 can further communicate with one or more external devices (not shown) such as the storage devices and display device, with one or more devices enabling the user to interact with the computing device 1600, or any devices (such as a network card, a modem and the like) enabling the computing device 1600 to communicate with one or more other computing devices, if required. Such communication can be performed via input/output (I/O) interfaces (not shown) .
  • some or all components of the computing device 1600 may also be arranged in cloud computing architecture.
  • the components may be provided remotely and work together to implement the functionalities described in the present disclosure.
  • cloud computing provides computing, software, data access and storage services, which do not require end users to be aware of the physical locations or configurations of the systems or hardware providing these services.
  • the cloud computing provides the services via a wide area network (such as Internet) using suitable protocols.
  • a cloud computing provider provides applications over the wide area network, which can be accessed through a web browser or any other computing components.
  • the software or components of the cloud computing architecture and corresponding data may be stored on a server at a remote position.
  • the computing resources in the cloud computing environment may be merged or distributed at locations in a remote data center.
  • Cloud computing infrastructures may provide the services through a shared data center, though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.
  • the computing device 1600 may be used to implement visual data encoding/decoding in embodiments of the present disclosure.
  • the memory 1620 may include one or more visual data coding modules 1625 having one or more program instructions. These modules are accessible and executable by the processing unit 1610 to perform the functionalities of the various embodiments described herein.
  • the input device 1650 may receive visual data as an input 1670 to be encoded.
  • the visual data may be processed, for example, by the visual data coding module 1625, to generate an encoded bitstream.
  • the encoded bitstream may be provided via the output device 1660 as an output 1680.
  • the input device 1650 may receive an encoded bitstream as the input 1670.
  • the encoded bitstream may be processed, for example, by the visual data coding module 1625, to generate decoded visual data.
  • the decoded visual data may be provided via the output device 1660 as the output 1680.

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present disclosure provide a solution for visual data processing. In a method for visual data processing, for a conversion between a current visual unit of visual data and a bitstream of the visual data, at least one wavelet subband representation of the current visual unit is determined based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation. A wavelet subband representation is associated with a subband of a wavelet of the current visual unit. The conversion is performed based on the at least one wavelet subband representation.

Description

METHOD, APPARATUS, AND MEDIUM FOR VISUAL DATA PROCESSING
FIELDS
Embodiments of the present disclosure relates generally to visual data processing techniques, and more particularly, to wavelet transformation and non-linear transformation.
BACKGROUND
Image/video compression is an essential technique to reduce the costs of image/video transmission and storage in a lossless or lossy manner. Image/video compression techniques can be divided into two branches, the classical video coding methods and the neural-network-based video compression methods. Classical video coding schemes adopt transform-based solutions, in which researchers have exploited statistical dependency in the latent variables (e.g., wavelet coefficients) by carefully hand-engineering entropy codes modeling the dependencies in the quantized regime. Neural network-based video compression is in two flavors, neural network-based coding tools and end-to-end neural network-based video compression. The former is embedded into existing classical video codecs as coding tools and only serves as part of the framework, while the latter is a separate framework developed based on neural networks without depending on classical video codecs. Coding efficiency of image/video coding is generally expected to be further improved.
SUMMARY
Embodiments of the present disclosure provide a solution for visual data processing.
In a first aspect, a method for visual data processing is proposed. The method comprises: determining, for a conversion between a current visual unit of visual data and a bitstream of the visual data, at least one wavelet subband representation of the current visual unit based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the  current visual unit; and performing the conversion based on the at least one wavelet subband representation. The method in accordance with the first aspect of the present disclosure combines the wavelet and the non-linear transformation, and the corresponding processing of the visual data. In this way, the performance of learned visual data processing such as neural network-based visual data processing can be improved.
In a second aspect, an apparatus for visual data processing is proposed. The apparatus comprises a processor and a non-transitory memory with instructions thereon. The instructions upon execution by the processor, cause the processor to perform a method in accordance with the first aspect of the present disclosure.
In a third aspect, a non-transitory computer-readable storage medium is proposed. The non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure.
In a fourth aspect, another non-transitory computer-readable recording medium is proposed. The non-transitory computer-readable recording medium stores a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing. The method comprises: determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and generating the bitstream based on the at least one wavelet subband representation.
In a fifth aspect, a method for storing a bitstream of visual data is proposed. The method comprises: determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; generating the bitstream based on the at least one wavelet subband representation; and storing the bitstream in a non-transitory computer-readable recording medium.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not  intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
Through the following detailed description with reference to the accompanying drawings, the above and other objectives, features, and advantages of example embodiments of the present disclosure will become more apparent. In the example embodiments of the present disclosure, the same reference numerals usually refer to the same components.
Fig. 1 illustrates a block diagram that illustrates an example visual data coding system, in accordance with some embodiments of the present disclosure;
Fig. 2 illustrates a typical transform coding scheme;
Fig. 3 illustrates an image from the Kodak dataset and different representations of the image;
Fig. 4 illustrates a network architecture of an autoencoder implementing the hyperprior model;
Fig. 5 illustrates a block diagram of a combined model;
Fig. 6 illustrates an encoding process of the combined model;
Fig. 7 illustrates a decoding process of the combined model;
Fig. 8 illustrates an encoder and a decoder with wavelet-based transform;
Fig. 9 illustrates an output of a forward wavelet-based transform;
Fig. 10 illustrates partitioning of the output of a forward wavelet-based transform;
Fig. 11 illustrates an example of the decoding process in accordance with embodiments of the present disclosure;
Fig. 12 illustrates an example of the inverse channel network in accordance with embodiments of the present disclosure;
Fig. 13 illustrates an example of the inverse non-linear transformation in accordance with embodiments of the present disclosure;
Fig. 14 illustrates another example of the inverse non-linear transformation in accordance with embodiments of the present disclosure;
Fig. 15 illustrates a flowchart of a method for visual data processing in accordance with embodiments of the present disclosure;
Fig. 16 illustrates a block diagram of a computing device in which various embodiments of the present disclosure can be implemented.
Throughout the drawings, the same or similar reference numerals usually refer to the same or similar elements.
DETAILED DESCRIPTION
Principle of the present disclosure will now be described with reference to some embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and help those skilled in the art to understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this disclosure belongs.
References in the present disclosure to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could  be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
As used herein, the term “visual data” may refer to image data or video data. The term “visual data processing” may refer to image processing or video processing. The term “visual data coding” may refer to image coding or video coding. The term “coding visual data” may refer to “encoding visual data (for example, encoding visual data into a bitstream) ” and/or “decoding visual data (for example, decoding visual data from a bitstream) ” .
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” , “comprising” , “has” , “having” , “includes” and/or “including” , when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
Example Environment
Fig. 1 is a block diagram that illustrates an example visual data coding system 100 that may utilize the techniques of this disclosure. As shown, the visual data coding system 100 may include a source device 110 and a destination device 120. The source device 110 can be also referred to as a data encoding device or a visual data encoding device, and the destination device 120 can be also referred to as a data decoding device or a visual data decoding device. In operation, the source device 110 can be configured to generate encoded visual data and the destination device 120 can be configured to decode the encoded visual data generated by the source device 110. The source device 110 may include a data source 112, a data encoder 114, and an input/output (I/O) interface 116.
The data source 112 may include a source such as a data capture device. Examples of the data capture device include, but are not limited to, an interface to receive data from a data provider, a computer graphics system for generating data, and/or a combination thereof.
The data may comprise one or more pictures of a video or one or more images. The data encoder 114 encodes the data from the data source 112 to generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the data. The bitstream may include coded pictures and associated data. The coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator and/or a transmitter. The encoded data may be transmitted directly to destination device 120 via the I/O interface 116 through the network 130A. The encoded data may also be stored onto a storage medium/server 130B for access by destination device 120.
The destination device 120 may include an I/O interface 126, a data decoder 124, and a display device 122. The I/O interface 126 may include a receiver and/or a modem. The I/O interface 126 may acquire encoded data from the source device 110 or the storage medium/server 130B. The data decoder 124 may decode the encoded data. The display device 122 may display the decoded data to a user. The display device 122 may be integrated with the destination device 120, or may be external to the destination device 120 which is configured to interface with an external display device.
The data encoder 114 and the data decoder 124 may operate according to a data coding standard, such as video coding standard or still picture coding standard and other current and/or further standards.
Some exemplary embodiments of the present disclosure will be described in detail hereinafter. It should be understood that section headings are used in the present document to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section. Furthermore, while certain embodiments are described with reference to Versatile Video Coding or other specific data codecs, the disclosed techniques are applicable to other coding technologies also. Furthermore, while some embodiments describe coding steps in detail, it will be understood that corresponding decoding steps that undo the coding will be implemented by a decoder. Furthermore, the term data processing encompasses data coding or compression, data decoding or decompression and data transcoding in which data are represented from one compressed format into another compressed format or at a different compressed bitrate.
1. Brief Summary
A neural network-based image and video compression method is proposed, wherein a wavelet transform and non-linear transformation are combined to boost the coding efficiency. The disclosure first targets the problem to process subbands of wavelet transformation, and then further boosts the performance through the non-linear transformation.
2. Introduction
The past decade has witnessed the rapid development of deep learning in a variety of areas, especially in computer vision and image processing. Inspired by the great success of deep learning technology in computer vision areas, many researchers have shifted their attention from conventional image/video compression techniques to neural image/video compression technologies. The neural network was originally proposed with the interdisciplinary research of neuroscience and mathematics. It has shown strong capabilities in the context of non-linear transform and classification. Neural network-based image/video compression technology has gained significant progress during the past half decade. It is reported that the latest neural network-based image compression algorithm achieves comparable R-D performance with Versatile Video Coding (VVC) , the latest video coding standard developed by Joint Video Experts Team (JVET) with experts from MPEG and VCEG. With the performance of neural image compression continually being improved, neural network-based video compression has become an actively developing research area. However, neural network-based video coding still remains in its infancy due to the inherent difficulty of the problem.
2.1. Image/Video compression
Image/video compression usually refers to the computing technology that compresses image/video into binary code to facilitate storage and transmission. The binary codes may or may not support losslessly reconstructing the original image/video, termed lossless compression and lossy compression, respectively. Most of the efforts are devoted to lossy compression since lossless reconstruction is not necessary in most scenarios. Usually the performance of image/video compression algorithms is evaluated from two aspects, i.e., compression ratio and reconstruction quality. Compression ratio is directly related to the number of binary codes, the fewer the better; reconstruction quality is measured by comparing the reconstructed image/video with the original image/video, the higher the better.
Image/video compression techniques can be divided into two branches, the classical video coding methods and the neural-network-based video compression methods. Classical video coding schemes adopt transform-based solutions, in which researchers have exploited statistical dependency in the latent variables (e.g., DCT or wavelet coefficients) by carefully hand-engineering entropy codes modeling the dependencies in the quantized regime. Neural network-based video compression is in two flavors, neural network-based coding tools and end-to-end neural network-based video compression. The former is embedded into existing classical video codecs as coding tools and only serves as part of the framework, while the latter is a separate framework developed based on neural networks without depending on classical video codecs. In the last three decades, a series of classical video coding standards have been developed to accommodate the increasing visual content. The international standardization organization ISO/IEC has two expert groups, namely the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG) , and ITU-T also has its own Video Coding Experts Group (VCEG) which is for standardization of image/video coding technology. The influential video coding standards published by these organizations include JPEG, JPEG 2000, H. 262, H. 264/AVC and H. 265/HEVC. After H. 265/HEVC, the Joint Video Experts Team (JVET) formed by MPEG and VCEG has been working on a new video coding standard Versatile Video Coding (VVC) . The first version of VVC was released in July 2020. An average of 50% bitrate reduction is reported by VVC under the same visual quality compared with HEVC.
Neural network-based image/video compression is not new since there were a number of researchers working on neural network-based image coding. But the network architectures were relatively shallow, and the performance was not satisfactory. Benefiting from the abundance of data and the support of powerful computing resources, neural network-based methods are better exploited in a variety of applications. At present, neural network-based image/video compression has shown promising improvements and confirmed its feasibility. Nevertheless, this technology is still far from mature and a lot of challenges need to be addressed.
2.2. Neural networks
Neural networks, also known as artificial neural networks (ANN) , are the computational models used in machine learning technology which are usually composed of multiple processing layers and each layer is composed of multiple simple but non-linear basic computational units. One benefit of such deep networks is believed to be the capacity for processing data with multiple levels of abstraction and converting data into different kinds of representations. Note that these representations are not manually designed; instead, the deep network including the processing layers is learned from massive data using a general machine learning procedure. Deep learning eliminates the necessity of handcrafted representations, and thus is regarded useful especially for processing natively unstructured data, such as acoustic and visual signal, whilst processing such data has been a longstanding difficulty in the artificial intelligence field.
2.3. Neural networks for image compression
Existing neural networks for image compression methods can be classified in two categories, i.e., pixel probability modeling and auto-encoder. The former one belongs to the predictive coding strategy, while the latter one is the transform-based solution. Sometimes, these two  methods are combined together in literature.
2.3.1. Pixel Probability Modeling
According to Shannon’s information theory, the optimal method for lossless coding can reach the minimal coding rate -log2 p(x) where p(x) is the probability of symbol x. A number of lossless coding methods were developed in literature and among them arithmetic coding is believed to be among the optimal ones. Given a probability distribution p(x) , arithmetic coding ensures that the coding rate is as close as possible to its theoretical limit -log2 p(x) without considering the rounding error. Therefore, the remaining problem is how to determine the probability, which is however very challenging for natural image/video due to the curse of dimensionality.
Following the predictive coding strategy, one way to model p (x) is to predict pixel probabilities one by one in a raster scan order based on previous observations, where x is an image.
p(x) = p(x_1) p(x_2 | x_1) … p(x_i | x_1, …, x_{i-1}) … p(x_{m×n} | x_1, …, x_{m×n-1})    (1)
where m and n are the height and width of the image, respectively. The previous observations are also known as the context of the current pixel. When the image is large, it can be difficult to estimate the conditional probability; therefore a simplified method is to limit the range of its context.
p(x) = p(x_1) p(x_2 | x_1) … p(x_i | x_{i-k}, …, x_{i-1}) … p(x_{m×n} | x_{m×n-k}, …, x_{m×n-1})    (2)
where k is a pre-defined constant controlling the range of the context.
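By way of a small numerical sketch of equations (1) and (2), the ideal code length of an image under such an autoregressive model with a context limited to the previous k pixels is the sum of -log2 of the conditional probabilities; the toy binary model and its probabilities below are illustrative assumptions only.

```python
import math

def ideal_code_length(pixels, cond_prob, k):
    """Sum of -log2 p(x_i | x_{i-k}, ..., x_{i-1}) over all pixels, in bits."""
    total = 0.0
    for i, x in enumerate(pixels):
        context = pixels[max(0, i - k):i]   # limited range of the context, as in equation (2)
        total += -math.log2(cond_prob(x, context))
    return total

# Toy binary model: probability 0.9 that a pixel repeats the previous one.
def cond_prob(x, context):
    if not context:
        return 0.5
    return 0.9 if x == context[-1] else 0.1

pixels = [0, 0, 0, 1, 1, 0, 0, 0]
print(ideal_code_length(pixels, cond_prob, k=3))  # ideal rate in bits for this toy sequence
```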
It should be noted that the condition may also take the sample values of other color components into consideration. For example, when coding the RGB color component, R sample is dependent on previously coded pixels (including R/G/B samples) , the current G sample may be coded according to previously coded pixels and the current R sample, while for coding the current B sample, the previously coded pixels and the current R and G samples may also be taken into consideration.
Neural networks were originally introduced for computer vision tasks and have been proven to be effective in regression and classification problems. Therefore, it has been proposed to use neural networks to estimate the probability of p(x_i) given its context x_1, x_2, …, x_{i-1}. The pixel probability is proposed for binary images, i.e., x_i ∈ {-1, +1}. The neural autoregressive distribution estimator (NADE) is designed for pixel probability modeling, where the estimator is a feed-forward network with a single hidden layer. A similar work is proposed, where the feed-forward network also has connections skipping the hidden layer, and the parameters are also shared. NADE is extended to a real-valued model RNADE, where the probability p(x_i | x_1, …, x_{i-1}) is derived with a mixture of Gaussians. Their feed-forward network also has a single hidden layer, but the hidden layer is rescaled to avoid saturation and uses the rectified linear unit (ReLU) instead of sigmoid. NADE and RNADE are further improved by reorganizing the order of the pixels and by using deeper neural networks.
Designing advanced neural networks plays an important role in improving pixel probability modeling. Multi-dimensional long short-term memory (LSTM) is proposed, which is working together with mixtures of conditional Gaussian scale mixtures for probability modeling. LSTM is a special kind of recurrent neural networks (RNNs) and is proven to be good at modeling sequential data. The spatial variant of LSTM is used for images. Several different neural networks are studied, including RNNs and CNNs namely PixelRNN and PixelCNN, respectively. In PixelRNN, two variants of LSTM, called row LSTM and diagonal BiLSTM are proposed, where the latter is specifically designed for images. PixelRNN incorporates residual connections to help train deep neural networks with up to 12 layers. In PixelCNN, masked convolutions are used to suit for the shape of the context. Comparing with previous works, PixelRNN and PixelCNN are more dedicated to natural images: they consider pixels as discrete values (e.g., 0, 1, …, 255) and predict a multinomial distribution over the discrete values; they deal with color images in RGB color space; they work well on large-scale image dataset ImageNet. Gated PixelCNN is proposed to improve the PixelCNN, and achieves comparable performance with PixelRNN but with much less complexity. PixelCNN++ is proposed with the following improvements upon PixelCNN: a discretized logistic mixture likelihood is used rather than a 256-way multinomial distribution; down-sampling is used to capture structures at multiple resolutions; additional short-cut connections are introduced to speed up training; dropout is adopted for regularization; RGB is combined for one pixel. PixelSNAIL is proposed, in which casual convolutions are combined with self-attention.
Most of the above methods directly model the probability distribution in the pixel domain. Some researchers also attempt to model the probability distribution as a conditional one upon explicit or latent representations. That being said, it may be estimated as
p (x|h) =p (x1|h) p (x2|x1, h) …p (xm×n|x1, …, xm×n-1, h)
where h is the additional condition and p (x) =p (h) p (x|h) , meaning the modeling is split into an unconditional one and a conditional one. The additional condition can be image label information or high-level representations.
2.3.2. Auto-encoder
The auto-encoder is trained for dimensionality reduction and consists of two parts: encoding and decoding. The encoding part converts the high-dimension input signal to a low-dimension representation, typically with reduced spatial size but a greater number of channels. The decoding part attempts to recover the high-dimension input from the low-dimension representation. The auto-encoder enables automated learning of representations and eliminates the need for hand-crafted features, which is also believed to be one of the most important advantages of neural networks.
Fig. 2 illustrates a typical transform coding scheme 200. The original image x is transformed by the analysis network ga to obtain the latent representation y. The latent representation y is quantized and compressed into bits. The number of bits R is used to measure the coding rate. The quantized latent representation ŷ is then inversely transformed by a synthesis network gs to obtain the reconstructed image x̂. The distortion is calculated in a perceptual space by transforming x and x̂ with the function gp.
It is intuitive to apply the auto-encoder network to lossy image compression: one only needs to encode the learned latent representation from a well-trained neural network. However, it is not trivial to adapt the auto-encoder to image compression, since the original auto-encoder is not optimized for compression, and directly using a trained auto-encoder is therefore not efficient. In addition, there exist other major challenges. First, the low-dimension representation should be quantized before being encoded, but the quantization is not differentiable, which is required by backpropagation when training the neural networks. Second, the objective under the compression scenario is different, since both the distortion and the rate need to be taken into consideration, and estimating the rate is challenging. Third, a practical image coding scheme needs to support variable rate, scalability, encoding/decoding speed, and interoperability. In response to these challenges, a number of researchers have been actively contributing to this area.
The prototype auto-encoder for image compression is shown in Fig. 2, which can be regarded as a transform coding strategy. The original image x is transformed with the analysis network y=ga (x) , where y is the latent representation which will be quantized and coded. The synthesis network will inversely transform the quantized latent representation ŷ back to obtain the reconstructed image x̂. The framework is trained with a rate-distortion loss function, where D is the distortion between x and x̂, R is the rate calculated or estimated from the quantized representation ŷ, and λ is the Lagrange multiplier. It should be noted that D can be calculated in either the pixel domain or a perceptual domain. Existing research works follow this prototype and the differences might only lie in the network structure or the loss function.
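A minimal training sketch of the rate-distortion optimization described above is given below, assuming PyTorch-style encoder, decoder and entropy-model callables. The additive-uniform-noise quantization proxy, the entropy_model interface (returning per-element likelihoods) and the exact weighting of rate versus distortion are illustrative assumptions; different works place λ on either term.

```python
import torch

def rd_loss(x, x_hat, bits_estimate, lam=0.01):
    """Joint rate-distortion cost: distortion D plus lambda-weighted rate R."""
    distortion = torch.mean((x - x_hat) ** 2)   # D, here MSE in the pixel domain
    rate = bits_estimate / x.numel()            # R, in bits per pixel
    return distortion + lam * rate

def training_step(encoder, decoder, entropy_model, x, optimizer):
    y = encoder(x)                                          # analysis transform ga
    y_noisy = y + torch.empty_like(y).uniform_(-0.5, 0.5)   # noise as quantization proxy
    # entropy_model is assumed to return the likelihood of each noisy latent sample.
    bits = -torch.log2(entropy_model(y_noisy)).sum()        # estimated code length
    x_hat = decoder(y_noisy)                                # synthesis transform gs
    loss = rd_loss(x, x_hat, bits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```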
In terms of network structure, RNNs and CNNs are the most widely used architectures. In the RNN-related category, a general framework is proposed for variable rate image compression using an RNN. It uses binary quantization to generate codes and does not consider the rate during training. The framework indeed provides a scalable coding functionality, where the RNN with convolutional and deconvolutional layers is reported to perform decently. An improved version is proposed by upgrading the encoder with a neural network similar to PixelRNN to compress the binary codes. The performance is reportedly better than JPEG on the Kodak image dataset using the MS-SSIM evaluation metric. The RNN-based solution is further improved by introducing hidden-state priming. In addition, an SSIM-weighted loss function is also designed, and a spatially adaptive bit-rate mechanism is enabled. This achieves better results than BPG on the Kodak image dataset using MS-SSIM as the evaluation metric.
A general framework for rate-distortion optimized image compression is proposed. It uses multiary quantization to generate integer codes and considers the rate during training, i.e., the loss is the joint rate-distortion cost, where the distortion can be MSE or others. Random uniform noise is added to simulate the quantization during training, and the differential entropy of the noisy codes is used as a proxy for the rate. Generalized divisive normalization (GDN) is used as the network structure, which consists of a linear mapping followed by a nonlinear parametric normalization. An improved version of GDN is proposed, where 3 convolutional layers, each followed by a down-sampling layer and a GDN layer, are used as the forward transform. Accordingly, 3 layers of inverse GDN, each followed by an up-sampling layer and a convolution layer, are used to simulate the inverse transform. In addition, an arithmetic coding method is devised to compress the integer codes. The performance is reportedly better than JPEG and JPEG 2000 on the Kodak dataset in terms of MSE. Furthermore, the method is further improved by devising a scale hyper-prior into the auto-encoder. The latent representation y is transformed with a subnet ha to z=ha (y) , and z will be quantized and transmitted as side information. Accordingly, the inverse transform is implemented with a subnet hs attempting to decode, from the quantized side information ẑ, the standard deviation of the quantized ŷ, which will be further used during the arithmetic coding of ŷ. On the Kodak image set, this method is slightly worse than BPG in terms of PSNR. The structures in the residue space are further exploited by introducing an autoregressive model to estimate both the standard deviation and the mean. A Gaussian mixture model is proposed afterwards to further remove redundancy in the residue. The reported performance is on par with VVC on the Kodak image set using PSNR as the evaluation metric.
2.3.3. Hyper Prior Model
In the transform coding approach to image compression, the encoder subnetwork (section 2.3.2) transforms the image vector x using a parametric analysis transform ga into a latent representation y, which is then quantized to form ŷ. Because ŷ is discrete-valued, it can be losslessly compressed using entropy coding techniques such as arithmetic coding and transmitted as a sequence of bits.
Fig. 3 illustrates example latent representations of an image, including an image 300 from the Kodak dataset, a visualization of the latent representation y 310 of the image 300, the standard deviations σ 320 of the latent 310, and the latents y 330 after a hyper prior network is introduced. A hyper prior network includes a hyper encoder and a hyper decoder. In the transform coding approach to image compression, as shown in Fig. 2, the encoder subnetwork transforms the image vector x using a parametric analysis transform ga into a latent representation y, which is then quantized to form ŷ. Because ŷ is discrete-valued, it can be losslessly compressed using entropy coding techniques such as arithmetic coding and transmitted as a sequence of bits.
As evident from the latent 310 and the standard deviations σ 320 of Fig. 3, there are significant spatial dependencies among the elements of ŷ. Notably, their scales (standard deviations σ 320) appear to be coupled spatially. An additional set of random variables ẑ may be introduced to capture the spatial dependencies and to further reduce the redundancies. In this case the image compression network is depicted in Fig. 4.
Fig. 4 is a schematic diagram 400 illustrating an example network architecture of an autoencoder implementing a hyperprior model. The upper side shows an image autoencoder network, and the lower side corresponds to the hyperprior subnetwork. The analysis and synthesis transforms are denoted as ga and gs. Q represents quantization, and AE, AD represent the arithmetic encoder and arithmetic decoder, respectively. The hyperprior model includes two subnetworks, the hyper encoder (denoted with ha) and the hyper decoder (denoted with hs) . The hyper prior model generates a quantized hyper latent ẑ, which comprises information related to the probability distribution of the samples of the quantized latent ŷ. ẑ is included in the bitstream and transmitted to the receiver (decoder) along with ŷ.
In schematic diagram 400, the upper side of the model is the encoder ga and decoder gs as discussed above. The lower side is the additional hyper encoder ha and hyper decoder hs networks that are used to obtain ẑ. In this architecture the encoder subjects the input image x to ga, yielding the responses y with spatially varying standard deviations. The responses y are fed into ha, summarizing the distribution of standard deviations in z. z is then quantized to ẑ, compressed, and transmitted as side information. The encoder then uses the quantized vector ẑ to estimate σ, the spatial distribution of standard deviations, and uses σ to compress and transmit the quantized image representation ŷ. The decoder first recovers ẑ from the compressed signal. The decoder then uses hs to obtain σ, which provides the decoder with the correct probability estimates to successfully recover ŷ as well. The decoder then feeds ŷ into gs to obtain the reconstructed image.
When the hyper encoder and hyper decoder are added to the image compression network, the spatial redundancies of the quantized latent ŷ are reduced. The latents y 330 in Fig. 3 correspond to the quantized latent when the hyper encoder/decoder are used. Compared to the standard deviations σ 320, the spatial redundancies are significantly reduced as the samples of the quantized latent are less correlated.
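The data flow of the hyperprior architecture described above can be summarized with the following PyTorch-style sketch. The subnetwork names follow the notation of Fig. 4 (ga, gs, ha, hs); their internal layers, the rounding-based quantizer and the zero-mean Gaussian assumption are placeholders for illustration, not the exact networks of this disclosure.

```python
import torch

def hyperprior_forward(x, g_a, g_s, h_a, h_s):
    """One forward pass of a scale-hyperprior model (rounding used for simplicity)."""
    y = g_a(x)              # analysis transform -> latent y
    z = h_a(y)              # hyper encoder summarizes the scales of y
    z_hat = torch.round(z)  # quantized hyper latent (side information)
    sigma = h_s(z_hat)      # hyper decoder predicts per-sample standard deviations
    y_hat = torch.round(y)  # quantized latent
    x_hat = g_s(y_hat)      # synthesis transform -> reconstructed image
    return x_hat, y_hat, z_hat, sigma

# sigma is then used by the arithmetic coder as the scale of a (zero-mean)
# Gaussian model when writing y_hat into the bitstream.
```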
2.3.4. Context Model
Although the hyperprior model improves the modelling of the probability distribution of the quantized latent ŷ, additional improvement can be obtained by utilizing an autoregressive model that predicts quantized latents from their causal context (Context Model) .
The term auto-regressive means that the output of a process is later used as input to it. For example, the context model subnetwork generates one sample of a latent, which is later used as input to obtain the next sample.
Fig. 5 is a schematic diagram 500 illustrating an example combined model configured to jointly optimize a context model along with a hyperprior and the autoencoder. The combined model jointly optimizes an autoregressive component that estimates the probability distributions of latents from their causal context (Context Model) along with a hyperprior and the underlying autoencoder. Real-valued latent representations are quantized (Q) to create quantized latents ŷ and quantized hyper-latents ẑ, which are compressed into a bitstream using an arithmetic encoder (AE) and decompressed by an arithmetic decoder (AD) . The dashed region corresponds to the components that are executed by the receiver (e.g., a decoder) to recover an image from a compressed bitstream.
A joint architecture is used where both the hyperprior model subnetwork (hyper encoder and hyper decoder) and a context model subnetwork are utilized. The hyperprior and the context model are combined to learn a probabilistic model over the quantized latents ŷ, which is then used for entropy coding. As depicted in Fig. 5, the outputs of the context subnetwork and hyper decoder subnetwork are combined by the subnetwork called Entropy Parameters, which generates the mean μ and scale (or variance) σ parameters for a Gaussian probability model. The Gaussian probability model is then used to encode the samples of the quantized latents into the bitstream with the help of the arithmetic encoder (AE) module. In the decoder, the Gaussian probability model is utilized to obtain the quantized latents ŷ from the bitstream by the arithmetic decoder (AD) module.
Typically, the latent samples are modeled with a Gaussian distribution or Gaussian mixture models (but not limited to these) . In a subsequent work, the context model and hyper prior are jointly used to estimate the probability distribution of the latent samples. Since a Gaussian distribution can be defined by a mean and a variance (aka sigma or scale) , the joint model is used to estimate the mean and variance (denoted as μ and σ) .
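For a concrete picture of how the estimated μ and σ are turned into probabilities for arithmetic coding, the sketch below evaluates a discretized Gaussian likelihood for integer-quantized latent samples. The half-open integer bins, the clamping constants and the use of the Gaussian CDF are standard practice shown here as assumptions of the illustration, not a normative part of this disclosure.

```python
import torch

def discretized_gaussian_likelihood(y_hat, mu, sigma):
    """P(y_hat) for integer-valued samples, modeled as N(mu, sigma^2) integrated
    over the quantization bin [y_hat - 0.5, y_hat + 0.5)."""
    sigma = sigma.clamp(min=1e-6)               # avoid division by zero
    dist = torch.distributions.Normal(mu, sigma)
    upper = dist.cdf(y_hat + 0.5)
    lower = dist.cdf(y_hat - 0.5)
    return (upper - lower).clamp(min=1e-9)      # probability mass per sample

def estimated_bits(y_hat, mu, sigma):
    """Total code length in bits implied by the probability model."""
    return -torch.log2(discretized_gaussian_likelihood(y_hat, mu, sigma)).sum()
```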
2.3.5. The encoding process using joint auto-regressive hyper prior model
The design in Fig. 5 corresponds to the state-of-the-art compression method. In this section and the next, the encoding and decoding processes will be described separately.
Fig. 6 illustrates an example encoding process 600. The input image is first processed with an encoder subnetwork. The encoder transforms the input image into a transformed representation called latent, denoted by y. y is then input to a quantizer block, denoted by Q, to obtain the quantized latent ŷ. ŷ is then converted to a bitstream (bits1) using an arithmetic encoding module (denoted AE) . The arithmetic encoding block converts each sample of ŷ into the bitstream (bits1) one by one, in a sequential order.
The modules hyper encoder, context, hyper decoder, and entropy parameters subnetworks are used to estimate the probability distributions of the samples of the quantized latent ŷ. The latent y is input to the hyper encoder, which outputs the hyper latent (denoted by z) . The hyper latent is then quantized to ẑ, and a second bitstream (bits2) is generated using the arithmetic encoding (AE) module. The factorized entropy module generates the probability distribution that is used to encode the quantized hyper latent into the bitstream. The quantized hyper latent includes information about the probability distribution of the quantized latent ŷ.
The Entropy Parameters subnetwork generates the probability distribution estimations that are used to encode the quantized latent ŷ. The information that is generated by the Entropy Parameters typically includes a mean μ and a scale (or variance) σ parameter, which are together used to obtain a Gaussian probability distribution. A Gaussian distribution of a random variable x is defined as f (x) = (1/ (σ√ (2π) ) ) exp (- (x-μ) ²/ (2σ²) ) , wherein the parameter μ is the mean or expectation of the distribution (and also its median and mode) , while the parameter σ is its standard deviation (or variance, or scale) . In order to define a Gaussian distribution, the mean and the variance need to be determined. The entropy parameters module is used to estimate the mean and the variance values.
The subnetwork hyper decoder generates part of the information that is used by the entropy parameters subnetwork; the other part of the information is generated by the autoregressive module called the context module. The context module generates information about the probability distribution of a sample of the quantized latent, using the samples that are already encoded by the arithmetic encoding (AE) module. The quantized latent ŷ is typically a matrix composed of many samples. The samples can be indicated using indices, such as ŷ [i, j] or ŷ [i, j, k] , depending on the dimensions of the matrix ŷ. The samples are encoded by AE one by one, typically using a raster scan order. In a raster scan order the rows of a matrix are processed from top to bottom, wherein the samples in a row are processed from left to right. In such a scenario (wherein the raster scan order is used by the AE to encode the samples into the bitstream) , the context module generates the information pertaining to a sample using the samples encoded before it, in raster scan order. The information generated by the context module and the hyper decoder are combined by the entropy parameters module to generate the probability distributions that are used to encode the quantized latent ŷ into the bitstream (bits1) .
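The raster-scan operation of the context module can be illustrated with the following Python-style loop. The helper names (context_model, entropy_parameters, encode_sample) are hypothetical placeholders standing in for the respective subnetworks and the arithmetic-encoder call; the sketch only shows the causal ordering, not an actual entropy coder.

```python
def encode_latent_raster_scan(y_hat, hyper_info, context_model,
                              entropy_parameters, encode_sample):
    """Encode quantized latent samples one by one in raster-scan order.
    Each sample's probability model may only depend on samples already encoded."""
    height, width = y_hat.shape[-2], y_hat.shape[-1]
    for i in range(height):              # rows: top to bottom
        for j in range(width):           # columns: left to right
            # Causal context: only previously encoded positions are visible here.
            ctx = context_model(y_hat, position=(i, j))
            mu, sigma = entropy_parameters(ctx, hyper_info)
            encode_sample(y_hat[..., i, j], mu, sigma)   # write sample to bits1
```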
Finally the first and the second bitstreams are transmitted to the decoder as a result of the encoding process.
It is noted that other names can be used for the modules described above.
In the above description, all of the elements in Fig. 6 are collectively called the encoder. The analysis transform that converts the input image into the latent representation is also called an encoder (or auto-encoder) .
2.3.6. The decoding process using joint auto-regressive hyper prior model
Fig. 7 illustrates an example decoding process 700.
In the decoding process, the decoder first receives the first bitstream (bits1) and the second bitstream (bits2) that are generated by a corresponding encoder. The bitstream bits2 is first decoded by the arithmetic decoding (AD) module by utilizing the probability distributions generated by the factorized entropy subnetwork. The factorized entropy module typically generates the probability distributions using a predetermined template, for example using predetermined mean and variance values in the case of a Gaussian distribution. The output of the arithmetic decoding process of bits2 is ẑ, which is the quantized hyper latent. The AD process reverts the AE process that was applied in the encoder. The processes of AE and AD are lossless, meaning that the quantized hyper latent ẑ that was generated by the encoder can be reconstructed at the decoder without any change.
After ẑ is obtained, it is processed by the hyper decoder, whose output is fed to the entropy parameters module. The three subnetworks, context, hyper decoder and entropy parameters, that are employed in the decoder are identical to the ones in the encoder. Therefore the exact same probability distributions can be obtained in the decoder (as in the encoder) , which is essential for reconstructing the quantized latent ŷ without any loss. As a result, the identical version of the quantized latent ŷ that was obtained in the encoder can be obtained in the decoder.
After the probability distributions (e.g. the mean and variance parameters) are obtained by the entropy parameters subnetwork, the arithmetic decoding module decodes the samples of the quantized latent one by one from the bitstream bits1. From a practical standpoint, the autoregressive model (the context model) is inherently serial, and therefore cannot be sped up using techniques such as parallelization.
Finally the fully reconstructed quantized latent ŷ is input to the synthesis transform (denoted as decoder in Fig. 7) module to obtain the reconstructed image.
In the above description, all of the elements in Fig. 7 are collectively called the decoder. The synthesis transform that converts the quantized latent into the reconstructed image is also called a decoder (or auto-decoder) .
2.3.7. Wavelet based neural compression architecture
The analysis transform (denoted as encoder) in Fig. 6 and the synthesis transform (denoted as decoder) in Fig. 7 might be replaced by a wavelet-based transform. Fig. 8 shows an example diagram 800 of an encoder and a decoder with wavelet-based transform. In Fig. 8, first the input image is converted from an RGB color format to a YUV color format. This conversion process is optional and can be missing in other implementations. If however such a conversion is applied to the input image, a back conversion (from YUV to RGB) is also applied before the output image is generated. Moreover there are two additional post-processing modules (post-process 1 and 2) shown in Fig. 8. These modules are also optional, hence might be missing in other implementations. The core of an encoder with wavelet-based transform is composed of a wavelet-based forward transform, a quantization module and an entropy coding module. After these 3 modules are applied to the input image, the bitstream is generated. The core of the decoding process is composed of entropy decoding, a de-quantization process and an inverse wavelet-based transform operation. The decoding process converts the bitstream into the output image. The encoding and decoding processes are depicted in Fig. 8.
Fig. 9 illustrates a diagram 900 of an output of a forward wavelet-based transform.
After the wavelet-based forward transform is applied to the input image, the image is split into its frequency components in the output of the wavelet-based forward transform. The output of a 2-dimensional forward wavelet transform (depicted as the iWave forward module in Fig. 8) might take the form depicted in Fig. 9. The input of the transform is an image of a castle. In the example, after the transform an output with 7 distinct regions is obtained. The number of distinct regions depends on the specific implementation of the transform and might be different from 7. Potential numbers of regions are 4, 7, 10, 13, …
In Fig. 9, one can see that the input image is transformed into 7 regions with 3 small images and 4 even smaller images. The transformation is based on the frequency components, the small image at the bottom right quarter comprises the high frequency components in both horizontal and vertical directions. The smallest image at the top-left corner on the other hand comprises the lowest frequency components both in the vertical and horizontal directions. The small image on the top-right quarter comprises the high frequency components in the horizontal direction and low frequency components in the vertical direction.
Fig. 10 illustrates a partitioning 1000 of the output of a forward wavelet-based transform. Fig. 10 depicts a possible splitting of the latent representation after the 2D forward transform. The latent representation comprises the samples (latent samples, or quantized latent samples) that are obtained after the 2D forward transform. The latent samples are divided into the 7 sections above, denoted as HH1, LH1, HL1, LL2, HL2, LH2 and HH2. HH1 denotes that the section comprises high frequency components in the vertical direction, high frequency components in the horizontal direction and that the splitting depth is 1. HL2 denotes that the section comprises low frequency components in the vertical direction, high frequency components in the horizontal direction and that the splitting depth is 2.
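The 7-region partitioning of Fig. 10 can be reproduced with a two-level 2-D wavelet decomposition. The following numpy sketch uses a plain averaging Haar filter purely as an assumed example; the actual transform of the disclosure (the iWave modules of Fig. 8) may use different, possibly learned, filters.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar-style transform -> (LL, HL, LH, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2      # vertical low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2      # vertical high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2         # + horizontal low-pass
    hl = (a[:, 0::2] - a[:, 1::2]) / 2         # vertical low, horizontal high
    lh = (d[:, 0::2] + d[:, 1::2]) / 2         # vertical high, horizontal low
    hh = (d[:, 0::2] - d[:, 1::2]) / 2         # high in both directions
    return ll, hl, lh, hh

img = np.random.rand(256, 256)                 # stand-in for the input image
ll1, hl1, lh1, hh1 = haar_dwt2(img)            # depth-1 subbands: HL1, LH1, HH1
ll2, hl2, lh2, hh2 = haar_dwt2(ll1)            # depth-2 subbands obtained from LL1
subbands = {"LL2": ll2, "HL2": hl2, "LH2": lh2, "HH2": hh2,
            "HL1": hl1, "LH1": lh1, "HH1": hh1}   # the 7 regions as in Fig. 10
```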
After the latent samples are obtained at the encoder by the forward wavelet transform, they are transmitted to the decoder by using entropy coding. At the decoder, entropy decoding is applied to obtain the latent samples, which are then inverse transformed (by using iWave inverse module in Fig. 8) to obtain the reconstructed image.
2.4 Neural networks for video compression
Similar to conventional video coding technologies, neural image compression serves as the foundation of intra compression in neural network-based video compression. Thus, the development of neural network-based video compression technology came later than that of neural network-based image compression, but it needs far more effort to solve the challenges due to its complexity. Starting from 2017, a few researchers have been working on neural network-based video compression schemes. Compared with image compression, video compression needs efficient methods to remove inter-picture redundancy. Inter-picture prediction is therefore a crucial step in these works. Motion estimation and compensation are widely adopted but were not implemented with trained neural networks until recently.
Studies on neural network-based video compression can be divided into two categories according to the targeted scenarios: random access and low-latency. In the random access case, decoding is required to be able to start from any point of the sequence; the entire sequence is typically divided into multiple individual segments and each segment can be decoded independently. The low-latency case aims at reducing decoding time, so usually only temporally previous frames can be used as reference frames to decode subsequent frames.
2.4.1. Low-latency
The very first method splits the video sequence frames into blocks, and each block chooses one of two available modes, either intra coding or inter coding. If intra coding is selected, an associated auto-encoder is used to compress the block. If inter coding is selected, motion estimation and compensation are performed with traditional methods and a trained neural network is used for residue compression. The outputs of the auto-encoders are directly quantized and coded by the Huffman method.
Another neural network-based video coding scheme is proposed with PixelMotionCNN. The frames are compressed in the temporal order, and each frame is split into blocks which are compressed in the raster scan order. Each frame will firstly be extrapolated with the preceding two reconstructed frames. When a block is to be compressed, the extrapolated frame along with the context of the current block are fed into the PixelMotionCNN to derive a latent representation. Then the residues are compressed by the variable rate image scheme. This scheme performs on par with H. 264.
A truly end-to-end neural network-based video compression framework is then proposed, in which all the modules are implemented with neural networks. The scheme accepts the current frame and the prior reconstructed frame as inputs, and optical flow is derived with a pre-trained neural network as the motion information. The reference frame is warped with the motion information, followed by a neural network generating the motion-compensated frame. The residues and the motion information are compressed with two separate neural auto-encoders. The whole framework is trained with a single rate-distortion loss function. It achieves better performance than H. 264.
An advanced neural network-based video compression scheme is then proposed. It inherits and extends traditional video coding schemes with neural networks with the following major features: 1) using only one auto-encoder to compress motion information and residues; 2) motion compensation with multiple frames and multiple optical flows; 3) an on-line state is learned and propagated through the following frames over time. This scheme achieves better performance in MS-SSIM than HEVC reference software.
An extended end-to-end neural network-based video compression framework is proposed. In this solution, multiple frames are used as references. It is thereby able to provide more accurate prediction of current frame by using multiple reference frames and associated motion information. In addition, motion field prediction is deployed to remove motion redundancy along temporal channel. Postprocessing networks are also introduced in this work to remove reconstruction artifacts from previous processes. The performance is better than H. 265 by a noticeable margin in terms of both PSNR and MS-SSIM.
The scale-space flow is then proposed to replace the commonly used optical flow by adding a scale parameter. It reportedly achieves better performance than H. 264.
A multi-resolution representation is proposed for optical flows. Concretely, the motion estimation network produces multiple optical flows with different resolutions and lets the network learn which one to choose under the loss function. The performance is slightly improved and better than H. 265.
2.4.2. Random access
The earliest method involves a neural network-based video compression scheme with frame interpolation. The key frames are first compressed with a neural image compressor and the remaining frames are compressed in a hierarchical order. Motion compensation is performed in the perceptual domain, i.e., the feature maps at multiple spatial scales of the original frame are derived and motion is used to warp the feature maps, which are then used by the image compressor. The method is reportedly on par with H. 264.
An interpolation-based video compression is proposed later on, wherein the interpolation model combines motion information compression and image synthesis, and the same auto-encoder is used for image and residual.
A neural network-based video compression method based on variational auto-encoders with a deterministic encoder is proposed afterwards. Concretely, the model consists of an auto-encoder and an auto-regressive prior. Different from previous methods, this method accepts a group of pictures (GOP) as input and incorporates a 3D autoregressive prior by taking into account the temporal correlation while coding the latent representations. It provides comparable performance to H. 265.
2.5. Preliminaries
Almost all natural images/videos are in digital format. A grayscale digital image can be represented by x∈D^ (m×n) , where D is the set of values of a pixel, m is the image height and n is the image width. For example, D= {0, 1, …, 255} is a common setting, and in this case |D| =256, thus the pixel can be represented by an 8-bit integer. An uncompressed grayscale digital image has 8 bits-per-pixel (bpp) , while compressed bits are definitely fewer.
A color image is typically represented in multiple channels to record the color information. For example, in the RGB color space an image can be denoted by x∈D^ (m×n×3) , with three separate channels storing Red, Green and Blue information. Similar to the 8-bit grayscale image, an uncompressed 8-bit RGB image has 24 bpp. Digital images/videos can be represented in different color spaces. The neural network-based video compression schemes are mostly developed in the RGB color space, while the traditional codecs typically use the YUV color space to represent the video sequences. In the YUV color space, an image is decomposed into three channels, namely Y, Cb and Cr, where Y is the luminance component and Cb/Cr are the chroma components. The benefit comes from the fact that Cb and Cr are typically down-sampled to achieve pre-compression, since the human visual system is less sensitive to the chroma components.
A color video sequence is composed of multiple color images, called frames, recording scenes at different timestamps. For example, in the RGB color space, a color video can be denoted by X= {x0, x1, …, xt, …, xT-1} , where T is the number of frames in this video sequence and xt∈D^ (m×n×3) . If m=1080, n=1920, and the video has 50 frames-per-second (fps) , then the data rate of this uncompressed video is 1920×1080×8×3×50=2, 488, 320, 000 bits-per-second (bps) , about 2.32 Gbps, which requires a lot of storage and thus definitely needs to be compressed before transmission over the internet.
Usually the lossless methods can achieve a compression ratio of about 1.5 to 3 for natural images, which is clearly below the requirement. Therefore, lossy compression is developed to achieve a further compression ratio, but at the cost of incurred distortion. The distortion can be measured by calculating the average squared difference between the original image and the reconstructed image, i.e., the mean-squared-error (MSE) . For a grayscale image, MSE can be calculated with the following equation:
MSE= (1/ (m×n) ) Σi, j (x [i, j] -x̂ [i, j] ) ²
Accordingly, the quality of the reconstructed image compared with the original image can be measured by the peak signal-to-noise ratio (PSNR) :
PSNR=10×log10 (max (D) ²/MSE)
where max (D) is the maximal value in D, e.g., 255 for 8-bit grayscale images. There are other quality evaluation metrics such as structural similarity (SSIM) and multi-scale SSIM (MS-SSIM) .
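A small numpy example of the MSE and PSNR definitions above is given below, assuming 8-bit grayscale images so that the peak value is 255; the random test images are only placeholders.

```python
import numpy as np

def mse(x, x_hat):
    """Mean squared error between the original and reconstructed images."""
    return np.mean((x.astype(np.float64) - x_hat.astype(np.float64)) ** 2)

def psnr(x, x_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB; 'peak' is the maximal pixel value."""
    err = mse(x, x_hat)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

x = np.random.randint(0, 256, (64, 64), dtype=np.uint8)         # "original" image
x_hat = np.clip(x + np.random.randint(-3, 4, x.shape), 0, 255)   # distorted copy
print(mse(x, x_hat), psnr(x, x_hat))
```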
To compare different lossless compression schemes, it is sufficient to compare the compression ratio or, equivalently, the resulting rate. However, to compare different lossy compression methods, both the rate and the reconstructed quality have to be taken into account. For example, calculating the relative rates at several different quality levels and then averaging the rates is a commonly adopted method; the average relative rate is known as Bjontegaard's delta-rate (BD-rate) . There are other important aspects to evaluate image/video coding schemes, including encoding/decoding complexity, scalability, robustness, and so on.
3. Problems
Most of the learning-based image compression methods utilize non-linear transformations to achieve a compact representation, which has already demonstrated its efficiency from the perspective of both human visual quality and objective quality. In addition to the non-linear transformation, the wavelet transform is also a powerful tool for multiresolution time-frequency analysis, which has been extensively implemented in traditional codecs such as JPEG-2000. Its effectiveness has also been validated on other image processing tasks such as denoising, image enhancement and fusion. Also, in recent works, some wavelet-based learned image frameworks show great potential in lossy compression with additional support for lossless coding.
To further boost the coding performance and provide more functionality to learned image compression, the combination of the wavelet transformation and the non-linear transformation has great potential. However, different from directly feeding the image into the transformation, the outputs of the wavelet transformation and the non-linear transformation might have quite different statistical characteristics; how to exploit their respective advantages and combine them is still an essential issue.
4. Detail solutions
The detailed embodiments below should be considered as examples to explain general concepts. These embodiments should not be interpreted in a narrow way. Furthermore, these embodiments can be combined in any manner.
4.1. Target of embodiments
The target of embodiments is to discover the advantages of wavelet and non-linear transformation and combine them to improve the performance of the transformation module of the learned image codec, thereby improving the compression efficiency.
4.2 Core of embodiments
The core of embodiments governs the combination of the wavelet and non-linear transformation, and the corresponding processing of latent samples.
4.3 Details
4.3.1. Decoding process
According to embodiments of the disclosure the decoding of the bitstream to obtain the reconstructed picture is performed as follows. An image decoding method, comprising the steps of:
- Obtaining, using a latent sample reconstruction module, a quantized latent sample ŷ according to a bitstream.
- Adjusting the information comprised in the channels of the quantized latent sample ŷ by the inverse channel network.
- Based on the output of the inverse channel network, performing inverse non-linear trans network to obtain the reconstructed sub bands of the wavelet.
- Obtaining the reconstructed image using the reconstructed sub bands and a wavelet transform network.
Fig. 11 illustrates an example diagram 1100 of the decoding process.
In the decoding process, firstly the quantized latent samples ŷ are obtained based on a bitstream and by using a latent sample reconstruction module. The latent sample reconstruction module might comprise multiple neural-network based subnetworks. An example latent sample reconstruction module is depicted in Fig. 11, wherein the latent sample reconstruction module comprises the Prediction Fusion Model, Context Model, Hyper Decoder and Hyper Scale Decoder, which are neural network based processing units. In the example depicted in Fig. 11, the latent sample reconstruction module also comprises two entropy decoder units (Entropy Decoder 1 and 2) , which are responsible for converting the bitstream (series of ones and zeros) into symbols such as integer or floating point numbers. In the example in Fig. 11, the output of the Prediction Fusion Model is the prediction samples (i.e. the samples that are expected to be as close to the quantized latent samples as possible) , and the output of Entropy Decoder 2 is the residual samples (i.e. the samples that represent the difference between the prediction sample and the quantized latent sample) . Therefore the units Prediction Fusion Model, Context Model and Hyper Decoder are responsible for obtaining the prediction samples, whereas the Hyper Scale Decoder unit is responsible for obtaining the probability parameters that are used in the decoding of Bitstream 2. It is noted that the latent sample reconstruction module depicted in Fig. 11 is an example, and the disclosure is applicable to any type of latent sample reconstruction module, i.e. a unit that is used to obtain quantized latent samples.
The quantized latent representation might be a tensor or a matrix comprising the quantized latent samples. After the quantized latent samples are obtained, an inverse channel network is applied to adjust the channel numbers of the quantized latent representation. Then the output of the inverse channel network is fed into the inverse non-linear trans network to obtain the reconstructed wavelet sub-bands. Finally, the reconstructed wavelet sub-bands are used in the wavelet inverse transform network to obtain the reconstructed image. The reconstructed image might be displayed on a display device with or without further processing.
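The overall decoding flow described above can be summarized by the following PyTorch-style sketch. The module names mirror the description of Fig. 11, but their internals are assumptions of this illustration rather than a normative implementation of the disclosure.

```python
import torch

def decode_image(bitstreams, latent_reconstruction, inverse_channel_net,
                 inverse_nonlinear_net, inverse_wavelet_net):
    """Bitstream -> quantized latent -> per-subband features -> subbands -> image."""
    y_hat = latent_reconstruction(bitstreams)     # quantized latent samples
    features = inverse_channel_net(y_hat)         # reallocate channel information
    subbands = inverse_nonlinear_net(features)    # reconstructed wavelet sub-bands
    x_hat = inverse_wavelet_net(subbands)         # inverse wavelet transform
    return x_hat                                  # reconstructed image
```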
The inverse channel network might be used to adjust the channel number of the input; an example inverse channel network 1200 is exemplified in Fig. 12.
According to the example, the inverse channel network may contain BatchNorm, ReLU and convolution layers. The input of the inverse channel network is the quantized latent samples ŷ, which might be a tensor with 3 dimensions or 4 dimensions, wherein 2 of the dimensions might be spatial dimensions such as width or height. A third dimension might be the channel number, or the number of feature maps. The output of the inverse channel network is a tensor which comprises the samples of equal-sized reconstructed sub-bands.
- In one example, the inverse channel network may contain other activation operation.
a) In one example, leaky relu might be used inside the network.
b) In one example, sigmoid might be used inside the network.
c) In another example, there might not be any activation layer.
- In one example, the inverse channel network may contain more layers.
- In one example, the BatchNorm may be removed or replaced with other Normalization operation.
- In one example, the order of the operation may be different.
- In one example, the stride of the convolution is restricted to 1 to ensure that the output spatial size is identical to the input.
- In one example, the structure of the network might be a bottleneck; in this case the stride of the convolution can be any number.
a) In one example, the input feature might firstly be downsampled and then be upsampled to ensure that the output has the same spatial resolution.
b) In one example, the input feature might firstly be upsampled and then be downsampled to ensure that the output has the same spatial resolution.
- The inverse channel network might comprise a fully connected neural network layer.
- In one example the inverse channel network might reduce the number of channels of the input, i.e. the number of channels of the output might be lower than that of the input.
- In one example the inverse channel network might increase the number of channels of the input, i.e. the number of channels of the output might be larger than that of the input.
The output of the inverse channel network is a tensor which comprises the samples of equal-sized reconstructed sub-bands. These sub-bands are then divided into more than one tensor, each tensor corresponding to a different sub-band. The division operation is along the channel dimension; for example, the first N channels of the output might correspond to the first sub-band, and the following M channels might correspond to the second sub-band.
The function of the inverse channel network is to achieve allocation of the information comprised in each sub-band. The information that is comprised in ŷ is typically mixed; one of the channels (feature maps) of ŷ might comprise information from multiple sub-bands. Therefore the function of the inverse channel network is to separate this information into corresponding sub-bands.
In one implementation of the disclosure, the inverse channel network is not present. In such an implementation, the information comprised in ŷ might already be separated, i.e. the first N channels of ŷ might comprise information related to the first sub-band only, the following M channels might comprise information related to the second sub-band only, etc.
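Under the description above (BatchNorm, ReLU and a stride-1 convolution, followed by a split along the channel dimension), one possible PyTorch realisation of the inverse channel network is sketched below. The channel counts and the 4-way equal split are illustrative assumptions, not values specified by the disclosure.

```python
import torch
import torch.nn as nn

class InverseChannelNetwork(nn.Module):
    """Adjusts the channel number of the quantized latent and separates the
    mixed channel information into equal-sized sub-band feature groups."""

    def __init__(self, in_channels=192, out_channels=256,
                 split_sizes=(64, 64, 64, 64)):
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size=3,
                      stride=1, padding=1),   # stride 1 keeps the spatial size
        )
        self.split_sizes = split_sizes        # first N channels -> 1st sub-band, etc.

    def forward(self, y_hat):
        features = self.net(y_hat)
        return torch.split(features, self.split_sizes, dim=1)

# Usage: each returned tensor feeds one branch of the inverse non-linear transform.
parts = InverseChannelNetwork()(torch.rand(1, 192, 16, 16))
```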
The inverse non-linear transformation is used to obtain the reconstructed wavelet sub-bands; an example inverse non-linear transformation network 1300 is exemplified in Fig. 13.
Fig. 13 illustrates an example of the inverse non-linear transformation, where RB Nx means residual blocks with N-times upsampling.
According to the example, the input feature may be processed with 4 individual branches to obtain 4 groups of information (e.g. sub-band groups) . Each group might comprise one or more sub-bands with the same spatial resolution, whereas the spatial resolution (size in spatial dimensions) might be different between different groups. The input of the inverse non-linear transformation comprises the equal-sized sub-bands, and its output comprises one or more groups of sub-bands.
The function of the inverse non-linear transformation is to adjust the spatial dimensions. In one example, the number of subband groups might be greater than one. One part of the input is adjusted to match the size of the first subband group, and a second part of the input is adjusted to match the size of the second subband group.
In an example the inverse non-linear transformation might comprise at least two branches, at least one of which comprises a neural network layer that performs resizing. A resizing layer increases or reduces the spatial size of the input.
In an example the inverse non-linear transformation might comprise a single branch, which comprises a neural network layer that performs resizing.
The resizing layer might be an upsampling layer or a downsampling layer.
The terms resizing, upsizing, downsizing, upscaling, downscaling, upsampling or downsampling might be used interchangeably.
The said upsampling layer might be implemented using a deconvolution layer or a convolution layer.
The said downsampling layer might be implemented using a deconvolution layer or a convolution layer.
If the inverse non-linear transformation comprises more than one branch, the output of each branch might be a group of subbands, each group having a different spatial size.
If the inverse non-linear transformation comprises more than one branch, for each branch, different resizing ratio might be used to obtain different sub-bands.
- In one example, the input feature might be split into 4 parts through the channel dimension, and different parts might be used to reconstruct the respective sub-bands.
a) In one example, the channel number of all parts is the same.
b) In one example, each part has a different channel number, depending on the importance of the subbands.
- In one example, all of the above branches might use the same input feature as the input to obtain different sub-bands.
- The resizing operation might be upscaling, or upsampling.
- The resizing operation might be downscaling, or downsampling.
- The resizing operation might be implemented using a neural network.
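A possible sketch of the multi-branch inverse non-linear transformation described above is given below. The four branches, the upsampling factors (matching a 3-level wavelet decomposition), the output channel counts and the use of transposed convolutions for resizing are assumptions chosen for illustration, not an exact reproduction of Fig. 13.

```python
import torch
import torch.nn as nn

def resize_branch(in_channels, out_channels, factor):
    """One branch: a stride-'factor' deconvolution resizes to the subband-group size."""
    if factor == 1:   # lowest-resolution group: no resizing, just a convolution
        return nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
    return nn.ConvTranspose2d(in_channels, out_channels, kernel_size=factor * 2,
                              stride=factor, padding=factor // 2)

class InverseNonlinearTransform(nn.Module):
    """Maps equal-sized sub-band features to sub-band groups of different sizes."""

    def __init__(self, channels=64):
        super().__init__()
        self.branch_ll = resize_branch(channels, 1, factor=1)   # deepest LL subband
        self.branch_d3 = resize_branch(channels, 3, factor=1)   # deepest detail group
        self.branch_d2 = resize_branch(channels, 3, factor=2)   # mid-level detail group
        self.branch_d1 = resize_branch(channels, 3, factor=4)   # finest detail group

    def forward(self, parts):
        ll, d3, d2, d1 = parts    # 4 channel-wise parts from the previous stage
        return (self.branch_ll(ll), self.branch_d3(d3),
                self.branch_d2(d2), self.branch_d1(d1))

# Usage with the 4 equal-sized parts produced by the inverse channel network:
groups = InverseNonlinearTransform()(tuple(torch.rand(1, 64, 16, 16) for _ in range(4)))
```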
Another possible solution of the inverse non-linear transformation is exemplified in a diagram 1400 in Fig. 14.
According to the example, each branch may comprise convolution layers, deconvolution layers or activation functions.
- In one example, the subband with the lowest resolution (smallest spatial size) might be obtained without application of resizing.
- In one example, the inverse non-linear transformation may contain an activation operation.
a) In one example, leaky ReLU or ReLU (rectified linear unit) might be used.
b) In one example, sigmoid operation or tanh (hyperbolic tangent function) might be used inside the network.
c) Embodiments of the disclosure are not limited to any specific activation function, or the presence of activation function.
- In one example, the inverse non-linear trans network may contain convolution or deconvolution layers.
- In one example, the inverse non-linear transformation may contain more than 1 layer.
- In one example, the inverse non-linear transform network may contain normalization operations.
- In one example, the order of the operation may be different.
- The resizing operation might be implemented by a deconvolution layer or a convolution layer with a stride equal to N.
a) In one example the N is equal to 2.
The wavelet transform network might be implemented as follows:
- In one example, the parameters of the wavelet transform might be fixed parameters, the same as in a traditional wavelet transformation.
- Alternatively, the parameters of the wavelet transformation can be learnable (i.e. neural network based) , so that the wavelet transformation can jointly be optimized with the whole network.
The wavelet transform (or inverse wavelet transform) takes the reconstructed subbands as input and applies the transformation to convert them into the reconstructed image. Another example implementation of the wavelet transform is described in section 2.3.7.
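The two options above (fixed versus learnable wavelet parameters) can be illustrated with a 1-D lifting step. The fixed coefficients below are the 5/3-style linear-lifting values used here purely as an assumed example; the learnable variant simply turns the same coefficients into trainable parameters so they can be optimized jointly with the rest of the codec.

```python
import torch
import torch.nn as nn

class LiftingWavelet1D(nn.Module):
    """One predict/update lifting step. With learnable=False the predict and
    update weights stay fixed (classical wavelet); with learnable=True they are
    nn.Parameters and can be optimized jointly with the whole network."""

    def __init__(self, learnable=False):
        super().__init__()
        predict = torch.tensor(0.5)    # predict odd samples from even neighbours
        update = torch.tensor(0.25)    # update even samples from the detail signal
        if learnable:
            self.predict = nn.Parameter(predict)
            self.update = nn.Parameter(update)
        else:
            self.register_buffer("predict", predict)
            self.register_buffer("update", update)

    def forward(self, x):
        even, odd = x[..., 0::2], x[..., 1::2]
        detail = odd - self.predict * (even + torch.roll(even, -1, dims=-1))
        approx = even + self.update * (detail + torch.roll(detail, 1, dims=-1))
        return approx, detail          # low-pass and high-pass subbands

    def inverse(self, approx, detail):
        even = approx - self.update * (detail + torch.roll(detail, 1, dims=-1))
        odd = detail + self.predict * (even + torch.roll(even, -1, dims=-1))
        x = torch.zeros(*even.shape[:-1], even.shape[-1] * 2, dtype=even.dtype)
        x[..., 0::2], x[..., 1::2] = even, odd
        return x                       # exact reconstruction of the input signal
```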
The names latent sample reconstruction module, inverse channel network, inverse non-linear trans network might be different.
4.3.2. Encoding process
According to the disclosure, the encoding process follows the inverse of the decoding process for obtaining the quantized latent samples. The difference is that, after the quantized latent samples are obtained, the samples are included in a bitstream using an entropy encoding method.
According to the disclosure, the encoding of an input image to obtain the bitstream is performed as follows. An image encoding method, comprising the steps below (a sketch of this pipeline is given after the list) :
- Inputting image and obtaining wavelet subbands by using wavelet transformation.
- Processing the wavelet subbands by using non-linear trans networks.
- Obtaining, latent sample y by using the channel network.
- Quantizing the latent sample to obtain the quantized latent samples ŷ. The quantized latent samples might be obtained by a latent sample reconstruction module, or by a quantization process.
- Obtaining a bitstream using the quantized latent ŷ and an entropy encoding module.
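A compact sketch of this encoding pipeline is given below. The module names correspond to the steps above, while their internals and the rounding-based quantizer are assumptions of the illustration rather than a normative implementation.

```python
import torch

def encode_image(x, wavelet_transform, nonlinear_net, channel_net, entropy_encode):
    """Image -> wavelet subbands -> non-linear transform -> latent -> bitstream."""
    subbands = wavelet_transform(x)      # forward wavelet transform
    features = nonlinear_net(subbands)   # non-linear trans networks (per subband group)
    y = channel_net(features)            # channel network -> latent sample y
    y_hat = torch.round(y)               # quantized latent samples
    bitstream = entropy_encode(y_hat)    # entropy encoding module
    return bitstream, y_hat
```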
4.4. Benefit
The disclosure provides a method of combining the wavelet transformation and the non-linear transformation to further boost the coding efficiency of the transformation operation in learned image compression.
5. Embodiments
1. An image or video decoding method, comprising the steps of:
- Obtaining, using a latent sample reconstruction module, a quantized latent sample ŷ according to a bitstream.
- Adjusting the information comprised in the channels of the quantized latent sample ŷ by the inverse channel network.
- Based on the output of the inverse channel network, performing inverse non-linear trans network to obtain the reconstructed sub bands of the wavelet.
- Obtaining the reconstructed image using the reconstructed sub bands and a wavelet transform network.
2. An image or video encoding method, comprising the steps of:
- Using an input image and obtaining wavelet sub-bands by using wavelet transformation.
- Processing the wavelet sub-bands by using non-linear trans networks.
- Obtaining, latent sample y by using the channel network.
- Quantizing the latent sample to obtain the quantized latent samples ŷ. The quantized latent samples might be obtained by a latent sample reconstruction module, or by a quantization process.
- Obtaining a bitstream using the quantized latent ŷ and an entropy encoding module.
Fig. 15 illustrates a flowchart of a method 1500 for visual data processing in accordance with embodiments of the present disclosure. The method 1500 is implemented for a conversion between a current visual unit of visual data and a bitstream of the visual data.
At block 1510, at least one wavelet subband representation of the current visual unit is determined based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation. A wavelet subband representation is associated with a subband of a wavelet of the current visual unit. A subband of the wavelet represents a frequency subband or a frequency component of the wavelet.
At block 1520, the conversion is performed based on the at least one wavelet subband representation.
The method 1500 enables the combination of wavelet transform and the non-linear transformation, and the corresponding visual data processing. For example, the processing of latent samples can be combined. The combination of wavelet and non-linear transformation can improve the performance of the transformation module of the learned image codec, thereby improving the compression efficiency.
In some embodiments, the conversion comprises decoding the current visual unit from the bitstream.
In some embodiments, determining the at least one wavelet subband representation comprises: determining a quantized sample of the current visual unit based on the bitstream and a neural network-based latent sample reconstruction module, the quantized sample being associated with a plurality of channels; determining an intermediate representation of the quantized sample, the intermediate representation being associated with at least one wavelet subband, a wavelet subband being associated with at least one channel of the plurality of channels; and determining the at least one wavelet subband representation based on the intermediate representation and the neural network-based inverse non-linear transformation. For example, the neural network-based inverse non-linear transformation may be performed by an inverse non-linear transformation network which may be a neural network. As used herein, the term "quantized sample" may refer to a quantized latent sample, which may be denoted as ŷ. The intermediate representation and the wavelet subband representation may each be denoted by a corresponding tensor.
In some embodiments, performing the conversion comprises: determining a reconstruction of the current visual unit based on the at least one wavelet subband representation and an inverse wavelet transformation. The inverse wavelet transformation may be performed by an inverse wavelet module or an inverse wavelet network.
By way of example, as shown in Fig. 11, in the decoding process, firstly the quantized latent samples ŷ are obtained based on a bitstream and by using a latent sample reconstruction module. The latent sample reconstruction module might comprise multiple neural-network based subnetworks. An example latent sample reconstruction module is depicted in Fig. 11, wherein the latent sample reconstruction module comprises the Prediction Fusion Model, Context Model, Hyper Decoder and Hyper Scale Decoder, which are neural network-based processing units. The latent sample reconstruction module also comprises two entropy decoder units (Entropy Decoder 1 and 2) , which are responsible for converting the bitstream (series of ones and zeros) into symbols such as integer or floating point numbers. The output of the Prediction Fusion Model is the prediction samples (i.e., the samples that are expected to be as close to the quantized latent samples as possible) , and the output of Entropy Decoder 2 is the residual samples (i.e., the samples that represent the difference between the prediction sample and the quantized latent sample) . Therefore, the units Prediction Fusion Model, Context Model and Hyper Decoder are responsible for obtaining the prediction samples, whereas the Hyper Scale Decoder unit is responsible for obtaining the probability parameters that are used in the decoding of Bitstream 2. It is noted that the latent sample reconstruction module depicted in Fig. 11 is an example, and the disclosure is applicable to any type of latent sample reconstruction module, i.e. a unit that is used to obtain quantized latent samples.
The quantized latent representation might be a tensor or a matrix comprising the quantized latent samples. After the quantized latent samples are obtained, an inverse channel network is applied to adjust the channel numbers of the quantized latent representation. Then the output of the inverse channel network is fed into inverse non-linear trans network to obtain reconstructed wavelet sub-bands. Finally, reconstructed wavelet sub-bands are used in the wavelet inverse transform network to obtain the reconstructed image. The reconstructed image might be displayed on a display device with or without further processing.
In some embodiments, the intermediate representation of the quantized sample is determined based on an inverse channel network for adjusting a channel number of the quantized sample. An example of the inverse channel network is shown in Fig. 11.
In some embodiments, the inverse channel network comprises at least one of: a batch normalization unit such as BatchNorm, a rectified linear unit (ReLU) , or at least one convolutional layer.
In some embodiments, the quantized sample comprises at least one of: a first dimension of a first spatial dimension of the current visual unit, a second dimension of a second spatial dimension of the current visual unit, a third dimension of the number of the plurality of channels, or a fourth dimension of the number of feature maps associated with the current visual unit.
In some embodiments, an activation operation of the inverse channel network comprises at least one of: a leaky rectified linear unit (ReLU) , or a sigmoid operation.
In some embodiments, an activation layer is absent from the inverse channel network.
In some embodiments, a stride of a convolution of the inverse channel network is a first predefined value, such as 1.
In some embodiments, the inverse channel network is bottleneck, and a stride of a convolution of the inverse channel network is a second value, such as any number.
In some embodiments, the bottleneck inverse channel network performs a  downsampling operation and an upsampling operation.
In some embodiments, the inverse channel network comprises a fully connected neural network layer.
In some embodiments, the number of channels of the intermediate representation is less than or larger than the number of channels of the quantized sample.
In some embodiments, the intermediate representation comprises samples of equal sized reconstructed wavelet subbands.
In some embodiments, a first number of channels of the intermediate representation are associated with a first wavelet subband, and a second number of channels of the intermediate representation are associated with a second wavelet subband.
In some embodiments, a first network for the inverse non-linear transformation comprises a plurality of branches associated with a plurality of groups of wavelet subbands, a group of wavelet subbands being associated with a same spatial resolution. As used herein, the first network may also be referred to as an inverse non-linear transformation network. An example of the inverse non-linear transformation network is shown in Fig. 13.
In some embodiments, the at least one wavelet subband representation comprises a plurality of groups of subband representations corresponding to the plurality of groups of wavelet subbands.
In some embodiments, the first network adjusts a spatial dimension of the intermediate representation, a first part of the intermediate representation is adjusted to match a first size of a first group in the plurality of groups of wavelet subbands, and a second part of the intermediate representation is adjusted to match a second size of a second group in the plurality of groups.
In some embodiments, the plurality of branches comprises a first branch including a first neural network layer for resizing.
In some embodiments, a first network for the inverse non-linear transformation comprises a single branch including a first neural network layer for resizing.
In some embodiments, the first neural network layer increases or reduces a spatial size of the intermediate representation.
In some embodiments, the first neural network layer comprises an upsampling layer or a downsampling layer.
In some embodiments, the first neural network layer comprises a deconvolution layer or a convolution layer.
In some embodiments, a first spatial size of a first group in the plurality of groups of wavelet subbands is different from a second spatial size of a second group in the plurality of groups of wavelet subbands.
In some embodiments, a first resizing ratio is used for a first group in the plurality of groups of wavelet subbands, and a second resizing ratio different from the first resizing ratio is used for a second group in the plurality of groups of wavelet subbands.
In some embodiments, the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and wherein determining the plurality of wavelet subband representations comprises: determining a plurality of partitioned representations of the intermediate representation; and determining the plurality of wavelet subband representations based on the plurality of partitioned representations.
In some embodiments, the plurality of partitioned representations is associated with a same channel number.
In some embodiments, a first partitioned representation of the plurality of partitioned representations is associated with a first channel number, and a second partitioned representation of the plurality of partitioned representations is associated with a second channel number different from the first channel number.
In some embodiments, the first and second channel numbers are determined based on importance of wavelet subbands associated with the first and second partitioned representations.
In some embodiments, the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and the plurality of wavelet subband representations is determined based on the intermediate representation.
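The two alternatives described above, partitioning the intermediate representation channel-wise versus letting every branch consume the full representation, can be contrasted in a short sketch. The importance weights, channel numbers, and layer choices are hypothetical.

    import torch
    import torch.nn as nn

    inter = torch.randn(1, 96, 16, 16)        # intermediate representation (N, C, H, W)

    # Hypothetical importance weights for three groups of subbands; more
    # important groups receive more channels.
    weights = [0.5, 0.3, 0.2]
    alloc = [int(inter.shape[1] * w) for w in weights]
    alloc[0] += inter.shape[1] - sum(alloc)   # absorb rounding so the split is exact

    # Variant 1: partition the intermediate representation channel-wise and
    # feed each partition to a branch sized for that partition.
    parts = torch.split(inter, alloc, dim=1)
    branches_v1 = nn.ModuleList(nn.Conv2d(c, 3, kernel_size=3, padding=1) for c in alloc)
    outs_v1 = [b(p) for b, p in zip(branches_v1, parts)]

    # Variant 2: every branch consumes the full intermediate representation.
    branches_v2 = nn.ModuleList(
        nn.Conv2d(inter.shape[1], 3, kernel_size=3, padding=1) for _ in weights)
    outs_v2 = [b(inter) for b in branches_v2]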
Another example of the inverse non-linear transformation network is shown in Fig. 14. In some embodiments, a branch of the plurality of branches comprises at least one of: a convolution layer, a deconvolution layer, an activation operation, a rectified linear unit (ReLU) , a leaky ReLU, a sigmoid operation, a hyperbolic tangent operation, or a normalization operation.
In some embodiments, a wavelet subband representation of a wavelet subband with a lowest resolution among the plurality of groups of wavelet subbands is determined without resizing.
In some embodiments, a stride of the deconvolution layer or the convolution layer in the branch is a third number, such as 2.
In some embodiments, the conversion comprises encoding the current visual unit into the bitstream.
In some embodiments, determining the at least one wavelet subband representation comprises: determining wavelet subband information of the current visual unit based on a wavelet transformation; and determining the at least one wavelet subband representation based on the wavelet subband information and the non-linear transformation.
In some embodiments, performing the conversion comprises: determining a sample of the current visual unit based on the at least one wavelet subband representation and a channel network; determining a quantized sample of the current visual unit by quantizing the sample of the current visual unit; and determining the bitstream at least based on the quantized sample and an entropy coding module.
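Put together, the encoder-side flow just described might look as follows. Every callable here is a placeholder for the corresponding network or module discussed above, quantization is reduced to rounding, and the entropy coder is left abstract; this is a flow sketch, not the disclosed implementation.

    import torch

    def encode(x, wavelet_fwd, nonlinear_fwd, channel_net, entropy_coder):
        """Illustrative encoder-side flow for one visual unit x.

        wavelet_fwd:   forward wavelet transformation, returns subband information
        nonlinear_fwd: neural network-based non-linear transformation
        channel_net:   channel network producing the latent sample
        entropy_coder: module turning the quantized sample into a bitstream
        """
        subbands = wavelet_fwd(x)                # wavelet subband information
        subband_repr = nonlinear_fwd(subbands)   # at least one subband representation
        y = channel_net(subband_repr)            # sample of the current visual unit
        y_hat = torch.round(y)                   # quantized sample (rounding stand-in)
        return entropy_coder(y_hat)              # bitstream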
In some embodiments, the wavelet transformation comprises at least one fixed parameter. For example, the wavelet transformation may include at least one fixed parameter that is the same as in a traditional wavelet transformation.
In some embodiments, the wavelet transformation comprises at least one learnable parameter, and the at least one learnable parameter is updated together with a further neural network for the conversion. For example, the parameters of the wavelet transformation may be learnable (for example, neural network-based) , so that the wavelet transformation can be jointly optimized with the whole network.
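One common way to make a wavelet transformation learnable, mentioned here purely as an assumption for illustration, is a lifting scheme whose predict and update steps are small convolutions. Initialized as below it reproduces the Haar transform along one dimension, and its parameters can then be optimized jointly with the rest of the network.

    import torch
    import torch.nn as nn

    class LearnableLifting1D(nn.Module):
        """One lifting step along the last dimension: split into even/odd
        samples, then apply learnable predict and update filters."""
        def __init__(self):
            super().__init__()
            self.predict = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)
            self.update = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)
            nn.init.zeros_(self.predict.weight)
            nn.init.zeros_(self.update.weight)
            with torch.no_grad():
                self.predict.weight[0, 0, 1] = 1.0  # predict odd sample from even sample
                self.update.weight[0, 0, 1] = 0.5   # update even sample with half the detail

        def forward(self, x):                       # x: (N, 1, L) with even L
            even, odd = x[..., 0::2], x[..., 1::2]
            detail = odd - self.predict(even)       # high-pass (detail) coefficients
            approx = even + self.update(detail)     # low-pass (approximation) coefficients
            return approx, detail

With the initialization above, approx and detail coincide with the Haar decomposition; training lets the filters move away from it if that lowers the rate-distortion cost of the whole codec.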
According to further embodiments of the present disclosure, a non-transitory  computer-readable recording medium is provided. The non-transitory computer-readable recording medium stores a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing. In the method, at least one wavelet subband representation of a current visual unit of the visual data is determined based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation. A wavelet subband representation is associated with a subband of a wavelet of the current visual unit. The bitstream is generated based on the at least one wavelet subband representation.
According to still further embodiments of the present disclosure, a method for storing a bitstream of visual data is provided. In the method, at least one wavelet subband representation of a current visual unit of the visual data is determined based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation. A wavelet subband representation is associated with a subband of a wavelet of the current visual unit. The bitstream is generated based on the at least one wavelet subband representation. The bitstream is stored in a non-transitory computer-readable recording medium.
Implementations of the present disclosure can be described in view of the following clauses, the features of which can be combined in any reasonable manner.
Clause 1. A method for visual data processing, comprising: determining, for a conversion between a current visual unit of visual data and a bitstream of the visual data, at least one wavelet subband representation of the current visual unit based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and performing the conversion based on the at least one wavelet subband representation.
Clause 2. The method of clause 1, wherein the conversion comprises decoding the current visual unit from the bitstream.
Clause 3. The method of clause 2, wherein determining the at least one wavelet subband representation comprises: determining a quantized sample of the current visual unit based on the bitstream and a neural network-based latent sample reconstruction module, the quantized sample being associated with a plurality of channels; determining an intermediate representation of the quantized sample, the intermediate representation  being associated with at least one wavelet subband, a wavelet subband being associated with at least one channel of the plurality of channels; and determining the at least one wavelet subband representation based on the intermediate representation and the neural network-based inverse non-linear transformation.
Clause 4. The method of clause 2 or clause 3, wherein performing the conversion comprises: determining a reconstruction of the current visual unit based on the at least one wavelet subband representation and an inverse wavelet transformation.
Clause 5. The method of clause 3, wherein the intermediate representation of the quantized sample is determined based on an inverse channel network for adjusting a channel number of the quantized sample.
Clause 6. The method of clause 5, wherein the inverse channel network comprises at least one of: a batch normalization unit, a rectified linear unit (ReLU) , or at least one convolutional layer.
Clause 7. The method of clause 5 or clause 6, wherein the quantized sample comprises at least one of: a first dimension of a first spatial dimension of the current visual unit, a second dimension of a second spatial dimension of the current visual unit, a third dimension of the number of the plurality of channels, or a fourth dimension of the number of feature maps associated with the current visual unit.
Clause 8. The method of any of clauses 5 to 7, wherein an activation operation of the inverse channel network comprises at least one of: a leaky rectified linear unit (ReLU) , or a sigmoid operation.
Clause 9. The method of any of clauses 5 to 8, wherein an activation layer is absent from the inverse channel network.
Clause 10. The method of any of clauses 5 to 9, wherein a stride of a convolution of the inverse channel network is a first predefined value.
Clause 11. The method of any of clauses 5 to 9, wherein the inverse channel network is a bottleneck network, and a stride of a convolution of the inverse channel network is a second value.
Clause 12. The method of clause 11, wherein the bottleneck inverse channel network performs a downsampling operation and an upsampling operation.
Clause 13. The method of any of clauses 5 to 12, wherein the inverse channel network comprises a fully connected neural network layer.
Clause 14. The method of any of clauses 5 to 13, wherein the number of channels of the intermediate representation is less than or larger than the number of channels of the quantized sample.
Clause 15. The method of any of clauses 3 to 7, wherein the intermediate representation comprises samples of equal sized reconstructed wavelet subbands.
Clause 16. The method of any of clauses 3 to 15, wherein a first number of channels of the intermediate representation are associated with a first wavelet subband, and a second number of channels of the intermediate representation are associated with a second wavelet subband.
Clause 17. The method of any of clauses 3 to 16, wherein a first network for the inverse non-linear transformation comprises a plurality of branches associated with a plurality of groups of wavelet subbands, a group of wavelet subbands being associated with a same spatial resolution.
Clause 18. The method of clause 17, wherein the at least one wavelet subband representation comprises a plurality of groups of subband representations corresponding to the plurality of groups of wavelet subbands.
Clause 19. The method of clause 17 or clause 18, wherein the first network adjusts a spatial dimension of the intermediate representation, a first part of the intermediate representation is adjusted to match a first size of a first group in the plurality of groups of wavelet subbands, and a second part of the intermediate representation is adjusted to match a second size of a second group in the plurality of groups.
Clause 20. The method of any of clauses 17 to 19, wherein the plurality of branches comprises a first branch including a first neural network layer for resizing.
Clause 21. The method of any of clauses 3 to 16, wherein a first network for the inverse non-linear transformation comprises a single branch including a first neural network layer for resizing.
Clause 22. The method of clause 20 or clause 21, wherein the first neural network layer increases or reduces a spatial size of the intermediate representation.
Clause 23. The method of any of clauses 20 to 22, wherein the first neural network layer comprises an upsampling layer or a downsampling layer.
Clause 24. The method of any of clauses 20 to 23, wherein the first neural network layer comprises a deconvolution layer or a convolution layer.
Clause 25. The method of any of clauses 17 to 19, wherein a first spatial size of a first group in the plurality of groups of wavelet subbands is different from a second spatial size of a second group in the plurality of groups of wavelet subbands.
Clause 26. The method of any of clauses 17 to 19, wherein a first resizing ratio is used for a first group in the plurality of groups of wavelet subbands, and a second resizing ratio different from the first resizing ratio is used for a second group in the plurality of groups of wavelet subbands.
Clause 27. The method of any of clauses 17 to 19, wherein the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and wherein determining the plurality of wavelet subband representations comprises: determining a plurality of partitioned representations of the intermediate representation; and determining the plurality of wavelet subband representations based on the plurality of partitioned representations.
Clause 28. The method of clause 27, wherein the plurality of partitioned representations is associated with a same channel number.
Clause 29. The method of clause 27, wherein a first partitioned representation of the plurality of partitioned representations is associated with a first channel number, and a second partitioned representation of the plurality of partitioned representations is associated with a second channel number different from the first channel number.
Clause 30. The method of clause 29, wherein the first and second channel numbers are determined based on importance of wavelet subbands associated with the first and second partitioned representations.
Clause 31. The method of any of clauses 17 to 19, wherein the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and the plurality of wavelet subband  representations is determined based on the intermediate representation.
Clause 32. The method of clause 17, wherein a branch of the plurality of branches comprises at least one of: a convolution layer, a deconvolution layer, an activation operation, a rectified linear unit (ReLU) , a leaky ReLU, a sigmoid operation, a hyperbolic tangent operation, or a normalization operation.
Clause 33. The method of clause 32, wherein a wavelet subband representation of a wavelet subband with a lowest resolution among the plurality of groups of wavelet subbands is determined without resizing.
Clause 34. The method of clause 32, wherein a stride of the deconvolution layer or the convolution layer in the branch is a third number.
Clause 35. The method of clause 1, wherein the conversion comprises encoding the current visual unit into the bitstream.
Clause 36. The method of clause 35, wherein determining the at least one wavelet subband representation comprises: determining wavelet subband information of the current visual unit based on a wavelet transformation; and determining the at least one wavelet subband representation based on the wavelet subband information and the non-linear transformation.
Clause 37. The method of clause 35 or clause 36, wherein performing the conversion comprises: determining a sample of the current visual unit based on the at least one wavelet subband representation and a channel network; determining a quantized sample of the current visual unit by quantizing the sample of the current visual unit; and determining the bitstream at least based on the quantized sample and an entropy coding module.
Clause 38. The method of clause 36, wherein the wavelet transformation comprises at least one fixed parameter.
Clause 39. The method of clause 36, wherein the wavelet transformation comprises at least one learnable parameter, and the at least one learnable parameter is updated together with a further neural network for the conversion.
Clause 40. An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon  execution by the processor, cause the processor to perform a method in accordance with any of clauses 1-39.
Clause 41. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of clauses 1-39.
Clause 42. A non-transitory computer-readable recording medium storing a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises: determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and generating the bitstream based on the at least one wavelet subband representation.
Clause 43. A method for storing a bitstream of visual data, comprising: determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; generating the bitstream based on the at least one wavelet subband representation; and storing the bitstream in a non-transitory computer-readable recording medium.
Example Device
Fig. 16 illustrates a block diagram of a computing device 1600 in which various embodiments of the present disclosure can be implemented. The computing device 1600 may be implemented as or included in the source device 110 (or the data encoder 114) or the destination device 120 (or the data decoder 124) .
It would be appreciated that the computing device 1600 shown in Fig. 16 is merely for purpose of illustration, without suggesting any limitation to the functions and scopes of the embodiments of the present disclosure in any manner.
As shown in Fig. 16, the computing device 1600 takes the form of a general-purpose computing device. The computing device 1600 may at least comprise one or more processors or processing units 1610, a memory 1620, a storage unit 1630, one or more communication units 1640, one or more input devices 1650, and one or more output devices 1660.
In some embodiments, the computing device 1600 may be implemented as any user terminal or server terminal having the computing capability. The server terminal may be a server, a large-scale computing device or the like that is provided by a service provider. The user terminal may for example be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA) , audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It would be contemplated that the computing device 1600 can support any type of interface to a user (such as “wearable” circuitry and the like) .
The processing unit 1610 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 1620. In a multi-processor system, multiple processing units execute computer executable instructions in parallel so as to improve the parallel processing capability of the computing device 1600. The processing unit 1610 may also be referred to as a central processing unit (CPU) , a microprocessor, a controller or a microcontroller.
The computing device 1600 typically includes various computer storage media. Such media can be any media accessible by the computing device 1600, including, but not limited to, volatile and non-volatile media, or detachable and non-detachable media. The memory 1620 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM) ) , a non-volatile memory (such as a Read-Only Memory (ROM) , Electrically Erasable Programmable Read-Only Memory (EEPROM) , or a flash memory) , or any combination thereof. The storage unit 1630 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, flash memory drive, magnetic disk or any other media, which can be used for storing information and/or data and can be accessed in the computing device 1600.
The computing device 1600 may further include additional detachable/non-detachable, volatile/non-volatile memory medium. Although not shown in Fig. 16, it is possible to provide a magnetic disk drive for reading from and/or writing into a detachable and non-volatile magnetic disk and an optical disk drive for reading from and/or writing into a detachable non-volatile optical disk. In such cases, each drive may be connected to a bus (not shown) via one or more data medium interfaces.
The communication unit 1640 communicates with a further computing device via the communication medium. In addition, the functions of the components in the computing device 1600 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 1600 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes.
The input device 1650 may be one or more of a variety of input devices, such as a mouse, keyboard, tracking ball, voice-input device, and the like. The output device 1660 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like. By means of the communication unit 1640, the computing device 1600 can further communicate with one or more external devices (not shown) such as the storage devices and display device, with one or more devices enabling the user to interact with the computing device 1600, or any devices (such as a network card, a modem and the like) enabling the computing device 1600 to communicate with one or more other computing devices, if required. Such communication can be performed via input/output (I/O) interfaces (not shown) .
In some embodiments, instead of being integrated in a single device, some or all components of the computing device 1600 may also be arranged in cloud computing architecture. In the cloud computing architecture, the components may be provided remotely and work together to implement the functionalities described in the present disclosure. In some embodiments, cloud computing provides computing, software, data access and storage service, which will not require end users to be aware of the physical locations or configurations of the systems or hardware providing these services. In various embodiments, the cloud computing provides the services via a wide area network (such as Internet) using suitable protocols. For example, a cloud computing provider provides applications over the wide area network, which can be accessed through a web browser or  any other computing components. The software or components of the cloud computing architecture and corresponding data may be stored on a server at a remote position. The computing resources in the cloud computing environment may be merged or distributed at locations in a remote data center. Cloud computing infrastructures may provide the services through a shared data center, though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.
The computing device 1600 may be used to implement visual data encoding/decoding in embodiments of the present disclosure. The memory 1620 may include one or more visual data coding modules 1625 having one or more program instructions. These modules are accessible and executable by the processing unit 1610 to perform the functionalities of the various embodiments described herein.
In the example embodiments of performing visual data encoding, the input device 1650 may receive visual data as an input 1670 to be encoded. The visual data may be processed, for example, by the visual data coding module 1625, to generate an encoded bitstream. The encoded bitstream may be provided via the output device 1660 as an output 1680.
In the example embodiments of performing visual data decoding, the input device 1650 may receive an encoded bitstream as the input 1670. The encoded bitstream may be processed, for example, by the visual data coding module 1625, to generate decoded visual data. The decoded visual data may be provided via the output device 1660 as the output 1680.
While this disclosure has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application as defined by the appended claims. Such variations are intended to be covered by the scope of the present application. As such, the foregoing description of embodiments of the present application is not intended to be limiting.

Claims (43)

  1. A method for visual data processing, comprising:
    determining, for a conversion between a current visual unit of visual data and a bitstream of the visual data, at least one wavelet subband representation of the current visual unit based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and
    performing the conversion based on the at least one wavelet subband representation.
  2. The method of claim 1, wherein the conversion comprises decoding the current visual unit from the bitstream.
  3. The method of claim 2, wherein determining the at least one wavelet subband representation comprises:
    determining a quantized sample of the current visual unit based on the bitstream and a neural network-based latent sample reconstruction module, the quantized sample being associated with a plurality of channels;
    determining an intermediate representation of the quantized sample, the intermediate representation being associated with at least one wavelet subband, a wavelet subband being associated with at least one channel of the plurality of channels; and
    determining the at least one wavelet subband representation based on the intermediate representation and the neural network-based inverse non-linear transformation.
  4. The method of claim 2 or claim 3, wherein performing the conversion comprises:
    determining a reconstruction of the current visual unit based on the at least one wavelet subband representation and an inverse wavelet transformation.
  5. The method of claim 3, wherein the intermediate representation of the quantized sample is determined based on an inverse channel network for adjusting a channel number of the quantized sample.
  6. The method of claim 5, wherein the inverse channel network comprises at least one of:
    a batch normalization unit,
    a rectified linear unit (ReLU) , or
    at least one convolutional layer.
  7. The method of claim 5 or claim 6, wherein the quantized sample comprises at least one of:
    a first dimension of a first spatial dimension of the current visual unit,
    a second dimension of a second spatial dimension of the current visual unit,
    a third dimension of the number of the plurality of channels, or
    a fourth dimension of the number of feature maps associated with the current visual unit.
  8. The method of any of claims 5 to 7, wherein an activation operation of the inverse channel network comprises at least one of:
    a leaky rectified linear unit (ReLU) , or
    a sigmoid operation.
  9. The method of any of claims 5 to 8, wherein an activation layer is absent from the inverse channel network.
  10. The method of any of claims 5 to 9, wherein a stride of a convolution of the inverse channel network is a first predefined value.
  11. The method of any of claims 5 to 9, wherein the inverse channel network is a bottleneck network, and a stride of a convolution of the inverse channel network is a second value.
  12. The method of claim 11, wherein the bottleneck inverse channel network performs a downsampling operation and an upsampling operation.
  13. The method of any of claims 5 to 12, wherein the inverse channel network comprises a fully connected neural network layer.
  14. The method of any of claims 5 to 13, wherein the number of channels of the intermediate representation is less than or larger than the number of channels of the quantized sample.
  15. The method of any of claims 3 to 7, wherein the intermediate representation comprises samples of equal sized reconstructed wavelet subbands.
  16. The method of any of claims 3 to 15, wherein a first number of channels of the intermediate representation are associated with a first wavelet subband, and a second number of channels of the intermediate representation are associated with a second wavelet subband.
  17. The method of any of claims 3 to 16, wherein a first network for the inverse non-linear transformation comprises a plurality of branches associated with a plurality of groups of wavelet subbands, a group of wavelet subbands being associated with a same spatial resolution.
  18. The method of claim 17, wherein the at least one wavelet subband representation comprises a plurality of groups of subband representations corresponding to the plurality of groups of wavelet subbands.
  19. The method of claim 17 or claim 18, wherein the first network adjusts a spatial dimension of the intermediate representation, a first part of the intermediate representation is adjusted to match a first size of a first group in the plurality of groups of wavelet subbands, and a second part of the intermediate representation is adjusted to match a second size of a second group in the plurality of groups.
  20. The method of any of claims 17 to 19, wherein the plurality of branches comprises a first branch including a first neural network layer for resizing.
  21. The method of any of claims 3 to 16, wherein a first network for the inverse non-linear transformation comprises a single branch including a first neural network layer for resizing.
  22. The method of claim 20 or claim 21, wherein the first neural network layer increases or reduces a spatial size of the intermediate representation.
  23. The method of any of claims 20 to 22, wherein the first neural network layer comprises an upsampling layer or a downsampling layer.
  24. The method of any of claims 20 to 23, wherein the first neural network layer comprises a deconvolution layer or a convolution layer.
  25. The method of any of claims 17 to 19, wherein a first spatial size of a first group in the plurality of groups of wavelet subbands is different from a second spatial size of a second group in the plurality of groups of wavelet subbands.
  26. The method of any of claims 17 to 19, wherein a first resizing ratio is used for a first group in the plurality of groups of wavelet subbands, and a second resizing ratio different from the first resizing ratio is used for a second group in the plurality of groups of wavelet subbands.
  27. The method of any of claims 17 to 19, wherein the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and
    wherein determining the plurality of wavelet subband representations comprises:
    determining a plurality of partitioned representations of the intermediate representation; and
    determining the plurality of wavelet subband representations based on the plurality of partitioned representations.
  28. The method of claim 27, wherein the plurality of partitioned representations is associated with a same channel number.
  29. The method of claim 27, wherein a first partitioned representation of the plurality of partitioned representations is associated with a first channel number, and a second partitioned representation of the plurality of partitioned representations is associated with a second channel number different from the first channel number.
  30. The method of claim 29, wherein the first and second channel numbers are determined based on importance of wavelet subbands associated with the first and second partitioned representations.
  31. The method of any of claims 17 to 19, wherein the at least one wavelet subband representation comprises a plurality of wavelet subband representations, each wavelet subband representation being associated with a group of wavelet subbands in the plurality of groups of wavelet subbands, and the plurality of wavelet subband representations is determined based on the intermediate representation.
  32. The method of claim 17, wherein a branch of the plurality of branches comprises at least one of: a convolution layer, a deconvolution layer, an activation operation, a rectified linear unit (ReLU) , a leaky ReLU, a sigmoid operation, a hyperbolic tangent operation, or a normalization operation.
  33. The method of claim 32, wherein a wavelet subband representation of a wavelet subband with a lowest resolution among the plurality of groups of wavelet subbands is determined without resizing.
  34. The method of claim 32, wherein a stride of the deconvolution layer or the convolution layer in the branch is a third number.
  35. The method of claim 1, wherein the conversion comprises encoding the current visual unit into the bitstream.
  36. The method of claim 35, wherein determining the at least one wavelet subband representation comprises:
    determining wavelet subband information of the current visual unit based on a wavelet transformation; and
    determining the at least one wavelet subband representation based on the wavelet subband information and the non-linear transformation.
  37. The method of claim 35 or claim 36, wherein performing the conversion comprises:
    determining a sample of the current visual unit based on the at least one wavelet subband representation and a channel network;
    determining a quantized sample of the current visual unit by quantizing the sample of the current visual unit; and
    determining the bitstream at least based on the quantized sample and an entropy coding module.
  38. The method of claim 36, wherein the wavelet transformation comprises at least one fixed parameter.
  39. The method of claim 36, wherein the wavelet transformation comprises at least one learnable parameter, and the at least one learnable parameter is updated together with a further neural network for the conversion.
  40. An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of claims 1-39.
  41. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of claims 1-39.
  42. A non-transitory computer-readable recording medium storing a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises:
    determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit; and
    generating the bitstream based on the at least one wavelet subband representation.
  43. A method for storing a bitstream of visual data, comprising:
    determining at least one wavelet subband representation of a current visual unit of the visual data based on a neural network-based non-linear transformation or a neural network-based inverse non-linear transformation inverse to the non-linear transformation, a wavelet subband representation being associated with a subband of a wavelet of the current visual unit;
    generating the bitstream based on the at least one wavelet subband representation; and
    storing the bitstream in a non-transitory computer-readable recording medium.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2022/138229 2022-12-10
CN2022138229 2022-12-10

Publications (1)

Publication Number Publication Date
WO2024120499A1 true WO2024120499A1 (en) 2024-06-13

Family

ID=91378614

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/137254 WO2024120499A1 (en) 2022-12-10 2023-12-07 Method, apparatus, and medium for visual data processing

Country Status (1)

Country Link
WO (1) WO2024120499A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791867A (en) * 2015-10-03 2017-05-31 特克特朗尼克公司 A kind of low complex degree for JPEG2000 compression streams perceives visual quality appraisal procedure
US20200366938A1 (en) * 2017-11-24 2020-11-19 V-Nova International Limited Signal encoding
WO2022112774A1 (en) * 2020-11-27 2022-06-02 V-Nova International Limited Video encoding using pre-processing
US20220279183A1 (en) * 2020-04-29 2022-09-01 Deep Render Ltd Image compression and decoding, video compression and decoding: methods and systems
CN115278262A (en) * 2022-08-01 2022-11-01 天津大学 End-to-end intelligent video coding method and device

