WO2023124148A1 - Data processing method and device, electronic device and storage medium - Google Patents

Data processing method and device, electronic device and storage medium

Info

Publication number
WO2023124148A1
Authority
WO
WIPO (PCT)
Prior art keywords
dct coefficient
data
coefficient
tensor
features
Prior art date
Application number
PCT/CN2022/114451
Other languages
English (en)
French (fr)
Inventor
王园园
王岩
何岱岚
郭莉娜
秦红伟
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023124148A1

Classifications

    • H04N19/625 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using the discrete cosine transform [DCT]
    • G06N3/045 Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N3/048 Computing arrangements based on biological models; neural networks; activation functions
    • G06N3/08 Computing arrangements based on biological models; neural networks; learning methods
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/149 Data rate or code amount at the encoder output, estimated by means of a model, e.g. a mathematical or statistical model
    • H04N19/423 Methods or arrangements characterised by implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements

Definitions

  • The present disclosure relates to the field of computer technology, and in particular, to a data processing method and device, an electronic device, and a storage medium.
  • In related technologies, image compression techniques such as the JPEG image compression standard can be used to reduce the volume of image data, thereby saving storage resources and bandwidth resources.
  • The present disclosure proposes a data processing technical solution.
  • a data processing method, including: acquiring discrete cosine transform (DCT) coefficient data corresponding to image data; performing feature extraction on the DCT coefficient data to obtain prior features and context features, the prior features being used to characterize the global correlation of each coefficient in the DCT coefficient data and the context features being used to characterize the local correlation of each coefficient in the DCT coefficient data; determining probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features; and performing entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, the compressed data being used as the compression result of the image data. In this manner, compressed data with a better lossless compression rate is obtained.
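  • As a purely illustrative sketch (not the claimed implementation), the four steps above can be chained as follows; `prior_net`, `ar_net`, `entropy_param_net`, and `entropy_encode` are hypothetical stand-ins for the prior network, autoregressive network, entropy parameter analysis network, and entropy coder described below.

```python
# Purely illustrative sketch of the claimed pipeline. All callables are
# hypothetical stand-ins for the networks and coder described below.
def compress(dct_data, prior_net, ar_net, entropy_param_net, entropy_encode):
    prior_feat = prior_net(dct_data)          # global correlation (step S12)
    context_feat = ar_net(dct_data)           # local correlation (step S12)
    mean, std = entropy_param_net(prior_feat, context_feat)  # step S13
    return entropy_encode(dct_data, mean, std)               # step S14
```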
  • In a possible implementation, the DCT coefficient data includes multiple DCT coefficient matrices, and performing feature extraction on the DCT coefficient data to obtain the prior features and the context features includes: reorganizing the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain a DCT coefficient tensor; and performing feature extraction on the DCT coefficient tensor to obtain the prior features and the context features.
  • the preprocessed DCT coefficient tensor can be used to efficiently obtain prior features and context features, so that more accurate probability distribution parameters can be obtained later.
  • In a possible implementation, reorganizing the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain the DCT coefficient tensor includes: splicing the coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain multiple splicing matrices; and splicing the multiple splicing matrices in the channel dimension according to a specified order to obtain the DCT coefficient tensor.
  • In this manner, the recombined DCT coefficient tensor has certain structural redundant information in the spatial dimension and the channel dimension, so that this redundant information can later be used to generate more accurate probability distribution parameters.
  • In a possible implementation, performing feature extraction on the DCT coefficient tensor to obtain the prior features and the context features includes: performing feature extraction on the DCT coefficient tensor through a prior network to obtain the prior features; and performing feature extraction on the DCT coefficient tensor through an autoregressive network to obtain the context features. In this way, the prior features and the context features can be obtained effectively.
  • In a possible implementation, the DCT coefficient tensor has n channels, n being a positive integer, and the autoregressive network includes a spatial autoregressive network and a channel autoregressive network. Performing feature extraction on the DCT coefficient tensor through the autoregressive network to obtain the context features includes: splitting the DCT coefficient tensor in the channel dimension into I coefficient tensors each with n/I channels, I ∈ [1, n]; performing autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor through the spatial autoregressive network to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor, the i-th spatial context feature representing the local correlation between the coefficients in the i-th coefficient tensor, i ∈ [1, I]; and performing autoregressive prediction in the channel dimension on the j-th coefficient tensor through the channel autoregressive network according to the first to (j-1)-th coefficient tensors, to obtain the j-th channel context feature corresponding to the j-th coefficient tensor, the j-th channel context feature representing the local correlation between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor, j ∈ [2, I].
  • In this manner, the redundant information of the DCT coefficient tensor in the spatial dimension and the channel dimension can be learned separately; that is, autoregressive prediction is performed on the DCT coefficient tensor in both the channel dimension and the spatial dimension, thereby obtaining more informative context features.
  • In a possible implementation, the context features include I spatial context features and I-1 channel context features, I ∈ [1, n], n being a positive integer, where determining the probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features includes: channel-splicing the prior features, the I spatial context features, and the I-1 channel context features to obtain I splicing features; and determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • splicing features with richer information can be used to obtain more accurate probability distribution parameters.
  • In a possible implementation, channel-splicing the prior features, the I spatial context features, and the I-1 channel context features to obtain the I splicing features includes: channel-splicing the prior features and the first spatial context feature to obtain the first splicing feature; and channel-splicing the prior features, the j-th spatial context feature, and the j-th channel context feature to obtain the j-th splicing feature, j ∈ [2, I].
  • In this manner, the prior features and the context features can be divided into multiple groups of splicing features, which helps to efficiently obtain the probability distribution model corresponding to each coefficient in each coefficient matrix and improves computational efficiency.
  • In a possible implementation, determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features includes: determining, through an entropy parameter analysis network, the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features. In this manner, splicing features with richer information can be used to obtain more accurate probability distribution parameters.
  • In a possible implementation, performing entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain the compressed data corresponding to the DCT coefficient data includes: determining the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and a specified probability distribution function; and performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient, to obtain the compressed data corresponding to the DCT coefficient data.
  • the DCT coefficient data is entropy encoded using more accurate probability distribution parameters, and compressed data with a better lossless compression rate can be obtained, thereby saving storage resources and bandwidth resources.
  • In a possible implementation, performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient to obtain the compressed data corresponding to the DCT coefficient data includes: performing entropy encoding on each coefficient in the i-th coefficient tensor of the I coefficient tensors according to the occurrence probability of each coefficient in the DCT coefficient data, to obtain the i-th sub-compressed data corresponding to the i-th coefficient tensor; where the compressed data includes I sub-compressed data, the I coefficient tensors are obtained by splitting the DCT coefficient tensor in the channel dimension, and the DCT coefficient tensor is obtained by reorganizing the multiple DCT coefficient matrices in the DCT coefficient data, I ∈ [1, n], i ∈ [1, I], n being the number of channels of the DCT coefficient tensor.
  • In this manner, the DCT coefficient data is entropy encoded using the occurrence probability of each coefficient determined from more accurate probability distribution parameters, so that compressed data with a better lossless compression rate can be obtained.
  • In a possible implementation, the method further includes: performing entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, where the occurrence probability of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and a specified probability distribution function.
  • the probability of occurrence of each coefficient in the DCT coefficient data can be used to effectively realize entropy decoding of the compressed data, and obtain the DCT coefficient data before encoding.
  • In a possible implementation, the compressed data includes I sub-compressed data, and performing entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data includes: performing entropy decoding on the i-th sub-compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th coefficient tensor; and reversely reorganizing the DCT coefficient tensor composed of the I coefficient tensors to obtain multiple DCT coefficient matrices, the DCT coefficient data including the multiple DCT coefficient matrices.
  • the probability of occurrence of each coefficient in the DCT coefficient data can be used to effectively realize entropy decoding of the compressed data, and obtain the DCT coefficient data before encoding.
  • a data processing device, including: an acquisition module configured to acquire discrete cosine transform (DCT) coefficient data corresponding to image data; a feature extraction module configured to perform feature extraction on the DCT coefficient data to obtain prior features and context features, the prior features being used to characterize the global correlation of each coefficient in the DCT coefficient data and the context features being used to characterize the local correlation of each coefficient in the DCT coefficient data; a parameter determining module configured to determine probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features; and an encoding module configured to perform entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, the compressed data being used as the compression result of the image data.
  • In a possible implementation, the DCT coefficient data includes multiple DCT coefficient matrices, and the feature extraction module includes: a recombination submodule configured to reorganize the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain a DCT coefficient tensor; and a feature extraction submodule configured to perform feature extraction on the DCT coefficient tensor to obtain the prior features and the context features.
  • In a possible implementation, reorganizing the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain the DCT coefficient tensor includes: splicing the coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain multiple splicing matrices; and splicing the multiple splicing matrices in the channel dimension according to a specified order to obtain the DCT coefficient tensor.
  • In a possible implementation, performing feature extraction on the DCT coefficient tensor to obtain the prior features and the context features includes: performing feature extraction on the DCT coefficient tensor through a prior network to obtain the prior features; and performing feature extraction on the DCT coefficient tensor through an autoregressive network to obtain the context features.
  • In a possible implementation, the DCT coefficient tensor has n channels, n being a positive integer, and the autoregressive network includes a spatial autoregressive network and a channel autoregressive network. Performing feature extraction on the DCT coefficient tensor through the autoregressive network to obtain the context features includes: splitting the DCT coefficient tensor in the channel dimension into I coefficient tensors each with n/I channels, I ∈ [1, n]; performing autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor through the spatial autoregressive network to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor, the i-th spatial context feature representing the local correlation between the coefficients in the i-th coefficient tensor, i ∈ [1, I]; and performing autoregressive prediction in the channel dimension on the j-th coefficient tensor through the channel autoregressive network according to the first to (j-1)-th coefficient tensors, to obtain the j-th channel context feature corresponding to the j-th coefficient tensor, j ∈ [2, I].
  • In a possible implementation, the context features include I spatial context features and I-1 channel context features, I ∈ [1, n], n being a positive integer, and the parameter determination module includes: a feature splicing submodule configured to channel-splice the prior features, the I spatial context features, and the I-1 channel context features to obtain I splicing features; and a parameter determination submodule configured to determine the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • In a possible implementation, channel-splicing the prior features, the I spatial context features, and the I-1 channel context features to obtain the I splicing features includes: channel-splicing the prior features and the first spatial context feature to obtain the first splicing feature; and channel-splicing the prior features, the j-th spatial context feature, and the j-th channel context feature to obtain the j-th splicing feature, j ∈ [2, I].
  • In a possible implementation, determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features includes: determining, through an entropy parameter analysis network, the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • In a possible implementation, the encoding module includes: a probability determination submodule configured to determine the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and the specified probability distribution function; and an encoding submodule configured to perform entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient, to obtain the compressed data corresponding to the DCT coefficient data.
  • In a possible implementation, performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient to obtain the compressed data corresponding to the DCT coefficient data includes: performing entropy encoding on each coefficient in the i-th coefficient tensor of the I coefficient tensors according to the occurrence probability of each coefficient in the DCT coefficient data, to obtain the i-th sub-compressed data corresponding to the i-th coefficient tensor; where the compressed data includes I sub-compressed data, the I coefficient tensors are obtained by splitting the DCT coefficient tensor in the channel dimension, and the DCT coefficient tensor is obtained by reorganizing the multiple DCT coefficient matrices in the DCT coefficient data, I ∈ [1, n], i ∈ [1, I], n being the number of channels of the DCT coefficient tensor.
  • In a possible implementation, the device further includes: a decoding module configured to perform entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, where the occurrence probability of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and a specified probability distribution function.
  • In a possible implementation, the compressed data includes I sub-compressed data, and the decoding module includes: a decoding submodule configured to perform entropy decoding on the i-th sub-compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th coefficient tensor; and a reverse reorganization submodule configured to reversely reorganize the DCT coefficient tensor composed of the I coefficient tensors to obtain multiple DCT coefficient matrices, the DCT coefficient data including the multiple DCT coefficient matrices.
  • a computer program including computer-readable code, where, when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above method.
  • an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to call the instructions stored in the memory to execute the above-mentioned method.
  • a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above method is implemented.
  • In the embodiments of the present disclosure, the prior features characterizing the global correlation and the context features characterizing the local correlation can be used to obtain more accurate probability distribution parameters. According to the Shannon source coding principle, the more accurate the probability estimate of the data to be coded, the better the achievable lossless compression rate. Therefore, entropy coding the DCT coefficient data based on more accurate probability distribution parameters yields compressed data with a better lossless compression rate, that is, a smaller compression result.
  • FIG. 1 shows a flowchart of a data processing method according to an embodiment of the present disclosure.
  • Fig. 2 shows a schematic diagram of DCT coefficient data according to an embodiment of the present disclosure.
  • Fig. 3 shows a schematic diagram of a DCT coefficient tensor according to an embodiment of the disclosure.
  • Fig. 4 shows a schematic diagram of a data processing method according to an embodiment of the present disclosure.
  • Fig. 5 shows a block diagram of a data processing device according to an embodiment of the present disclosure.
  • FIG. 6 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • FIG. 7 shows a block diagram of another electronic device 1900 according to an embodiment of the present disclosure.
  • Fig. 1 shows a flow chart of a data processing method according to an embodiment of the present disclosure
  • the data processing method may be executed by electronic devices such as a terminal device or a server
  • the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user Terminal, terminal, cellular phone, cordless phone, personal digital assistant (Personal Digital Assistant, PDA), handheld device, computing device, vehicle-mounted device, wearable device, etc.
  • the method may be implemented by a processor calling computer-readable instructions stored in a memory, or the method may be executed by a server.
  • the data processing method includes:
  • In step S11, discrete cosine transform (DCT) coefficient data corresponding to the image data is acquired.
  • the image data may refer to an original image, or may also be JPEG data.
  • the original image is an image directly collected by an image acquisition device such as a camera or a video camera;
  • the JPEG data may refer to data obtained by encoding the original image according to the JPEG standard.
  • JPEG: Joint Photographic Experts Group
  • DCT: discrete cosine transform
  • When the image data is an original image, a discrete cosine transform can be performed on the image data according to the JPEG standard to obtain multiple DCT coefficient matrices, the DCT coefficient data including the multiple DCT coefficient matrices;
  • when the image data is JPEG data, the JPEG data may be decoded according to the above-mentioned JPEG standard to directly extract the DCT coefficient data from the JPEG data. It should be understood that the embodiments of the present disclosure do not limit the source of the DCT coefficient data.
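  • As a non-authoritative illustration of obtaining DCT coefficient matrices from an original image, the sketch below applies a JPEG-style 8×8 block 2-D DCT with SciPy; the level shift by 128 and the block size follow the JPEG convention, and quantization is omitted.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(image: np.ndarray, block: int = 8) -> list:
    """Split a grayscale image into 8x8 blocks and return one DCT
    coefficient matrix per block (JPEG-style type-II DCT)."""
    h, w = image.shape
    shifted = image.astype(np.float64) - 128.0   # JPEG level shift
    matrices = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            matrices.append(dctn(shifted[y:y + block, x:x + block], norm="ortho"))
    return matrices
```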
  • In step S12, feature extraction is performed on the DCT coefficient data to obtain the prior features and the context features.
  • The prior features are used to represent the global correlation of each coefficient in the DCT coefficient data, and the context features are used to represent the local correlation of each coefficient in the DCT coefficient data.
  • The local correlation can be understood as the linear or nonlinear relationship between a current coefficient and its adjacent coefficients within a local receptive field, and the global correlation can be understood as the linear or nonlinear relationship between a current coefficient and its adjacent coefficients within a global receptive field. The adjacent coefficients may include the coefficients arranged in order around the current coefficient within the local receptive field or the global receptive field; the range of the global receptive field is larger than that of the local receptive field, or, equivalently, the number of adjacent coefficients within the global receptive field is greater than the number of adjacent coefficients within the local receptive field.
  • the coefficients in the embodiments of the present disclosure are also DCT coefficients.
  • a priori network and an autoregressive network may be used to perform feature extraction on the DCT coefficient data, respectively, to obtain a priori features and context features.
  • In a possible implementation, the prior network may include a prior analysis sub-network and a prior synthesis sub-network. For example, the prior analysis sub-network may include m convolutional layers, each of the first m-1 convolutional layers followed by an activation function layer; it is used to extract the depth features of the DCT coefficient data, that is, to downsample the DCT coefficient data layer by layer, m being a positive integer (for example, m is 3). The prior synthesis sub-network may likewise include m convolutional layers, each of the first m-1 convolutional layers followed by an activation function layer; it is used to upsample the depth features extracted by the prior analysis sub-network layer by layer to obtain the prior features. It should be understood that the embodiments of the present disclosure do not limit the number, size, or stride of the convolution kernels in each convolutional layer, nor the type of activation function used by the activation function layers.
  • In a possible implementation, each feature value in the depth features can be modeled through an existing probability model (such as a parametric probability model or a non-parametric probability model); that is, the probability of each feature value in the depth features is described by the probability model, so that the computed depth features can be stored.
  • Since the depth features output by the prior analysis sub-network are floating-point numbers, the depth features can first be discretized; that is, the depth features output by the prior analysis sub-network can be quantized, the quantized depth features can be modeled through the above probability model so as to store them, and the quantized depth features can be input into the prior synthesis sub-network to obtain the prior features. Quantizing the depth features output by the prior analysis sub-network may include: rounding the depth features, for example using the quantization function round() to round the feature values in the depth features; alternatively, uniformly distributed random noise may be added to the feature values in the depth features, the value range of the random noise being, for example, [-0.5, 0.5].
  • It should be noted that the embodiments of the present disclosure do not limit the quantization method used.
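  • The two quantization options mentioned above can be sketched as follows; using additive uniform noise only during training, as a differentiable surrogate for rounding, is an added assumption rather than a limitation of the disclosure.

```python
import torch

def quantize(depth_features: torch.Tensor, training: bool) -> torch.Tensor:
    """Discretize the floating-point depth features output by the prior
    analysis sub-network."""
    if training:
        # differentiable surrogate: add uniform noise in [-0.5, 0.5]
        return depth_features + torch.empty_like(depth_features).uniform_(-0.5, 0.5)
    # otherwise: hard rounding, i.e. the quantization function round()
    return torch.round(depth_features)
```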
  • In a possible implementation, the autoregressive network can be understood as a convolutional neural network combined with an autoregressive prediction algorithm, such as a masked convolutional network, which can be used to learn the context information between input data, that is, to extract the context features between the multiple coefficients in the DCT coefficient data.
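  • As one common realization of such a masked convolutional network (an illustrative sketch in the PixelCNN style, not necessarily the exact network used here), the kernel weights at and after the current position are zeroed so that each output depends only on previously decoded coefficients:

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Masked convolution: each output position only sees the coefficients
    before it in raster-scan order (PixelCNN mask type 'A')."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 5):
        super().__init__(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        mask = torch.ones_like(self.weight)
        k = kernel_size
        mask[:, :, k // 2, k // 2:] = 0   # the current position and to its right
        mask[:, :, k // 2 + 1:, :] = 0    # all rows below the current one
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.weight.data *= self.mask     # re-apply the mask before every use
        return super().forward(x)
```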
  • In step S13, the probability distribution parameters corresponding to the DCT coefficient data are determined according to the prior features and the context features.
  • each coefficient in the DCT coefficient data obeys a specified probability distribution, for example, a Gaussian distribution, a Laplace distribution, a Gaussian mixture distribution, etc.
  • That is, each coefficient in the DCT coefficient data obeys a probability distribution with mean (also known as expectation) μ and variance σ², where σ is the standard deviation; the mean and the standard deviation are the probability distribution parameters.
  • After the probability distribution parameters corresponding to each coefficient are calculated, the occurrence probability of each coefficient can be calculated in combination with the probability distribution function corresponding to the specified probability distribution.
  • In a possible implementation, determining the probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features may include: channel-splicing the prior features and the context features to obtain splicing features, and inputting the splicing features into an entropy parameter analysis network, which outputs the probability distribution parameters corresponding to the DCT coefficient data, that is, the probability distribution parameters corresponding to each coefficient in the DCT coefficient data.
  • The entropy parameter analysis network can adopt, for example, a convolutional neural network with 3 convolutional layers, a kernel size of 1×1, and a stride of 1; its output can be, for example, a tensor with 2×T channels, where the tensor formed by half of the channels indicates the mean corresponding to each coefficient in the multiple DCT coefficient matrices, and the other half of the channels indicates the standard deviation corresponding to each coefficient in the multiple DCT coefficient matrices.
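  • Under the description above, the entropy parameter analysis network can be sketched as follows; the Leaky ReLU activations and the softplus used to keep the standard deviation positive are assumptions beyond the text, which only fixes three 1×1 convolutional layers with stride 1 and a 2×T-channel output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntropyParameterNet(nn.Module):
    """Three 1x1 convolutions with stride 1; the 2*T output channels are
    split into per-coefficient means and standard deviations."""
    def __init__(self, in_ch: int, t: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 2 * t, 1), nn.LeakyReLU(),   # activation: assumption
            nn.Conv2d(2 * t, 2 * t, 1), nn.LeakyReLU(),
            nn.Conv2d(2 * t, 2 * t, 1),
        )

    def forward(self, spliced: torch.Tensor):
        mean, std = self.net(spliced).chunk(2, dim=1)     # split the 2*T channels
        return mean, F.softplus(std)                      # keep std > 0: assumption
```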
  • In a possible implementation, a rate-distortion loss composed of a distortion term D and a code rate R weighted by a constant parameter λ, together with image quality evaluation indicators such as the SSIM (Structural Similarity) index and the PSNR (Peak Signal-to-Noise Ratio) index, may be used to train the entropy parameter analysis network. Since the DCT coefficient data is losslessly compressed, the distortion term D is 0, and R may include the coding rate corresponding to the DCT coefficient data and the coding rate corresponding to the prior features.
  • the information entropy of the DCT coefficient data may be used to approximate the coding rate corresponding to the DCT coefficient data
  • the information entropy of the prior feature may be used to approximate the coding rate corresponding to the prior feature.
  • The information entropy of the DCT coefficient data can be obtained by entropy coding the DCT coefficient data according to the probability distribution parameters output by the entropy parameter analysis network, and the information entropy of the prior features can be obtained by entropy coding the prior features according to the probability distribution parameters output by the entropy parameter analysis network.
  • It should be noted that the above entropy parameter analysis network is one implementation provided by the embodiments of the present disclosure; the embodiments of the present disclosure do not limit the network structure, network type, or training method of the entropy parameter analysis network.
  • In step S14, entropy encoding is performed on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, and the compressed data is used as the compression result of the image data.
  • As mentioned above, each coefficient in the DCT coefficient data obeys a specified probability distribution, for example, a Gaussian distribution, a Laplace distribution, a Gaussian mixture distribution, etc.
  • Taking the Gaussian distribution as an example, the probability P(x) of each DCT coefficient in the DCT coefficient data can be determined with the Gaussian distribution function F(x) shown in formula (1):

  F(x) = (1 / (σ·√(2π))) · exp(−(x − μ)² / (2σ²))    (1)

  where x represents any DCT coefficient, exp represents the exponential function with the natural constant e as its base, μ represents the mean (also known as the expectation), and σ represents the standard deviation.
  • any entropy coding method, such as ANS (asymmetric numeral systems) coding or arithmetic coding, can be used to entropy encode the DCT coefficient data to obtain the compressed data corresponding to the DCT coefficient data.
  • Taking arithmetic coding as an example, the initial coding interval [0,1) is continuously divided into multiple sub-intervals, each sub-interval representing a DCT coefficient; the size of each sub-interval is proportional to the probability P(x) of the corresponding DCT coefficient, so the greater the probability, the larger the sub-interval, and the sub-intervals together exactly cover [0,1). The encoding starts from the initial coding interval [0,1) and encodes one DCT coefficient at a time: the sub-interval where the current DCT coefficient falls is taken, according to the probability proportions, as the coding interval of the next DCT coefficient. For example, if the sub-interval of the first DCT coefficient x₁ falls on 0 to 0.6, the coding interval is reduced to [0, 0.6); if the sub-interval of the second DCT coefficient x₂ falls on 0.48 to 0.54 of the coding interval [0, 0.6), the coding interval is reduced to [0.48, 0.54); the sub-interval of the third DCT coefficient x₃ falls on 0.534 to 0.54 of the coding interval [0.48, 0.54), and so on. Finally, any decimal within the sub-interval corresponding to the last DCT coefficient is output in binary form to obtain the encoded data.
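  • The interval-narrowing walk-through above can be mirrored in a toy sketch (illustrative only; practical arithmetic coders work with renormalized integer intervals rather than floating-point numbers):

```python
def narrow_interval(low, high, cum_lo, cum_hi):
    """Shrink [low, high) to a symbol's sub-interval, given the symbol's
    cumulative probability bounds cum_lo, cum_hi within [0, 1)."""
    span = high - low
    return low + span * cum_lo, low + span * cum_hi

# Reproduces the walk-through in the text:
low, high = 0.0, 1.0
low, high = narrow_interval(low, high, 0.0, 0.6)  # x1 -> [0.0, 0.6)
low, high = narrow_interval(low, high, 0.8, 0.9)  # x2 -> [0.48, 0.54)
low, high = narrow_interval(low, high, 0.9, 1.0)  # x3 -> [0.534, 0.54)
# finally, any decimal inside [low, high) is emitted in binary form
```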
  • In a possible implementation, the compressed data can be entropy decoded according to the probabilities obtained from the probability distribution parameters to obtain the DCT coefficient data, and then an inverse discrete cosine transform can be performed on the entropy-decoded DCT coefficient data to obtain the original image; alternatively, the DCT coefficient data can be encoded according to the above-mentioned JPEG standard to obtain JPEG data.
  • the method further includes: performing entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, wherein , the probability of occurrence of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and the specified probability distribution function.
  • the probability of occurrence of each coefficient in the DCT coefficient data can be used to effectively realize entropy decoding of the compressed data, and obtain the DCT coefficient data before encoding.
  • In the embodiments of the present disclosure, the prior features characterizing the global correlation and the context features characterizing the local correlation can be used to obtain more accurate probability distribution parameters. According to the Shannon source coding principle, the more accurate the probability estimate of the data to be coded, the better the achievable lossless compression rate. Therefore, entropy coding the DCT coefficient data based on more accurate probability distribution parameters yields compressed data with a better lossless compression rate, that is, a smaller compression result.
  • the DCT coefficient data includes multiple DCT coefficient matrices.
  • In a possible implementation, the DCT coefficient data can be preprocessed first, and feature extraction can then be performed on the preprocessed DCT coefficient data.
  • feature extraction is performed on the DCT coefficient data to obtain prior features and context features, including:
  • Step S121: According to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices, reorganize the multiple DCT coefficient matrices to obtain a DCT coefficient tensor.
  • the discrete cosine transform is performed on the image data, that is, the image data is converted from the spatial domain to the frequency domain, and each DCT coefficient corresponds to a frequency.
  • In a possible implementation, reorganizing the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient to obtain the DCT coefficient tensor may include: splicing the coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain multiple splicing matrices; and concatenating the multiple splicing matrices in the channel dimension according to the specified order to obtain the DCT coefficient tensor.
  • In this manner, the reorganized DCT coefficient tensor can have certain structural redundant information in the spatial dimension and the channel dimension. The redundant information can be understood as follows: among the multiple same-frequency coefficients of the DCT coefficient tensor in the spatial dimension there are coefficients with high similarity, and/or among the multiple channels with different frequencies in the channel dimension there are channels with high similarity, so that the redundant information can be used to generate more accurate probability distribution parameters.
  • the spatial dimension can be understood as the length and width dimensions.
  • For example, splicing 9 DCT coefficients in the spatial dimension yields a 3×3 splicing matrix; splicing in the channel dimension can be understood as combining two-dimensional matrices into a three-dimensional tensor, for example, five 3×3 splicing matrices can be concatenated in the channel dimension to obtain a 3×3×5 DCT coefficient tensor.
  • The DCT coefficients in each DCT coefficient matrix are arranged in zigzag order from low frequency to high frequency, so the frequencies of the DCT coefficients at the same position in the multiple DCT coefficient matrices can be considered the same. Splicing the coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain the multiple splicing matrices may therefore include: splicing the coefficients at the same position in the multiple DCT coefficient matrices in the spatial dimension to obtain the multiple splicing matrices.
  • The specified order may be the order of the frequencies corresponding to the splicing matrices, that is, the above-mentioned zigzag order; of course, the order in which the DCT coefficients are arranged in the DCT coefficient matrix from left to right and from top to bottom may also be used, which is not limited in this embodiment of the present disclosure.
  • Fig. 2 shows a schematic diagram of DCT coefficient data according to an embodiment of the present disclosure
  • Fig. 3 shows a schematic diagram of a DCT coefficient tensor according to an embodiment of the present disclosure.
  • For example, the DCT coefficient data shown in Figure 2 includes four 8×8 DCT coefficient matrices; the coefficients with the same frequency in the four DCT coefficient matrices are spliced in the spatial dimension to obtain 64 splicing matrices of size 2×2, and the 64 splicing matrices are concatenated in the channel dimension according to the zigzag order to obtain a 2×2×64 DCT coefficient tensor, that is, the DCT coefficient tensor has 64 channels.
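  • The reorganization of Figures 2 and 3 (four 8×8 matrices, 64 splicing matrices of 2×2, a 2×2×64 tensor) can be sketched as follows; for simplicity the 64 channels are ordered by matrix position rather than by the zigzag frequency order, which is a simplification of the specified order:

```python
import numpy as np

def reorganize(matrices: list) -> np.ndarray:
    """Stack same-position (same-frequency) coefficients of four 8x8 DCT
    matrices into 64 splicing matrices of 2x2, then concatenate them along
    the channel axis, yielding a 2x2x64 tensor."""
    b = int(len(matrices) ** 0.5)                  # blocks per side, here 2
    blocks = np.stack(matrices).reshape(b, b, 8, 8)
    # (grid_y, grid_x, freq_y, freq_x) -> one b x b splicing matrix per frequency
    return blocks.transpose(2, 3, 0, 1).reshape(64, b, b).transpose(1, 2, 0)

def reverse_reorganize(tensor: np.ndarray) -> list:
    """Inverse reorganization, recovering the DCT coefficient matrices."""
    b = tensor.shape[0]
    blocks = tensor.transpose(2, 0, 1).reshape(8, 8, b, b).transpose(2, 3, 0, 1)
    return [blocks[y, x] for y in range(b) for x in range(b)]
```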
  • It should be noted that the above method of reorganizing multiple DCT coefficient matrices is one implementation provided by the embodiments of the present disclosure; those skilled in the art can set the reorganization method of the multiple DCT coefficient matrices according to actual needs, and the embodiments of the present disclosure are not limited thereto.
  • For example, the entire frequency distribution interval corresponding to the DCT coefficient data may be divided into multiple frequency intervals, and the DCT coefficients within the same frequency interval may be spliced in the spatial dimension, and so on.
  • Step S122: Perform feature extraction on the DCT coefficient tensor to obtain the prior features and the context features.
  • In a possible implementation, performing feature extraction on the DCT coefficient tensor to obtain the prior features and the context features may include: performing feature extraction on the DCT coefficient tensor through a prior network to obtain the prior features; and performing feature extraction on the DCT coefficient tensor through an autoregressive network to obtain the context features. In this way, the prior features and the context features can be obtained effectively.
  • The prior network and the autoregressive network in the above-mentioned embodiments of the present disclosure can be used to extract the prior features and the context features respectively; the embodiments of the present disclosure do not limit the network structure, network type, or training method of the prior network and the autoregressive network.
  • For example, the prior analysis sub-network in the prior network can use 3 convolutional layers: the first convolutional layer can include 384 convolution kernels of size 3×3×64 with a convolution stride of 1 and a Leaky ReLU activation function; the second convolutional layer can include 384 convolution kernels of size 5×5×384 with a convolution stride of 2 and a Leaky ReLU activation function; and the third convolutional layer can include 192 convolution kernels of size 5×5×384 with a convolution stride of 2, so the output depth features have 192 channels. The prior synthesis sub-network in the prior network correspondingly uses 3 convolutional layers: the first convolutional layer can include 192 convolution kernels of size 5×5×192 with a convolution stride of 2 and a Leaky ReLU activation function, and the second convolutional layer can include 288 convolution kernels of size 5×5×192 with a convolution stride of 2 and a Leaky ReLU activation function.
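  • Under the layer dimensions just listed, the two sub-networks could be sketched as follows; the paddings, the transposed convolutions used for upsampling, and the 384-channel third synthesis layer are assumptions not fixed by the text:

```python
import torch.nn as nn

# Prior analysis sub-network: downsamples the 64-channel DCT coefficient
# tensor to 192-channel depth features (kernel counts and strides per the text).
prior_analysis = nn.Sequential(
    nn.Conv2d(64, 384, 3, stride=1, padding=1), nn.LeakyReLU(),
    nn.Conv2d(384, 384, 5, stride=2, padding=2), nn.LeakyReLU(),
    nn.Conv2d(384, 192, 5, stride=2, padding=2),
)

# Prior synthesis sub-network: upsamples the quantized depth features layer
# by layer; transposed convolutions and the third layer are assumptions.
prior_synthesis = nn.Sequential(
    nn.ConvTranspose2d(192, 192, 5, stride=2, padding=2, output_padding=1), nn.LeakyReLU(),
    nn.ConvTranspose2d(192, 288, 5, stride=2, padding=2, output_padding=1), nn.LeakyReLU(),
    nn.ConvTranspose2d(288, 384, 3, stride=1, padding=1),
)
```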
  • the preprocessed DCT coefficient tensor can be used to efficiently obtain prior features and context features, so that more accurate probability distribution parameters can be obtained later.
  • As described above, the DCT coefficient tensor has multiple channels, and the reorganized DCT coefficient tensor has certain structural redundant information in the spatial dimension and the channel dimension; therefore, autoregressive prediction can be performed on the DCT coefficient tensor in the channel dimension and the spatial dimension respectively, resulting in more informative context features.
  • In a possible implementation, the DCT coefficient tensor has n channels, n being a positive integer, and the autoregressive network includes a spatial autoregressive network and a channel autoregressive network. Feature extraction is performed on the DCT coefficient tensor through the autoregressive network as follows: the DCT coefficient tensor is split in the channel dimension into I coefficient tensors each with n/I channels; each coefficient in the i-th coefficient tensor is auto-regressively predicted in the spatial dimension through the spatial autoregressive network to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor, the i-th spatial context feature representing the local correlation between the coefficients in the i-th coefficient tensor; and the j-th coefficient tensor is auto-regressively predicted in the channel dimension through the channel autoregressive network according to the first to (j-1)-th coefficient tensors to obtain the j-th channel context feature, the j-th channel context feature representing the local correlation between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor, j ∈ [2, I]. The context features thus include I spatial context features and I-1 channel context features.
  • In a possible implementation, the number of channels n of the DCT coefficient tensor is consistent with the number of DCT coefficients in each DCT coefficient matrix; for example, for an 8×8 DCT coefficient matrix, that is, a DCT coefficient matrix including 8×8 DCT coefficients, the DCT coefficient tensor has 64 channels.
  • The value of I can be customized; for example, it can be set to 8, in which case the DCT coefficient tensor is split into 8 coefficient tensors each with 8 channels. This embodiment of the present disclosure does not limit the value of I.
  • The i-th spatial context feature represents the local correlation between the coefficients in the i-th coefficient tensor; it can be understood that the i-th spatial context feature represents the linear or nonlinear relationship between a current coefficient in the i-th coefficient tensor and its adjacent coefficients within the local receptive field. The adjacent coefficients may include the coefficients of the i-th coefficient tensor arranged in order before the current coefficient within the local receptive field, and may also include the coefficients of the i-th coefficient tensor arranged in order around the current coefficient within the local receptive field.
  • The j-th channel context feature represents the local correlation between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor; it can be understood that the j-th channel context feature represents the linear or nonlinear relationship between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor.
  • Autoregressive prediction can be understood as using one or more independent variables to predict the value of a dependent variable, or as analyzing the correlation between a dependent variable and one or more independent variables. Therefore, following the arrangement order of the channel dimension, autoregressive prediction in the channel dimension is performed on the j-th coefficient tensor according to the first to (j-1)-th coefficient tensors to obtain the j-th channel context feature corresponding to the j-th coefficient tensor, yielding I-1 channel context features in total.
  • In a possible implementation, the channel autoregressive network may include I-1 sub-channel autoregressive networks, the (j-1)-th sub-channel autoregressive network being used to perform autoregressive prediction in the channel dimension on the j-th coefficient tensor according to the first to (j-1)-th coefficient tensors, to obtain the j-th channel context feature corresponding to the j-th coefficient tensor.
  • Each sub-channel autoregressive network can use multiple convolutional layers, where the convolution kernels in the first convolutional layer of the (j-1)-th sub-channel autoregressive network have size a×a×[(n/I)×(j-1)], a being a positive integer (for example, 3). For example, suppose n is 64 and I is 32, that is, each coefficient tensor has 2 channels; to obtain the fourth channel context feature, the first to third coefficient tensors are input into the third sub-channel autoregressive network, so the depth of each convolution kernel in the first convolutional layer of the third sub-channel autoregressive network should be 6.
  • The number of convolution kernels in each convolutional layer of each sub-channel autoregressive network and the convolution stride are not limited in this embodiment of the present disclosure; for example, the last convolutional layer may include 128 convolution kernels, so that the channel context features output by each sub-channel autoregressive network have 128 channels.
  • In a possible implementation, the spatial autoregressive network may include I subspace autoregressive networks, the i-th subspace autoregressive network being used to perform autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor, to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor.
  • The i-th subspace autoregressive network can, for example, directly use 128 convolution kernels of size 5×5×(n/I) with a convolution stride of 1, so that the spatial context features output by each subspace autoregressive network have 128 channels.
  • It should be understood that the network structures of the channel autoregressive network and the spatial autoregressive network are related to the values of I and n; after setting the values of I and n, the user can adjust the network structures of the channel autoregressive network and the spatial autoregressive network accordingly. The above network structure is one implementation provided by the embodiments of the present disclosure; in fact, those skilled in the art can set the number of convolutional layers, the number of convolution kernels, and their sizes in the channel autoregressive network and the spatial autoregressive network according to actual needs, which is not limited by this embodiment of the present disclosure.
  • In this manner, the spatial autoregressive network is used to learn the spatial context information of each coefficient tensor in the spatial dimension, and the channel autoregressive network is used to learn the channel context information of each coefficient tensor in the channel dimension, that is, to learn the two kinds of local correlation described above. In other words, the spatial autoregressive network and the channel autoregressive network can be used to learn the redundant information of the DCT coefficient tensor in the spatial dimension and the channel dimension: autoregressive prediction is performed on the DCT coefficient tensor in the channel dimension and the spatial dimension respectively, resulting in more informative context features.
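  • A minimal sketch of the channel split and the two context branches, reusing the MaskedConv2d sketched earlier for the spatial branch; n = 64 and I = 8 are example values from the text, while the single 3×3 layer standing in for the "multiple convolutional layers" of each sub-channel network is a simplification:

```python
import torch
import torch.nn as nn

n, I = 64, 8                 # channels of the DCT coefficient tensor, groups
g = n // I                   # channels per coefficient tensor

# i-th subspace autoregressive network: masked 5x5 convolution with
# 128 output channels (MaskedConv2d as sketched earlier).
spatial_nets = nn.ModuleList([MaskedConv2d(g, 128, 5) for _ in range(I)])

# (j-1)-th sub-channel autoregressive network: first-layer kernel depth
# g*(j-1); a single 3x3 layer stands in for "multiple convolutional layers".
channel_nets = nn.ModuleList(
    [nn.Conv2d(g * j, 128, 3, padding=1) for j in range(1, I)]
)

def context_features(dct_tensor: torch.Tensor):
    groups = dct_tensor.chunk(I, dim=1)            # the I coefficient tensors
    spatial = [net(t) for net, t in zip(spatial_nets, groups)]
    # the first j tensors yield the (j+1)-th channel context feature
    channel = [net(torch.cat(groups[:j], dim=1))
               for j, net in zip(range(1, I), channel_nets)]
    return spatial, channel                        # I and I-1 features
```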
  • As described above, the context features include I spatial context features and I-1 channel context features, I ∈ [1, n], n being a positive integer. In a possible implementation, in step S13, determining the probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features includes:
  • Step S131: Channel-splice the prior features, the I spatial context features, and the I-1 channel context features to obtain I splicing features.
  • In a possible implementation, channel splicing is performed on the prior features, the I spatial context features, and the I-1 channel context features to obtain the I splicing features as follows: the prior features and the first spatial context feature are channel-spliced to obtain the first splicing feature; and the prior features, the j-th spatial context feature, and the j-th channel context feature are channel-spliced to obtain the j-th splicing feature, j ∈ [2, I].
  • In this manner, the prior features and the context features can be divided into multiple groups of splicing features, which helps to efficiently obtain the probability distribution model corresponding to each coefficient in each coefficient matrix and improves computational efficiency.
  • For example, if the prior features form a tensor with 128 channels and each spatial context feature and each channel context feature is also a tensor with 128 channels, then channel-splicing the prior features and the first spatial context feature yields a first splicing feature with 256 channels, and channel-splicing the prior features, the j-th spatial context feature, and the j-th channel context feature yields a j-th splicing feature with 384 channels.
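  • These channel counts correspond directly to concatenation along the channel dimension, for example:

```python
import torch

prior = torch.randn(1, 128, 2, 2)        # prior feature, 128 channels
spatial_1 = torch.randn(1, 128, 2, 2)    # 1st spatial context feature
spatial_j = torch.randn(1, 128, 2, 2)    # j-th spatial context feature
channel_j = torch.randn(1, 128, 2, 2)    # j-th channel context feature

splice_1 = torch.cat([prior, spatial_1], dim=1)             # 256 channels
splice_j = torch.cat([prior, spatial_j, channel_j], dim=1)  # 384 channels
assert splice_1.shape[1] == 256 and splice_j.shape[1] == 384
```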
  • Step S132: Determine the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • In a possible implementation, the probability distribution parameters corresponding to the DCT coefficient data can be determined according to the I splicing features through the above-mentioned entropy parameter analysis network.
  • In a possible implementation, the entropy parameter analysis network may include I sub-entropy parameter analysis networks, the i-th sub-entropy parameter analysis network being used to determine the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor according to the i-th splicing feature.
  • Determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features through the entropy parameter analysis network may include: inputting the i-th splicing feature into the i-th sub-entropy parameter analysis network to obtain the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor, where the probability distribution parameters include the means and standard deviations corresponding to the coefficients in the I coefficient tensors, and the I coefficient tensors are obtained by splitting the DCT coefficient tensor corresponding to the DCT coefficient data in the channel dimension.
  • For the process of splitting the DCT coefficient tensor corresponding to the DCT coefficient data in the channel dimension to obtain the I coefficient tensors, refer to the relevant description of the above-mentioned embodiments of the present disclosure; details are not repeated here.
  • The network structure of each sub-entropy parameter analysis network can refer to the above-mentioned entropy parameter analysis network; that is, each sub-entropy parameter analysis network can use, for example, a convolutional neural network with 3 convolutional layers, a kernel size of 1×1, and a stride of 1. The output of each sub-entropy parameter analysis network can be, for example, a tensor with 2×(n/I) channels, where half of the channels indicate the mean corresponding to each coefficient in the i-th coefficient tensor, and the other half of the channels indicate the standard deviation corresponding to each coefficient in the i-th coefficient tensor.
  • In a possible implementation, a rate-distortion loss composed of a distortion term D and a code rate R weighted by a constant parameter λ, together with indicators such as the SSIM (Structural Similarity) index and the PSNR (Peak Signal-to-Noise Ratio) index, may be used to train the I sub-entropy parameter analysis networks. Since the DCT coefficient data is losslessly compressed, the distortion term D is 0, and R can include the coding rate corresponding to each coefficient matrix and the coding rate corresponding to the prior features.
  • SSIM: Structural Similarity
  • PSNR: Peak Signal-to-Noise Ratio
  • The coding rate corresponding to each coefficient matrix may be approximated by the information entropy of each coefficient matrix, and the coding rate corresponding to the prior features may be approximated by the information entropy of the prior features.
  • The information entropy of the i-th coefficient tensor can be obtained by entropy coding the i-th coefficient tensor according to the probability distribution parameters output by the i-th sub-entropy parameter analysis network, and the information entropy of the prior features can be obtained by entropy coding the prior features according to the probability distribution parameters output by the entropy parameter analysis network.
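  • Since the distortion term D is 0 for lossless compression, the trainable part of the loss reduces to the rate term, which can be approximated by the negative log-likelihood of the coefficients under the predicted distribution. Below is a sketch under the Gaussian assumption; the half-integer discretization P(x) = F(x+0.5) − F(x−0.5) is a common convention, not quoted from the text:

```python
import torch

def rate_bits(coeffs: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    """Approximate coding rate (in bits) of integer-valued coefficients under
    a Gaussian model: P(x) = F(x + 0.5) - F(x - 0.5), R = -sum(log2 P(x))."""
    dist = torch.distributions.Normal(mean, std)
    p = dist.cdf(coeffs + 0.5) - dist.cdf(coeffs - 0.5)
    return -torch.log2(p.clamp_min(1e-9)).sum()
```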
  • As mentioned above, each coefficient in the DCT coefficient data obeys a specified probability distribution, for example, a Gaussian distribution, a Laplace distribution, a Gaussian mixture distribution, etc.
  • In a possible implementation, in step S14, entropy coding is performed on the DCT coefficient data according to the probability distribution parameters to obtain the compressed data corresponding to the DCT coefficient data as follows: the occurrence probability of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and the specified probability distribution function; and entropy encoding is performed on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient, to obtain the compressed data corresponding to the DCT coefficient data.
  • the probability distribution function may adopt a Gaussian distribution function, a Laplace distribution function, a mixed Gaussian distribution function, or the like, which is not limited in the embodiments of the present disclosure.
  • the DCT coefficient data can be reorganized and split to obtain I coefficient tensors, and the i-th splicing feature can be input into the i-th sub-entropy parameter analysis network to obtain the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor; in one possible implementation, determining the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and the specified probability distribution function may include: determining the occurrence probability of each coefficient in the i-th coefficient tensor according to the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor, together with the specified probability distribution function.
  • performing entropy coding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the compressed data corresponding to the DCT coefficient data may include: performing entropy encoding on each coefficient in the i-th coefficient tensor of the I coefficient tensors according to the occurrence probability of each coefficient in the DCT coefficient data, that is, according to the occurrence probability of each coefficient in the I coefficient tensors, to obtain the i-th sub-compressed data, wherein the compressed data corresponding to the DCT coefficient data includes I sub-compressed data.
  • the I coefficient tensors are obtained by splitting the DCT coefficient tensor along the channel dimension, and the DCT coefficient tensor is obtained by reorganizing the multiple DCT coefficient matrices in the DCT coefficient data, I∈[1,n], i∈[1,I], where n is the number of channels of the DCT coefficient tensor. It should be understood that, for specific implementations of determining the occurrence probability of each coefficient in the I coefficient tensors and performing entropy coding on each coefficient, reference may be made to the relevant description in the above-mentioned step S14, and details are not repeated here.
  • the occurrence probability of each coefficient in the I coefficient tensors may be recorded in the form of a probability table, so as to facilitate entropy encoding and entropy decoding of each coefficient in the DCT coefficient data; a sketch of building such a table follows.
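Purely as an illustration, the sketch below builds a normalized probability table for one coefficient position from its predicted mean and standard deviation. The symbol range [-255, 255], the discretized-Gaussian probabilities, and the probability floor are assumptions for the example; the disclosure does not fix a table format.

```python
import numpy as np
from scipy.stats import norm

def probability_table(mean: float, std: float, lo: int = -255, hi: int = 255):
    symbols = np.arange(lo, hi + 1)
    # discretized Gaussian: P(s) = CDF(s + 0.5) - CDF(s - 0.5)
    probs = norm.cdf(symbols + 0.5, mean, std) - norm.cdf(symbols - 0.5, mean, std)
    probs = np.maximum(probs, 1e-12)     # floor so every symbol stays codable
    return symbols, probs / probs.sum()  # normalized table for the entropy coder
```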
  • the i-th sub-compressed data can be entropy-decoded according to the occurrence probability of each coefficient in the i-th coefficient tensor to obtain the i-th coefficient tensor; the DCT coefficient tensor composed of the I coefficient tensors is then reversely reorganized to obtain multiple DCT coefficient matrices; then an inverse discrete cosine transform is performed on the multiple DCT coefficient matrices to obtain the original image, or the multiple DCT coefficient matrices are encoded according to the above JPEG standard to obtain JPEG data; that is, a data decompression process is performed that reverses the data compression process of the embodiments of the present disclosure.
  • the compressed data includes I sub-compressed data.
  • entropy decoding is performed on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, including: performing entropy decoding on the i-th sub-compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th coefficient tensor; and reversely reorganizing the DCT coefficient tensor composed of the I coefficient tensors to obtain multiple DCT coefficient matrices, the DCT coefficient data including the multiple DCT coefficient matrices. In this manner, the occurrence probability of each coefficient in the DCT coefficient data can be used to effectively realize entropy decoding of the compressed data and recover the DCT coefficient data before encoding.
  • the occurrence probability of each coefficient in the I coefficient tensors, that is, the occurrence probability of each coefficient in the DCT coefficient data, can be recorded through the probability table, so that the occurrence probability of each coefficient in the DCT coefficient data can be obtained directly during entropy decoding.
  • the process of entropy encoding is the reverse of the process of entropy decoding; likewise, the process of reversely reorganizing the DCT coefficient tensor composed of the I coefficient tensors to obtain multiple DCT coefficient matrices is the reverse of the process of reorganizing multiple DCT coefficient matrices into the DCT coefficient tensor in the above-mentioned embodiments of the present disclosure, that is, it may be a data decompression process that reverses the data compression process performed on the DCT coefficient data; a sketch of this inverse reorganization follows.
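As a concrete illustration, the sketch below unpacks a reorganized DCT coefficient tensor back into 8×8 DCT coefficient matrices, assuming the channels were stacked in zigzag order as described in this disclosure; the (h, w, 64) tensor layout is an assumption for the example.

```python
import numpy as np

# Standard JPEG zigzag order: ZIGZAG[k] is the raster index (row*8 + col) of
# the k-th lowest-frequency position in an 8x8 block.
ZIGZAG = np.array([
     0,  1,  8, 16,  9,  2,  3, 10, 17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34, 27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36, 29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46, 53, 60, 61, 54, 47, 55, 62, 63])

def inverse_reorganize(tensor: np.ndarray) -> np.ndarray:
    # tensor: (h, w, 64) DCT coefficient tensor, channel k = k-th zigzag frequency
    h, w, n = tensor.shape
    blocks = np.zeros((h * w, 8, 8), dtype=tensor.dtype)
    flat = tensor.reshape(h * w, n)
    for k in range(n):
        r, c = divmod(ZIGZAG[k], 8)  # channel k -> spatial frequency position (r, c)
        blocks[:, r, c] = flat[:, k]
    return blocks                    # one 8x8 DCT coefficient matrix per grid position
```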
  • entropy coding is performed on the DCT coefficient data by using more accurate probability distribution parameters, so that compressed data with a better lossless compression rate can be obtained, thereby saving storage resources and bandwidth resources.
  • Fig. 4 shows a schematic diagram of a data processing method according to an embodiment of the present disclosure. As shown in Fig. 4, the data processing method includes:
  • the DCT coefficient data is reorganized to obtain a DCT coefficient tensor, and the DCT coefficient tensor is split into I coefficient tensors in the channel dimension;
  • the DCT coefficient tensor is input into the prior analysis sub-network ha to obtain deep features; the deep features are quantized to obtain quantized deep features; the quantized deep features are input into the prior synthesis sub-network hs to obtain the prior feature;
  • the i-th coefficient tensor is input into the i-th spatial sub-autoregressive network to obtain the i-th spatial context feature;
  • the 1st to (j-1)-th coefficient tensors are input into the (j-1)-th channel sub-autoregressive network to obtain the j-th channel context feature;
  • the prior feature and the 1st spatial context feature are channel-spliced to obtain the 1st splicing feature; the prior feature, the j-th spatial context feature, and the j-th channel context feature are channel-spliced to obtain the j-th splicing feature;
  • the i-th splicing feature is input into the i-th sub-entropy parameter analysis network to obtain the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor, that is, the probability distribution parameters corresponding to each coefficient;
  • entropy encoding is performed on the i-th coefficient tensor according to the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor, to obtain the i-th sub-compressed data;
  • entropy decoding is performed on the i-th sub-compressed data to obtain the i-th coefficient tensor; the DCT coefficient tensor composed of the I coefficient tensors is reversely reorganized to obtain multiple DCT coefficient matrices, that is, the DCT coefficient data.
  • the data processing method according to the embodiments of the present disclosure can be applied to scenarios such as data centers, cloud storage, and JPEG data transcoding. In these scenarios, massive image data occupies a large amount of storage resources and bandwidth resources, which increases the cost of data storage and transmission. According to the data processing method of the embodiments of the present disclosure, the image data can be efficiently compressed while ensuring that the image data is lossless, thereby significantly reducing the occupation of storage resources and bandwidth resources.
  • the present disclosure also provides data processing devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any data processing method provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which are not repeated here.
  • Fig. 5 shows a block diagram of a data processing device according to an embodiment of the present disclosure. As shown in Fig. 5, the device includes:
  • an acquisition module 101 configured to acquire discrete cosine transform (DCT) coefficient data corresponding to the image data;
  • a feature extraction module 102 configured to perform feature extraction on the DCT coefficient data to obtain prior features and context features, where the prior features are used to characterize the global correlation of each coefficient in the DCT coefficient data, and the context features are used to characterize the local correlation of each coefficient in the DCT coefficient data;
  • a parameter determination module 103 configured to determine probability distribution parameters corresponding to the DCT coefficient data according to the priori features and the context features;
  • the coding module 104 is configured to perform entropy coding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, and the compressed data is used as a compression result of the image data.
  • the DCT coefficient data includes a plurality of DCT coefficient matrices, and the feature extraction module 102 includes: a recombination submodule configured to reorganize the plurality of DCT coefficient matrices according to the frequency corresponding to each coefficient in the plurality of DCT coefficient matrices to obtain a DCT coefficient tensor; and a feature extraction submodule configured to perform feature extraction on the DCT coefficient tensor to obtain prior features and context features.
  • reorganizing the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain a DCT coefficient tensor includes: splicing the coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain multiple spliced matrices; and splicing the multiple spliced matrices in the channel dimension according to a specified order to obtain the DCT coefficient tensor, as in the sketch below.
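The following is a minimal sketch of this forward reorganization under the assumption of 8×8 blocks stacked in zigzag order, reusing the ZIGZAG table from the inverse-reorganization sketch above; the block-grid shape arguments are illustrative.

```python
import numpy as np

def reorganize(blocks: np.ndarray, bh: int, bw: int, zigzag: np.ndarray) -> np.ndarray:
    # blocks: (bh*bw, 8, 8) DCT coefficient matrices from a bh x bw grid of blocks
    tensor = np.zeros((bh, bw, 64), dtype=blocks.dtype)
    for k in range(64):
        r, c = divmod(zigzag[k], 8)  # k-th zigzag frequency -> (row, col) in the block
        tensor[:, :, k] = blocks[:, r, c].reshape(bh, bw)
    return tensor  # e.g. four 8x8 matrices yield a 2x2x64 DCT coefficient tensor
```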
  • performing feature extraction on the DCT coefficient tensor to obtain prior features and context features includes: performing feature extraction on the DCT coefficient tensor through a prior network to obtain the prior features; and performing feature extraction on the DCT coefficient tensor through an autoregressive network to obtain the context features.
  • the DCT coefficient tensor has n channels, n being a positive integer, the autoregressive network includes a spatial autoregressive network and a channel autoregressive network, and performing feature extraction on the DCT coefficient tensor through the autoregressive network to obtain the context features includes: splitting the DCT coefficient tensor in the channel dimension into I coefficient tensors with n/I channels each, I∈[1,n]; performing autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor through the spatial autoregressive network to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor, the i-th spatial context feature representing the local correlation between the coefficients in the i-th coefficient tensor, i∈[1,I]; and performing autoregressive prediction in the channel dimension on the j-th coefficient tensor through the channel autoregressive network according to the 1st to (j-1)-th coefficient tensors to obtain the j-th channel context feature corresponding to the j-th coefficient tensor, the j-th channel context feature representing the local correlation between the 1st to (j-1)-th coefficient tensors and the j-th coefficient tensor, j∈[2,I]; wherein the context features include I spatial context features and I-1 channel context features. A sketch of a masked convolution usable for the spatial prediction follows.
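For illustration, the sketch below shows a PixelCNN-style masked convolution of the kind a spatial autoregressive network can be built from, so that each position only sees coefficients that would already have been decoded; using this particular masking scheme here is an assumption, since the disclosure only requires an autoregressive (e.g., masked-convolution) network.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """2D convolution whose kernel only sees positions above/left of the center."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.ones_like(self.weight)
        mask[:, :, kh // 2, kw // 2:] = 0  # zero out the center and everything right of it
        mask[:, :, kh // 2 + 1:, :] = 0    # zero out all rows below the center
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.weight.data.mul_(self.mask)   # prediction depends only on prior coefficients
        return super().forward(x)

# Usage sketch: a 5x5 masked convolution over an (n/I)-channel coefficient tensor,
# producing a 128-channel spatial context feature (channel counts are illustrative).
# spatial_ctx = MaskedConv2d(n // I, 128, kernel_size=5, stride=1, padding=2)(coeff_tensor)
```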
  • the context features include I spatial context features and I-1 channel context features, I∈[1,n], n being a positive integer, wherein the parameter determination module 103 includes: a feature splicing submodule configured to channel-splice the prior features, the I spatial context features, and the I-1 channel context features to obtain I splicing features; and a parameter determination submodule configured to determine the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • the channel splicing of the prior features, the I spatial context features, and the I-1 channel context features to obtain I splicing features includes: performing channel splicing on the prior features and the 1st spatial context feature to obtain the 1st splicing feature; and performing channel splicing on the prior features, the j-th spatial context feature, and the j-th channel context feature to obtain the j-th splicing feature, j∈[2,I], as in the sketch below.
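A minimal sketch of this splicing rule, assuming each feature is a (batch, channels, height, width) PyTorch tensor; the function name and argument layout are illustrative.

```python
import torch

def splice_features(prior, spatial_ctx, channel_ctx):
    # prior: the prior feature; spatial_ctx: I spatial context features;
    # channel_ctx: I-1 channel context features (used for j = 2..I).
    splices = [torch.cat([prior, spatial_ctx[0]], dim=1)]  # 1st splicing feature
    for j in range(1, len(spatial_ctx)):
        splices.append(torch.cat([prior, spatial_ctx[j], channel_ctx[j - 1]], dim=1))
    return splices                                         # I splicing features in total
```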
  • determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features includes: determining, through an entropy parameter analysis network, the probability distribution parameters corresponding to the DCT coefficients according to the I splicing features; wherein the entropy parameter analysis network includes I sub-entropy parameter analysis networks, and this determining includes: inputting the i-th splicing feature into the i-th sub-entropy parameter analysis network to obtain the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor, where the probability distribution parameters include the mean and standard deviation corresponding to each coefficient in the I coefficient tensors, and the I coefficient tensors are obtained by splitting the DCT coefficient tensor corresponding to the DCT coefficient data along the channel dimension.
  • the encoding module 104 includes: a probability determination submodule configured to determine the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and a specified probability distribution function;
  • the encoding submodule is configured to perform entropy encoding on each coefficient in the DCT coefficient data according to the probability of occurrence of each coefficient in the DCT coefficient data, to obtain compressed data corresponding to the DCT coefficient data.
  • performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain compressed data corresponding to the DCT coefficient data includes: performing entropy encoding on each coefficient in the i-th coefficient tensor of the I coefficient tensors according to the occurrence probability of each coefficient in the DCT coefficient data, to obtain the i-th sub-compressed data corresponding to the i-th coefficient tensor; wherein the compressed data includes I sub-compressed data, the I coefficient tensors are obtained by splitting the DCT coefficient tensor in the channel dimension, the DCT coefficient tensor is obtained by reorganizing multiple DCT coefficient matrices in the DCT coefficient data, I∈[1,n], i∈[1,I], and n is the number of channels of the DCT coefficient tensor.
  • the device further includes: a decoding module configured to perform entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, wherein the occurrence probability of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and a specified probability distribution function.
  • the compressed data includes I sub-compressed data, and the decoding module includes: a decoding submodule configured to perform entropy decoding on the i-th sub-compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th coefficient tensor; and a reverse reorganization submodule configured to reversely reorganize the DCT coefficient tensor composed of the I coefficient tensors to obtain a plurality of DCT coefficient matrices, the DCT coefficient data including the plurality of DCT coefficient matrices.
  • by extracting the prior features and context features of the DCT coefficient data corresponding to the image data, the prior features that characterize the global correlation and the context features that characterize the local correlation can be used to obtain more accurate probability distribution parameters; based on the Shannon source coding principle, the more accurate the probability estimate of the data to be coded, the higher the lossless compression rate that can be achieved. Therefore, entropy coding the DCT coefficient data based on more accurate probability distribution parameters yields compressed data with a better lossless compression rate, that is, a smaller compressed result; the short numerical example below illustrates this principle.
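As a self-contained numerical illustration of this Shannon-coding point (the probabilities below are made up for the example, not taken from this disclosure): coding with the true symbol distribution p costs the entropy H(p) bits per symbol on average, while coding with a less accurate model q costs the cross-entropy H(p, q) ≥ H(p).

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])   # true symbol probabilities (illustrative)
q = np.array([0.5, 0.3, 0.2])   # a less accurate probability estimate

H_p  = -(p * np.log2(p)).sum()  # ~1.157 bits/symbol with the true model
H_pq = -(p * np.log2(q)).sum()  # ~1.280 bits/symbol with the worse model
print(H_p, H_pq)                # more accurate probabilities -> shorter codes
```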
  • the functions or modules included in the device provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for specific implementations, refer to the description of the method embodiments above, which, for brevity, is not repeated here.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and the above-mentioned method is implemented when the computer program instructions are executed by a processor.
  • Computer readable storage media may be volatile or nonvolatile computer readable storage media.
  • An embodiment of the present disclosure also proposes an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • An embodiment of the present disclosure also provides a computer program product, including computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes; when the computer-readable codes run in a processor of an electronic device, the processor in the electronic device executes the above method.
  • An embodiment of the present disclosure also provides a computer program, including computer readable codes, and when the computer readable codes are run in an electronic device, a processor in the electronic device executes the above method.
  • Electronic devices may be provided as terminals, servers, or other forms of devices.
  • FIG. 6 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and a communication component 816 .
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
  • the power supply component 806 provides power to various components of the electronic device 800 .
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800 .
  • the multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or swipe action, but also detect duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in operation modes, such as call mode, recording mode and voice recognition mode. Received audio signals may be further stored in memory 804 or sent via communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800 .
  • the sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800; the sensor component 814 can also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include an optical sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access wireless networks based on communication standards, such as wireless fidelity (Wi-Fi), second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), fourth-generation mobile communication technology (4G), long-term evolution (LTE) of universal mobile communication technology, fifth-generation mobile communication technology (5G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the methods described above.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above method.
  • FIG. 7 shows a block diagram of another electronic device 1900 according to an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922 , which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922 , such as application programs.
  • the application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical user interface-based operating system introduced by Apple Inc. (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
  • a non-transitory computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above method.
  • the present disclosure can be a system, method and/or computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A more specific (non-exhaustive) list of computer-readable storage media includes: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in grooves having instructions recorded thereon, and any suitable combination of the foregoing.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, via the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing devices, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions that includes one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.
  • the computer program product can be specifically realized by means of hardware, software or a combination thereof.
  • In one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK) or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Discrete Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present disclosure relates to a data processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring discrete cosine transform (DCT) coefficient data corresponding to image data; performing feature extraction on the DCT coefficient data to obtain prior features and context features, the prior features being used to characterize the global correlation of each coefficient in the DCT coefficient data, and the context features being used to characterize the local correlation of each coefficient in the DCT coefficient data; determining probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features; and performing entropy coding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, the compressed data serving as the compression result of the image data.

Description

数据处理方法及装置、电子设备和存储介质
本公开要求在2021年12月27日提交中国专利局、申请号为202111614879.5、申请名称为“数据处理方法及装置、电子设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本公开中。
技术领域
本公开涉及计算机技术领域,尤其涉及一种数据处理方法及装置、电子设备和存储介质。
背景技术
为了存储或传输海量的图像数据,相关技术中可以通过图像压缩技术,例如JPEG图像压缩标准等,压缩图像数据的体积,从而节约存储资源和带宽资源。
发明内容
本公开提出了一种数据处理技术方案。
根据本公开的一方面,提供了一种数据处理方法,包括:获取图像数据对应的离散余弦变换DCT系数数据;对所述DCT系数数据进行特征提取,得到先验特征以及上下文特征,所述先验特征用于表征所述DCT系数数据中各个系数的全局相关关系,所述上下文特征用于表征所述DCT系数数据中各个系数的局部相关关系;根据所述先验特征以及所述上下文特征,确定所述DCT系数数据对应的概率分布参数;根据所述概率分布参数,对所述DCT系数数据进行熵编码,得到所述DCT系数数据对应的压缩数据,所述压缩数据作为所述图像数据的压缩结果。通过该方式,得到无损压缩率更优的压缩数据。
在一种可能的实现方式中,所述DCT系数数据包括多个DCT系数矩阵,所述对所述DCT系数数据进行特征提取,得到先验特征以及上下文特征,包括:根据所述多个DCT系数矩阵内各个系数对应的频率,对所述多个DCT系数矩阵进行重组,得到DCT系数张量;对所述DCT系数张量进行特征提取,得到先验特征以及上下文特征。通过该方式,能够利用预处理后的DCT系数张量,高效得到先验特征与上下文特征,从而便于之后得到更准确的概率分布参数。
在一种可能的实现方式中,所述根据所述多个DCT系数矩阵内各个系数对应的频率,对所述多个DCT系数矩阵进行重组,得到DCT系数张量,包括:将所述多个DCT系数矩阵中频率相同的系数在空间维度上进行拼接,得到多个拼接矩阵;将所述多个拼接矩阵按照指定顺序在通道维度上进行拼接,得到所述DCT系数张量。通过该方式,可以使重组后的DCT系数张量在空间维度与通道维度上存在一定结构性的冗余信息,从而之后可以利用该冗余信息生成更准确的概率分布参数。
在一种可能的实现方式中,所述对所述DCT系数张量进行特征提取,得到先验特征以及上下文特征,包括:通过先验网络对所述DCT系数张量进行特征提取,得到所述先验特征;通过自回归网络对所述DCT系数张量进行特征提取,得到所述上下文特征。通过该方式,可以有效得到先验特征以及上下文特征。
在一种可能的实现方式中,所述DCT系数张量具有n个通道,n为正整数,所述自回归网络包括空间自回归网络与通道自回归网络,所述通过自回归网络对所述DCT系数张量进行特征提取,得到所述上下文特征,包括:将所述DCT系数张量在通道维度上切分为I个具有n/I个通道的系数张量,I∈[1,n];通过所述空间自回归网络对第i个系数张量中的各个系数进行空间维度的自回归预测,得到第i个系数张量对应的第i个空间上下文特征,所述第i个空间上下文特征表示所述第i个系数张量中各个系数之间局部的相关关系,i∈[1,I];通过所述通道自回归网络根据第1个系数张量至j-1个系数张量,对第j个系数张量进行通道维度的自回归预测,得到所述第j个系数张量对应的第j个通道上下文特征,所述第j个通道上下文特征表示所述第1个系数张量至所述j-1个系数张量,与所述第j个系数张量之间的局部相关关系,j∈[2,I];其中,所述上下文特征包括I个空间上下文特征以及I-1个通道上下文特征。通过该方式,能够分别学习DCT系数张量在空间维度与通道维度上存在的冗余信息,也即在通道维度与空间维度上分别对DCT系数张量进行自回归预测,从而得到信息更丰富的上下文特征。
在一种可能的实现方式中,所述上下文特征包括I个空间上下文特征以及I-1个通道上下文特征,I ∈[1,n],n为正整数,其中,所述根据所述先验特征以及所述上下文特征,确定所述DCT系数数据对应的概率分布参数,包括:将所述先验特征、所述I个空间上下文特征以及所述I-1个通道上下文特征进行通道拼接,得到I个拼接特征;根据所述I个拼接特征,确定所述DCT系数数据对应的概率分布参数。通过该方式,能够利用信息更丰富的拼接特征,得到更准确的概率分布参数。
在一种可能的实现方式中,所述将所述先验特征、所述I个空间上下文特征以及所述I-1个通道上下文特征进行通道拼接,得到I个拼接特征,包括:将所述先验特征与第1个空间上下文特征进行通道拼接,得到第1个拼接特征;将所述先验特征、第j个空间上下文特征以及第j个通道上下文特征进行通道拼接,得到第j个拼接特征,j∈[2,I]。通过该方式,能够将先验特征与上下文特征切分为多组拼接特征,有利于高效地得到每个系数矩阵中各个系数对应的概率分布模型,提高运算效率。
在一种可能的实现方式中,所述根据所述I个拼接特征,确定所述DCT系数数据对应的概率分布参数,包括:通过熵参数分析网络根据所述I个拼接特征,确定所述DCT系数对应的概率分布参数;其中,所述熵参数分析网络包括I个子熵参数分析网络,所述通过熵参数分析网络根据所述I个拼接特征,确定所述DCT系数对应的概率分布参数,包括:将第i个拼接特征输入至第i个子熵参数分析网络,得到第i个系数张量中各个系数对应的均值与标准差,其中,所述概率分布参数包括I个系数张量中各个系数对应的均值与标准差,所述I个系数张量是对所述DCT系数数据对应的DCT系数张量在通道维度上进行切分得到的。通过该方式,能够利用信息更丰富的拼接特征,得到更准确的概率分布参数。
在一种可能的实现方式中,所述根据所述概率分布参数,对所述DCT系数数据进行熵编码,得到所述DCT系数数据对应的压缩数据,包括:根据所述概率分布参数以及指定的概率分布函数,确定所述DCT系数数据中各个系数出现的概率;根据所述DCT系数数据中各个系数出现的概率,对所述DCT系数数据中各个系数进行熵编码,得到所述DCT系数数据对应的压缩数据。通过该方式,利用更准确的概率分布参数对DCT系数数据进行熵编码,能够获得无损压缩率更优的压缩数据,从而节约存储资源和带宽资源。
在一种可能的实现方式中,所述根据所述DCT系数数据中各个系数出现的概率,对所述DCT系数数据中各个系数进行熵编码,得到所述DCT系数数据对应的压缩数据,包括:根据所述DCT系数数据中各个系数出现的概率,对I个系数张量的第i个系数张量中各个系数进行熵编码,得到所述第i个系数张量对应的第i个子压缩数据;其中,所述压缩数据包括I个子压缩数据,所述I个系数张量是对DCT系数张量在通道维度上切分得到的,所述DCT系数张量是对所述DCT系数数据中的多个DCT系数矩阵进行重组得到的,I∈[1,n],i∈[1,I],n为所述DCT系数张量的通道数。通过该方式,利用更准确的概率分布参数所确定出的各个系数的概率,对DCT系数数据进行熵编码,能够获得无损压缩率更优的压缩数据,从而节约存储资源和带宽资源。
在一种可能的实现方式中,在得到所述DCT系数数据对应的压缩数据之后,所述方法还包括:根据所述DCT系数数据中各个系数出现的概率,对所述压缩数据进行熵解码,得到所述DCT系数数据,其中,所述DCT系数数据中各个系数出现的概率是根据所述概率分布参数以及指定的概率分布函数确定的。通过该方式,能够利用DCT系数数据中各个系数出现的概率,有效实现对压缩数据的熵解码,得到编码前的DCT系数数据。
在一种可能的实现方式中,所述压缩数据包括I个子压缩数据,所述根据所述DCT系数数据中各个系数出现的概率,对所述压缩数据进行熵解码,得到所述DCT系数数据,包括:根据所述DCT系数数据中各个系数出现的概率,对第i个子压缩数据进行熵解码,得到第i个系数张量;对I个系数张量组成的DCT系数张量进行反向重组,得到多个DCT系数矩阵,所述DCT系数数据包括所述多个DCT系数矩阵。通过该方式,能够利用DCT系数数据中各个系数出现的概率,有效实现对压缩数据的熵解码,得到编码前的DCT系数数据。
根据本公开的一方面,提供了一种数据处理装置,包括:获取模块,用于获取图像数据对应的离散余弦变换DCT系数数据;特征提取模块,用于对所述DCT系数数据进行特征提取,得到先验特征以及上下文特征,所述先验特征用于表征所述DCT系数数据中各个系数的全局相关关系,所述上下文特征用于表征所述DCT系数数据中各个系数的局部相关关系;参数确定模块,用于根据所述先验特征以 及所述上下文特征,确定所述DCT系数数据对应的概率分布参数;编码模块,用于根据所述概率分布参数,对所述DCT系数数据进行熵编码,得到所述DCT系数数据对应的压缩数据,所述压缩数据作为所述图像数据的压缩结果。
在一种可能的实现方式中,所述DCT系数数据包括多个DCT系数矩阵,所述特征提取模块,包括:重组子模块,用于根据所述多个DCT系数矩阵内各个系数对应的频率,对所述多个DCT系数矩阵进行重组,得到DCT系数张量;特征提取子模块,用于对所述DCT系数张量进行特征提取,得到先验特征以及上下文特征。
在一种可能的实现方式中,所述根据所述多个DCT系数矩阵内各个系数对应的频率,对所述多个DCT系数矩阵进行重组,得到DCT系数张量,包括:将所述多个DCT系数矩阵中频率相同的系数在空间维度上进行拼接,得到多个拼接矩阵;将所述多个拼接矩阵按照指定顺序在通道维度上进行拼接,得到所述DCT系数张量。
在一种可能的实现方式中,所述对所述DCT系数张量进行特征提取,得到先验特征以及上下文特征,包括:通过先验网络对所述DCT系数张量进行特征提取,得到所述先验特征;通过自回归网络对所述DCT系数张量进行特征提取,得到所述上下文特征。
在一种可能的实现方式中,所述DCT系数张量具有n个通道,n为正整数,所述自回归网络包括空间自回归网络与通道自回归网络,所述通过自回归网络对所述DCT系数张量进行特征提取,得到所述上下文特征,包括:将所述DCT系数张量在通道维度上切分为I个具有n/I个通道的系数张量,I∈[1,n];通过所述空间自回归网络对第i个系数张量中的各个系数进行空间维度的自回归预测,得到第i个系数张量对应的第i个空间上下文特征,所述第i个空间上下文特征表示所述第i个系数张量中各个系数之间局部的相关关系,i∈[1,I];通过所述通道自回归网络根据第1个系数张量至j-1个系数张量,对第j个系数张量进行通道维度的自回归预测,得到所述第j个系数张量对应的第j个通道上下文特征,所述第j个通道上下文特征表示所述第1个系数张量至所述j-1个系数张量,与所述第j个系数张量之间的局部相关关系,j∈[2,I];其中,所述上下文特征包括I个空间上下文特征以及I-1个通道上下文特征。
在一种可能的实现方式中,所述上下文特征包括I个空间上下文特征以及I-1个通道上下文特征,I∈[1,n],n为正整数,其中,所述参数确定模块,包括:特征拼接子模块,用于将所述先验特征、所述I个空间上下文特征以及所述I-1个通道上下文特征进行通道拼接,得到I个拼接特征;参数确定子模块,用于根据所述I个拼接特征,确定所述DCT系数数据对应的概率分布参数。
在一种可能的实现方式中,所述将所述先验特征、所述I个空间上下文特征以及所述I-1个通道上下文特征进行通道拼接,得到I个拼接特征,包括:将所述先验特征与第1个空间上下文特征进行通道拼接,得到第1个拼接特征;将所述先验特征、第j个空间上下文特征以及第j个通道上下文特征进行通道拼接,得到第j个拼接特征,j∈[2,I]。
在一种可能的实现方式中,所述根据所述I个拼接特征,确定所述DCT系数数据对应的概率分布参数,包括:通过熵参数分析网络根据所述I个拼接特征,确定所述DCT系数对应的概率分布参数;其中,所述熵参数分析网络包括I个子熵参数分析网络,所述通过熵参数分析网络根据所述I个拼接特征,确定所述DCT系数对应的概率分布参数,包括:将第i个拼接特征输入至第i个子熵参数分析网络,得到第i个系数张量中各个系数对应的均值与标准差,其中,所述概率分布参数包括I个系数张量中各个系数对应的均值与标准差,所述I个系数张量是对所述DCT系数数据对应的DCT系数张量在通道维度上进行切分得到的。
在一种可能的实现方式中,所述编码模块,包括:概率确定子模块,用于根据所述概率分布参数以及指定的概率分布函数,确定所述DCT系数数据中各个系数出现的概率;编码子模块,用于根据所述DCT系数数据中各个系数出现的概率,对所述DCT系数数据中各个系数进行熵编码,得到所述DCT系数数据对应的压缩数据。
在一种可能的实现方式中,所述根据所述DCT系数数据中各个系数出现的概率,对所述DCT系数数据中各个系数进行熵编码,得到所述DCT系数数据对应的压缩数据,包括:根据所述DCT系数数据中各个系数出现的概率,对I个系数张量的第i个系数张量中各个系数进行熵编码,得到所述第i个系数 张量对应的第i个子压缩数据;其中,所述压缩数据包括I个子压缩数据,所述I个系数张量是对DCT系数张量在通道维度上切分得到的,所述DCT系数张量是对所述DCT系数数据中的多个DCT系数矩阵进行重组得到的,I∈[1,n],i∈[1,I],n为所述DCT系数张量的通道数。
在一种可能的实现方式中,在得到所述DCT系数数据对应的压缩数据之后,所述装置还包括:解码模块,用于根据所述DCT系数数据中各个系数出现的概率,对所述压缩数据进行熵解码,得到所述DCT系数数据,其中,所述DCT系数数据中各个系数出现的概率是根据所述概率分布参数以及指定的概率分布函数确定的。
在一种可能的实现方式中,所述压缩数据包括I个子压缩数据,所述解码模块,包括:解码子模块,用于根据所述DCT系数数据中各个系数出现的概率,对第i个子压缩数据进行熵解码,得到第i个系数张量;反向重组子模块,用于对I个系数张量组成的DCT系数张量进行反向重组,得到多个DCT系数矩阵,所述DCT系数数据包括所述多个DCT系数矩阵。
根据本公开的一方面,提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行上述方法。
根据本公开的一方面,提供了一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为调用所述存储器存储的指令,以执行上述方法。
根据本公开的一方面,提供了一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。
在本公开实施例中,通过提取图像数据对应的DCT系数数据的先验特征与上下文特征,能够利用表征全局相关关系的先验特征以及表征局部相关关系的上下文特征,得到更准确的概率分布参数,那么基于香农信源编码原理,待编码数据的概率估计越准确,越能提高数据的无损压缩率,因此基于更准确的概率分布参数对DCT系数数据进行熵编码,可以得到无损压缩率更优的压缩数据,也即能够获得体积更小的压缩结果。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本公开。根据下面参考附图对示例性实施例的详细说明,本公开的其它特征及方面将变得清楚。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。
图1示出根据本公开实施例的数据处理方法的流程图。
图2示出根据本公开实施例的DCT系数数据的示意图。
图3示出根据本公开实施例的DCT系数张量的示意图。
图4示出根据本公开实施例的数据处理方法的示意图。
图5示出根据本公开实施例的数据处理装置的框图。
图6示出根据本公开实施例的一种电子设备800的框图。
图7示出根据本公开实施例的另一种电子设备1900的框图。
具体实施方式
以下将参考附图详细说明本公开的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意拼接,例如,包括A、B、C中的至少一种, 可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
另外,为了更好地说明本公开,在下文的具体实施方式中给出了众多的具体细节。本领域技术人员应当理解,没有某些具体细节,本公开同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本公开的主旨。
图1示出根据本公开实施例的数据处理方法的流程图,所述数据处理方法可以由终端设备或服务器等电子设备执行,终端设备可以为用户设备(User Equipment,UE)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字助理(Personal Digital Assistant,PDA)、手持设备、计算设备、车载设备、可穿戴设备等,所述方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现,或者,可通过服务器执行所述方法。如图1所示,所述数据处理方法包括:
在步骤S11中,获取图像数据对应的离散余弦变换DCT系数数据。
在一种可能的实现方式中,图像数据可以指原始图像,或还可以是JPEG数据。其中,原始图像也即为摄像头、相机等图像采集设备直接采集的图像;JPEG数据可以指按照JPEG标准对原始图像进行编码后的数据。
可知晓的是,联合图像专家组(Joint Photographic Experts Group,JPEG)标准是图像压缩编码领域的一项重要技术。按照JPEG标准对原始图像进行压缩编码的过程可以简述为:将原始图像切分为多个8×8尺寸的图像块;对每个图像块中的像素值进行离散余弦变换(Discrete Cos ine Transform,DCT),得到多个DCT系数矩阵,其中,多个DCT系数矩阵的总数取决于图像数据的尺寸,例如,长×宽为H×W的图像数据,可得到T=(H×W)/64个DCT系数矩阵,每个DCT系数矩阵中的DCT系数从低频到高频按照zigzag(之字形)次序排列,每个DCT系数矩阵包含64个DCT系数,每个DCT系数矩阵中的第一个值为直流(DC)系数,其它63个为交流(AC)系数;再直接对多个DCT系数矩阵中的各个DCT系数进行熵编码,得到JPEG数据。
在一种可能的实现方式中,在图像数据为原始图像的情况下,可以按照上述JPEG标准对图像数据进行离散余弦变换得到多个DCT系数矩阵,DCT系数数据包括该多个DCT系数矩阵;在图像数据是JPEG数据的情况下,可以按照上述JPEG标准对JPEG数据进行解码,以直接从JPEG数据中提取DCT系数数据。应理解的是,对于DCT系数数据的来源,本公开实施例不作限制。
在步骤S12中,对DCT系数数据进行特征提取,得到先验特征以及上下文特征,先验特征用于表征DCT系数数据中各个系数的全局相关关系,上下文特征用于表征DCT系数数据中各个系数的局部相关关系。
其中,局部相关关系,可以理解为某个当前系数与局部感受野内的相邻系数之间存在的线性关系或非线性关系;全局相关关系,可以理解为某个当前系数与全局感受野内的相邻系数之间存在的线性关系与非线性关系;其中,该相邻系数可以包括局部感受野内或全局感受野内按照次序排列在当前系数周围的系数,全局感受野的范围比局部感受野大,或者说,全局感受野内的相邻系数的数量比局部感受野内的相邻系数的数量要多。应理解的是,本公开实施例中的系数也即为DCT系数。
在一种可能的实现方式中,可以通过先验网络与自回归网络分别对DCT系数数据进行特征提取,得到先验特征以及上下文特征。
在一种可能的实现方式中,先验网络可以包括先验分析子网络以及先验合成子网络,其中,先验分析子网络例如可以包括m层卷积层,前m-1层卷积层后接激活函数层,用于提取DCT系数数据的深度特征,或者说对DCT系数数据进行逐层降采样,m为正整数(例如m为3)。相应的,先验合成子网络可以包括m层卷积层,前m-1层卷积层后接激活函数层,用于对先验分析子网络提取的深度特征进行逐层上采样,得到先验特征。应理解的是,对于各个卷积层中卷积核的数量、尺寸以及卷积步长等,以及激活函数层采用的激活函数类型,本公开实施例均不作限制。
在一种可能的实现方式中,可以通过现有的概率模型(如参数概率模型、无参数概率模型),对深度特征中的各个特征值进行建模,也即通过概率模型描述深度特征中各个特征值的概率,以存储计算出的深度特征。
考虑到先验分析子网络输出的深度特征是浮点数,为便于存储深度特征时对深度特征进行编码, 可以先对深度特征进行离散化,也即可以对先验分析子网络输出的深度特征进行量化,并通过上述概率模型对量化后的深度特征进行建模,以存储量化后的深度特征;并可以将量化后的深度特征输入至先验合成子网络得到先验特征。
其中,对先验分析子网络输出的深度特征进行量化,可以包括:对该深度特征进行取整操作,例如采用量化函数round()对深度特征中的特征值进行四舍五入运算;或者,还可以对深度特征中的特征值添加均匀分布的随机噪声,该随机噪声的取值范围例如可以是[-0.5,0.5]。对于采用何种量化方式,本公开实施例不作限制。
在一种可能的实现方式中,自回归网络可以理解为一种结合自回归预测算法的卷积神经网络,例如可以是一种掩码卷积网络,可以用于学习输入数据之间的上下文信息,也即提取DCI系数数据中多个系数之间的上下文特征。
需要说明的是,以上先验网络与自回归网络各自的网络结构是本公开实施例提供的一种实现方式,对于先验网络与自回归网络的网络结构、网络类型以及训练方式,本公开实施例不作限制。
在步骤S13中,根据先验特征以及上下文特征,确定DCT系数数据对应的概率分布参数。
可理解的是,为求得DCT系数数据中各个系数的出现的概率,可以假设DCT系数数据中各个系数均服从指定的概率分布,例如,服从高斯分布、拉普拉斯分布、混合高斯分布等,那么DCT系数数据中的各个系数也即服从均值(又称期望)为μ,方差为σ 2的概率分布,σ为标准差,其中,均值与标准差也即为概率分布参数。应理解的是,在计算出各个系数对应的概率分布参数后,结合指定的概率分布对应的概率分布函数,便可以计算各个系数出现的概率。
在一种可能的实现方式中,根据先验特征以及上下文特征,确定DCT系数数据对应的概率分布参数,可以包括:可以将先验特征与上下文特征进行通道拼接,得到拼接特征,将拼接特征输入至熵参数分析网络中,输出DCT系数数据对应的概率分布参数,也即得到DCT系数数据中各个系数各自对应的概率分布参数。
其中,熵参数分析网络例如可以采用3层卷积核尺寸为1×1、步长为1的卷积神经网络,熵参数分析网络的输出结果例如可以是具有2×T个通道的张量,其中一半通道的张量指示的可以是多个DCT系数矩阵中各个系数对应的均值,另一半通道指示的可以是多个DCT系数矩阵中各个系数对应的标准差。
在一种可能的实现方式中,可以以最小化率失真优化函数J=λD+R为目标,采用图像质量评价指标,例如SSIM(Structural Similarity,结构相似性)指标、PSNR(Peak Signal to Noise Ratio,峰值信噪比)指标,训练熵参数分析网络,其中,D为失真项、R为码率、λ为常数参数,由于是对DCT系数数据进行无损压缩,失真项D为0,R可以包括DCT系数数据对应的编码码率以及先验特征对应的编码码率。
其中,可以采用DCT系数数据的信息熵近似为DCT系数数据对应的编码码率,以及采用先验特征的信息熵近似为先验特征对应的编码码率。DCT系数数据的信息熵可以按照熵参数分析网络输出的概率分布参数对DCT系数数据进行熵编码得到,先验特征的信息熵可以按照熵参数分析网络输出的概率分布参数对先验特征进行熵编码得到。
需要说明的是,上述熵参数分析网络是本公开实施例提供的一种实现方式,对于熵参数分析网络的网络结构、网络类型以及训练方式,本公开实施例不作限制。
在步骤S14中,根据概率分布参数,对DCT系数数据进行熵编码,得到DCT系数数据对应的压缩数据,压缩数据作为图像数据的压缩结果。
如上所述,假设DCT系数数据中各个系数均服从指定的概率分布,例如,服从高斯分布、拉普拉斯分布、混合高斯分布等,在计算出各个系数对应的概率分布参数后,结合概率分布对应的概率分布函数,便可以计算各个系数出现的概率。
举例来说,DCT系数数据中的各个DCT系数的概率P(x),可以以公式(1)示出的高斯分布函数F x)确定,
Figure PCTCN2022114451-appb-000001
其中,x代表任一DCT系数,exp代表以自然常数e为底的指数函数,μ代表均值(又称期望),σ代表标准差。
在一种可能的实现方式中,可以采用ANS(Asymmetric numeral systems,非对称数字***)编码或算术编码等任意熵编码方式,实现对DCT系数数据进行熵编码,得到DCT系数数据对应的压缩数据。
以算术编码为例说明对DCT系数数据进行熵编码的递推过程:先将初始编码区间[0,1)连续划分成多个子区间,每个子区间代表一个DCT系数,子区间的大小正比于该DCT系数出现的概率P(x),概率越大,子区间越大,所有的子区间加起来正好是[0,1)。编码从该初始的编码区间[0,1)开始,然后每次编码一个DCT系数,就在现有的编码区间上,按照概率比例取出这个DCT系数所在的子区间作为下一个DCT系数的编码区间,例如第一个DCT系数x 1的子区间落在0到0.6上,编码区间缩小为[0,0.6),第二个DCT系数x 2的子区间落在编码区间[0,0.6)的0.48到0.54上,则编码区间缩小为[0.48,0.54),第三个DCT系数x 3的子区间落在编码区间[0.48,0.54)的0.534到0.54,以此类推;最后将DCT系数对应的子区间中任意一个小数以二进制形式输出即得到编码的数据。
可理解的是,在用户期望查看图像数据时,可以根据概率分布参数求得的概率,对压缩数据进行熵解码,得到DCT系数数据,再对熵解码后的DCT系数数据进行逆离散余弦变换,得到原始图像;或按照上述JPEG标准对DCT系数数据进行编码得到JPEG数据。
在一种可能的实现方式中,在得到DCT系数数据对应的压缩数据之后,所述方法还包括:根据DCT系数数据中各个系数出现的概率,对压缩数据进行熵解码,得到DCT系数数据,其中,DCT系数数据中各个系数出现的概率是根据概率分布参数以及指定的概率分布函数确定的。通过该方式,能够利用DCT系数数据中各个系数出现的概率,有效实现对压缩数据的熵解码,得到编码前的DCT系数数据。应理解的是,熵编码与熵解码的过程是相反的,也即可以按照上述本公开实施例中对DCT系数数据的数据压缩过程进行反向的数据解压过程。
在本公开实施例中,通过提取图像数据对应的DCT系数数据的先验特征与上下文特征,能够利用表征全局相关关系的先验特征以及表征局部相关关系的上下文特征,得到更准确的概率分布参数,那么基于香农信源编码原理,待编码数据的概率估计越准确,越能提高数据的无损压缩率,因此基于更准确的概率分布参数对DCT系数数据进行熵编码,可以得到无损压缩率更优的压缩数据,也即能够获得体积更小的压缩结果。
如上所述,DCT系数数据包括多个DCT系数矩阵,为了使各个网络更好的提取先验特征以及上下文特征,可以先对DCT系数数据进行预处理,并对预处理后的DCT系数数据进行特征提取。在一种可能的实现方式中,在步骤S12中,对DCT系数数据进行特征提取,得到先验特征以及上下文特征,包括:
步骤S121:根据多个DCT系数矩阵内各个系数对应的频率,对多个DCT系数矩阵进行重组,得到DCT系数张量。
可知晓的是,对图像数据进行离散余弦变换,也即将图像数据从空间域转换至频域,每个DCT系数各自对应有频率,在一种可能的实现方式中,根据多个DCT系数矩阵内各个系数对应的频率,对多个DCT系数矩阵进行重组,得到DCT系数张量,可以包括:将多个DCT系数矩阵中频率相同的系数在空间维度上进行拼接,得到多个拼接矩阵;将多个拼接矩阵按照指定顺序在通道维度上进行拼接,得到DCT系数张量。
通过该方式,可以使重组后的DCT系数张量在空间维度与通道维度上存在一定结构性的冗余信息,冗余信息可以理解为DCT系数张量在空间维度上、相同频率的多个系数之间存在相似度较高的系数,和/或在通道维度上、频率不同的多个通道之间存在相似度较高的通道,从而之后可以利用该冗余信息生成更准确的概率分布参数。
其中,空间维度可以理解为长宽维度,例如9个DCT系数在空间维度上进行拼接可以得到一个3 ×3的拼接矩阵;在通道维度上进行拼接可以理解为将二维矩阵组合成三维张量,例如,5个3×3的拼接矩阵在通道维度上进行拼接可以得到3×3×5的DCT系数张量。
如上所述,每个DCT系数矩阵中的DCT系数是从低频到高频按照zigzag(之字形)次序排列的,因此可以认为多个DCT系数矩阵在同一位置处的DCT系数的频率相同,将多个DCT系数矩阵中、频率相同的系数在空间维度上进行拼接,得到多个拼接矩阵,可以包括:将多个DCT系数矩阵中、同一位置处的系数在空间维度上进行拼接,得到多个拼接矩阵。
在一种可能的实现方式中,指定顺序可以包括:每个拼接矩阵对应的频率高低,也即上述zigzag(之字形)次序,当然还可以按照DCT系数在DCT系数矩阵中从左到右、从上到下的排列顺序,对此本公开实施例不作限制。
图2示出根据本公开实施例的DCT系数数据的示意图,图3示出根据本公开实施例的DCT系数张量的示意图。如图2所示的DCT系数数据包括4个8×8的DCT系数矩阵,将该4个DCT系数矩阵中频率相同的系数在空间维度上进行拼接,可以得到64个2×2的拼接矩阵;将64个拼接矩阵按照zigzag次序进行通道拼接,得到2×2×64的DCT系数张量,也即该DCT系数张量具有64个通道。
应理解的是,以上对多个DCT系数矩阵进行重组的方式,是本公开实施例提供的一种实现方式,实际上,本领域技术人员可以根据实际需求设置多个DCT系数矩阵的重组方式,对此本公开实施例不作限制。例如可以将DCT系数数据对应的整个频率分布区间切分为多个频率区间,将处于相同频率区间内的DCT系数在空间维度上进行拼接等。
步骤S122:对DCT系数张量进行特征提取,得到先验特征以及上下文特征。
在一种可能的实现方式中,对DCT系数张量进行特征提取,得到先验特征以及上下文特征,可以包括:通过先验网络对DCT系数张量进行特征提取,得到先验特征;通过自回归网络对DCT系数张量进行特征提取,得到上下文特征。通过该方式,可以有效得到先验特征以及上下文特征。
其中,可以采用上述本公开实施例中的先验网络与自回归网络,分别提取先验特征与上下文特征;对于先验网络与自回归网络的网络结构、网络类型以及训练方式,本公开实施例不作限制。
例如,针对具有64个通道的DCT系数张量,先验网络中的先验分析子网络可以采用3层卷积层,第一层卷积层可以包括384个3×3×64大小的卷积核,卷积步长为1,采用激活函数为leaky Relu,第二层卷积层可以包括384个5×5×384大小的卷积核,卷积步长为2,采用激活函数为leaky Relu,第三层卷积层可以包括192个5×5×384大小的卷积核,卷积步长为2,则输出的深度特征具有192个通道;先验网络中的先验合成子网络对应采用3层卷积层,第一层卷积层可以包括192个5×5×192大小的卷积核,卷积步长为2,采用激活函数为leaky Relu,第二层卷积层可以包括288个5×5×192大小的卷积核,卷积步长为2,采用激活函数为leaky Relu,第三层卷积层可以包括128个3×3×288大小的卷积核,卷积步长为1,则输出的先验特征具有128个通道。
在本公开实施例中,能够利用预处理后的DCT系数张量,高效得到先验特征与上下文特征,从而便于之后得到更准确的概率分布参数。
如上所述,DCT系数张量具有多个通道,重组后的DCT系数张量在空间维度与通道维度上存在一定结构性的冗余信息,在一种可能的实现方式中,可以在通道维度与空间维度上分别对DCT系数张量进行自回归预测,从而得到信息更丰富的上下文特征。
在一种可能的实现方式中,DCT系数张量具有n个通道,n为正整数,自回归网络包括空间自回归网络与通道自回归网络,通过自回归网络对DCT系数张量进行特征提取,得到上下文特征,包括:
将DCT系数张量在通道维度上切分为I个具有n/I个通道的系数张量,I∈[1,n];
通过空间自回归网络对第i个系数张量中的各个系数进行空间维度的自回归预测,得到第i个系数张量对应的第i个空间上下文特征,第i个空间上下文特征表示第i个系数张量中各个系数之间的局部相关关系,i∈[1,I];
通过通道自回归网络根据第1个系数张量至j-1个系数张量,对第j个系数张量进行通道维度的自回归预测,得到第j个系数张量对应的第j个通道上下文特征,第j个通道上下文特征表示第1个系数张量至j-1个系数张量,与第j个系数张量之间的局部相关关系,j∈[2,I];
其中,上下文特征包括I个空间上下文特征以及I-1个通道上下文特征。
可理解的是,DCT系数张量具有的通道数n与DCT系数矩阵中DCT系数的数量一致,例如8×8的DCT系数矩阵,也即DCT系数矩阵中包括8×8个DCT系数,则DCT系数张量具有64个通道。
在一种可能的实现方式中,I的值可以自定义设置,例如可以设置为8,那么DCT系数张量可以切分为8个具有8个通道的系数张量,对此本公开实施例不作限制。
其中,第i个空间上下文特征表示第i个系数张量中各个系数之间的局部相关关系,可以理解为,第i个空间上下文特征表示第i个系数张量中某个当前系数与局部感受野内相邻系数之间存在的线性关系或非线性关系,该相邻系数可以包括第i个系数张量在局部感受野内按顺序排列在该当前系数之前的系数,也可包括第i个系数张量在局部感受野内按顺序排列在该当前系数周围的系数。第j个通道上下文特征表示第1个系数张量至j-1个系数张量,与第j个系数张量之间的局部相关关系,可以理解为,第j个通道上下文特征表示第1个系数张量至j-1个系数张量,与第j个系数张量之间的线性关系或非线性关系。
可知晓的是,自回归预测可以理解为利用一个或多个自变量预测一个因变量的值,或者说分析一个因变量与一个或多个自变量之间的相关关系,因此按照通道维度的排列顺序,根据第1个系数张量至j-1个系数张量,对第j个系数张量进行通道维度的自回归预测,得到第j个系数张量对应的第j个通道上下文特征,共得到I-1个通道上下文特征。
在一种可能的实现方式中,通道自回归网络可以包括I-1个子通道自回归网络,第j-1个子通道自回归网络用于根据第1个系数张量至j-1个系数张量,对第j个系数张量进行通道维度的自回归预测,得到第j个系数张量对应的第j个通道上下文特征。
在一种可能的实现方式中,每个子通道自回归网络可以采用多层卷积层,第j-1个子通道自回归网络的第一层卷积层中卷积核大小为长度a×宽度a×深度[(n/I)×(j-1)],a为正整数(例如为3),举例来说,假设n为64、I为16,也即每个系数张量具有2个通道,为得到第4个通道上下文特征,可以将第1个系数张量至第3个系数张量输入至第3个子通道自回归网络中,那么第3个子通道自回归网络的第一层卷积层中每个卷积核深度应为6。
其中,对于每个子通道自回归网络中各个卷积层中卷积核的数量以及卷积步长本公开实施例不作限制,例如最后一层卷积层中可以包括128个卷积核,也即每个子通道自回归网络输出的通道上下文特征具有128个通道。
在一种可能的实现方式中,空间自回归网络可以包括I个子空间自回归网络,第i个子空间自回归网络用于对第i个系数张量中的各个系数进行空间维度的自回归预测,得到第i个系数张量对应的第i个空间上下文特征。其中每个子空间自回归网络中例如可以直接采用128个5×5×(n/I)大小的卷积核,卷积步长为1,也即每个子空间自回归网络输出的空间上下文特征具有128个通道。
可理解的是,上述通道自回归网络与空间自回归网络的网络结构与I的值以及n的值相关,用户在设定I与n的值后,可以对应调整通道自回归网络与空间自回归网络的网络结构。上述通道自回归网络与空间自回归网络的网络结构是本公开实施例提供的一种实现方式,实际上,本领域技术人员可以根据实际需求设置上述通道自回归网络与空间自回归网络中的卷积层数、卷积核数量、尺寸等,对此本公开实施例不作限制。
在本公开实施例中,能够利用空间自回归网络学习各个系数张量在空间维度的空间上下文信息,利用通道自回归网络学习各个系数张量在通道维度的通道上下文信息,也即学习上述两种局部相关关系,或者说能够利用空间自回归网络与通道自回归网络,分别学习DCT系数张量在空间维度与通道维度上存在的冗余信息,也即在通道维度与空间维度上分别对DCT系数张量进行自回归预测,从而得到信息更丰富的上下文特征。
如上所述,上下文特征包括I个空间上下文特征以及I-1个通道上下文特征,I∈[1,n],n为正整数,在一种可能的实现方式中,在步骤S13中,根据先验特征以及上下文特征,确定DCT系数数据对应的概率分布参数,包括:
步骤S131:将先验特征、I个空间上下文特征以及I-1个通道上下文特征进行通道拼接,得到I个拼 接特征。
在一种可能的实现方式中,将先验特征、I个空间上下文特征以及I-1个通道上下文特征进行通道拼接,得到I个拼接特征,包括:将先验特征与第1个空间上下文特征进行通道拼接,得到第1个拼接特征;将先验特征、第j个空间上下文特征以及第j个通道上下文特征进行通道拼接,得到第j个拼接特征,j∈[2,I]。通过该方式,能够将先验特征与上下文特征切分为多组拼接特征,有利于高效地得到每个系数矩阵中各个系数对应的概率分布模型,提高运算效率。
举例来说,先验特征是具有128个通道的张量,各个空间上下文特征与各个通道上下文特征各自也是具有128个通道的张量,那么将先验特征与第1个空间上下文本特征进行通道拼接,得到具有256个通道的第1个拼接特征;将先验特征、第j个空间上下文特征以及第j个通道上下文特征进行通道拼接,得到具有384个通道的第j个拼接特征。
步骤S132:根据I个拼接特征,确定DCT系数数据对应的概率分布参数。
在一种可能的实现方式中,可以通过上述熵参数分析网络根据I个拼接特征,确定DCT系数数据对应的概率分布参数。其中,熵参数分析网络可以包括I个子熵参数分析网络,第i个子熵参数分析网络用于根据第i个拼接特征,确定第i个系数张量中各个系数对应的均值与标准差。
在一种可能的实现方式中,通过熵参数分析网络根据I个拼接特征,确定DCT系数数据对应的概率分布参数,可以包括:将第i个拼接特征输入至第i个子熵参数分析网络,得到第i个系数张量中各个系数对应的均值与标准差,其中,概率分布参数包括I个系数张量中各个系数对应的均值与标准差,I个系数张量是对DCT系数数据对应的DCT系数张量在通道维度上进行切分得到的。
其中,对DCT系数数据对应的DCT系数张量在通道维度上进行切分得到I个系数张量的过程,可以参照上述本公开实施例的相关描述,在此不做赘述。
其中,每个子熵参数分析网络的网络结构可以参照上述熵参数分析网络,也即,每个子熵参数分析网络例如可以采用3层卷积核尺寸为1×1、步长为1的卷积神经网络,每个子熵参数分析网络的输出结果例如可以是具有2×(n/I)个通道的张量,其中一半通道的张量指示的可以是第i个系数张量中各个系数对应的均值,另一半通道指示的可以是第i个系数张量中各个系数对应的标准差。
在一种可能的实现方式中,可以参照上述训练熵参数分析网络的训练方式,训练I个子熵参数分析网络,也即以最小化率失真优化函数J=λD+R为目标,采用图像质量评价指标,例如SSIM(Structural Similarity,结构相似性)指标、PSNR(Peak Signal to Noise Ratio,峰值信噪比)指标,训练I个子熵参数分析网络,其中,D为失真项、R为码率、λ为常数参数,由于是对DCT系数数据进行无损压缩,失真项D为0,R可以包括各个系数矩阵对应的编码码率以及先验特征对应的编码码率。
在一种可能的实现方式中,可以采用各个系数矩阵的信息熵近似为各个系数矩阵对应的编码码率,以及采用先验特征的信息熵近似为先验特征对应的编码码率。其中,第i个系数张量的信息熵可以按照第i个子熵参数分析网络输出的概率分布参数对第i个系数张量进行熵编码得到,先验特征的信息熵可以按照熵参数分析网络输出的概率分布参数对先验特征进行熵编码得到。
通过该方式,能够利用信息更丰富的拼接特征,得到更准确的概率分布参数。
如上所述,假设DCT系数数据中各个系数均服从指定的概率分布,例如,服从高斯分布、拉普拉斯分布、混合高斯分布等,在计算出各个系数对应的概率分布参数后,结合概率分布对应的概率分布函数,便可以计算各个系数出现的概率。
在一种可能的实现方式中,在步骤S14中,根据概率分布参数,对DCT系数数据进行熵编码,得到DCT系数数据对应的压缩数据,包括:
根据概率分布参数以及指定的概率分布函数,确定DCT系数数据中各个系数出现的概率;根据DCT系数数据中各个系数出现的概率,对DCT系数数据中各个系数进行熵编码,得到DCT系数数据对应的压缩数据。
其中,概率分布函数可以采用高斯分布函数、拉普拉斯分布函数或混合高斯分布函数等,对此本公开实施例不作限制。
如上所述,可以将DCT系数数据进行重组和切分,得到I个系数张量,并将第i个拼接特征输入至 第i个子熵参数分析网络,得到第i个系数张量中各个系数对应的均值与标准差,在一种可能的实现方式中,根据概率分布参数以及指定的概率分布函数,确定DCT系数数据中各个系数出现的概率,可以包括:根据第i个系数张量中各个系数对应的均值与标准差以及指定的概率分布函数,确定第i个系数张量中各个系数出现的概率。
以及,在一种可能的实现方式中,根据DCT系数数据中各个系数出现的概率,对DCT系数数据中各个系数进行熵编码,得到DCT系数数据对应的压缩数据,可以包括:根据DCT系数数据中各个系数出现的概率,也即根据I个系数张量中各个系数出现的概率,对I个系数张量的第i个系数张量中各个系数进行熵编码,得到第i个子压缩数据,其中,DCT系数数据对应的压缩数据包括I个子压缩数据。
如上所述,I个系数张量是对DCT系数张量在通道维度上切分得到的,DCT系数张量是对DCT系数数据中的多个DCT系数矩阵进行重组得到的,I∈[1,n],i∈[1,I],n为DCT系数张量的通道数。应理解的是,确定I个系数张量中各个系数出现的概率以及对各个系数进行熵编码的具体实现方式,可以参照上述步骤S14中的相关描述,在此不做赘述。
在一种可能的实现方式中,可以通过概率表的形式记录I个系数张量中各个系数出现的概率,便于对DCT系数数据中各个系数进行熵编码以及熵解码。
可理解的是,在用户期望查看图像数据时,可以根据第i个系数张量中各个系数出现的概率,对第i个子压缩数据进行熵解码,得到第i个系数张量;再对I个系数张量组成的DCT系数张量进行反向重组,得到多个DCT系数矩阵;再对多个DCT系数矩阵进行逆离散余弦变换,得到原始图像;或按照上述JPEG标准对多个DCT系数矩阵进行编码得到JPEG数据,也即按照本公开实施例的数据压缩过程进行反向的数据解压过程。
如上所述,压缩数据包括I个子压缩数据,在一种可能的实现方式中,根据DCT系数数据中各个系数出现的概率,对压缩数据进行熵解码,得到DCT系数数据,包括:根据DCT系数数据中各个系数出现的概率,对第i个子压缩数据进行熵解码,得到第i个系数张量;对I个系数张量组成的DCT系数张量进行反向重组,得到多个DCT系数矩阵,DCT系数数据包括多个DCT系数矩阵。通过该方式,能够利用DCT系数数据中各个系数出现的概率,有效实现对压缩数据的熵解码,得到编码前的DCT系数数据。
如上所述,可以通过概率表记录I个系数张量中各个系数出现的概率,也即记录DCT系数数据中各个系数出现的概率,从而可以熵解码时直接获取到DCT系数数据中各个系数出现的概率。
应理解的是,熵编码与熵解码的过程是相反的;对I个系数张量组成的DCT系数张量进行反向重组得到多个DCT系数矩阵的过程,与上述本公开实施例中对多个DCT系数矩阵进行重组得到DCT系数张量的过程是相反的,也即可以是按照上述本公开实施例中对DCT系数数据的数据压缩过程进行反向的数据解压过程。
在本公开实施例中,利用更准确的概率分布参数对DCT系数数据进行熵编码,能够获得无损压缩率更优的压缩数据,从而节约存储资源和带宽资源。
图4示出根据本公开实施例的数据处理方法的示意图,如图4所示,所述数据处理方法包括:
对DCT系数数据进行重组,得到DCT系数张量,对DCT系数张量在通道维度切分为I个系数张量;
将DCT系数张量输入至先验分析子网络ha,得到深度特征;对深度特征进行量化,得到量化后的深度特征;将量化后的深度特征输入至先验合成子网络hs,得到先验特征;
将第i个系数张量输入至第i个子空间自回归网络,得到第i个空间上下文特征;
将第1个系数张量至第j-1个系数张量输入至第j-1个子通道自回归网络,得到第j个通道上下文特征;
将先验特征与第1个空间上下文特征进行通道拼接,得到第1个拼接特征;将先验特征、第j个空间上下文特征以及第j个通道上下文特征进行通道拼接,得到第j个拼接特征;
将第i个拼接特征输入至第i个子熵参数分析网络,得到第i个系数张量中各个系数对应的均值与标准差,也即得到各个系数对应的概率分布参数;
根据第i个系数张量中各个系数对应的均值与标准差,对i个系数张量进行熵编码,得到第i个子压缩数据;
对第i个子压缩数据进行熵解码,得到第i个系数张量;对I个系数张量组成的DCT系数张量进行反向重组,得到多个DCT系数矩阵,也即得到DCT系数数据。
应理解的是,本公开实施例中数据处理方法的各个步骤的具体实现方式,可以参照上文方法实施例的描述,在此不再赘述。
根据本公开的实施例,能够充分利用DCT系数数据中的空间维度和通道维度上的冗余信息,相较于相关技术中的无损压缩技术,显著提升了图像数据的无损压缩率。
根据本公开实施例的数据处理方法,可以应用于数据中心、云存储、JPEG数据转码等场景,在这些场景中,海量的图像数据占据了大量的存储资源和带宽资源,提升了数据存储和传输的成本,根据本公开实施例的数据处理方法,能够在保证图像数据无损的情况下,对图像数据进行高效压缩,从而显著降低存储资源和带宽资源的占用。
可以理解,本公开提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例,限于篇幅,本公开不再赘述。本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
此外,本公开还提供了数据处理装置、电子设备、计算机可读存储介质、程序,上述均可用来实现本公开提供的任一种数据处理方法,相应技术方案和描述和参见方法部分的相应记载,不再赘述。
图5示出根据本公开实施例的数据处理装置的框图,如图5所示,所述装置包括:
获取模块101,用于获取图像数据对应的离散余弦变换DCT系数数据;
特征提取模块102,用于对所述DCT系数数据进行特征提取,得到先验特征以及上下文特征,所述先验特征用于表征所述DCT系数数据中各个系数的全局相关关系,所述上下文特征用于表征所述DCT系数数据中各个系数的局部相关关系;
参数确定模块103,用于根据所述先验特征以及所述上下文特征,确定所述DCT系数数据对应的概率分布参数;
编码模块104,用于根据所述概率分布参数,对所述DCT系数数据进行熵编码,得到所述DCT系数数据对应的压缩数据,所述压缩数据作为所述图像数据的压缩结果。
在一种可能的实现方式中,所述DCT系数数据包括多个DCT系数矩阵,所述特征提取模块102,包括:重组子模块,用于根据所述多个DCT系数矩阵内各个系数对应的频率,对所述多个DCT系数矩阵进行重组,得到DCT系数张量;特征提取子模块,用于对所述DCT系数张量进行特征提取,得到先验特征以及上下文特征。
在一种可能的实现方式中,所述根据所述多个DCT系数矩阵内各个系数对应的频率,对所述多个DCT系数矩阵进行重组,得到DCT系数张量,包括:将所述多个DCT系数矩阵中频率相同的系数在空间维度上进行拼接,得到多个拼接矩阵;将所述多个拼接矩阵按照指定顺序在通道维度上进行拼接,得到所述DCT系数张量。
在一种可能的实现方式中,所述对所述DCT系数张量进行特征提取,得到先验特征以及上下文特征,包括:通过先验网络对所述DCT系数张量进行特征提取,得到所述先验特征;通过自回归网络对所述DCT系数张量进行特征提取,得到所述上下文特征。
在一种可能的实现方式中,所述DCT系数张量具有n个通道,n为正整数,所述自回归网络包括空间自回归网络与通道自回归网络,所述通过自回归网络对所述DCT系数张量进行特征提取,得到所述上下文特征,包括:将所述DCT系数张量在通道维度上切分为I个具有n/I个通道的系数张量,I∈[1,n];通过所述空间自回归网络对第i个系数张量中的各个系数进行空间维度的自回归预测,得到第i个系数张量对应的第i个空间上下文特征,所述第i个空间上下文特征表示所述第i个系数张量中各个系数之间局部的相关关系,i∈[1,I];通过所述通道自回归网络根据第1个系数张量至j-1个系数张量,对第j个系数张量进行通道维度的自回归预测,得到所述第j个系数张量对应的第j个通道上下文特征,所述第j个通道上下文特征表示所述第1个系数张量至所述j-1个系数张量,与所述第j个系数张量之间的局部相关关系,j∈[2,I];其中,所述上下文特征包括I个空间上下文特征以及I-1个通道上下文特征。
在一种可能的实现方式中,所述上下文特征包括I个空间上下文特征以及I-1个通道上下文特征,I ∈[1,n],n为正整数,其中,所述参数确定模块103,包括:特征拼接子模块,用于将所述先验特征、所述I个空间上下文特征以及所述I-1个通道上下文特征进行通道拼接,得到I个拼接特征;参数确定子模块,用于根据所述I个拼接特征,确定所述DCT系数数据对应的概率分布参数。
在一种可能的实现方式中,所述将所述先验特征、所述I个空间上下文特征以及所述I-1个通道上下文特征进行通道拼接,得到I个拼接特征,包括:将所述先验特征与第1个空间上下文特征进行通道拼接,得到第1个拼接特征;将所述先验特征、第j个空间上下文特征以及第j个通道上下文特征进行通道拼接,得到第j个拼接特征,j∈[2,I]。
在一种可能的实现方式中,所述根据所述I个拼接特征,确定所述DCT系数数据对应的概率分布参数,包括:通过熵参数分析网络根据所述I个拼接特征,确定所述DCT系数对应的概率分布参数;其中,所述熵参数分析网络包括I个子熵参数分析网络,所述通过熵参数分析网络根据所述I个拼接特征,确定所述DCT系数对应的概率分布参数,包括:将第i个拼接特征输入至第i个子熵参数分析网络,得到第i个系数张量中各个系数对应的均值与标准差,其中,所述概率分布参数包括I个系数张量中各个系数对应的均值与标准差,所述I个系数张量是对所述DCT系数数据对应的DCT系数张量在通道维度上进行切分得到的。
在一种可能的实现方式中,所述编码模块104,包括:概率确定子模块,用于根据所述概率分布参数以及指定的概率分布函数,确定所述DCT系数数据中各个系数出现的概率;编码子模块,用于根据所述DCT系数数据中各个系数出现的概率,对所述DCT系数数据中各个系数进行熵编码,得到所述DCT系数数据对应的压缩数据。
在一种可能的实现方式中,所述根据所述DCT系数数据中各个系数出现的概率,对所述DCT系数数据中各个系数进行熵编码,得到所述DCT系数数据对应的压缩数据,包括:根据所述DCT系数数据中各个系数出现的概率,对I个系数张量的第i个系数张量中各个系数进行熵编码,得到所述第i个系数张量对应的第i个子压缩数据;其中,所述压缩数据包括I个子压缩数据,所述I个系数张量是对DCT系数张量在通道维度上切分得到的,所述DCT系数张量是对所述DCT系数数据中的多个DCT系数矩阵进行重组得到的,I∈[1,n],i∈[1,I],n为所述DCT系数张量的通道数。
在一种可能的实现方式中,在得到所述DCT系数数据对应的压缩数据之后,所述装置还包括:解码模块,用于根据所述DCT系数数据中各个系数出现的概率,对所述压缩数据进行熵解码,得到所述DCT系数数据,其中,所述DCT系数数据中各个系数出现的概率是根据所述概率分布参数以及指定的概率分布函数确定的。
在一种可能的实现方式中,所述压缩数据包括I个子压缩数据,所述解码模块,包括:解码子模块,用于根据所述DCT系数数据中各个系数出现的概率,对第i个子压缩数据进行熵解码,得到第i个系数张量;反向重组子模块,用于对I个系数张量组成的DCT系数张量进行反向重组,得到多个DCT系数矩阵,所述DCT系数数据包括所述多个DCT系数矩阵。
在本公开实施例中,通过提取图像数据对应的DCT系数数据的先验特征与上下文特征,能够利用表征全局相关关系的先验特征以及表征局部相关关系的上下文特征,得到更准确的概率分布参数,那么基于香农信源编码原理,待编码数据的概率估计越准确,越能提高数据的无损压缩率,因此基于更准确的概率分布参数对DCT系数数据进行熵编码,可以得到无损压缩率更优的压缩数据,也即能够获得体积更小的压缩结果。
在一些实施例中,本公开实施例提供的装置具有的功能或包含的模块可以用于执行上文方法实施例描述的方法,其具体实现可以参照上文方法实施例的描述,为了简洁,这里不再赘述。
本公开实施例还提出一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。计算机可读存储介质可以是易失性或非易失性计算机可读存储介质。
本公开实施例还提出一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为调用所述存储器存储的指令,以执行上述方法。
本公开实施例还提供了一种计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码 的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备的处理器中运行时,所述电子设备中的处理器执行上述方法。
本公开实施例还提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行上述方法。
电子设备可以被提供为终端、服务器或其它形态的设备。
FIG. 6 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to FIG. 6, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions, so as to complete all or some of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the electronic device 800. Examples of such data include instructions for any application or method operated on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 supplies power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the electronic device 800 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, for example the display and the keypad of the electronic device 800; the sensor component 814 may also detect a change in position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as wireless network (Wi-Fi), second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), fourth-generation mobile communication technology (4G), Long Term Evolution (LTE) of universal mobile communication technology, fifth-generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, such as the memory 804 including computer program instructions, where the computer program instructions may be executed by the processor 820 of the electronic device 800 to complete the above method.
FIG. 7 shows a block diagram of another electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 7, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. The applications stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the above method.
The electronic device 1900 may further include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system (Mac OS X™) from Apple, the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, such as the memory 1932 including computer program instructions, where the computer program instructions may be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example (but not limited to), an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punched card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to respective computing/processing devices, or downloaded to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions, thereby implementing aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored therein includes an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device, to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of instructions, which includes one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The computer program product may be specifically implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is specifically embodied as a computer storage medium; in another optional embodiment, the computer program product is specifically embodied as a software product, such as a software development kit (SDK).
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen to best explain the principles of the embodiments, the practical application, or the improvement over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (16)

  1. A data processing method, comprising:
    obtaining discrete cosine transform (DCT) coefficient data corresponding to image data;
    performing feature extraction on the DCT coefficient data to obtain prior features and context features, wherein the prior features characterize global correlations of at least some coefficients in the DCT coefficient data, and the context features characterize local correlations of respective coefficients in the DCT coefficient data;
    determining, according to the prior features and the context features, probability distribution parameters corresponding to the DCT coefficient data; and
    performing entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, the compressed data serving as a compression result of the image data.
  2. The method according to claim 1, wherein the DCT coefficient data comprises a plurality of DCT coefficient matrices, and performing feature extraction on the DCT coefficient data to obtain the prior features and the context features comprises:
    reorganizing the plurality of DCT coefficient matrices according to frequencies corresponding to at least some coefficients in the plurality of DCT coefficient matrices to obtain a DCT coefficient tensor; and
    performing feature extraction on the DCT coefficient tensor to obtain the prior features and the context features.
  3. The method according to claim 2, wherein reorganizing the plurality of DCT coefficient matrices according to the frequencies corresponding to at least some coefficients in the plurality of DCT coefficient matrices to obtain the DCT coefficient tensor comprises:
    concatenating coefficients of the same frequency in the plurality of DCT coefficient matrices in a spatial dimension to obtain a plurality of concatenated matrices; and
    concatenating the plurality of concatenated matrices in a channel dimension in a specified order to obtain the DCT coefficient tensor.
  4. The method according to claim 2 or 3, wherein performing feature extraction on the DCT coefficient tensor to obtain the prior features and the context features comprises:
    performing feature extraction on the DCT coefficient tensor via a prior network to obtain the prior features; and
    performing feature extraction on the DCT coefficient tensor via an autoregressive network to obtain the context features.
  5. The method according to claim 4, wherein the DCT coefficient tensor has n channels, n being a positive integer, the autoregressive network comprises a spatial autoregressive network and a channel autoregressive network, and performing feature extraction on the DCT coefficient tensor via the autoregressive network to obtain the context features comprises:
    splitting the DCT coefficient tensor in a channel dimension into I coefficient tensors each having n/I channels, I∈[1,n];
    performing, via the spatial autoregressive network, autoregressive prediction in a spatial dimension on at least some coefficients in an i-th coefficient tensor to obtain an i-th spatial context feature corresponding to the i-th coefficient tensor, the i-th spatial context feature representing local correlations among at least some coefficients in the i-th coefficient tensor, i∈[1,I]; and
    performing, via the channel autoregressive network, autoregressive prediction in a channel dimension on a j-th coefficient tensor according to the 1st through (j-1)-th coefficient tensors, to obtain a j-th channel context feature corresponding to the j-th coefficient tensor, the j-th channel context feature representing local correlations between the 1st through (j-1)-th coefficient tensors and the j-th coefficient tensor, j∈[2,I];
    wherein the context features comprise I spatial context features and I-1 channel context features.
  6. The method according to any one of claims 1 to 5, wherein the context features comprise I spatial context features and I-1 channel context features, I∈[1,n], n being a positive integer, and determining, according to the prior features and the context features, the probability distribution parameters corresponding to the DCT coefficient data comprises:
    channel-concatenating the prior features, the I spatial context features, and the I-1 channel context features to obtain I concatenated features; and
    determining, according to the I concatenated features, the probability distribution parameters corresponding to the DCT coefficient data.
  7. The method according to claim 6, wherein channel-concatenating the prior features, the I spatial context features, and the I-1 channel context features to obtain the I concatenated features comprises:
    channel-concatenating the prior features and a 1st spatial context feature to obtain a 1st concatenated feature; and
    channel-concatenating the prior features, a j-th spatial context feature, and a j-th channel context feature to obtain a j-th concatenated feature, j∈[2,I].
  8. The method according to claim 6 or 7, wherein determining, according to the I concatenated features, the probability distribution parameters corresponding to the DCT coefficient data comprises:
    determining, via an entropy parameter analysis network and according to the I concatenated features, the probability distribution parameters corresponding to the DCT coefficients;
    wherein the entropy parameter analysis network comprises I sub entropy parameter analysis networks, and determining, via the entropy parameter analysis network and according to the I concatenated features, the probability distribution parameters corresponding to the DCT coefficients comprises:
    inputting an i-th concatenated feature into an i-th sub entropy parameter analysis network to obtain means and standard deviations corresponding to at least some coefficients in an i-th coefficient tensor, wherein the probability distribution parameters comprise the means and standard deviations corresponding to at least some coefficients in the I coefficient tensors, and the I coefficient tensors are obtained by splitting, in a channel dimension, the DCT coefficient tensor corresponding to the DCT coefficient data.
  9. The method according to any one of claims 1 to 8, wherein performing entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain the compressed data corresponding to the DCT coefficient data comprises:
    determining, according to the probability distribution parameters and a specified probability distribution function, probabilities of occurrence of at least some coefficients in the DCT coefficient data; and
    performing entropy encoding on at least some coefficients in the DCT coefficient data according to the probabilities of occurrence of at least some coefficients in the DCT coefficient data, to obtain the compressed data corresponding to the DCT coefficient data.
  10. The method according to claim 9, wherein performing entropy encoding on at least some coefficients in the DCT coefficient data according to the probabilities of occurrence of at least some coefficients in the DCT coefficient data to obtain the compressed data corresponding to the DCT coefficient data comprises:
    performing entropy encoding on at least some coefficients in an i-th coefficient tensor of I coefficient tensors according to the probabilities of occurrence of at least some coefficients in the DCT coefficient data, to obtain an i-th piece of sub compressed data corresponding to the i-th coefficient tensor;
    wherein the compressed data comprises I pieces of sub compressed data, the I coefficient tensors are obtained by splitting a DCT coefficient tensor in a channel dimension, the DCT coefficient tensor is obtained by reorganizing a plurality of DCT coefficient matrices in the DCT coefficient data, I∈[1,n], i∈[1,I], and n is the number of channels of the DCT coefficient tensor.
  11. The method according to any one of claims 1 to 10, wherein, after the compressed data corresponding to the DCT coefficient data is obtained, the method further comprises:
    performing entropy decoding on the compressed data according to probabilities of occurrence of at least some coefficients in the DCT coefficient data to obtain the DCT coefficient data, wherein the probabilities of occurrence of at least some coefficients in the DCT coefficient data are determined according to the probability distribution parameters and a specified probability distribution function.
  12. The method according to claim 11, wherein the compressed data comprises I pieces of sub compressed data, and performing entropy decoding on the compressed data according to the probabilities of occurrence of at least some coefficients in the DCT coefficient data to obtain the DCT coefficient data comprises:
    performing entropy decoding on an i-th piece of sub compressed data according to the probabilities of occurrence of at least some coefficients in the DCT coefficient data to obtain an i-th coefficient tensor; and
    inversely reorganizing a DCT coefficient tensor composed of the I coefficient tensors to obtain a plurality of DCT coefficient matrices, the DCT coefficient data comprising the plurality of DCT coefficient matrices.
  13. A data processing apparatus, comprising:
    an acquisition module, configured to obtain discrete cosine transform (DCT) coefficient data corresponding to image data;
    a feature extraction module, configured to perform feature extraction on the DCT coefficient data to obtain prior features and context features, wherein the prior features characterize global correlations of at least some coefficients in the DCT coefficient data, and the context features characterize local correlations of at least some coefficients in the DCT coefficient data;
    a parameter determination module, configured to determine, according to the prior features and the context features, probability distribution parameters corresponding to the DCT coefficient data; and
    an encoding module, configured to perform entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, the compressed data serving as a compression result of the image data.
  14. An electronic device, comprising:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor is configured to invoke the instructions stored in the memory to perform the method according to any one of claims 1 to 12.
  15. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 12.
  16. A computer program, comprising computer-readable code, wherein, when the computer-readable code runs on an electronic device, a processor in the electronic device performs the method according to any one of claims 1 to 12.
PCT/CN2022/114451 2021-12-27 2022-08-24 Data processing method and apparatus, electronic device and storage medium WO2023124148A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111614879.5A CN114363615B (zh) 2021-12-27 Data processing method and apparatus, electronic device and storage medium
CN202111614879.5 2021-12-27

Publications (1)

Publication Number Publication Date
WO2023124148A1 true WO2023124148A1 (zh) 2023-07-06

Family

ID=81102332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114451 WO2023124148A1 (zh) 2021-12-27 2022-08-24 Data processing method and apparatus, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN114363615B (zh)
WO (1) WO2023124148A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114363615B (zh) 2021-12-27 2023-05-19 上海商汤科技开发有限公司 Data processing method and apparatus, electronic device and storage medium
CN115866252B (zh) 2023-02-09 2023-05-02 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Image compression method, apparatus, device and storage medium
CN116416616B (zh) 2023-04-13 2024-01-05 沃森克里克(北京)生物科技有限公司 DC cell in-vitro culture screening method, apparatus and computer-readable medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190098321A1 (en) * 2016-09-15 2019-03-28 Dropbox, Inc. Digital image recompression
US10594338B1 (en) * 2019-03-18 2020-03-17 WaveOne Inc. Adaptive quantization
CN111009018A (zh) 2019-12-24 2020-04-14 苏州天必佑科技有限公司 Image dimensionality reduction and reconstruction method based on deep neural network
CN112866694A (zh) 2020-12-31 2021-05-28 杭州电子科技大学 Intelligent image compression optimization method combining asymmetric convolution blocks and conditional context
CN113810693A (zh) 2021-09-01 2021-12-17 上海交通大学 JPEG image lossless compression and decompression method, system and apparatus
CN113810717A (zh) 2020-06-11 2021-12-17 华为技术有限公司 Image processing method and apparatus
CN114363615A (zh) 2021-12-27 2022-04-15 上海商汤科技开发有限公司 Data processing method and apparatus, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537456B (zh) 2021-06-15 2023-10-17 北京大学 Deep feature compression method


Also Published As

Publication number Publication date
CN114363615A (zh) 2022-04-15
CN114363615B (zh) 2023-05-19

Similar Documents

Publication Publication Date Title
WO2023124148A1 (zh) Data processing method and apparatus, electronic device and storage medium
JP6728385B2 (ja) Digital image recompression
CN107944409B (zh) Video analysis method and apparatus capable of distinguishing key actions
TWI761851B (zh) Image processing method, image processing apparatus, electronic device and computer-readable storage medium
TWI777112B (zh) Image processing method, electronic device and storage medium
WO2022198853A1 (zh) Task scheduling method and apparatus, electronic device, storage medium and program product
US11671576B2 (en) Method and apparatus for inter-channel prediction and transform for point-cloud attribute coding
JP2022533065A (ja) Character recognition method and apparatus, electronic device and storage medium
US20200226797A1 (en) Systems and methods for image compression at multiple, different bitrates
TWI785267B (zh) Image processing method, electronic device and storage medium
WO2023165082A1 (zh) Image preview method and apparatus, electronic device, storage medium, computer program and product thereof
CN110647508B (zh) Data compression method, data decompression method, apparatus and electronic device
US20240195968A1 (en) Method for video processing, electronic device, and storage medium
CN113139484B (zh) Crowd positioning method and apparatus, electronic device and storage medium
CN114446318A (zh) Audio data separation method and apparatus, electronic device and storage medium
CN115512116B (zh) Image segmentation model optimization method and apparatus, electronic device and readable storage medium
CN114554226A (zh) Image processing method and apparatus, electronic device and storage medium
CN111885386B (zh) Image compression and decompression method and apparatus, electronic device and storage medium
CN115527035B (zh) Image segmentation model optimization method and apparatus, electronic device and readable storage medium
CN111311483A (zh) Image editing and training method and apparatus, electronic device and storage medium
CN113596471B (zh) Image processing method and apparatus, electronic device and storage medium
US11546597B2 (en) Block-based spatial activity measures for pictures
CN116437094A (zh) Video encoding method, apparatus, device and storage medium
WO2023169303A1 (zh) Encoding and decoding method, apparatus, device, storage medium and computer program product
RU2782583C1 (ru) Block-based image fusion for contextual segmentation and processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22913431

Country of ref document: EP

Kind code of ref document: A1