WO2023124148A1 - Data processing method and apparatus, electronic device and storage medium


Info

Publication number: WO2023124148A1
Authority: WO (WIPO PCT)
Prior art keywords: DCT coefficient, data, coefficient, tensor, features
Application number: PCT/CN2022/114451
Other languages: English (en), Chinese (zh)
Inventors: 王园园, 王岩, 何岱岚, 郭莉娜, 秦红伟
Original assignee: 上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2023124148A1

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
                    • H04N19/60: using transform coding
                        • H04N19/625: using discrete cosine transform [DCT]
                    • H04N19/10: using adaptive coding
                        • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
                            • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
                        • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                            • H04N19/146: Data rate or code amount at the encoder output
                                • H04N19/149: by estimating the code amount by means of a model, e.g. mathematical model or statistical model
                    • H04N19/42: characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
                        • H04N19/423: characterised by memory arrangements
    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00: Computing arrangements based on biological models
                    • G06N3/02: Neural networks
                        • G06N3/04: Architecture, e.g. interconnection topology
                            • G06N3/045: Combinations of networks
                            • G06N3/048: Activation functions
                        • G06N3/08: Learning methods

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a data processing method and device, electronic equipment, and a storage medium.
  • in related technologies, image compression techniques such as the JPEG image compression standard can be used to reduce the volume of image data, thereby saving storage resources and bandwidth resources.
  • the disclosure proposes a data processing technical solution.
  • a data processing method, including: acquiring discrete cosine transform (DCT) coefficient data corresponding to image data; performing feature extraction on the DCT coefficient data to obtain prior features and context features, where the prior features are used to characterize the global correlation of the coefficients in the DCT coefficient data and the context features are used to characterize the local correlation of the coefficients in the DCT coefficient data; determining probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features; and performing entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, the compressed data being used as the compression result of the image data. In this manner, compressed data with a better lossless compression rate is obtained.
  • in a possible implementation, the DCT coefficient data includes multiple DCT coefficient matrices, and performing feature extraction on the DCT coefficient data to obtain prior features and context features includes: reorganizing the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain a DCT coefficient tensor; and performing feature extraction on the DCT coefficient tensor to obtain the prior features and the context features.
  • the preprocessed DCT coefficient tensor can be used to efficiently obtain prior features and context features, so that more accurate probability distribution parameters can be obtained later.
  • the reorganizing of the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain a DCT coefficient tensor includes: splicing the coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain multiple splicing matrices; and splicing the multiple splicing matrices in the channel dimension in a specified order to obtain the DCT coefficient tensor.
  • the recombined DCT coefficient tensor can have certain structural redundant information in the space dimension and the channel dimension, so that the redundant information can be used to generate more accurate probability distribution parameters later.
  • the performing of feature extraction on the DCT coefficient tensor to obtain prior features and context features includes: performing feature extraction on the DCT coefficient tensor through a prior network to obtain the prior features; and performing feature extraction on the DCT coefficient tensor through an autoregressive network to obtain the context features. In this way, prior features and context features can be effectively obtained.
  • in a possible implementation, the DCT coefficient tensor has n channels, n being a positive integer, and the autoregressive network includes a spatial autoregressive network and a channel autoregressive network. Performing feature extraction on the DCT coefficient tensor through the autoregressive network to obtain the context features includes: dividing the DCT coefficient tensor in the channel dimension into I coefficient tensors with n/I channels each, I ∈ [1, n]; performing autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor through the spatial autoregressive network to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor, where the i-th spatial context feature represents the local correlation between the coefficients in the i-th coefficient tensor, i ∈ [1, I]; and performing autoregressive prediction in the channel dimension on the j-th coefficient tensor through the channel autoregressive network according to the first to (j-1)-th coefficient tensors to obtain the j-th channel context feature corresponding to the j-th coefficient tensor, where the j-th channel context feature represents the local correlation between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor, j ∈ [2, I].
  • in this way, the redundant information of the DCT coefficient tensor in the spatial dimension and the channel dimension can be learned separately, that is, autoregressive prediction is performed on the DCT coefficient tensor in the channel dimension and the spatial dimension, thereby obtaining more informative context features.
  • in a possible implementation, the context features include I spatial context features and I-1 channel context features, I ∈ [1, n], n being a positive integer, where determining the probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features includes: performing channel splicing on the prior features, the I spatial context features and the I-1 channel context features to obtain I splicing features; and determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • splicing features with richer information can be used to obtain more accurate probability distribution parameters.
  • the channel splicing of the prior features, the I spatial context features and the I-1 channel context features to obtain I splicing features includes: performing channel splicing on the prior features and the first spatial context feature to obtain the first splicing feature; and performing channel splicing on the prior features, the j-th spatial context feature and the j-th channel context feature to obtain the j-th splicing feature, j ∈ [2, I].
  • in this way, the prior features and the context features can be divided into multiple groups of splicing features, which is beneficial to efficiently obtaining the probability distribution model corresponding to each coefficient in each coefficient matrix and improves the operation efficiency.
  • in a possible implementation, determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features includes: determining, through an entropy parameter analysis network, the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • in this way, splicing features with richer information can be used to obtain more accurate probability distribution parameters.
  • performing entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data includes: determining the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and a specified probability distribution function; and performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the compressed data corresponding to the DCT coefficient data.
  • the DCT coefficient data is entropy encoded using more accurate probability distribution parameters, and compressed data with a better lossless compression rate can be obtained, thereby saving storage resources and bandwidth resources.
  • in a possible implementation, performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain compressed data corresponding to the DCT coefficient data includes: performing entropy encoding on each coefficient in the i-th coefficient tensor of the I coefficient tensors according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th sub-compressed data corresponding to the i-th coefficient tensor; where the compressed data includes I sub-compressed data, the I coefficient tensors are obtained by segmenting the DCT coefficient tensor in the channel dimension, the DCT coefficient tensor is obtained by reorganizing the multiple DCT coefficient matrices in the DCT coefficient data, I ∈ [1, n], i ∈ [1, I], and n is the number of channels of the DCT coefficient tensor.
  • in this way, the DCT coefficient data is entropy encoded using the occurrence probability of each coefficient determined by more accurate probability distribution parameters, so that compressed data with a better lossless compression rate can be obtained.
  • the method further includes: performing entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, where the occurrence probability of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and a specified probability distribution function.
  • the probability of occurrence of each coefficient in the DCT coefficient data can be used to effectively realize entropy decoding of the compressed data, and obtain the DCT coefficient data before encoding.
  • in a possible implementation, the compressed data includes I sub-compressed data, and performing entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data includes: performing entropy decoding on the i-th sub-compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th coefficient tensor; and reversely reorganizing the DCT coefficient tensor composed of the I coefficient tensors to obtain multiple DCT coefficient matrices, where the DCT coefficient data includes the multiple DCT coefficient matrices.
  • the probability of occurrence of each coefficient in the DCT coefficient data can be used to effectively realize entropy decoding of the compressed data, and obtain the DCT coefficient data before encoding.
  • a data processing device, including: an acquisition module configured to acquire discrete cosine transform (DCT) coefficient data corresponding to image data; a feature extraction module configured to perform feature extraction on the DCT coefficient data to obtain prior features and context features, where the prior features are used to characterize the global correlation of the coefficients in the DCT coefficient data and the context features are used to characterize the local correlation of the coefficients in the DCT coefficient data; a parameter determination module configured to determine probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features; and an encoding module configured to perform entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, where the compressed data is used as the compression result of the image data.
  • in a possible implementation, the DCT coefficient data includes multiple DCT coefficient matrices, and the feature extraction module includes: a recombination submodule configured to reorganize the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain a DCT coefficient tensor; and a feature extraction submodule configured to perform feature extraction on the DCT coefficient tensor to obtain the prior features and the context features.
  • the reorganizing of the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain a DCT coefficient tensor includes: splicing the coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain multiple splicing matrices; and splicing the multiple splicing matrices in the channel dimension in a specified order to obtain the DCT coefficient tensor.
  • the performing of feature extraction on the DCT coefficient tensor to obtain prior features and context features includes: performing feature extraction on the DCT coefficient tensor through a prior network to obtain the prior features; and performing feature extraction on the DCT coefficient tensor through an autoregressive network to obtain the context features.
  • in a possible implementation, the DCT coefficient tensor has n channels, n being a positive integer, and the autoregressive network includes a spatial autoregressive network and a channel autoregressive network. Performing feature extraction on the DCT coefficient tensor through the autoregressive network to obtain the context features includes: dividing the DCT coefficient tensor in the channel dimension into I coefficient tensors with n/I channels each, I ∈ [1, n]; performing autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor through the spatial autoregressive network to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor, where the i-th spatial context feature represents the local correlation between the coefficients in the i-th coefficient tensor, i ∈ [1, I]; and performing autoregressive prediction in the channel dimension on the j-th coefficient tensor through the channel autoregressive network according to the first to (j-1)-th coefficient tensors to obtain the j-th channel context feature corresponding to the j-th coefficient tensor, where the j-th channel context feature represents the local correlation between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor, j ∈ [2, I].
  • in a possible implementation, the context features include I spatial context features and I-1 channel context features, I ∈ [1, n], n being a positive integer, and the parameter determination module includes: a feature splicing submodule configured to perform channel splicing on the prior features, the I spatial context features and the I-1 channel context features to obtain I splicing features; and a parameter determination submodule configured to determine the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • the channel splicing of the prior features, the I spatial context features and the I-1 channel context features to obtain I splicing features includes: performing channel splicing on the prior features and the first spatial context feature to obtain the first splicing feature; and performing channel splicing on the prior features, the j-th spatial context feature and the j-th channel context feature to obtain the j-th splicing feature, j ∈ [2, I].
  • in a possible implementation, determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features includes: determining, through an entropy parameter analysis network, the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • in a possible implementation, the encoding module includes: a probability determination submodule configured to determine the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and a specified probability distribution function; and an encoding submodule configured to perform entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the compressed data corresponding to the DCT coefficient data.
  • in a possible implementation, performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain compressed data corresponding to the DCT coefficient data includes: performing entropy encoding on each coefficient in the i-th coefficient tensor of the I coefficient tensors according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th sub-compressed data corresponding to the i-th coefficient tensor; where the compressed data includes I sub-compressed data, the I coefficient tensors are obtained by segmenting the DCT coefficient tensor in the channel dimension, the DCT coefficient tensor is obtained by reorganizing the multiple DCT coefficient matrices in the DCT coefficient data, I ∈ [1, n], i ∈ [1, I], and n is the number of channels of the DCT coefficient tensor.
  • in a possible implementation, the device further includes: a decoding module configured to perform entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, where the occurrence probability of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and a specified probability distribution function.
  • in a possible implementation, the compressed data includes I sub-compressed data, and the decoding module includes: a decoding submodule configured to perform entropy decoding on the i-th sub-compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th coefficient tensor; and a reverse reorganization submodule configured to reversely reorganize the DCT coefficient tensor composed of the I coefficient tensors to obtain multiple DCT coefficient matrices, where the DCT coefficient data includes the multiple DCT coefficient matrices.
  • a computer program including computer-readable code which, when run in an electronic device, causes a processor in the electronic device to execute the above method.
  • an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to call the instructions stored in the memory to execute the above-mentioned method.
  • a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above method is implemented.
  • in the embodiments of the present disclosure, the prior features characterizing the global correlation and the context features characterizing the local correlation can be used to obtain more accurate probability distribution parameters. According to the principle of Shannon source coding, the more accurate the probability estimate of the data to be coded, the better the achievable lossless compression rate. Therefore, performing entropy encoding on the DCT coefficient data based on more accurate probability distribution parameters yields compressed data with a better lossless compression rate, that is, a smaller compression result.
  • FIG. 1 shows a flowchart of a data processing method according to an embodiment of the present disclosure.
  • Fig. 2 shows a schematic diagram of DCT coefficient data according to an embodiment of the present disclosure.
  • Fig. 3 shows a schematic diagram of a DCT coefficient tensor according to an embodiment of the disclosure.
  • Fig. 4 shows a schematic diagram of a data processing method according to an embodiment of the present disclosure.
  • Fig. 5 shows a block diagram of a data processing device according to an embodiment of the present disclosure.
  • FIG. 6 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • FIG. 7 shows a block diagram of another electronic device 1900 according to an embodiment of the present disclosure.
  • Fig. 1 shows a flow chart of a data processing method according to an embodiment of the present disclosure
  • the data processing method may be executed by electronic devices such as a terminal device or a server
  • the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user Terminal, terminal, cellular phone, cordless phone, personal digital assistant (Personal Digital Assistant, PDA), handheld device, computing device, vehicle-mounted device, wearable device, etc.
  • the method may be implemented by a processor calling computer-readable instructions stored in a memory, or the method may be executed by a server.
  • the data processing method includes:
  • step S11 discrete cosine transform DCT coefficient data corresponding to the image data is acquired.
  • the image data may refer to an original image, or may also be JPEG data.
  • the original image is an image directly captured by an image acquisition device such as a camera;
  • the JPEG data may refer to data obtained by encoding the original image according to the JPEG standard.
  • JPEG: Joint Photographic Experts Group.
  • DCT: discrete cosine transform.
  • when the image data is an original image, discrete cosine transform can be performed on the image data according to the JPEG standard to obtain multiple DCT coefficient matrices, and the DCT coefficient data includes the multiple DCT coefficient matrices;
  • when the image data is JPEG data, the JPEG data may be decoded according to the above JPEG standard to directly extract the DCT coefficient data from the JPEG data. It should be understood that the embodiments of the present disclosure do not limit the source of the DCT coefficient data.
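As an illustration of step S11, the sketch below derives 8 × 8 DCT coefficient matrices from an original image. The use of scipy.fft.dctn, the level shift by 128, and the random stand-in image are assumptions for illustration only; the patent does not prescribe a particular library or implementation.

```python
import numpy as np
from scipy.fft import dctn  # assumed library choice; any 2D DCT-II works

def blockwise_dct(image, block=8):
    """Split a grayscale image into block x block tiles and DCT-transform each
    tile, yielding one DCT coefficient matrix per tile (as in the JPEG standard)."""
    h, w = image.shape
    matrices = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = image[y:y + block, x:x + block].astype(np.float64) - 128.0
            matrices.append(dctn(tile, norm="ortho"))
    return matrices

image = np.random.randint(0, 256, (16, 16))  # stand-in for an original image
matrices = blockwise_dct(image)              # four 8 x 8 DCT coefficient matrices
```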
  • step S12 feature extraction is performed on the DCT coefficient data to obtain prior features and context features.
  • the prior features are used to represent the global correlation of the coefficients in the DCT coefficient data, and the context features are used to represent the local correlation of the coefficients in the DCT coefficient data.
  • the local correlation can be understood as the linear or nonlinear relationship between a current coefficient and its adjacent coefficients within a local receptive field, and the global correlation can be understood as the linear or nonlinear relationship between a current coefficient and its adjacent coefficients within a global receptive field; here, the adjacent coefficients can include the coefficients arranged in order around the current coefficient in the local receptive field or the global receptive field, and the range of the global receptive field is larger than that of the local receptive field, or the number of adjacent coefficients in the global receptive field is greater than the number of adjacent coefficients in the local receptive field.
  • the coefficients in the embodiments of the present disclosure are also DCT coefficients.
  • a priori network and an autoregressive network may be used to perform feature extraction on the DCT coefficient data, respectively, to obtain a priori features and context features.
  • the prior network may include a prior analysis sub-network and a prior synthesis sub-network. For example, the prior analysis sub-network may include m convolutional layers, the first m-1 of which are each followed by an activation function layer; it is used to extract the depth features of the DCT coefficient data, that is, to downsample the DCT coefficient data layer by layer, m being a positive integer (for example, m is 3).
  • the prior synthesis sub-network can include m convolutional layers, the first m-1 of which are each followed by an activation function layer; it is used to upsample the depth features extracted by the prior analysis sub-network layer by layer to obtain the prior features. It should be understood that the embodiments of the present disclosure do not limit the number and size of the convolution kernels in each convolutional layer, the convolution stride, or the activation function type used by the activation function layer.
  • each feature value in the depth features can be modeled through an existing probability model (such as a parametric probability model or a non-parametric probability model), that is, the probability of each feature value in the depth features is described by the probability model, so as to store the computed depth features.
  • since the depth features output by the prior analysis sub-network are floating-point numbers, the depth features can be discretized first: the depth features output by the prior analysis sub-network are quantized, the quantized depth features are modeled through the above probability model so as to store the quantized depth features, and the quantized depth features are input to the prior synthesis sub-network to obtain the prior features.
  • quantizing the depth features output by the prior analysis sub-network may include: rounding the depth features, for example, using the quantization function round() to round the feature values in the depth features; or adding uniformly distributed random noise to the feature values in the depth features, where the value range of the random noise may be, for example, [-0.5, 0.5].
  • the embodiment of the present disclosure does not limit the quantification method used.
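A minimal sketch of the two quantization options just mentioned, assuming numpy; the function name is hypothetical, and the noise variant is the train-time stand-in described above.

```python
import numpy as np

def quantize_depth_features(features, training=False):
    """Quantize depth features output by the prior analysis sub-network:
    round() at encoding time, or additive uniform noise in [-0.5, 0.5] as a
    differentiable stand-in during training."""
    if training:
        return features + np.random.uniform(-0.5, 0.5, size=features.shape)
    return np.round(features)
```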
  • the autoregressive network can be understood as a convolutional neural network combined with an autoregressive prediction algorithm, such as a masked convolutional network, which can be used to learn the contextual information between input data, that is, to extract the context features between the multiple coefficients in the DCT coefficient data.
  • step S13 the probability distribution parameters corresponding to the DCT coefficient data are determined according to the priori features and the context features.
  • each coefficient in the DCT coefficient data obeys a specified probability distribution, for example, obeys Gaussian distribution, Laplace distribution, mixed Gaussian distribution, etc.
  • each coefficient in the DCT coefficient data also obeys a probability distribution with mean (also known as expectation) μ and variance σ², where σ is the standard deviation; the mean and the standard deviation are the probability distribution parameters.
  • once the probability distribution parameters corresponding to each coefficient are calculated, the occurrence probability of each coefficient can be calculated in combination with the probability distribution function corresponding to the specified probability distribution.
  • determining the probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features may include: performing channel splicing on the prior features and the context features to obtain splicing features, and inputting the splicing features into an entropy parameter analysis network, which outputs the probability distribution parameters corresponding to the DCT coefficient data, that is, the probability distribution parameters corresponding to each coefficient in the DCT coefficient data.
  • the entropy parameter analysis network can adopt, for example, a 3-layer convolutional neural network with 1 × 1 convolution kernels and a stride of 1, and the output of the entropy parameter analysis network can be, for example, a tensor with 2 × T channels, where the tensor of half of the channels indicates the mean corresponding to each coefficient in the multiple DCT coefficient matrices, and the other half of the channels indicates the standard deviation corresponding to each coefficient in the multiple DCT coefficient matrices.
  • in a possible implementation, the entropy parameter analysis network may be trained with a rate-distortion objective of the form D + λR, together with image quality evaluation indicators such as the SSIM (structural similarity) indicator and the PSNR (peak signal-to-noise ratio) indicator, where D is the distortion term, R is the code rate, and λ is a constant parameter. Since the DCT coefficient data is losslessly compressed, the distortion term D is 0, and R can include the coding rate corresponding to the DCT coefficient data and the coding rate corresponding to the prior features.
  • the information entropy of the DCT coefficient data may be used to approximate the coding rate corresponding to the DCT coefficient data
  • the information entropy of the prior feature may be used to approximate the coding rate corresponding to the prior feature.
  • the information entropy of the DCT coefficient data can be obtained by entropy encoding the DCT coefficient data according to the probability distribution parameters output by the entropy parameter analysis network, and the information entropy of the prior features can be obtained by entropy encoding the prior features according to the probability distribution parameters output by the entropy parameter analysis network.
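To make the rate term concrete, the sketch below approximates a code rate by information content, summing -log2 P(x) over the symbols; the helper name and the uniform toy probabilities are assumptions, and λ is taken as 1.0 for illustration.

```python
import numpy as np

def information_entropy_bits(probs):
    """Approximate the coding rate of a set of symbols by their information
    content: sum of -log2 P(x), the Shannon lower bound on the code length."""
    return float(np.sum(-np.log2(probs)))

# Toy rate term R = R(DCT coefficients) + R(prior features); with lossless
# compression the distortion term D is 0, so the loss reduces to lambda * R.
coeff_probs = np.full(64, 1 / 256)   # stand-in coefficient probabilities
prior_probs = np.full(16, 1 / 32)    # stand-in prior-feature probabilities
R = information_entropy_bits(coeff_probs) + information_entropy_bits(prior_probs)
loss = 0.0 + 1.0 * R                 # D + lambda * R, with D = 0, lambda = 1.0
```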
  • the above entropy parameter analysis network is one implementation provided by the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit the network structure, network type, or training method of the entropy parameter analysis network.
  • step S14 entropy encoding is performed on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, and the compressed data is used as a compression result of the image data.
  • each coefficient in the DCT coefficient data obeys a specified probability distribution, for example, a Gaussian distribution, a Laplace distribution, a mixed Gaussian distribution, etc.
  • in a possible implementation, the occurrence probability P(x) of each DCT coefficient in the DCT coefficient data can be determined with the Gaussian distribution function F(x) shown in formula (1):

    F(x) = (1 / (σ√(2π))) · exp(-(x - μ)² / (2σ²))    (1)

    where x represents any DCT coefficient, exp represents the exponential function with the natural constant e as its base, μ represents the mean (also known as the expectation), and σ represents the standard deviation.
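Formula (1) as reconstructed above, evaluated in code; note that in practice entropy coders need a probability mass, so a discretized variant such as F(x + 0.5) - F(x - 0.5) over the Gaussian CDF is often used instead, but this sketch follows the formula as stated.

```python
import numpy as np

def gaussian_prob(x, mu, sigma):
    """Formula (1): the Gaussian function evaluated at DCT coefficient x with
    its predicted mean mu and standard deviation sigma."""
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

p = gaussian_prob(x=3.0, mu=0.0, sigma=2.0)  # occurrence probability of one coefficient
```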
  • any entropy coding method, such as ANS (asymmetric numeral systems) coding or arithmetic coding, can be used to implement entropy encoding of the DCT coefficient data to obtain the compressed data corresponding to the DCT coefficient data.
  • taking arithmetic coding as an example, the initial coding interval [0,1) is continuously divided into multiple sub-intervals, each sub-interval represents a DCT coefficient, the size of each sub-interval is proportional to the probability P(x) of that DCT coefficient (the greater the probability, the larger the sub-interval), and the sum of all sub-intervals is exactly [0,1).
  • the encoding starts from the initial coding interval [0,1), and one DCT coefficient is encoded at a time.
  • each time a DCT coefficient is encoded, the sub-interval where that coefficient is located is taken out, according to the probability ratio, as the coding interval for the next DCT coefficient. For example, the sub-interval of the first DCT coefficient x1 falls on 0 to 0.6, so the coding interval is reduced to [0, 0.6); the sub-interval of the second DCT coefficient x2 falls on 0.48 to 0.54 of the coding interval [0, 0.6), so the coding interval is reduced to [0.48, 0.54); the sub-interval of the third DCT coefficient x3 falls on 0.534 to 0.54 of the coding interval [0.48, 0.54); and so on. Finally, any decimal in the sub-interval corresponding to the last DCT coefficient is output in binary form to obtain the encoded data.
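The worked example above can be replayed with a one-step interval-narrowing helper; the cumulative probability ranges passed in are inferred from the interval endpoints given in the example.

```python
def narrow(low, high, cum_lo, cum_hi):
    """One arithmetic-coding step: keep the sub-interval of the current coding
    interval spanned by the symbol's cumulative probability range."""
    span = high - low
    return low + span * cum_lo, low + span * cum_hi

lo, hi = 0.0, 1.0
lo, hi = narrow(lo, hi, 0.0, 0.6)  # x1 -> [0.0, 0.6)
lo, hi = narrow(lo, hi, 0.8, 0.9)  # x2 -> [0.48, 0.54)
lo, hi = narrow(lo, hi, 0.9, 1.0)  # x3 -> [0.534, 0.54)
# Any decimal in [0.534, 0.54), written in binary, encodes all three coefficients.
```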
  • in decoding, the compressed data can be entropy decoded according to the probabilities obtained from the probability distribution parameters to obtain the DCT coefficient data, and then inverse discrete cosine transform is performed on the entropy-decoded DCT coefficient data to obtain the original image; or the DCT coefficient data is encoded according to the above JPEG standard to obtain JPEG data.
  • the method further includes: performing entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, wherein , the probability of occurrence of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and the specified probability distribution function.
  • the probability of occurrence of each coefficient in the DCT coefficient data can be used to effectively realize entropy decoding of the compressed data, and obtain the DCT coefficient data before encoding.
  • in the embodiments of the present disclosure, the prior features characterizing the global correlation and the context features characterizing the local correlation can be used to obtain more accurate probability distribution parameters. According to the principle of Shannon source coding, the more accurate the probability estimate of the data to be coded, the better the achievable lossless compression rate. Therefore, performing entropy encoding on the DCT coefficient data based on more accurate probability distribution parameters yields compressed data with a better lossless compression rate, that is, a smaller compression result.
  • the DCT coefficient data includes multiple DCT coefficient matrices. The DCT coefficient data can be preprocessed first, and feature extraction can then be performed on the preprocessed DCT coefficient data.
  • feature extraction is performed on the DCT coefficient data to obtain prior features and context features, including:
  • Step S121 According to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices, reorganize the multiple DCT coefficient matrices to obtain a DCT coefficient tensor.
  • performing the discrete cosine transform on the image data converts the image data from the spatial domain to the frequency domain, and each DCT coefficient corresponds to a frequency.
  • reorganizing the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient to obtain a DCT coefficient tensor may include: splicing coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain multiple splicing matrices; and concatenating the splicing matrices in the channel dimension in a specified order to obtain the DCT coefficient tensor.
  • in this way, the reorganized DCT coefficient tensor can have certain structural redundant information in the spatial dimension and the channel dimension. Redundant information can be understood as follows: among the multiple coefficients of the same frequency in the spatial dimension of the DCT coefficient tensor there are coefficients with high similarity, and/or among the multiple channels of different frequencies in the channel dimension there are channels with high similarity, so that the redundant information can be used to generate more accurate probability distribution parameters.
  • the spatial dimension can be understood as the length and width dimensions.
  • splicing 9 DCT coefficients in the spatial dimension can obtain a 3 ⁇ 3 splicing matrix
  • splicing in the channel dimension can be understood as combining two-dimensional matrices into a three-dimensional tensor; for example, five 3 × 3 splicing matrices can be concatenated in the channel dimension to obtain a 3 × 3 × 5 DCT coefficient tensor.
  • the DCT coefficients in each DCT coefficient matrix are arranged in zigzag order from low frequency to high frequency, so the frequencies of the DCT coefficients at the same position in multiple DCT coefficient matrices can be considered the same. Splicing coefficients with the same frequency in multiple DCT coefficient matrices in the spatial dimension to obtain multiple splicing matrices may therefore include: splicing the coefficients at the same position in the multiple DCT coefficient matrices in the spatial dimension to obtain the multiple splicing matrices.
  • the specified order may include: the order of the frequencies corresponding to the splicing matrices, that is, the above zigzag order; of course, the DCT coefficients may also be arranged in the order in which they appear in the DCT coefficient matrix from left to right and from top to bottom, which is not limited by the embodiments of the present disclosure.
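For reference, the zigzag scan order mentioned above can be generated as in the sketch below; this reproduces the standard JPEG ordering and is not code from the patent.

```python
def zigzag_order(n=8):
    """Positions of an n x n DCT coefficient matrix in zigzag order, from the
    lowest frequency (0, 0) to the highest (n - 1, n - 1)."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

order = zigzag_order(8)  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), ...]
```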
  • Fig. 2 shows a schematic diagram of DCT coefficient data according to an embodiment of the present disclosure
  • Fig. 3 shows a schematic diagram of a DCT coefficient tensor according to an embodiment of the present disclosure.
  • the DCT coefficient data shown in Figure 2 includes four 8 ⁇ 8 DCT coefficient matrices, and the coefficients with the same frequency in the four DCT coefficient matrices are spliced in the spatial dimension to obtain 64 spliced matrices of 2 ⁇ 2;
  • the channels of the 64 splicing matrices are spliced according to the zigzag order to obtain a 2 ⁇ 2 ⁇ 64 DCT coefficient tensor, that is, the DCT coefficient tensor has 64 channels.
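The example of Figures 2 and 3 can be sketched as follows, reusing zigzag_order from the previous sketch; the grid argument and the random stand-in matrices are assumptions for illustration.

```python
import numpy as np

def reorganize(matrices, grid):
    """Splice same-frequency (same-position) coefficients across all DCT
    coefficient matrices in the spatial dimension, then stack the resulting
    splicing matrices in the channel dimension in zigzag order."""
    gh, gw = grid                        # spatial layout of the blocks in the image
    n = matrices[0].shape[0]             # 8 for 8 x 8 DCT coefficient matrices
    blocks = np.stack(matrices).reshape(gh, gw, n, n)
    channels = [blocks[:, :, r, c] for r, c in zigzag_order(n)]  # 64 matrices of 2 x 2
    return np.stack(channels, axis=-1)

matrices = [np.random.randn(8, 8) for _ in range(4)]  # stand-ins for four coefficient matrices
tensor = reorganize(matrices, grid=(2, 2))
assert tensor.shape == (2, 2, 64)        # the 2 x 2 x 64 DCT coefficient tensor
```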
  • the above method of reorganizing multiple DCT coefficient matrices is one implementation provided by the embodiments of the present disclosure; in fact, those skilled in the art can set the reorganization method of the multiple DCT coefficient matrices according to actual needs, and the embodiments of the present disclosure are not limited to this.
  • the entire frequency distribution interval corresponding to the DCT coefficient data may be divided into multiple frequency intervals, and the DCT coefficients in the same frequency interval may be spliced in the spatial dimension, and the like.
  • Step S122 Perform feature extraction on the DCT coefficient tensor to obtain prior features and context features.
  • performing feature extraction on the DCT coefficient tensor to obtain prior features and context features may include: performing feature extraction on the DCT coefficient tensor through a priori network to obtain prior features; through autoregressive The network performs feature extraction on the DCT coefficient tensor to obtain context features. In this way, prior features and contextual features can be effectively obtained.
  • the prior network and autoregressive network in the above embodiments of the present disclosure can be used to extract the prior features and the context features respectively; the embodiments of the present disclosure do not limit the network structure, network type, or training method of the prior network and the autoregressive network.
  • in an example, the prior analysis sub-network in the prior network can use 3 convolutional layers: the first convolutional layer can include 384 convolution kernels of size 3 × 3 × 64 with a convolution stride of 1 and a leaky ReLU activation function; the second convolutional layer can include 384 convolution kernels of size 5 × 5 × 384 with a convolution stride of 2 and a leaky ReLU activation function; and the third convolutional layer can include 192 convolution kernels of size 5 × 5 × 384 with a convolution stride of 2, so that the output depth features have 192 channels. Correspondingly, the prior synthesis sub-network in the prior network uses 3 convolutional layers, among which the first convolutional layer can include 192 convolution kernels of size 5 × 5 × 192 with a convolution stride of 2 and a leaky ReLU activation function, and the second convolutional layer can include 288 convolution kernels of size 5 × 5 × 192 with a convolution stride of 2.
  • the preprocessed DCT coefficient tensor can be used to efficiently obtain prior features and context features, so that more accurate probability distribution parameters can be obtained later.
  • as described above, the DCT coefficient tensor has multiple channels, and the reorganized DCT coefficient tensor has certain structural redundant information in the spatial dimension and the channel dimension, so autoregressive prediction can be performed on the DCT coefficient tensor in the channel dimension and the spatial dimension respectively, resulting in more informative context features.
  • the DCT coefficient tensor has n channels, and n is a positive integer.
  • the autoregressive network includes a space autoregressive network and a channel autoregressive network.
  • performing feature extraction on the DCT coefficient tensor through the autoregressive network to obtain the context features may include: dividing the DCT coefficient tensor in the channel dimension into I coefficient tensors with n/I channels each, I ∈ [1, n];
  • performing autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor through the spatial autoregressive network to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor, where the i-th spatial context feature represents the local correlation between the coefficients in the i-th coefficient tensor, i ∈ [1, I];
  • performing autoregressive prediction in the channel dimension on the j-th coefficient tensor through the channel autoregressive network according to the first to (j-1)-th coefficient tensors to obtain the j-th channel context feature, where the j-th channel context feature represents the local correlation between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor, j ∈ [2, I];
  • the context features include I spatial context features and I-1 channel context features.
  • the number of channels n of the DCT coefficient tensor is consistent with the number of DCT coefficients in a DCT coefficient matrix; for example, for an 8 × 8 DCT coefficient matrix, that is, a DCT coefficient matrix including 8 × 8 DCT coefficients, the DCT coefficient tensor has 64 channels.
  • the value of I can be customized; for example, it can be set to 8, in which case the DCT coefficient tensor is divided into 8 coefficient tensors with 8 channels each, which is not limited by the embodiments of the present disclosure.
  • the i-th spatial context feature represents the local correlation between the coefficients in the i-th coefficient tensor. It can be understood as the linear or nonlinear relationship between a current coefficient in the i-th coefficient tensor and its adjacent coefficients in the local receptive field, where the adjacent coefficients can include the coefficients of the i-th coefficient tensor arranged in order before the current coefficient in the local receptive field, and can also include the coefficients of the i-th coefficient tensor arranged in order around the current coefficient in the local receptive field.
  • the j-th channel context feature represents the local correlation between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor, that is, the linear or nonlinear relationship between the first to (j-1)-th coefficient tensors and the j-th coefficient tensor.
  • autoregressive prediction can be understood as using one or more independent variables to predict the value of a dependent variable, or as analyzing the correlation between a dependent variable and one or more independent variables. Therefore, following the arrangement order of the channel dimension, autoregressive prediction in the channel dimension is performed on the j-th coefficient tensor according to the first to (j-1)-th coefficient tensors to obtain the j-th channel context feature corresponding to the j-th coefficient tensor, yielding I-1 channel context features in total.
  • the channel autoregressive network may include I-1 sub-channel autoregressive networks, where the (j-1)-th sub-channel autoregressive network is used to perform autoregressive prediction in the channel dimension on the j-th coefficient tensor according to the first to (j-1)-th coefficient tensors to obtain the j-th channel context feature corresponding to the j-th coefficient tensor.
  • in an example, each sub-channel autoregressive network can use multiple convolutional layers, and the size of the convolution kernels in the first convolutional layer of the (j-1)-th sub-channel autoregressive network is length a × width a × depth [(n/I) × (j-1)], a being a positive integer (for example, 3). For example, suppose each coefficient tensor has 2 channels, that is, n/I is 2; to obtain the fourth channel context feature, the first to third coefficient tensors are input to the third sub-channel autoregressive network, so the depth of each convolution kernel in the first convolutional layer of the third sub-channel autoregressive network should be 6.
  • the number and stride of the convolution kernels in each convolutional layer of each sub-channel autoregressive network are not limited by the embodiments of the present disclosure; for example, the last convolutional layer may include 128 convolution kernels, so the channel context features output by each sub-channel autoregressive network have 128 channels.
  • the spatial autoregressive network may include I subspace autoregressive networks, where the i-th subspace autoregressive network is used to perform autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor.
  • the i-th subspace autoregressive network can, for example, directly use 128 convolution kernels of size 5 × 5 × (n/I) with a stride of 1, so the spatial context features output by each subspace autoregressive network have 128 channels.
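One way to realize the spatial and channel autoregression described here is with masked convolutions, sketched below in PyTorch under the I = 8, n/I = 8 example above; the PixelCNN-style mask scheme and the toy inputs are assumptions, not the patent's prescribed architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Spatially masked convolution: each output position only sees coefficients
    that precede it in raster order, giving a spatial autoregressive prediction."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.ones_like(self.weight)
        mask[:, :, kh // 2, kw // 2:] = 0   # hide the current position and beyond
        mask[:, :, kh // 2 + 1:, :] = 0     # hide all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# i-th subspace autoregressive network: 128 kernels of size 5 x 5 x (n/I), stride 1.
spatial_ar = MaskedConv2d(8, 128, kernel_size=5, stride=1, padding=2)
spatial_ctx = spatial_ar(torch.randn(1, 8, 2, 2))   # i-th spatial context feature

# (j-1)-th sub-channel autoregressive network: an ordinary convolution whose
# kernel depth is (n/I) * (j-1); here j = 3, so it sees the first two tensors.
channel_ar = nn.Conv2d(8 * 2, 128, kernel_size=3, padding=1)
prev_tensors = torch.cat([torch.randn(1, 8, 2, 2)] * 2, dim=1)
channel_ctx = channel_ar(prev_tensors)              # j-th channel context feature
```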
  • the network structures of the channel autoregressive network and the spatial autoregressive network are related to the values of I and n; after setting the values of I and n, the structures of the channel autoregressive network and the spatial autoregressive network can be adjusted accordingly. The above network structure is one implementation provided by the embodiments of the present disclosure; in practice, those skilled in the art can set the number of convolutional layers, the number of convolution kernels, and their sizes for the channel autoregressive network and the spatial autoregressive network according to actual needs, which is not limited by the embodiments of the present disclosure.
  • in this way, the spatial autoregressive network can be used to learn the spatial context information of each coefficient tensor in the spatial dimension, and the channel autoregressive network can be used to learn the channel context information of each coefficient tensor in the channel dimension, that is, to learn the two kinds of local correlation above. The spatial autoregressive network and the channel autoregressive network can thus learn the redundant information of the DCT coefficient tensor in the spatial dimension and the channel dimension, that is, perform autoregressive prediction on the DCT coefficient tensor in the channel dimension and the spatial dimension respectively, resulting in more informative context features.
  • as described above, the context features include I spatial context features and I-1 channel context features, I ∈ [1, n], n being a positive integer. In a possible implementation, in step S13, determining the probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features includes:
  • Step S131 Concatenate the prior features, I spatial context features and I-1 channel context features to obtain I splicing features.
  • performing channel splicing on the prior features, the I spatial context features and the I-1 channel context features to obtain the I splicing features includes: performing channel splicing on the prior features and the first spatial context feature to obtain the first splicing feature; and performing channel splicing on the prior features, the j-th spatial context feature and the j-th channel context feature to obtain the j-th splicing feature, j ∈ [2, I].
  • in this way, the prior features and the context features can be divided into multiple groups of splicing features, which is beneficial to efficiently obtaining the probability distribution model corresponding to each coefficient in each coefficient matrix and improves the operation efficiency.
  • in an example, the prior features are a tensor with 128 channels, and each spatial context feature and each channel context feature is also a tensor with 128 channels. Performing channel splicing on the prior features and the first spatial context feature yields the first splicing feature with 256 channels; performing channel splicing on the prior features, the j-th spatial context feature and the j-th channel context feature yields the j-th splicing feature with 384 channels.
  • Step S132 Determine the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • the probability distribution parameters corresponding to the DCT coefficient data can be determined according to the I splicing features through the above entropy parameter analysis network.
  • the entropy parameter analysis network may include I sub-entropy parameter analysis networks, where the i-th sub-entropy parameter analysis network is used to determine the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor according to the i-th splicing feature.
  • the entropy parameter analysis network determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features may include: inputting the i-th splicing feature into the i-th sub-entropy parameter analysis network to obtain the mean and standard deviation corresponding to each coefficient in the i-th coefficient tensor, where the probability distribution parameters include the means and standard deviations corresponding to the coefficients in the I coefficient tensors, and the I coefficient tensors are obtained by segmenting the DCT coefficient tensor corresponding to the DCT coefficient data in the channel dimension.
  • the process of segmenting the DCT coefficient tensor corresponding to the DCT coefficient data in the channel dimension to obtain one coefficient tensor can refer to the relevant description of the above-mentioned embodiments of the present disclosure, and details are not repeated here.
  • each sub-entropy parameter analysis network can refer to the above-mentioned entropy parameter analysis network, that is, each sub-entropy parameter analysis network can use, for example, a 3-layer convolutional neural network with a convolution kernel size of 1 ⁇ 1 and a step size of 1.
  • the output of each sub-entropy parameter analysis network can be, for example, a tensor with 2 × (n/I) channels, where the tensor of half of the channels indicates the mean corresponding to each coefficient in the i-th coefficient tensor, and the other half of the channels indicates the standard deviation corresponding to each coefficient in the i-th coefficient tensor.
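A sketch of one sub-entropy parameter analysis network under the n/I = 8 example; the hidden channel widths, the activations between the 1 × 1 layers, and the softplus used to keep the standard deviation positive are assumptions not stated in the source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_over_I = 8   # channels per coefficient tensor in the running example

# Three 1 x 1 convolutions with stride 1, outputting 2 * (n/I) channels that
# are split into per-coefficient means and standard deviations.
sub_entropy_params = nn.Sequential(
    nn.Conv2d(384, 256, kernel_size=1), nn.LeakyReLU(),
    nn.Conv2d(256, 128, kernel_size=1), nn.LeakyReLU(),
    nn.Conv2d(128, 2 * n_over_I, kernel_size=1),
)

splice = torch.randn(1, 384, 2, 2)   # j-th splicing feature (384 channels)
out = sub_entropy_params(splice)
mu, sigma = out.chunk(2, dim=1)      # half the channels: means; half: std devs
sigma = F.softplus(sigma)            # positivity constraint (an assumption)
```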
  • in a possible implementation, the I sub-entropy parameter analysis networks may be trained with a rate-distortion objective of the form D + λR, together with indicators such as the SSIM (structural similarity) indicator and the PSNR (peak signal-to-noise ratio) indicator, where D is the distortion term, R is the code rate, and λ is a constant parameter. Since the DCT coefficient data is losslessly compressed, the distortion term D is 0, and R can include the coding rate corresponding to each coefficient matrix and the coding rate corresponding to the prior features.
  • the coding rate corresponding to each coefficient matrix may be approximated by using the information entropy of each coefficient matrix
  • the coding rate corresponding to the prior feature may be approximated by using the information entropy of the prior feature.
  • the information entropy of the i-th coefficient tensor can be obtained by entropy encoding the i-th coefficient tensor according to the probability distribution parameters output by the i-th sub-entropy parameter analysis network, and the information entropy of the prior features can be obtained by entropy encoding the prior features according to the probability distribution parameters output by the entropy parameter analysis network.
  • as described above, each coefficient in the DCT coefficient data obeys a specified probability distribution, for example, a Gaussian distribution, a Laplace distribution, a mixed Gaussian distribution, etc.
  • in a possible implementation, in step S14, performing entropy encoding on the DCT coefficient data according to the probability distribution parameters to obtain the compressed data corresponding to the DCT coefficient data includes: determining the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and the specified probability distribution function; and performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the compressed data corresponding to the DCT coefficient data.
  • The probability distribution function may adopt a Gaussian distribution function, a Laplace distribution function, a Gaussian mixture distribution function, or the like, which is not limited in the embodiments of the present disclosure. A sketch of the probability computation is given below.
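  • The following is a minimal sketch of computing the occurrence probability of each coefficient from the predicted mean and standard deviation. It assumes the Gaussian case and integer-valued coefficients, integrating the density over the quantization bin [x−0.5, x+0.5]; the Gaussian choice and the bin width are assumptions, and a Laplace or Gaussian mixture distribution could be substituted as noted above.

```python
import torch

def coefficient_probability(coeffs: torch.Tensor,
                            mean: torch.Tensor,
                            std: torch.Tensor) -> torch.Tensor:
    """Occurrence probability of each integer coefficient under N(mean, std)."""
    dist = torch.distributions.Normal(mean, std)
    upper = dist.cdf(coeffs + 0.5)
    lower = dist.cdf(coeffs - 0.5)
    return (upper - lower).clamp_min(1e-12)  # avoid zero-probability symbols

# The rate of a coefficient tensor can then be approximated during training as
# R = (-torch.log2(p)).sum(), i.e. its information entropy under the model.
```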
  • As described above, the DCT coefficient data can be reorganized and segmented to obtain I coefficient tensors, and the i-th splicing feature can be input into the i-th sub-entropy parameter analysis network to obtain the mean value and standard deviation corresponding to each coefficient in the i-th coefficient tensor. In one possible implementation, determining the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and the specified probability distribution function may include: determining the occurrence probability of each coefficient in the i-th coefficient tensor according to the mean value and standard deviation corresponding to each coefficient in the i-th coefficient tensor, together with the specified probability distribution function.
  • In one possible implementation, performing entropy coding on each coefficient in the DCT coefficient data according to its occurrence probability to obtain compressed data corresponding to the DCT coefficient data may include: performing entropy encoding on each coefficient in the i-th coefficient tensor of the I coefficient tensors according to the occurrence probability of each coefficient in the I coefficient tensors, to obtain the i-th sub-compressed data, wherein the compressed data corresponding to the DCT coefficient data includes the I sub-compressed data.
  • The I coefficient tensors are obtained by segmenting the DCT coefficient tensor in the channel dimension, and the DCT coefficient tensor is obtained by reorganizing the multiple DCT coefficient matrices in the DCT coefficient data, I ∈ [1, n], i ∈ [1, I], n being the number of channels of the DCT coefficient tensor. It should be understood that, for specific implementations of determining the occurrence probability of each coefficient in the I coefficient tensors and performing entropy coding on each coefficient, reference may be made to the relevant description of the above-mentioned step S14; details are not repeated here.
  • In one possible implementation, the occurrence probability of each coefficient in the I coefficient tensors may be recorded in the form of a probability table, so as to facilitate entropy encoding and entropy decoding of each coefficient in the DCT coefficient data.
  • During data decompression, the i-th sub-compressed data can be entropy-decoded according to the occurrence probability of each coefficient in the i-th coefficient tensor to obtain the i-th coefficient tensor; the DCT coefficient tensor composed of the I coefficient tensors is then reversely reorganized to obtain multiple DCT coefficient matrices; then an inverse discrete cosine transform is performed on the multiple DCT coefficient matrices to obtain the original image, or the multiple DCT coefficient matrices are encoded according to the above-mentioned JPEG standard to obtain JPEG data. That is, a data decompression process is performed in the reverse order of the data compression process of the embodiments of the present disclosure.
  • In one possible implementation, the compressed data includes I sub-compressed data.
  • In one possible implementation, performing entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data includes: performing entropy decoding on the i-th sub-compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th coefficient tensor; and reversely reorganizing the DCT coefficient tensor composed of the I coefficient tensors to obtain multiple DCT coefficient matrices, the DCT coefficient data including the multiple DCT coefficient matrices. In this manner, the occurrence probability of each coefficient in the DCT coefficient data can be used to effectively realize entropy decoding of the compressed data and obtain the DCT coefficient data before encoding.
  • As described above, the occurrence probability of each coefficient in the I coefficient tensors, that is, the occurrence probability of each coefficient in the DCT coefficient data, can be recorded through the probability table, so that the occurrence probability of each coefficient in the DCT coefficient data can be obtained directly during entropy decoding.
  • It should be understood that the processes of entropy encoding and entropy decoding are inverse to each other, and the process of reversely reorganizing the DCT coefficient tensor composed of the I coefficient tensors to obtain multiple DCT coefficient matrices is the inverse of the process, in the above-mentioned embodiments of the present disclosure, of reorganizing multiple DCT coefficient matrices to obtain the DCT coefficient tensor; that is, the data decompression process may be the reverse of the data compression process of the DCT coefficient data in the above-mentioned embodiments of the present disclosure. A sketch of the final inverse-transform step is given below.
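  • The following is a minimal sketch of the final decompression step: after entropy decoding and reverse reorganization have restored the per-block 8×8 DCT coefficient matrices, a 2-D inverse DCT recovers the pixel blocks. The 8×8 block size, the 'ortho' normalization, and the omission of JPEG dequantization and level-shift steps are simplifying assumptions.

```python
import numpy as np
from scipy.fft import idctn

def blocks_to_image(dct_blocks: np.ndarray) -> np.ndarray:
    """(H/8, W/8, 8, 8) DCT coefficient matrices -> (H, W) reconstructed image."""
    hb, wb, _, _ = dct_blocks.shape
    image = np.empty((hb * 8, wb * 8))
    for by in range(hb):
        for bx in range(wb):
            # Inverse 2-D DCT of one 8x8 coefficient matrix gives one pixel block.
            image[by*8:(by+1)*8, bx*8:(bx+1)*8] = idctn(
                dct_blocks[by, bx], norm="ortho")
    return image
```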
  • In this manner, entropy coding is performed on the DCT coefficient data using more accurate probability distribution parameters, so that compressed data with a better lossless compression rate can be obtained, thereby saving storage resources and bandwidth resources.
  • Fig. 4 shows a schematic diagram of a data processing method according to an embodiment of the present disclosure. As shown in Fig. 4, the data processing method includes:
  • the DCT coefficient data is reorganized to obtain a DCT coefficient tensor, and the DCT coefficient tensor is divided into I coefficient tensors in the channel dimension;
  • the data processing method according to the embodiments of the present disclosure can be applied to scenarios such as data centers, cloud storage, and JPEG data transcoding.
  • In these scenarios, massive image data occupies a large amount of storage resources and bandwidth resources, which increases the cost of data storage and transmission. With the data processing method according to the embodiments of the present disclosure, the image data can be efficiently compressed while ensuring that the image data is lossless, thereby significantly reducing the occupation of storage resources and bandwidth resources.
  • It can be understood that the present disclosure also provides data processing apparatuses, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any data processing method provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which are not repeated here.
  • Fig. 5 shows a block diagram of a data processing device according to an embodiment of the present disclosure. As shown in Fig. 5, the device includes:
  • an acquisition module 101, configured to acquire discrete cosine transform (DCT) coefficient data corresponding to the image data;
  • a feature extraction module 102, configured to perform feature extraction on the DCT coefficient data to obtain prior features and context features, the prior features being used to characterize the global correlation relationship of each coefficient in the DCT coefficient data, and the context features being used to characterize the local correlation relationship of each coefficient in the DCT coefficient data;
  • a parameter determination module 103, configured to determine probability distribution parameters corresponding to the DCT coefficient data according to the prior features and the context features;
  • an encoding module 104, configured to perform entropy coding on the DCT coefficient data according to the probability distribution parameters to obtain compressed data corresponding to the DCT coefficient data, the compressed data being used as the compression result of the image data. A sketch of how these modules chain together is given below.
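  • The following is a minimal sketch of how the four modules of Fig. 5 could be chained; the function name, parameter names, and callable interfaces are hypothetical and illustrative, not the disclosed implementation.

```python
def compress_image_data(image_data, acquisition, feature_extraction,
                        parameter_determination, encoding):
    """Chain the four modules of the data processing device (illustrative only)."""
    dct_data = acquisition(image_data)                               # module 101
    prior_feat, context_feat = feature_extraction(dct_data)          # module 102
    dist_params = parameter_determination(prior_feat, context_feat)  # module 103
    compressed = encoding(dct_data, dist_params)                     # module 104
    return compressed
```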
  • In one possible implementation, the DCT coefficient data includes a plurality of DCT coefficient matrices, and the feature extraction module 102 includes: a reorganization submodule, configured to reorganize the plurality of DCT coefficient matrices according to the frequency corresponding to each coefficient in the plurality of DCT coefficient matrices, to obtain a DCT coefficient tensor; and a feature extraction submodule, configured to perform feature extraction on the DCT coefficient tensor to obtain the prior features and the context features.
  • In one possible implementation, reorganizing the multiple DCT coefficient matrices according to the frequency corresponding to each coefficient in the multiple DCT coefficient matrices to obtain a DCT coefficient tensor includes: splicing the coefficients with the same frequency in the multiple DCT coefficient matrices in the spatial dimension to obtain multiple spliced matrices; and splicing the multiple spliced matrices in the channel dimension according to a specified order to obtain the DCT coefficient tensor. A sketch of this reorganization is given below.
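  • The following is a minimal NumPy sketch of the reorganization just described, assuming 8×8 DCT blocks (so n = 64 channels): coefficients of the same frequency across all blocks are spliced into one spatial map, and the 64 maps are stacked along the channel dimension. A simple row-major frequency order stands in for the unspecified "specified order".

```python
import numpy as np

def reorganize_dct_blocks(blocks: np.ndarray) -> np.ndarray:
    """(H/8, W/8, 8, 8) DCT coefficient matrices -> (64, H/8, W/8) coefficient tensor.

    Channel k = u*8 + v holds the frequency-(u, v) coefficient of every block,
    spliced spatially into an (H/8, W/8) map.
    """
    hb, wb, _, _ = blocks.shape
    # (hb, wb, 8, 8) -> (8, 8, hb, wb) -> (64, hb, wb)
    return blocks.transpose(2, 3, 0, 1).reshape(64, hb, wb)

def inverse_reorganize(tensor: np.ndarray) -> np.ndarray:
    """Reverse reorganization used at decoding time: (64, H/8, W/8) -> (H/8, W/8, 8, 8)."""
    _, hb, wb = tensor.shape
    return tensor.reshape(8, 8, hb, wb).transpose(2, 3, 0, 1)
```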
  • In one possible implementation, performing feature extraction on the DCT coefficient tensor to obtain prior features and context features includes: performing feature extraction on the DCT coefficient tensor through a prior network to obtain the prior features; and performing feature extraction on the DCT coefficient tensor through an autoregressive network to obtain the context features.
  • In one possible implementation, the DCT coefficient tensor has n channels, n being a positive integer, and the autoregressive network includes a spatial autoregressive network and a channel autoregressive network. Performing feature extraction on the DCT coefficient tensor through the autoregressive network to obtain the context features includes: dividing the DCT coefficient tensor into I coefficient tensors with n/I channels in the channel dimension, I ∈ [1, n]; performing autoregressive prediction in the spatial dimension on each coefficient in the i-th coefficient tensor through the spatial autoregressive network to obtain the i-th spatial context feature corresponding to the i-th coefficient tensor, the i-th spatial context feature representing the local correlation between the coefficients in the i-th coefficient tensor, i ∈ [1, I]; and performing, through the channel autoregressive network, autoregressive prediction in the channel dimension on the j-th coefficient tensor according to the 1st to (j-1)-th coefficient tensors, to obtain the j-th channel context feature, j ∈ [2, I]. A masked-convolution sketch of the spatial step is given below.
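  • The following is a minimal PyTorch sketch of one spatial autoregressive step, realized here as a PixelCNN-style masked convolution, a common way to condition each position only on already-decoded neighbors. Using a masked convolution for the "spatial autoregressive network" is an assumption, and the kernel size is illustrative.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose kernel sees only positions above / to the left of the center."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.ones(kh, kw)
        mask[kh // 2, kw // 2:] = 0   # zero the center position and everything right of it
        mask[kh // 2 + 1:, :] = 0     # zero all rows below the center
        self.register_buffer("mask", mask[None, None])  # broadcast over (out, in) dims

    def forward(self, x):
        return nn.functional.conv2d(
            x, self.weight * self.mask, self.bias,
            self.stride, self.padding, self.dilation, self.groups)

# Illustrative use on the i-th coefficient tensor y_i with n/I channels:
# spatial_ctx_i = MaskedConv2d(n_over_I, n_over_I, kernel_size=5, padding=2)(y_i)
```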
  • In one possible implementation, the context features include I spatial context features and I-1 channel context features, I ∈ [1, n], n being a positive integer, wherein the parameter determination module 103 includes: a feature splicing submodule, configured to channel-splice the prior features, the I spatial context features, and the I-1 channel context features to obtain I splicing features; and a parameter determination submodule, configured to determine the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • In one possible implementation, channel-splicing the prior features, the I spatial context features, and the I-1 channel context features to obtain I splicing features includes: performing channel splicing on the prior features and the 1st spatial context feature to obtain the 1st splicing feature; and performing channel splicing on the prior features, the j-th spatial context feature, and the j-th channel context feature to obtain the j-th splicing feature, j ∈ [2, I]. A sketch of this splicing rule follows.
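  • The following is a minimal sketch of forming the I splicing features with channel concatenation, mirroring the rule above; the tensor names and list layout are illustrative assumptions.

```python
import torch

def build_splice_features(prior, spatial_ctx, channel_ctx):
    """prior: (B, Cp, H, W); spatial_ctx: list of I tensors; channel_ctx: list of I-1 tensors."""
    # 1st splicing feature: prior features + 1st spatial context feature.
    splices = [torch.cat([prior, spatial_ctx[0]], dim=1)]
    # j-th splicing feature (j = 2..I): prior + j-th spatial + j-th channel context.
    for j in range(1, len(spatial_ctx)):
        splices.append(torch.cat([prior, spatial_ctx[j], channel_ctx[j - 1]], dim=1))
    return splices
```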
  • In one possible implementation, determining the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features includes: determining, through an entropy parameter analysis network, the probability distribution parameters corresponding to the DCT coefficient data according to the I splicing features.
  • In one possible implementation, the encoding module 104 includes: a probability determination submodule, configured to determine the occurrence probability of each coefficient in the DCT coefficient data according to the probability distribution parameters and a specified probability distribution function; and an encoding submodule, configured to perform entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data, to obtain compressed data corresponding to the DCT coefficient data.
  • In one possible implementation, performing entropy encoding on each coefficient in the DCT coefficient data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain compressed data corresponding to the DCT coefficient data includes: performing entropy encoding, according to the occurrence probability of each coefficient in the DCT coefficient data, on each coefficient in the i-th coefficient tensor of the I coefficient tensors, to obtain the i-th sub-compressed data corresponding to the i-th coefficient tensor; wherein the compressed data includes I sub-compressed data, the I coefficient tensors are obtained by segmenting the DCT coefficient tensor in the channel dimension, and the DCT coefficient tensor is obtained by reorganizing the plurality of DCT coefficient matrices in the DCT coefficient data, I ∈ [1, n], i ∈ [1, I], n being the number of channels of the DCT coefficient tensor.
  • In one possible implementation, the device further includes: a decoding module, configured to perform entropy decoding on the compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the DCT coefficient data, wherein the occurrence probability of each coefficient in the DCT coefficient data is determined according to the probability distribution parameters and a specified probability distribution function.
  • In one possible implementation, the compressed data includes I sub-compressed data, and the decoding module includes: a decoding submodule, configured to perform entropy decoding on the i-th sub-compressed data according to the occurrence probability of each coefficient in the DCT coefficient data to obtain the i-th coefficient tensor; and a reverse reorganization submodule, configured to reversely reorganize the DCT coefficient tensor composed of the I coefficient tensors to obtain a plurality of DCT coefficient matrices, the DCT coefficient data including the plurality of DCT coefficient matrices.
  • In the embodiments of the present disclosure, the prior features that characterize the global correlation relationship and the context features that characterize the local correlation relationship can be used to obtain more accurate probability distribution parameters. According to the principle of Shannon source coding, the more accurate the probability estimation of the data to be coded, the higher the lossless compression rate that can be achieved. Therefore, performing entropy coding on the DCT coefficient data based on more accurate probability distribution parameters can yield compressed data with a better lossless compression rate, that is, a smaller compression result.
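  • This can be made precise with a standard information-theoretic identity, stated here as background rather than as a formula from the disclosure: for a true coefficient distribution p and a model estimate q, the expected code length of an ideal entropy coder driven by q is

```latex
\mathbb{E}_{x\sim p}\!\left[-\log_2 q(x)\right] \;=\; H(p) \;+\; D_{\mathrm{KL}}(p\,\|\,q) \;\ge\; H(p),
```

  so the overhead above the Shannon entropy H(p) is exactly the KL divergence between the true and estimated distributions; sharper probability distribution parameters shrink this overhead and thus improve the lossless compression rate.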
  • In some embodiments, the functions or modules included in the device provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for specific implementations, refer to the descriptions of the method embodiments above, which, for brevity, are not repeated here.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and the above-mentioned method is implemented when the computer program instructions are executed by a processor.
  • Computer readable storage media may be volatile or nonvolatile computer readable storage media.
  • An embodiment of the present disclosure also proposes an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • An embodiment of the present disclosure also provides a computer program product, including computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes; when the computer-readable codes run in a processor of an electronic device, the processor in the electronic device executes the above method.
  • An embodiment of the present disclosure also provides a computer program, including computer readable codes, and when the computer readable codes are run in an electronic device, a processor in the electronic device executes the above method.
  • Electronic devices may be provided as terminals, servers, or other forms of devices.
  • FIG. 6 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and a communication component 816 .
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage devices or their combination, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic or Optical Disk.
  • the power supply component 806 provides power to various components of the electronic device 800 .
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800 .
  • the multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or swipe action, but also detect duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in operation modes, such as call mode, recording mode and voice recognition mode. Received audio signals may be further stored in memory 804 or sent via communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800 .
  • The sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 can also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include an optical sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • The electronic device 800 can access wireless networks based on communication standards, such as wireless fidelity (Wi-Fi), second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), fourth-generation mobile communication technology (4G), long-term evolution (LTE), fifth-generation mobile communication technology (5G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above method.
  • FIG. 7 shows a block diagram of another electronic device 1900 according to an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922 , which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922 , such as application programs.
  • the application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system (Mac OS X™) introduced by Apple Inc., the multi-user multi-process computer operating system (Unix™), a free and open-source Unix-like operating system (Linux™), an open-source Unix-like operating system (FreeBSD™), or the like.
  • a non-transitory computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above method.
  • the present disclosure can be a system, method and/or computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by using state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that, when executed by the processor of the computer or other programmable data processing apparatus, the instructions produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing devices, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions constitutes an article of manufacture comprising instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in a flowchart or block diagram may represent a module, a portion of a program segment, or an instruction, which contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be specifically realized by means of hardware, software or a combination thereof.
  • In one optional embodiment, the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK) or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Discrete Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present disclosure relates to a data processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring discrete cosine transform (DCT) coefficient data corresponding to image data; performing feature extraction on the DCT coefficient data to obtain a prior feature and a context feature, the prior feature being used to characterize a global correlation relationship of each coefficient in the DCT coefficient data and the context feature being used to characterize a local correlation relationship of each coefficient in the DCT coefficient data; determining a probability distribution parameter corresponding to the DCT coefficient data according to the prior feature and the context feature; and performing entropy coding on the DCT coefficient data according to the probability distribution parameter to obtain compressed data corresponding to the DCT coefficient data, the compressed data being used as a compression result of the image data.
PCT/CN2022/114451 2021-12-27 2022-08-24 Procédé et appareil de traitement de données, dispositif électronique et support de stockage WO2023124148A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111614879.5 2021-12-27
CN202111614879.5A CN114363615B (zh) 2021-12-27 2021-12-27 数据处理方法及装置、电子设备和存储介质

Publications (1)

Publication Number Publication Date
WO2023124148A1 true WO2023124148A1 (fr) 2023-07-06

Family

ID=81102332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114451 WO2023124148A1 (fr) 2021-12-27 2022-08-24 Procédé et appareil de traitement de données, dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN114363615B (fr)
WO (1) WO2023124148A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114363615B (zh) * 2021-12-27 2023-05-19 上海商汤科技开发有限公司 数据处理方法及装置、电子设备和存储介质
CN115866252B (zh) * 2023-02-09 2023-05-02 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) 一种图像压缩方法、装置、设备及存储介质
CN116416616B (zh) * 2023-04-13 2024-01-05 沃森克里克(北京)生物科技有限公司 一种dc细胞体外培养筛分方法、装置及计算机可读介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190098321A1 (en) * 2016-09-15 2019-03-28 Dropbox, Inc. Digital image recompression
US10594338B1 (en) * 2019-03-18 2020-03-17 WaveOne Inc. Adaptive quantization
CN111009018A (zh) * 2019-12-24 2020-04-14 苏州天必佑科技有限公司 基于深度神经网络的图像降维和重建方法
CN112866694A (zh) * 2020-12-31 2021-05-28 杭州电子科技大学 联合非对称卷积块和条件上下文的智能图像压缩优化方法
CN113810693A (zh) * 2021-09-01 2021-12-17 上海交通大学 一种jpeg图像无损压缩和解压缩方法、***与装置
CN113810717A (zh) * 2020-06-11 2021-12-17 华为技术有限公司 图像处理方法及装置
CN114363615A (zh) * 2021-12-27 2022-04-15 上海商汤科技开发有限公司 数据处理方法及装置、电子设备和存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537456B (zh) * 2021-06-15 2023-10-17 北京大学 一种深度特征压缩方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190098321A1 (en) * 2016-09-15 2019-03-28 Dropbox, Inc. Digital image recompression
US10594338B1 (en) * 2019-03-18 2020-03-17 WaveOne Inc. Adaptive quantization
CN111009018A (zh) * 2019-12-24 2020-04-14 苏州天必佑科技有限公司 基于深度神经网络的图像降维和重建方法
CN113810717A (zh) * 2020-06-11 2021-12-17 华为技术有限公司 图像处理方法及装置
CN112866694A (zh) * 2020-12-31 2021-05-28 杭州电子科技大学 联合非对称卷积块和条件上下文的智能图像压缩优化方法
CN113810693A (zh) * 2021-09-01 2021-12-17 上海交通大学 一种jpeg图像无损压缩和解压缩方法、***与装置
CN114363615A (zh) * 2021-12-27 2022-04-15 上海商汤科技开发有限公司 数据处理方法及装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN114363615A (zh) 2022-04-15
CN114363615B (zh) 2023-05-19

Similar Documents

Publication Publication Date Title
WO2023124148A1 (fr) Procédé et appareil de traitement de données, dispositif électronique et support de stockage
JP6728385B2 (ja) デジタルイメージ再圧縮
CN107944409B (zh) 能够区分关键动作的视频分析方法及装置
CN110472091B (zh) 图像处理方法及装置、电子设备和存储介质
TWI761851B (zh) 圖像處理方法、圖像處理裝置、電子設備和電腦可讀儲存媒體
TWI777112B (zh) 圖像處理方法、電子設備和儲存介質
WO2022198853A1 (fr) Procédé et appareil de planification de tâches, dispositif électronique, support de stockage et produit-programme
US11671576B2 (en) Method and apparatus for inter-channel prediction and transform for point-cloud attribute coding
JP2022533065A (ja) 文字認識方法及び装置、電子機器並びに記憶媒体
US20200226797A1 (en) Systems and methods for image compression at multiple, different bitrates
TWI785267B (zh) 影像處理方法、電子設備和儲存介質
Helin et al. Minimum description length sparse modeling and region merging for lossless plenoptic image compression
WO2023165082A1 (fr) Procédé et appareil de prévisualisation d'image, dispositif électronique, support de stockage, programme informatique et produit associé
US20220377339A1 (en) Video signal processor for block-based picture processing
CN110647508B (zh) 数据压缩方法、数据解压缩方法、装置及电子设备
US20240195968A1 (en) Method for video processing, electronic device, and storage medium
CN113139484B (zh) 人群定位方法及装置、电子设备和存储介质
CN114446318A (zh) 音频数据分离方法、装置、电子设备及存储介质
CN115512116B (zh) 图像分割模型优化方法、装置、电子设备及可读存储介质
CN114554226A (zh) 图像处理方法及装置、电子设备和存储介质
CN111885386B (zh) 图像的压缩、解压缩方法及装置、电子设备和存储介质
CN115527035B (zh) 图像分割模型优化方法、装置、电子设备及可读存储介质
CN111311483A (zh) 图像编辑及训练方法、装置、电子设备和存储介质
CN113596471B (zh) 图像处理方法及装置、电子设备和存储介质
US11546597B2 (en) Block-based spatial activity measures for pictures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22913431

Country of ref document: EP

Kind code of ref document: A1