CN117956161A - Parameter deriving method, image encoding method, image decoding method and device thereof


Info

Publication number
CN117956161A
Authority
CN
China
Prior art keywords
parameter
weight
image
decoding
allocated
Prior art date
Legal status
Pending
Application number
CN202311863363.3A
Other languages
Chinese (zh)
Inventor
彭双
江东
林聚财
方诚
殷俊
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority claimed from CN202311863363.3A
Publication of CN117956161A


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/147Data rate or code amount at the encoder output according to rate distortion criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a parameter derivation method based on encoding and decoding, an image encoding method, an image decoding method, an image encoder, an image decoder, and a computer storage medium. The parameter derivation method comprises the following steps: acquiring a weight region to be allocated; generating parameter weights based on the weight region to be allocated; acquiring a combination mode of the weight and the parameter, and selecting a corresponding parameter derivation mode based on the combination mode; and fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated. This parameter derivation method effectively improves the accuracy of parameter derivation, and the weight generation and weight-based parameter derivation improve the parameter derivation effect.

Description

Parameter deriving method, image encoding method, image decoding method and device thereof
Technical Field
The present application relates to the field of video image coding technology, and in particular, to a parameter deriving method, an image coding method, an image decoding method, an image encoder, an image decoder, and a computer storage medium based on encoding and decoding.
Background
Video image data is large, so video pixel data (RGB, YUV, etc.) usually needs to be compressed; the compressed data is called a video code stream, which is transmitted to the user terminal through a wired or wireless network and then decoded for viewing. The whole video coding flow includes processes such as block division, prediction, transform, quantization, and coding.
In existing codec processes, there are a variety of parameter derivation processes, and the derivation and application of these parameters extend throughout the entire codec process. However, the existing parameter derivation methods can only derive parameters based on equal weights, while different images have different correlations, so the existing technology has difficulty adapting to different image contents.
Disclosure of Invention
In order to solve the above technical problems, the present application provides a parameter deriving method, an image encoding method, an image decoding method, an image encoder, an image decoder and a computer storage medium based on encoding and decoding.
In order to solve the above technical problems, the present application provides a parameter deriving method based on coding and decoding, the parameter deriving method includes:
Acquiring a weight area to be allocated;
generating parameter weights based on the weight areas to be allocated;
acquiring a combination mode of the weight and the parameter, and selecting a corresponding parameter derivation mode based on the combination mode;
and fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated.
Wherein the generating a parameter weight based on the weight area to be allocated includes:
extracting distance information, gradient information and/or pixel information of the weight region to be allocated as weight model input information;
inputting the weight model input information into a preset weight distribution model to obtain the parameter weight of the image information;
the preset weight distribution model is one of a normal model, an arithmetic model, a geometric model, and a stage model.
Wherein the parameter derivation mode is weighted parameter derivation based on a basic processing unit;
the fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated comprises the following steps:
performing basic processing on the image pixels of the weight region to be allocated by using the basic processing unit of the parameter derivation function of the encoding and decoding parameters;
weighting and fusing the image pixels subjected to the basic processing with the parameter weights;
and inputting the weighted and fused image pixels into the parameter derivation function operation to obtain the encoding and decoding parameters of the weight region to be allocated.
Wherein the parameter derivation mode is weighted parameter derivation based on original pixels;
the fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the image information comprises the following steps:
weighting and fusing the image pixels of the weight region to be allocated with the parameter weights;
and inputting the weighted and fused image pixels into the parameter derivation function operation of the encoding and decoding parameters to obtain the encoding and decoding parameters of the weight region to be allocated.
Wherein the parameter derivation method further comprises:
in response to enabling the parameter derivation method, setting a switch syntax to an enable value.
Wherein, after the switch syntax is set to the enable value, the parameter derivation method further comprises:
in response to the parameter derivation method replacing the parameter derivation method in the original codec process, setting no mode syntax;
and in response to the parameter derivation method coexisting with the parameter derivation methods in the original codec process, selecting the optimal parameter derivation method based on the rate-distortion cost of each parameter derivation method, and setting the mode value of the mode syntax.
In order to solve the technical problem, the application also provides an image coding method, which comprises the following steps:
Obtaining the coding parameters of the current image block by the parameter deriving method;
acquiring the coding information of the current image block by utilizing the coding parameters;
Coding the current image block according to the coding information to obtain a coding code stream;
wherein the coding information comprises a predicted image block, a reconstructed image block, a residual image block, and/or an image quality evaluation score of the current image block.
In order to solve the technical problem, the application further provides an image decoding method, which comprises the following steps:
Obtaining a code stream to be decoded and decoding parameters thereof;
Decoding the code stream based on the decoding parameters to obtain a decoded image;
wherein the decoding parameters are obtained from the coding information of the above image encoding method.
In order to solve the technical problem, the application also provides an image encoder, which comprises a memory and a processor coupled with the memory;
Wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the parameter derivation method and/or the image encoding method.
In order to solve the above technical problems, the present application further provides an image decoder, which includes a memory and a processor coupled to the memory; wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the parameter derivation method and/or the image decoding method.
To solve the above technical problem, the present application further proposes a computer storage medium for storing program data, which when executed by a computer, is configured to implement the above-mentioned parameter deriving method, image encoding method, and/or image decoding method.
Compared with the prior art, the application has the following beneficial effects: the parameter derivation device acquires the weight region to be allocated; generates parameter weights based on the weight region to be allocated; acquires a combination mode of the weight and the parameter, and selects a corresponding parameter derivation mode based on the combination mode; and fuses the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated. This parameter derivation method effectively improves the accuracy of parameter derivation, and the weight generation and weight-based parameter derivation improve the parameter derivation effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
FIG. 1 is a schematic diagram of the overall codec process provided by the present application;
FIG. 2 is a schematic diagram of an intra prediction mode provided by the present application;
FIG. 3 is a flowchart illustrating an embodiment of a codec-based parameter derivation method according to the present application;
FIG. 4 is a schematic overall flow chart of the parameter deriving method according to the present application;
FIG. 5 is a schematic illustration of adjacent distances provided by the present application;
FIG. 6 is a schematic diagram of a histogram of intra prediction modes with respect to gradient magnitude provided by the present application;
FIG. 7 is a schematic diagram of weight generation by the geometric model (decreasing) provided by the present application;
FIG. 8 is a schematic diagram of weight generation by the geometric model (increasing) provided by the present application;
FIG. 9 is a flowchart of an embodiment of an image encoding method according to the present application;
FIG. 10 is a flowchart illustrating an embodiment of an image decoding method according to the present application;
FIG. 11 is a schematic diagram of an embodiment of an image encoder according to the present application;
FIG. 12 is a schematic diagram of an embodiment of an image decoder according to the present application;
Fig. 13 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is an overall schematic diagram of an encoding and decoding process according to the present application. The encoding process starts from the input video frame to the end of the code stream and the decoding process starts from the code stream to the end of the reconstructed frame. Wherein, the dotted line is the coding and decoding common flow, and the arrow part is the coding specific flow and the decoding specific flow.
In video coding, the most commonly used color coding methods include YUV, RGB, etc.; the color coding method adopted in the present application is YUV. Y represents luminance, that is, the gray value of the image; U and V (i.e., Cb and Cr) represent chrominance, which describes the image color and saturation. Each Y luminance block corresponds to one Cb and one Cr chrominance block, and each chrominance block corresponds to only one luminance block. Taking the 4:2:0 sampling format as an example, an N×M block corresponds to a luminance block of size N×M, the two corresponding chrominance blocks are both of size (N/2)×(M/2), and each chrominance block is 1/4 the size of the luminance block. For the 4:4:4 sampling format, the luminance block and the chrominance block are the same size.
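As a quick arithmetic check of the sampling formats above, the following sketch (with an illustrative function name, not part of the patent) computes the chroma plane size for an N×M luma block:

def chroma_size(n, m, fmt="4:2:0"):
    # Chroma block size for an n x m luma block (illustrative sketch).
    if fmt == "4:2:0":
        # Subsampled by 2 in both directions: 1/4 the luma samples.
        return n // 2, m // 2
    if fmt == "4:4:4":
        # Chroma planes match the luma plane size.
        return n, m
    raise ValueError("unsupported sampling format: " + fmt)

print(chroma_size(16, 16))           # (8, 8)
print(chroma_size(16, 16, "4:4:4"))  # (16, 16)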
Block division: in video coding, the input is a sequence of image frames; when a frame is encoded, it first needs to be divided into a plurality of LCUs (Largest Coding Units), each of which is then recursively divided into CUs (Coding Units) of different sizes, and video coding is performed in units of CUs.
Intra/inter prediction: in general, the luminance and chrominance signal values of adjacent pixels are relatively close and strongly correlated; if the luminance and chrominance information were represented directly by sample values, the data would contain considerable spatial redundancy. If the redundant data is removed before encoding, the average number of bits representing each pixel is reduced, i.e., the data is compressed by reducing spatial redundancy.
In conventional intra prediction, the prediction modes generally include a series of angular prediction modes, etc. In VVC, there are 65 angular prediction modes, as shown in fig. 2, which is a schematic view of the intra prediction modes provided in the present application. The mode numbers 2-66 correspond to the angular prediction modes (one number for each), and the mode numbers of Planar and DC (the non-angular modes) are 0 and 1, respectively.
Transform: after the current block is predicted, the predicted value is subtracted from the actual value of the current block to obtain a residual block, which represents the difference between the actual image and the predicted image of the current block. The residual block is then transformed, e.g., using the DCT, DST, etc. Since most images contain many flat areas and areas of slowly changing content, and the correlation between adjacent pixels is strong, the transform can reduce these correlations and convert the scattered distribution of image energy in the spatial domain into a relatively concentrated distribution in the transform domain, thereby removing spatial redundancy.
Quantization: quantization is the process of mapping continuous signal values onto a set of discrete amplitude values, realizing a many-to-one mapping of signal values. After the residual data is transformed, the transform coefficients have a large value range, and quantization can effectively reduce the value range of the signal, thereby obtaining a better compression effect. Quantization is the root cause of image distortion, since it collapses ranges of continuous values into single quantization levels.
Aiming at the problems in the prior art, the weighted parameter derivation method designed in the present application is based on the correlation of texture content, so that the calculated weighted parameters better fit the image content; multiple weight generation manners are designed, so that the different generated weights can adapt to different texture contents and the derived parameters can be greatly improved; and with the weighted parameter derivation method provided by the present application, the generated prediction residual is distributed more uniformly, which can improve the performance of the transform process and thereby improve compression performance.
Referring to fig. 3 and fig. 4, fig. 3 is a flow chart of an embodiment of a parameter deriving method based on encoding and decoding according to the present application, and fig. 4 is an overall flow chart of the parameter deriving method according to the present application.
The parameter deriving method is applied to the parameter deriving device, wherein the parameter deriving device can be a server, terminal equipment or a system formed by the cooperation of the server and the terminal equipment. Accordingly, each part, for example, each unit, sub-unit, module, and sub-module, included in the parameter deriving device may be all disposed in the server, may be all disposed in the terminal device, or may be disposed in the server and the terminal device, respectively.
Further, the server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing a distributed server, or may be implemented as a single software or software module, which is not specifically limited herein.
As shown in fig. 4, the overall flow of the parameter derivation method proposed in the present application is divided into three steps, namely: 1) weight generation; 2) weight-based parameter derivation; 3) application policy and syntax expression. Specifically:
(1) Weight generation: in this process, a plurality of different weight generation methods are designed, and the different weights can suit different image contents.
(2) Weight-based parameter derivation: in this process, the generated weights are used in the parameter derivation process, making the derived parameters more effective.
(3) Application policy and syntax expression: this part mainly describes how the proposed method is applied to the codec process and expressed by syntax, so that the encoding and decoding flows remain consistent.
The three steps of the overall flow are described in detail below with reference to the specific processes.
As shown in fig. 3, the specific steps are as follows:
Step S11: acquiring the weight region to be allocated.
In the embodiment of the present application, the parameter derivation device obtains the weight region to be allocated, i.e., the region for which the encoding and decoding parameters need to be calculated. The weight region to be allocated may be divided into a plurality of sub-regions; each sub-region may generate weights in a different manner or in the same manner, and the division manner of the sub-regions is not specifically limited.
Step S12: generating parameter weights based on the weight region to be allocated.
In the embodiment of the present application, the weight generation methods adopted by the parameter derivation device include, but are not limited to: 1) fixed weights; 2) model-based weights; 3) neural-network-based weights. The three weight generation methods are described below:
(1) Fixed weights: that is, weights are manually allocated to the weight region to be allocated.
(2) Model-based weights: that is, weights are allocated to the weight region to be allocated based on a model.
Wherein, the weight distribution model WG (·) is defined as follows:
w=WG(inf)
Wherein w is the generated weight, and inf is the weight model input information.
Specifically, the input information of the weight distribution model includes, but is not limited to, distance information, gradient information, pixel information, and the like.
The distance information includes, but is not limited to, horizontal distance, vertical distance, Euclidean distance, adjacent distance, and the like. For the horizontal, vertical, and Euclidean distances, the reference center includes, but is not limited to, a point or a line; for the adjacent distance, the reference center is a region, and the positions in the k-th ring around the reference center have adjacent distance k. As shown in fig. 5, there are L rings in total.
Gradient information including, but not limited to, sobel gradients, and the like.
Pixel information, i.e. the pixel value of the image pixel.
The weight distribution models provided by the application include, but are not limited to, a normal model, an arithmetic model, a geometric model, a stage model, and variants thereof.
Normal model, where σ and μ are the scale parameter and the position parameter of the normal model, respectively:
WG(inf) = (1 / (√(2π)·σ)) · exp(−(inf − μ)² / (2σ²))
Arithmetic model (arithmetic decreasing/increasing), where W_ind and W_d are the model initial weight and the common difference, respectively:
WG(inf) = W_ind + (inf − 1)·W_d
Geometric model (geometric decreasing/increasing), where W_ind and W_q are the model initial weight and the common ratio, respectively:
WG(inf) = W_ind · W_q^(inf−1)
Stage model:
WG(inf) = F_t(inf), when the stage condition C_t holds
where F_t is a weight generation manner, including but not limited to fixed weights, model-based weights, neural-network-based weights, and the like; C_t is a stage condition, and T represents the number of stages, t = {1, 2, …, T}.
Further, integer arithmetic is typically used in the codec process, so the calculated weights need to be rounded. And/or, to facilitate hardware processing, the weights or the sum of the weights may need to be approximated as an integer power of 2. And/or, the weight range may be limited, i.e., w(x, y) ∈ [w_min, w_max], where w_min and w_max are the minimum and maximum values of the weights, respectively.
(3) Neural-network-based weights: that is, weights are allocated to the weight region to be allocated by a neural network, the weights being obtained through neural network training.
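To make the model-based weight generation above concrete, here is a minimal Python sketch (not from the patent; the function names are illustrative, and the exact functional form of the normal model, taken here as a Gaussian with position μ and scale σ, is an assumption):

import math

def wg_normal(inf, sigma, mu):
    # Assumed Gaussian-shaped normal model with scale sigma and position mu.
    return math.exp(-((inf - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def wg_arithmetic(inf, w_ind, w_d):
    # Arithmetic model: WG(inf) = W_ind + (inf - 1) * W_d.
    return w_ind + (inf - 1) * w_d

def wg_geometric(inf, w_ind, w_q):
    # Geometric model: WG(inf) = W_ind * W_q ** (inf - 1).
    return w_ind * w_q ** (inf - 1)

def clamp_round(w, w_min, w_max):
    # Rounding for integer arithmetic plus the range limit w in [w_min, w_max].
    return min(max(round(w), w_min), w_max)

# Geometric decreasing model with W_ind = 4, W_q = 0.5, clamped to [1, 4]
# (the configuration used in Example 1 below):
print([clamp_round(wg_geometric(d, 4, 0.5), 1, 4) for d in (1, 2, 3, 4)])
# -> [4, 2, 1, 1]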
Step S13: acquiring a combination mode of the weight and the parameter, and selecting a corresponding parameter derivation mode based on the combination mode.
In the embodiment of the present application, in the weight-based parameter derivation of fig. 4, the derivation of the parameter para can be abstracted into the following expression:
para = M(unit)
where M(·) represents the parameter derivation function and unit represents the result of the basic processing unit; the unit at each position can be expressed as follows:
unit(x, y) = process(raw_data)
where process(·) is the basic processing and raw_data is the original pixels.
In the weight-based parameter derivation, the combination of the weight and the parameter derivation can be classified, according to the combination dimension, into weighted parameter derivation based on the basic processing unit and weighted parameter derivation based on the original pixels.
Step S14: fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated.
The encoding and decoding parameters according to the present application can be roughly classified into an index class, a coefficient class, and an information class:
(1) Index class, including but not limited to the sum of absolute differences (SAD, Sum of Absolute Difference), mean absolute difference (MAD, Mean Absolute Difference), sum of squared errors (SSE, Sum of Squared Error), mean squared error (MSE, Mean Squared Error), peak signal-to-noise ratio (PSNR, Peak Signal to Noise Ratio), etc. The index-class parameters are derived based on two images:
SAD is the sum of absolute differences between a W×H image P0 and an image P1, and its calculation expression is as follows:
SAD = Σ_x Σ_y |P0(x, y) − P1(x, y)|
where x and y are pixel coordinates, x = {0, 1, …, W−1}, y = {0, 1, 2, …, H−1}.
MAD is the mean of SAD, that is, the mean absolute difference between the W×H image P0 and the image P1:
MAD = SAD / (W·H)
SSE is the sum of squared differences, and MSE is the mean of SSE:
SSE = Σ_x Σ_y (P0(x, y) − P1(x, y))², MSE = SSE / (W·H)
The calculation expression of PSNR is as follows:
PSNR = 10·log₁₀(MAX² / MSE)
where MAX is the maximum value of the pixel.
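For concreteness, a small sketch of these index-class parameters, assuming NumPy arrays of equal size for the two images (the function names are illustrative):

import numpy as np

def sad(p0, p1):
    return int(np.abs(p0 - p1).sum())

def mad(p0, p1):
    return sad(p0, p1) / p0.size          # mean of SAD over the W*H samples

def sse(p0, p1):
    return int(((p0 - p1) ** 2).sum())

def mse(p0, p1):
    return sse(p0, p1) / p0.size          # mean of SSE over the W*H samples

def psnr(p0, p1, max_val=255):
    m = mse(p0, p1)
    return float("inf") if m == 0 else 10 * np.log10(max_val ** 2 / m)

p0 = np.array([[10, 20], [30, 40]], dtype=np.int64)
p1 = np.array([[11, 20], [28, 40]], dtype=np.int64)
print(sad(p0, p1), sse(p0, p1), round(psnr(p0, p1), 2))  # 3 5 47.16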
In the codec process: SAD and SSE reflect image distortion and are commonly used in rate-distortion optimization, candidate vector (including motion vector/block vector) search, candidate vector reordering, candidate vector refinement, and the like; PSNR reflects image quality and is often used to evaluate reconstructed image quality, calculate compression efficiency, and the like.
In particular, in the encoding and decoding process, parameters of the index class may be used to calculate a predicted image block, a reconstructed image block, a residual image block, and/or an image quality evaluation score of the encoded and decoded image.
Wherein, with respect to calculating the predicted image block:
And (3) rate-distortion optimization in the prediction process, namely selecting an optimal prediction mode from a plurality of prediction modes based on the rate-distortion cost, and acquiring a prediction block based on the prediction mode. Wherein the rate-distortion cost is typically calculated based on the SSE parameters.
Coarse mode selection in the prediction process: based on the SAD parameter, the SAD cost is calculated, several prediction modes are selected from the plurality of prediction modes, and prediction blocks are obtained based on these modes.
Candidate vector search: and searching a motion vector based on parameters such as SAD/SSE and the like, and acquiring a prediction block based on the motion vector.
Reordering candidate vectors: and reordering the candidate vectors based on SAD/SSE parameters or calculating SAD/SSE cost, and acquiring a prediction block based on the motion vector.
Candidate vector refinement: and refining the candidate vectors based on SAD/SSE and other parameters or calculating SAD/SSE cost, and acquiring a prediction block based on the motion vectors.
Wherein, with respect to computing reconstructed image blocks:
rate distortion optimization of the filtering process: and selecting an optimal filtering process based on the rate distortion cost, and acquiring a reconstructed image block based on the filtering process.
Wherein, with respect to calculating the residual image block:
Transform kernel selection based on rate-distortion optimization: the best transform kernel is selected based on the rate-distortion cost, and the residual image block is acquired based on the transform kernel.
Wherein, regarding calculating the image quality evaluation score: PSNR, SSIM, etc. are used for image quality evaluation.
(2) Coefficient class, including but not limited to the coefficient solutions of linear expressions. Coefficient-class parameters are typically derived based on two or more images, and the linear calculation expression is as follows:
target = Σ_{k=0}^{K−1} c_k · I_k
where target is the expression output, c_k is a coefficient, I_k represents an input, and K is the number of terms, k = {0, 1, …, K−1}.
The expression for solving the coefficients c_k, based on the minimum-MSE principle (whose derivation is not expanded here), is as follows:
c = R⁻¹ · p
where R is the autocorrelation matrix derived from the input samples r used for solving, and "·" is matrix multiplication; its calculation expression is as follows:
R = Aᵀ · A
A is the input sample matrix used for solving, with N being the number of samples used to solve the parameters:
A = [r_0, r_1, …, r_{N−1}]ᵀ
r_n is the n-th input sample vector used for solving, n = {0, 1, …, N−1}, and r_{n,k} is the k-th input of the n-th sample used for solving:
r_n = [r_{n,0}, r_{n,1}, …, r_{n,K−1}]ᵀ
c is the coefficient vector composed of the coefficients c_k to be solved:
c = [c_0, c_1, …, c_{K−1}]ᵀ
p is the cross-correlation vector between the input samples r used for solving and the output samples o used for solving:
p = Aᵀ · o
o is the output sample vector used for solving, where o_n is the n-th output sample used for solving:
o = [o_0, o_1, …, o_{N−1}]ᵀ
The coefficients c_k can be solved by matrix decomposition; the solving method includes, but is not limited to, LDL decomposition, Cholesky decomposition, and the like, and the specific matrix decomposition process is not expanded here.
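The solve can be sketched as follows; np.linalg.solve stands in for the LDL/Cholesky decomposition mentioned above, and the variable names follow the notation of the text:

import numpy as np

def solve_coefficients(A, o):
    # A: N x K matrix whose rows are the input sample vectors r_n.
    # o: length-N vector of output samples. Returns the coefficient vector c.
    R = A.T @ A          # autocorrelation matrix
    p = A.T @ o          # cross-correlation vector
    return np.linalg.solve(R, p)

# Tiny check: if o = 2*I_0 + 3*I_1 exactly, the solve recovers c = [2, 3].
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
o = A @ np.array([2.0, 3.0])
print(solve_coefficients(A, o))  # [2. 3.]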
In the codec process: processes requiring coefficients to be solved include, but are not limited to, adaptive Loop Filter (ALF) parameters, chroma cross-component convolution model (CCCM) parameters, gradient Linear Model (GLM) parameters, intra template matched linear filter (IntraTMP-FLM) parameters, and the like.
In particular, in the encoding and decoding process, parameters of the coefficient class may be used to calculate a predicted image block, a reconstructed image block, a residual image block, and/or an image quality evaluation score of the encoded and decoded image.
Wherein, with respect to calculating the predicted image block:
chrominance cross-component convolution model parameters: the chroma cross-component convolution model obtains a chroma prediction block based on the chroma cross-component convolution model parameters.
Gradient linear model parameters: the gradient linear model obtains a corresponding prediction block based on gradient linear model parameters;
Intra template matched linear filtering parameters: the intra-frame template matching process obtains a prediction block based on the intra-frame template matched linear filtering parameters.
Wherein, with respect to computing reconstructed image blocks:
adaptive loop filtering parameters: adaptive loop filtering obtains a reconstructed block based on adaptive loop filtering parameters.
Wherein, with respect to calculating the residual image block:
Cross-component residual prediction parameters: the cross-component residual prediction model obtains a residual image block based on the cross-component parameter prediction parameters.
(3) Information class, including but not limited to gradient information, etc. Information-class parameters are typically derived based on one image, and the gradient information is calculated as follows:
G = g ∗ I
where ∗ denotes the convolution operation, G is the gradient map, g is the gradient operator, and I is the image whose gradient is to be solved.
The gradient is generally divided into a horizontal gradient G_x and a vertical gradient G_y, and the gradient operators correspondingly include a horizontal gradient operator g_x and a vertical gradient operator g_y. In addition, the gradient may also be extended to the diagonal and anti-diagonal directions.
The gradient amplitude at (x, y) is generally defined as:
G(x, y) = |G_x(x, y)| + |G_y(x, y)|
or alternatively:
G(x, y) = √(G_x(x, y)² + G_y(x, y)²)
The gradient direction D(x, y) at (x, y) is defined as:
D(x, y) = arctan(G_y(x, y) / G_x(x, y))
where arctan(·) is the arctangent function.
In the index-class and information-class parameters, process(·) is the derivation function at a certain position; for example, in the calculation expression of SAD, process(·) = |P0(x, y) − P1(x, y)|, that is, raw_data is the original pixels P0(x, y) and P1(x, y).
In the coefficient-class parameters, the inputs and outputs used for solving are paired: process(·) yields a single input sample vector r_n used for solving and the corresponding output sample o_n, and raw_data is the original pixels composing r_n and o_n.
Specifically, if the parameter derivation method is based on the weighted parameter derivation of the basic processing unit, the weight is applied to the result of the basic processing unit, and the expression is as follows:
para=M(w·unit)
If the parameter derivation mode is weighted parameter derivation based on the original pixels, the weight is applied to the original pixels, and the expression is as follows:
unit(x,y)=process(w·raw_data)
Further, since weighting changes the data range, normalization may be required for some parameters, expressed as follows:
w(x, y) = w(x, y) / w_sum
where w_sum is the sum of all w(x, y).
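For illustration, a small sketch (with illustrative function names) of the two combination modes, using SAD as the parameter derivation function M(·); note that for a linear per-position process such as the absolute difference with non-negative weights the two modes coincide numerically, while for nonlinear processing they generally differ:

import numpy as np

def sad_unit_weighted(p0, p1, w):
    # para = M(w * unit): the weight is applied to the basic-processing-unit result.
    return (w * np.abs(p0 - p1)).sum()

def sad_pixel_weighted(p0, p1, w):
    # unit = process(w * raw_data): the weight is applied to the original pixels.
    return np.abs(w * p0 - w * p1).sum()

def normalize(w):
    # w(x, y) = w(x, y) / w_sum, for parameters where weighting changes the data range.
    return w / w.sum()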
In the codec process: a procedure for information-like parameter derivation is required, including but not limited to intra mode derivation.
In particular, during the encoding and decoding process, parameters of the information class may be used to calculate a predicted image block, a reconstructed image block, a residual image block, and/or an image quality evaluation score of the encoded and decoded image.
Wherein, with respect to calculating the predicted image block:
Intra mode derivation: a prediction block is obtained based on the derived intra prediction mode.
Wherein, with respect to calculating the residual image block:
adaptive transform kernel selection: based on derived parameters, such as intra prediction modes, transform kernels are acquired, and residual image blocks are acquired based on the transform kernels.
The intra mode derivation process is roughly as follows: Sobel gradient calculation is performed on a designated region to obtain the gradient amplitude and gradient direction at each point; the gradient direction is converted into the corresponding intra prediction mode; the whole region is traversed to obtain a histogram of intra prediction modes with respect to gradient amplitude, as shown in fig. 6; and the several intra prediction modes with the highest amplitudes are used.
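A hedged sketch of this derivation process follows; the Sobel operators are standard, but direction_to_mode is a simplified stand-in for the codec's actual angle-to-mode mapping:

import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def direction_to_mode(gx, gy, num_modes=65):
    # Stand-in mapping from gradient direction to an angular mode in [2, 66].
    angle = np.arctan2(gy, gx)
    return 2 + int((angle + np.pi) / (2 * np.pi) * (num_modes - 1)) % num_modes

def dimd_histogram(region, keep=2):
    # Accumulate the gradient amplitude |Gx| + |Gy| per derived intra mode.
    hist = {}
    h, w = region.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = region[y - 1:y + 2, x - 1:x + 2]
            gx = int((SOBEL_X * patch).sum())
            gy = int((SOBEL_Y * patch).sum())
            if gx == 0 and gy == 0:
                continue  # flat position: no usable direction
            mode = direction_to_mode(gx, gy)
            hist[mode] = hist.get(mode, 0) + abs(gx) + abs(gy)
    # The few modes with the highest accumulated amplitude are used.
    return sorted(hist, key=hist.get, reverse=True)[:keep]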
In the embodiment of the present application, in the application policy and the syntax expression in fig. 4, the application policy of the solution proposed in the present application includes, but is not limited to:
(1) Replacement strategy: the proposed parameter derivation scheme replaces the original parameter derivation scheme.
(2) Addition strategy: the proposed parameter derivation scheme coexists with the original parameter derivation scheme.
Accordingly, the application provides syntax expressions to unify the behavior of parameter derivation in the encoder and decoder. The provided syntax expressions include, but are not limited to: 1) switch syntax; 2) mode syntax.
Switch syntax: the switch syntax is used to express whether this parameter derivation technique is enabled during the codec process, and is typically set in parameter sets, including but not limited to the video parameter set (VPS, Video Parameter Set), sequence parameter set (SPS, Sequence Parameter Set), picture parameter set (PPS, Picture Parameter Set), and so on.
Mode syntax: when the switch syntax indicates that the parameter derivation technique is enabled, the mode syntax is used to indicate whether the parameter derivation process performs this parameter derivation technique.
Specifically, in the encoder, for some parameter derivation techniques, the best parameter derivation, i.e., the one with the smallest rate-distortion cost, may be selected by rate-distortion optimization. When the encoder selects the optimal parameter derivation mode, it needs to encode the corresponding mode syntax. In the decoder, whether the parameter derivation technique performs the corresponding parameter derivation method is determined by decoding the corresponding mode syntax.
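The encoder-side selection described here can be sketched as an ordinary rate-distortion loop, with rd_cost as a placeholder for the encoder's cost evaluation:

def choose_derivation_mode(derivations, rd_cost):
    # derivations: the candidate parameter derivation methods.
    # rd_cost: callable evaluating the rate-distortion cost of one method.
    # Returns the index of the best method, written as the mode syntax value.
    costs = [rd_cost(d) for d in derivations]
    return costs.index(min(costs))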
In the embodiment of the application, the parameter derivation device acquires the weight region to be allocated; generates parameter weights based on the weight region to be allocated; acquires a combination mode of the weight and the parameter, and selects a corresponding parameter derivation mode based on the combination mode; and fuses the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated. This parameter derivation method effectively improves the accuracy of parameter derivation, and the weight generation and weight-based parameter derivation improve the parameter derivation effect.
In the parameter derivation process, the parameter derivation method of the application generates weights in combination with the correlation, thereby improving the accuracy of parameter derivation.
The parameter derivation method can adapt to a wide range of texture contents, improving the parameter derivation effect more effectively.
The parameter derivation method combines texture characteristics, so that the prediction residual is distributed more uniformly, which improves the performance of the transform process.
The parameter derivation methods described in fig. 3 and 4 are specifically described below by several embodiments:
Example 1: the parameter derivation method of this embodiment is SAD derivation based on basic-processing-unit weighting.
Specifically, the weight model employed in the model-based weight generation is the geometric model (decreasing); the model input is the adjacent distance, with W_ind = 4 and W_q = 0.5, and the calculation formula of the geometric model is expressed as:
WG(inf) = 4 · 0.5^(inf−1)
Further, this embodiment may also limit the range of the generated weights: w_min = 1, w_max = 4.
Assuming the L-shaped region to be assigned weights is as shown in fig. 7, the generated weights are as shown in fig. 7: the weights at the 1st, 2nd, 3rd, and 4th adjacent distances are 4, 2, 1, and 1, respectively.
In the weight-based SAD derivation, the combination mode of the weight and the SAD derivation is: weighted parameter derivation based on the basic processing unit.
The weighted SAD_w1 is therefore rewritten from the SAD calculation formula as follows:
SAD_w1 = Σ_x Σ_y w1(x, y) · |P0(x, y) − P1(x, y)|
where w1(x, y) is the weight generated by the above geometric model (decreasing) calculation formula.
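As an illustrative sketch of this example (assuming a per-position map ring of adjacent distances for the L-shaped region is available), SAD_w1 can be computed as:

import numpy as np

def sad_w1(p0, p1, ring):
    # w1 per position: WG(d) = 4 * 0.5 ** (d - 1), rounded and clamped to [1, 4].
    w1 = np.clip(np.round(4.0 * 0.5 ** (ring - 1)), 1, 4)
    return (w1 * np.abs(p0 - p1)).sum()

ring = np.array([1, 2, 3, 4])   # illustrative adjacent distances
p0 = np.array([10, 10, 10, 10])
p1 = np.array([12, 11, 10, 14])
print(sad_w1(p0, p1, ring))     # 4*2 + 2*1 + 1*0 + 1*4 = 14.0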
The application policy adopted in this embodiment is: the SAD derivation method based on basic-processing-unit weighting replaces the SAD calculation method in the IntraTMP search process.
The syntax expression is:
Switch syntax: the technique is controlled in the SPS parameter set. When the SPS switch syntax sad_weighted_intra_tmp_sps = 0, the codec process does not enable this technique; when sad_weighted_intra_tmp_sps = 1, the codec process enables this parameter derivation technique.
Mode syntax: the proposed scheme replaces the SAD derivation method of the IntraTMP search process, and there is then only one SAD derivation method in the IntraTMP search process, so no mode syntax is required.
Example 2: the parameter derivation method of this embodiment is multi-candidate SAD derivation based on basic-processing-unit weighting.
Specifically, the weight models employed in the model-based weight generation are the geometric model (increasing) and the geometric model (decreasing).
For the geometric model (increasing), the model input is the adjacent distance, with W_ind = 1 and W_q = 2, and the calculation formula of the geometric model is expressed as:
WG(inf) = 1 · 2^(inf−1)
Further, this embodiment may also limit the range of the generated weights: w_min = 1, w_max = 4.
Assuming the L-shaped region to be assigned weights is as shown in fig. 8, the generated weights are as shown in fig. 8: the weights at the 1st, 2nd, 3rd, and 4th adjacent distances are 1, 2, 4, and 4, respectively.
For the geometric model (decreasing), the model input is the adjacent distance, with W_ind = 4 and W_q = 0.5, and the calculation formula of the geometric model is expressed as:
WG(inf) = 4 · 0.5^(inf−1)
Further, this embodiment may also limit the range of the generated weights: w_min = 1, w_max = 4.
Assuming the L-shaped region to be assigned weights is as shown in fig. 7, the generated weights are as shown in fig. 7: the weights at the 1st, 2nd, 3rd, and 4th adjacent distances are 4, 2, 1, and 1, respectively.
In the weight-based SAD derivation, the combination mode of the weight and the SAD derivation is: weighted parameter derivation based on the basic processing unit.
For the geometric model (decreasing), the weighted SAD_w1 is rewritten from the SAD calculation formula as follows:
SAD_w1 = Σ_x Σ_y w1(x, y) · |P0(x, y) − P1(x, y)|
where w1(x, y) is the weight generated by the geometric model (decreasing) calculation formula.
For the geometric model (increasing), the weighted SAD_w2 is rewritten from the SAD calculation formula as follows:
SAD_w2 = Σ_x Σ_y w2(x, y) · |P0(x, y) − P1(x, y)|
where w2(x, y) is the weight generated by the geometric model (increasing) calculation formula.
The application policy adopted in this embodiment is: the multi-candidate SAD derivation method based on basic-processing-unit weighting is added as new SAD derivation methods in the IntraTMP search process, so that there are 3 SAD derivation methods in the IntraTMP search process: the original SAD, SAD_w1, and SAD_w2.
The syntax expression is:
Switch syntax: the technique is controlled in the SPS parameter set. When the SPS switch syntax sad_weighted_intra_tmp_multi_cand_sps = 0, the codec process does not enable this technique; when sad_weighted_intra_tmp_multi_cand_sps = 1, the codec process enables this parameter derivation technique.
Mode syntax: in the IntraTMP search process, the mode syntax sad_weighted_intra_tmp_idx indicates the selected SAD derivation method; sad_weighted_intra_tmp_idx = 0, 1, 2 indicate using the original SAD, SAD_w1, and SAD_w2 derivation methods, respectively.
Example 3: the parameter derivation method of this embodiment is coefficient derivation based on original-pixel weighting.
Specifically, the weight model employed in the model-based weight generation is the geometric model (decreasing); the model input is the adjacent distance, with W_ind = 4 and W_q = 0.5, and the calculation formula of the geometric model is expressed as:
WG(inf) = 4 · 0.5^(inf−1)
Further, this embodiment may also limit the range of the generated weights: w_min = 1, w_max = 4.
Assuming the L-shaped region to be assigned weights is as shown in fig. 7, the generated weights are as shown in fig. 7: the weights at the 1st, 2nd, 3rd, and 4th adjacent distances are 4, 2, 1, and 1, respectively.
In the weight-based coefficient derivation, the combination mode of the weight and the coefficient derivation is: weighted parameter derivation based on the original pixels.
In the weighted coefficient derivation process, the input sample vector of the linear calculation expression is rewritten as follows:
r_n = [w_{n,0}·r_{n,0}, w_{n,1}·r_{n,1}, …, w_{n,K−1}·r_{n,K−1}]ᵀ
where w_{n,k} is the weight generated at the position corresponding to the k-th input of the n-th sample used for solving.
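A minimal sketch of this example's original-pixel-weighted coefficient derivation, assuming a per-entry weight matrix w of the same N×K shape as the sample matrix A; per the rewritten expression above, only the inputs are weighted:

import numpy as np

def solve_weighted_coefficients(A, o, w):
    # Each entry r_{n,k} is replaced by w_{n,k} * r_{n,k} before the solve.
    Aw = w * A
    R = Aw.T @ Aw            # autocorrelation of the weighted samples
    p = Aw.T @ o             # cross-correlation with the output samples
    return np.linalg.solve(R, p)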
The application policy adopted in this embodiment is: the coefficient derivation method based on original-pixel weighting replaces the derivation method of the filter coefficients in the IntraTMP filtering prediction process.
The syntax expression is:
Switch syntax: the technique is controlled in the SPS parameter set. When the SPS switch syntax sad_weighted_intra_tmp_flm_sps = 0, the codec process does not enable this technique; when sad_weighted_intra_tmp_flm_sps = 1, the codec process enables this parameter derivation technique.
Mode syntax: the proposed scheme replaces the filter coefficient derivation method in IntraTMP filtering prediction, and there is then only one filter coefficient derivation method in the IntraTMP filtering prediction, so no mode syntax is required.
Example 4: the parameter derivation method of this embodiment is gradient derivation based on basic-processing-unit weighting.
Specifically, the weight model employed in the model-based weight generation is the geometric model (decreasing); the model input is the adjacent distance, with W_ind = 4 and W_q = 0.5, and the calculation formula of the geometric model is expressed as:
WG(inf) = 4 · 0.5^(inf−1)
Further, this embodiment may also limit the range of the generated weights: w_min = 1, w_max = 4.
Assuming the L-shaped region to be assigned weights is as shown in fig. 7, the generated weights are as shown in fig. 7: the weights at the 1st, 2nd, 3rd, and 4th adjacent distances are 4, 2, 1, and 1, respectively.
In the weight-based gradient derivation, the combination mode of the weight and the gradient derivation is: weighted parameter derivation based on the basic processing unit.
Since the gradient is a vector and the direction information cannot be weighted, only the gradient amplitude is weighted, and the calculation expression of the gradient information is rewritten as follows:
G(x, y) = w1(x, y) · (|G_x(x, y)| + |G_y(x, y)|)
The application policy adopted in this embodiment is: the gradient derivation based on basic-processing-unit weighting is added as a new gradient derivation method in decoder-side intra mode derivation (DIMD).
The syntax expression is:
Switch syntax: the technique is controlled in the SPS parameter set. When the SPS switch syntax sad_weighted_dimd_sps = 0, the codec process does not enable this technique; when sad_weighted_dimd_sps = 1, the codec process enables this parameter derivation technique.
Mode syntax: the mode syntax sad_weighted_dimd_mode expresses the gradient derivation method used in the DIMD derivation process; sad_weighted_dimd_mode = 0, 1 indicate using the original gradient derivation method and the proposed gradient derivation method, respectively.
With continued reference to fig. 9, fig. 9 is a flowchart illustrating an embodiment of an image encoding method according to the present application.
As shown in fig. 9, the specific steps are as follows:
Step S21: obtaining the coding parameters of the current image block by the above parameter derivation method.
Step S22: acquiring the coding information of the current image block by using the coding parameters.
Step S23: encoding the current image block according to the coding information to obtain an encoded code stream.
Specifically, the coding information includes a predicted image block, a reconstructed image block, a residual image block, and/or an image quality evaluation score of the current image block, which are described in detail in the above embodiments and are not repeated here.
With continued reference to fig. 10, fig. 10 is a flowchart illustrating an embodiment of an image decoding method according to the present application.
As shown in fig. 10, the specific steps are as follows:
Step S31: obtaining the code stream to be decoded and its decoding parameters.
Step S32: decoding the code stream based on the decoding parameters to obtain a decoded image.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
In order to implement the above parameter deriving method and/or the image encoding method, the present application further provides an image encoder, refer to fig. 11 specifically, fig. 11 is a schematic structural diagram of an embodiment of the image encoder provided by the present application.
The image encoder 400 of the present embodiment includes a processor 41, a memory 42, an input-output device 43, and a bus 44.
The processor 41, the memory 42, the input output device 43 are respectively connected to the bus 44, and the memory 42 stores program data, and the processor 41 is configured to execute the program data to implement the parameter deriving method and/or the image encoding method according to the above-described embodiments.
In an embodiment of the present application, the processor 41 may also be referred to as a CPU (Central Processing Unit). The processor 41 may be an integrated circuit chip with signal processing capabilities. The processor 41 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The general-purpose processor may be a microprocessor, or the processor 41 may be any conventional processor or the like.
In order to implement the above parameter deriving method and/or the image decoding method, the present application further provides an image decoder, refer to fig. 12 specifically, fig. 12 is a schematic structural diagram of an embodiment of the image decoder provided by the present application.
The image decoder 500 of the present embodiment includes a processor 51, a memory 52, an input-output device 53, and a bus 54.
The processor 51, the memory 52, the input output device 53 are respectively connected to the bus 54, and the memory 52 stores program data, and the processor 51 is configured to execute the program data to implement the parameter deriving method and/or the image decoding method according to the above embodiments.
In an embodiment of the present application, the processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal processing capabilities. The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The general-purpose processor may be a microprocessor, or the processor 51 may be any conventional processor or the like.
The present application further provides a computer storage medium. Referring to fig. 13, fig. 13 is a schematic structural diagram of an embodiment of the computer storage medium provided by the present application. The computer storage medium 600 stores a computer program 61, and the computer program 61, when executed by a processor, implements the parameter derivation method, the image encoding method, and/or the image decoding method of the above embodiments.
When implemented in the form of software functional units and sold or used as a standalone product, the embodiments of the present application may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The foregoing description sets forth only embodiments of the present application and is not intended to limit its scope; any equivalent structure or equivalent process transformation made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the present application.

Claims (11)

1. A parameter derivation method based on encoding and decoding, characterized in that the parameter derivation method comprises:
acquiring a weight region to be allocated;
generating parameter weights based on the weight region to be allocated;
acquiring a combination mode of the weights and the parameters, and selecting a corresponding parameter derivation mode based on the combination mode; and
fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated.
2. The parameter derivation method according to claim 1, wherein
the generating parameter weights based on the weight region to be allocated comprises:
extracting distance information, gradient information, and/or pixel information of the weight region to be allocated as weight model input information; and
inputting the weight model input information into a preset weight distribution model to obtain the parameter weights;
wherein the preset weight distribution model is one of a normal model, an arithmetic model, a geometric model, and a stage model.
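As a hedged illustration of this claim, the sketch below generates parameter weights for a weight region from distance information using a normal (Gaussian) weight distribution model. The choice of model, the centre-distance input, and the sigma value are all assumptions made for the example; the arithmetic, geometric, and stage models would simply substitute a different weight curve.

```python
import math

def normal_weights(region_h: int, region_w: int, sigma: float = 2.0):
    # Distance information: squared distance of each pixel to the region
    # centre, fed into a normal (Gaussian) weight distribution model.
    cy, cx = (region_h - 1) / 2.0, (region_w - 1) / 2.0
    weights = []
    for y in range(region_h):
        row = []
        for x in range(region_w):
            d2 = (y - cy) ** 2 + (x - cx) ** 2
            row.append(math.exp(-d2 / (2.0 * sigma ** 2)))
        weights.append(row)
    # Normalize so the weights over the region sum to 1.
    total = sum(sum(r) for r in weights)
    return [[w / total for w in row] for row in weights]
```

For a 4x4 region this yields weights that peak at the centre and decay outward, matching the intuition that pixels nearer the region centre should influence the derived parameter more strongly.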
3. The parameter derivation method according to claim 1, wherein
the parameter derivation mode is weighted parameter derivation based on a basic processing unit; and
the fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated comprises:
performing basic processing on the image pixels of the weight region to be allocated by using a basic processing unit of a parameter derivation function of the encoding and decoding parameters;
performing weighted fusion of the basic-processed image pixels with the parameter weights; and
inputting the weighted and fused image pixels into the parameter derivation function for operation to obtain the encoding and decoding parameters of the weight region to be allocated.
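A minimal sketch of this derivation mode follows, under the assumption that the encoding and decoding parameter is an activity-style measure whose derivation function averages a per-pixel basic processing unit (taken here to be the absolute horizontal gradient). The function name and the choice of basic processing unit are illustrative only, not the claimed implementation.

```python
def derive_param_unit_weighted(pixels, weights):
    processed = []
    for y, row in enumerate(pixels):
        for x in range(1, len(row)):
            # Basic processing unit of the (assumed) derivation function:
            # the absolute horizontal gradient at each pixel.
            g = abs(row[x] - row[x - 1])
            # Weighted fusion of the basic-processed value with its weight.
            processed.append(g * weights[y][x])
    # Final derivation function operation: aggregate into one parameter.
    return sum(processed) / max(len(processed), 1)
```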
4. The parameter derivation method according to claim 1, wherein
the parameter derivation mode is weighted parameter derivation based on original pixels; and
the fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated comprises:
performing weighted fusion of the image pixels of the weight region to be allocated with the parameter weights; and
inputting the weighted and fused image pixels into the parameter derivation function of the encoding and decoding parameters for operation to obtain the encoding and decoding parameters of the weight region to be allocated.
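For contrast, here is a sketch of the original-pixel variant under the same assumptions as above: the parameter weights are fused with the raw pixels first, and only then is the (assumed) gradient-based derivation function evaluated, so the weighting reshapes the pixels before any basic processing occurs.

```python
def derive_param_pixel_weighted(pixels, weights):
    # Weighted fusion of the original pixels with the parameter weights.
    fused = [[p * w for p, w in zip(prow, wrow)]
             for prow, wrow in zip(pixels, weights)]
    # Then evaluate the (assumed) derivation function on the fused pixels.
    grads = [abs(row[x] - row[x - 1])
             for row in fused for x in range(1, len(row))]
    return sum(grads) / max(len(grads), 1)
```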
5. The parameter derivation method according to claim 1, wherein
after the fusing the parameter weights with the image pixels of the weight region to be allocated according to the parameter derivation mode to obtain the encoding and decoding parameters of the weight region to be allocated, the parameter derivation method further comprises:
setting a switch syntax to an enable value in response to the parameter derivation method being enabled.
6. The parameter derivation method according to claim 5, wherein
after the switch syntax is set to the enable value, the parameter derivation method further comprises:
setting no mode syntax in response to the parameter derivation method replacing the parameter derivation method of the original encoding and decoding process; and
in response to the parameter derivation method coexisting with the parameter derivation method of the original encoding and decoding process, selecting an optimal parameter derivation method based on the rate-distortion cost of each parameter derivation method, and setting a mode value of a mode syntax.
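The following hypothetical sketch shows the coexistence branch: each candidate derivation method is scored with a stand-in rate-distortion cost D + λ·R, and the winner's index becomes the mode value written as the mode syntax. The candidate interface returning a (distortion, rate) pair is an assumption made for the example, not a defined API of any codec.

```python
def choose_derivation_mode(candidates, block, lam: float = 0.5) -> int:
    """Return the index of the derivation method with the lowest RD cost."""
    def rd_cost(method):
        distortion, rate = method(block)  # each candidate returns (D, R)
        return distortion + lam * rate
    # The winning index would be signalled as the mode value of the
    # mode syntax in the bitstream.
    return min(range(len(candidates)), key=lambda i: rd_cost(candidates[i]))
```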
7. An image encoding method, characterized in that the image encoding method comprises:
obtaining encoding parameters of a current image block by the parameter derivation method according to any one of claims 1 to 6;
obtaining encoding information of the current image block by using the encoding parameters; and
encoding the current image block according to the encoding information to obtain an encoded code stream;
wherein the encoding information comprises a predicted image block, a reconstructed image block, a residual image block, and/or an image quality evaluation score of the current image block.
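A toy counterpart to the earlier decoding sketch, assuming the same made-up bitstream layout: a single derived scale parameter stands in for the encoding parameters, a payload of down-scaled samples stands in for the residual-like coding information, and the header plus payload form the coded stream. None of these names or formats come from the claims themselves.

```python
import struct

def encode_image(image, scale: int = 4) -> bytes:
    # 'scale' stands in for an encoding parameter derived by the method above.
    height, width = len(image), len(image[0])
    # Residual-like payload of down-scaled samples: a stand-in for the
    # predicted/reconstructed/residual blocks of the coding information.
    payload = bytes(min(255, p // scale) for row in image for p in row)
    header = struct.pack(">HHB", width, height, scale)
    return header + payload
```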
8. An image decoding method, characterized in that the image decoding method comprises:
obtaining a decoding code stream and decoding parameters thereof; and
decoding the decoding code stream based on the decoding parameters to obtain a decoded image;
wherein the decoding parameters are derived from the encoding information of the image encoding method according to claim 7.
9. An image encoder comprising a memory and a processor coupled to the memory;
wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the parameter derivation method according to any one of claims 1 to 6 and/or the image encoding method according to claim 7.
10. An image decoder, comprising a memory and a processor coupled to the memory;
wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the parameter derivation method according to any one of claims 1 to 6 and/or the image decoding method according to claim 8.
11. A computer storage medium for storing program data which, when executed by a computer, implements the parameter derivation method according to any one of claims 1 to 6, the image encoding method according to claim 7, and/or the image decoding method according to claim 8.
Priority Applications (1)

Application Number: CN202311863363.3A
Priority Date / Filing Date: 2023-12-29 / 2023-12-29
Title: Parameter deriving method, image encoding method, image decoding method and device thereof
Status: Pending

Publications (1)

Publication Number: CN117956161A
Publication Date: 2024-04-30
Family ID: 90804079


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination