CN108737836A - Inter-frame prediction encoding method, apparatus, and electronic device - Google Patents
- Publication number
- CN108737836A CN108737836A CN201810609309.9A CN201810609309A CN108737836A CN 108737836 A CN108737836 A CN 108737836A CN 201810609309 A CN201810609309 A CN 201810609309A CN 108737836 A CN108737836 A CN 108737836A
- Authority
- CN
- China
- Prior art keywords
- block
- prediction
- coefficient matrix
- preset reference
- reference frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/625—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
An embodiment of the present invention provides an inter-frame prediction encoding method, an apparatus, and an electronic device. The method includes: obtaining, from a preset reference frame, a first prediction block corresponding to a data block to be encoded; applying a DCT transform to the residual between the block to be encoded and the first prediction block; classifying the DCT coefficient matrix obtained from the DCT transform; and then, based on that classification, performing several iterations of inter prediction on the block to be encoded, to obtain a second prediction block that better matches it. Finally, the DCT coefficient matrix produced by the last quantization in the iterative process, the motion vector between the second prediction block and the block to be encoded, the number of the block to be encoded, and the number of the preset reference frame containing the second prediction block are input into an entropy coder for encoding. This scheme improves the accuracy of inter-frame predictive coding of video images.
Description
Technical field
The present invention relates to the technical field of video coding, and in particular to an inter-frame prediction encoding method, an apparatus, and an electronic device.
Background technology
Digital video technology is widely used in fields such as communication, computing, and broadcast television. Because a video consists of a series of video frames, each typically a complete image, and because the same or similar content recurs within an image and across adjacent images, video coding techniques emerged to exploit this redundancy.
At present, the most widely used video coding technique is block-based hybrid motion-compensated DCT (Discrete Cosine Transform) video coding. As shown in Figure 1, the input frame is partitioned into data blocks, which are then encoded one by one, from left to right and from top to bottom.
Existing DCT video coding techniques first obtain, from a reference frame, the prediction block corresponding to the data block to be encoded; then subtract the prediction block from the block to be encoded to obtain a residual; quantize, in a single pass, the DCT coefficient matrix obtained by applying a DCT transform to the residual; and finally input the quantized DCT coefficient matrix, the motion vector between the block to be encoded and the prediction block, and related information into an entropy coder for encoding.
However, the inventors found in the course of implementing the present invention that, because the prediction block corresponding to the block to be encoded is obtained by motion estimation against the reference frame before encoding, the residual between the obtained prediction block and the block to be encoded may be large, leading to the problem of low inter-prediction encoding accuracy.
Summary of the invention
Embodiments of the present invention aim to provide an inter-frame prediction encoding method, an apparatus, and an electronic device, so as to improve the accuracy of inter-frame predictive coding of video images. The specific technical solution is as follows:
In a first aspect, an embodiment of the present invention provides an inter-frame prediction encoding method, the method including:

obtaining, from a preset reference frame, a first prediction block corresponding to a data block to be encoded;

calculating a first residual between the block to be encoded and the first prediction block, applying a discrete cosine transform (DCT) to the first residual to obtain a DCT coefficient matrix, and classifying the current DCT coefficient matrix according to coordinate position;

quantizing the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of a target category, to obtain a quantized DCT coefficient matrix; where, before this step, the coefficients at the coordinate positions of the target category have not yet been quantized;

determining, based on the quantized DCT coefficient matrix, a first reconstructed block of the block to be encoded, and obtaining, from the preset reference frame, a second prediction block corresponding to the first reconstructed block;

calculating a second residual between the block to be encoded and the second prediction block, and applying a DCT transform to the second residual to obtain a DCT coefficient matrix;

if the coefficients corresponding to at least one category of coordinate positions in the resulting DCT coefficient matrix have not been quantized, returning to the above step of quantizing the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of a target category; otherwise,

inputting the DCT coefficient matrix produced by the last quantization, the motion vector between a target second prediction block and the block to be encoded, the number of the block to be encoded, and the number of the preset reference frame containing the target second prediction block into an entropy coder for encoding; where the target second prediction block is the second prediction block corresponding to the last quantization.
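The first-aspect steps above can be sketched end to end. The following is a minimal illustrative sketch, not the patent's implementation: it assumes two coordinate categories split by the parity of x + y, a scalar quantizer with reconstruction step `qstep`, a single reference frame, and exhaustive full-search motion estimation; names such as `iterative_interframe_encode` and `best_match` are invented for illustration, and the entropy-coding stage is omitted.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal 1-D DCT-II basis matrix.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct2(x):
    c = dct_matrix(x.shape[0])
    return c @ x @ c.T

def idct2(y):
    c = dct_matrix(y.shape[0])
    return c.T @ y @ c

def best_match(block, ref, n):
    # Exhaustive full search: the n x n window of ref with minimum SAD.
    best_pos, best_sad = (0, 0), float("inf")
    for y in range(ref.shape[0] - n + 1):
        for x in range(ref.shape[1] - n + 1):
            sad = np.abs(block - ref[y:y + n, x:x + n]).sum()
            if sad < best_sad:
                best_pos, best_sad = (y, x), sad
    return ref[best_pos[0]:best_pos[0] + n, best_pos[1]:best_pos[1] + n], best_pos

def iterative_interframe_encode(block, ref, qstep=2.0):
    n = block.shape[0]
    xs = np.arange(n)[:, None]
    ys = np.arange(n)[None, :]
    # Two coordinate categories, split by the parity of x + y (one of the
    # classification schemes the description mentions).
    categories = [(xs + ys) % 2 == 0, (xs + ys) % 2 == 1]

    prediction, pos = best_match(block, ref, n)   # first prediction block
    coeffs = dct2(block - prediction)             # DCT of the first residual
    levels = np.zeros_like(coeffs)                # quantized coefficient levels
    done = np.zeros((n, n), dtype=bool)           # positions already quantized

    for target in categories:
        # Quantize the not-yet-quantized target category of the current matrix.
        levels[target] = np.round(coeffs[target] / qstep)
        done |= target
        # First reconstructed block: dequantize, inverse DCT, add prediction.
        recon = prediction + idct2(levels * qstep)
        # Motion estimation for the reconstructed block -> second prediction block.
        prediction, pos = best_match(recon, ref, n)
        # Residual against the new prediction becomes the current DCT matrix.
        coeffs = dct2(block - prediction)
        if done.all():
            break  # every coordinate category quantized: ready for entropy coding
    return levels, pos

rng = np.random.default_rng(1)
ref = rng.integers(0, 32, (8, 8)).astype(float)    # toy reference frame
block = rng.integers(0, 32, (4, 4)).astype(float)  # toy block to be encoded
levels, pos = iterative_interframe_encode(block, ref)
```

The returned `levels` matrix and the position `pos` of the final second prediction block stand in for the quantities that the method would pass to the entropy coder.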
Optionally, the step of obtaining, from the preset reference frame, the second prediction block corresponding to the first reconstructed block includes:

performing motion estimation for the first reconstructed block in the preset reference frame, to obtain the second prediction block corresponding to the first reconstructed block.
Optionally, the step of performing motion estimation for the first reconstructed block in the preset reference frame, to obtain the second prediction block corresponding to the first reconstructed block, includes:

performing motion estimation for the first reconstructed block in the preset reference frame using a predetermined interpolation filter, to obtain the second prediction block corresponding to the first reconstructed block;

or,

performing motion estimation for the first reconstructed block in the preset reference frame using a preset tap filter, to obtain the second prediction block corresponding to the first reconstructed block.
Optionally, the step of performing motion estimation for the first reconstructed block in the preset reference frame, to obtain the second prediction block corresponding to the first reconstructed block, includes:

when there is one preset reference frame, performing motion estimation for the first reconstructed block in that preset reference frame, to obtain the second prediction block corresponding to the first reconstructed block;

when there are multiple preset reference frames, performing motion estimation for the first reconstructed block in each of the preset reference frames to obtain multiple candidate prediction blocks, calculating the error between the first reconstructed block and each candidate, and taking the candidate with the smallest error as the second prediction block corresponding to the first reconstructed block.
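The multi-reference selection described above — compute the error of each candidate prediction block against the first reconstructed block and keep the minimum — can be sketched as follows. This is illustrative only: SAD is assumed as the error measure (the claim leaves the measure open), and the function name is invented here.

```python
import numpy as np

def select_second_prediction(reconstructed_block, candidates):
    # Error of each candidate prediction block against the first
    # reconstructed block (SAD assumed); keep the minimum.
    errors = [np.abs(reconstructed_block - c).sum() for c in candidates]
    best = int(np.argmin(errors))
    return best, candidates[best]

recon = np.full((4, 4), 5.0)
candidates = [np.full((4, 4), 9.0),   # error 64
              np.full((4, 4), 6.0),   # error 16
              np.full((4, 4), 0.0)]   # error 80
idx, second_prediction = select_second_prediction(recon, candidates)
```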
Optionally, the step of quantizing the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of the target category, to obtain the quantized DCT coefficient matrix, includes:

quantizing the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of the target category, and setting the target coefficients in the current DCT coefficient matrix to zero, to obtain the quantized DCT coefficient matrix; where the target coefficients are the coefficients other than those at the coordinate positions of the target category.
In a second aspect, an embodiment of the present invention further provides an inter-frame prediction encoding apparatus, the apparatus including:

a first acquisition module, configured to obtain, from a preset reference frame, a first prediction block corresponding to a data block to be encoded;

a first computing module, configured to calculate a first residual between the block to be encoded and the first prediction block, apply a discrete cosine transform (DCT) to the first residual to obtain a DCT coefficient matrix, and classify the current DCT coefficient matrix according to coordinate position;

a coefficient quantization module, configured to quantize the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of a target category, to obtain a quantized DCT coefficient matrix; where, before this step, the coefficients at the coordinate positions of the target category have not yet been quantized;

a second acquisition module, configured to determine, based on the quantized DCT coefficient matrix, a first reconstructed block of the block to be encoded, and obtain, from the preset reference frame, a second prediction block corresponding to the first reconstructed block;

a second computing module, configured to calculate a second residual between the block to be encoded and the second prediction block, and apply a DCT transform to the second residual to obtain a DCT coefficient matrix;

an execution module, configured to trigger the coefficient quantization module if the coefficients corresponding to at least one category of coordinate positions in the resulting DCT coefficient matrix have not been quantized; otherwise, to input the DCT coefficient matrix produced by the last quantization, the motion vector between a target second prediction block and the block to be encoded, the number of the block to be encoded, and the number of the preset reference frame containing the target second prediction block into an entropy coder for encoding; where the target second prediction block is the second prediction block corresponding to the last quantization.
Optionally, the second acquisition module is specifically configured to:

perform motion estimation for the first reconstructed block in the preset reference frame, to obtain the second prediction block corresponding to the first reconstructed block.
Optionally, the second acquisition module is specifically configured to:

perform motion estimation for the first reconstructed block in the preset reference frame using a predetermined interpolation filter, to obtain the second prediction block corresponding to the first reconstructed block;

or,

perform motion estimation for the first reconstructed block in the preset reference frame using a preset tap filter, to obtain the second prediction block corresponding to the first reconstructed block.
Optionally, the second acquisition module is specifically configured to:

when there is one preset reference frame, perform motion estimation for the first reconstructed block in that preset reference frame, to obtain the second prediction block corresponding to the first reconstructed block;

when there are multiple preset reference frames, perform motion estimation for the first reconstructed block in each of the preset reference frames to obtain multiple candidate prediction blocks, calculate the error between the first reconstructed block and each candidate, and take the candidate with the smallest error as the second prediction block corresponding to the first reconstructed block.
Optionally, the coefficient quantization module is specifically configured to:

quantize the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of the target category, and set the target coefficients in the current DCT coefficient matrix to zero, to obtain the quantized DCT coefficient matrix; where the target coefficients are the coefficients other than those at the coordinate positions of the target category.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another via the communication bus;

the memory is configured to store a computer program;

the processor, when executing the program stored in the memory, implements the inter-frame prediction encoding method described in the first aspect above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the inter-frame prediction encoding method described in the first aspect above.
In a fifth aspect, an embodiment of the present invention further provides a computer program product including instructions which, when run on a computer, cause the computer to execute the inter-frame prediction encoding method described in the first aspect above.
With the inter-frame prediction encoding method, apparatus, and electronic device provided by embodiments of the present invention, a first prediction block corresponding to the data block to be encoded is obtained from a preset reference frame; a DCT transform is applied to the residual between the block to be encoded and the prediction block; the resulting DCT coefficient matrix is classified; and then, based on that classification, several iterations of inter prediction are performed on the block to be encoded to obtain a second prediction block that better matches it. The DCT coefficient matrix produced by the last quantization in the iterative process, the motion vector between the second prediction block and the block to be encoded, the number of the block to be encoded, and the number of the preset reference frame containing the second prediction block are input into an entropy coder for encoding. Because the second prediction block obtained by iterative prediction matches the block to be encoded more closely, the residual is smaller, which improves inter-prediction encoding accuracy. Moreover, a smaller residual between the block to be encoded and the second prediction block reduces the bit rate of the encoding and can further improve the coding efficiency for the video image.

Of course, implementing any product or method of the present invention does not necessarily achieve all of the above advantages at the same time.
Description of the drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below.
Fig. 1 is a schematic diagram of DCT-transform video coding;

Fig. 2(a) is a schematic diagram of forward prediction of an image;

Fig. 2(b) is a schematic diagram of backward prediction of an image;

Fig. 2(c) is a schematic diagram of bidirectional prediction of an image;

Fig. 2(d) is a schematic diagram of symmetric prediction of an image;

Fig. 3 is a flow chart of an inter-frame prediction encoding method provided by an embodiment of the present invention;

Fig. 4 is a structural schematic diagram of an inter-frame prediction encoding apparatus provided by an embodiment of the present invention;

Fig. 5 is a structural schematic diagram of an electronic device provided by an embodiment of the present invention.
Specific implementation mode

The technical solutions in the embodiments of the present invention are described below with reference to the drawings in the embodiments of the present invention.

To solve the problems of the prior art, embodiments of the present invention provide an inter-frame prediction encoding method, an apparatus, and an electronic device, so that the obtained prediction block matches the block to be encoded more closely and the residual is smaller, improving the accuracy of inter-frame predictive coding of video images.
For the sake of clarity, related background on inter-prediction encoding is introduced first:

In video coding, there are three types of frames: I-frames, P-frames, and B-frames. An I-frame, also called an intra-coded frame, is an independent frame containing all of its own information; it uses only the information of already-encoded data blocks in the current frame as the prediction of a block to be encoded. A P-frame, also called an inter-prediction-coded frame, can be encoded only by referring to a previously encoded frame. A B-frame, also called a bidirectionally-prediction-coded frame, must refer not only to a previously encoded frame but also to a subsequently encoded frame in order to be encoded. In other words, P-frames and B-frames use frames other than the current frame in the temporal domain as the prediction of the blocks to be encoded in the current frame.
For inter-prediction encoding, a block to be encoded in a P-frame may be predicted by forward prediction of the image, as shown in Fig. 2(a). A block to be encoded in a B-frame may be predicted by forward prediction as shown in Fig. 2(a), backward prediction as shown in Fig. 2(b), bidirectional prediction as shown in Fig. 2(c), or symmetric prediction as shown in Fig. 2(d). In Fig. 2(a) to Fig. 2(d), the black block is the block to be encoded and the grey blocks are prediction blocks in the reference frames; the block to be encoded points to prediction blocks BLK0 and BLK1 in its reference frames via motion vectors MV0 and MV1, respectively.
An inter-frame prediction encoding method provided by an embodiment of the present invention is introduced first below.

It should be noted that the inter-frame prediction encoding method provided by the embodiments of the present invention can be applied to an electronic device; in a particular application, the electronic device may be a terminal device or a server, though it is of course not limited to these.

In addition, the image frame targeted by the inter-frame prediction encoding method provided by the embodiments of the present invention may be a P-frame or a B-frame, and the method can likewise be extended to the intra-prediction encoding of I-frames; it is not limited to any one type of image frame.
As shown in Fig. 3, which is a flow chart of an inter-frame prediction encoding method provided by an embodiment of the present invention, the method may include the following steps:
S301: obtain, from a preset reference frame, a first prediction block corresponding to the data block to be encoded.

In the embodiment of the present invention, when inter-prediction encoding is performed on a data block to be encoded in the current frame, motion estimation may be performed for that block in a preset reference frame to obtain the corresponding first prediction block. Specifically, the process of obtaining the first prediction block corresponding to the block to be encoded from the preset reference frame may be: performing motion estimation for the block to be encoded in the preset reference frame, to obtain the first prediction block corresponding to that block.
Here, the preset reference frame is the image frame referenced when encoding the current frame. When the current frame is encoded, it must be divided into multiple blocks before encoding can proceed, and a macroblock consists of an integer number of blocks. In the embodiments of the present invention, the data block to be encoded may be a block to be encoded in the current frame, or a macroblock to be encoded in the current frame; those skilled in the art can decide according to actual needs, and the present invention places no restriction on this.
S302: calculate a first residual between the block to be encoded and the first prediction block, apply a discrete cosine transform (DCT) to the first residual to obtain a DCT coefficient matrix, and classify the current DCT coefficient matrix according to coordinate position.

After the first prediction block corresponding to the block to be encoded is obtained, the first prediction block is subtracted from the block to be encoded to calculate the first residual. Then a discrete cosine transform is applied to the first residual to obtain a DCT coefficient matrix, and the resulting current DCT coefficient matrix is classified according to coordinate position.

Specifically, the process of applying a DCT to the first residual to obtain the DCT coefficient matrix can follow the prior art, and is not repeated here.
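As an illustration of this step, the sketch below subtracts a first prediction block from the block to be encoded and applies an orthonormal 2-D DCT built from the 1-D DCT-II basis. This is an illustrative assumption: the patent defers the DCT implementation to the prior art, and `first_residual_dct` is a name invented here.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal 1-D DCT-II basis matrix.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct2(block):
    # 2-D DCT: apply the 1-D basis to rows and columns.
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def first_residual_dct(block_to_encode, first_prediction):
    # S302: first residual = block to be encoded minus first prediction
    # block, then the DCT of that residual.
    residual = block_to_encode.astype(float) - first_prediction.astype(float)
    return dct2(residual)

block = np.arange(16, dtype=float).reshape(4, 4)  # toy block to be encoded
pred = np.full((4, 4), 7.0)                       # toy first prediction block
coeffs = first_residual_dct(block, pred)
```

Because the basis is orthonormal, the inverse transform is simply the transposed basis applied on both sides, which the reconstruction step S304 relies on.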
In the embodiments of the present invention, classifying the current DCT coefficient matrix according to coordinate position may be done as follows: for the x and y coordinates of the current DCT coefficient matrix, the coordinate positions where the sum x + y is even form one category, and the coordinate positions where x + y is odd form another category. Alternatively: the positions in the current DCT coefficient matrix where x is even and y is even form one category, those where x is even and y is odd form another, those where x is odd and y is even form a third, and those where x is odd and y is odd form a fourth, and so on. Of course, the embodiments of the present invention are only illustrated with the above classification schemes; in practice, the way of classifying the current DCT coefficient matrix according to coordinate position is not limited to these.
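The two classification schemes just described can be expressed compactly as label matrices over the coefficient grid. The sketch below is illustrative only (the function names are invented here):

```python
import numpy as np

def classify_by_sum_parity(n):
    # One category where x + y is even (label 0), another where it is
    # odd (label 1).
    x = np.arange(n)[:, None]
    y = np.arange(n)[None, :]
    return (x + y) % 2

def classify_by_xy_parity(n):
    # Four categories by the parity of x and of y (labels 0..3).
    x = np.arange(n)[:, None]
    y = np.arange(n)[None, :]
    return 2 * (x % 2) + (y % 2)

labels2 = classify_by_sum_parity(4)   # checkerboard split of a 4x4 grid
labels4 = classify_by_xy_parity(4)    # four interleaved sub-lattices
```

On a 4x4 grid, the two-way split puts eight positions in each category, and the four-way split puts four positions in each.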
S303: quantize the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of a target category, to obtain a quantized DCT coefficient matrix; where, before this step, the coefficients at the coordinate positions of the target category have not yet been quantized.

After the current DCT coefficient matrix has been classified according to coordinate position, the not-yet-quantized coefficients in the current DCT coefficient matrix that belong to the coordinate positions of the target category are quantized, producing the quantized DCT coefficient matrix. Specifically, the coordinate positions of the target category may be any one category of coordinate positions after the current DCT coefficient matrix is classified; those skilled in the art can define the target category according to actual needs, and the present invention places no restriction on this.

The process of quantizing the coefficients corresponding to the coordinate positions of the target category can follow the prior art, and is not repeated here.
Further, the step of quantizing the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of the target category, to obtain the quantized DCT coefficient matrix, may include:

quantizing the coefficients in the current DCT coefficient matrix that belong to the coordinate positions of the target category, and setting the target coefficients in the current DCT coefficient matrix to zero, to obtain the quantized DCT coefficient matrix; where the target coefficients are the coefficients other than those at the coordinate positions of the target category.

That is, after the current DCT coefficient matrix has been classified according to coordinate position, the not-yet-quantized coefficients at the coordinate positions of the target category are quantized, and all coefficients other than those at the coordinate positions of the target category are set to zero, producing the quantized DCT coefficient matrix.
For example, suppose the current DCT coefficient matrix is divided into two categories by coordinate position: one category consists of the coefficients at positions where the sum x + y is even, the other of the coefficients at positions where x + y is odd, and neither category has yet been quantized. Taking the coefficients at positions where x + y is even as the target category, the coefficients in the current DCT coefficient matrix at positions where x + y is even are quantized, and the coefficients outside the target category, i.e. those at positions where x + y is odd, are all set to zero, yielding the quantized DCT coefficient matrix.
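This example — quantize the even-sum category and zero everything else — might look like the sketch below. As a simplifying assumption, the quantizer here folds quantization and reconstruction into `round(v / qstep) * qstep`; the patent leaves the actual quantizer to the prior art.

```python
import numpy as np

def quantize_target_category(coeffs, target_mask, qstep):
    # Quantize the coefficients at target-category positions; every other
    # coefficient is set to zero. Quantization and dequantization are
    # folded together here for illustration.
    out = np.zeros_like(coeffs)
    out[target_mask] = np.round(coeffs[target_mask] / qstep) * qstep
    return out

coeffs = np.array([[8.0, 3.0, 5.0, 1.0],
                   [2.0, 9.0, 4.0, 6.0],
                   [7.0, 0.5, 3.5, 2.5],
                   [1.5, 4.5, 6.5, 8.5]])
x = np.arange(4)[:, None]
y = np.arange(4)[None, :]
target = (x + y) % 2 == 0            # target category: x + y even
q = quantize_target_category(coeffs, target, qstep=2.0)
```

In `q`, every odd-sum position is zero and every even-sum position holds the nearest multiple of the quantization step.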
S304 determines the first reconstructed block of the data to be encoded block based on the DCT coefficient matrix after the quantization, and
Corresponding second prediction block of first reconstructed block is obtained from preset reference frame.
In this embodiment of the present invention, inverse quantization is performed on the coefficients of the quantized DCT coefficient matrix to obtain a rebuilt DCT coefficient matrix, and an inverse DCT transform is then performed on the rebuilt matrix to obtain the reconstruction of the first residual. The reconstructed first residual is added to the first prediction block to obtain the first reconstructed block of the data block to be encoded, and the second prediction block corresponding to the first reconstructed block is then obtained from the preset reference frame.
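Assuming an orthonormal 2-D DCT and folding the prior-art inverse quantization into the coefficient matrix (i.e. the coefficients below are already rescaled), the reconstruction in step S304 can be sketched with SciPy's transforms; all block values and names here are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def reconstruct_block(rebuilt_dct, prediction_block):
    """Inverse-DCT the rebuilt coefficient matrix to recover the first
    residual, then add the first prediction block (step S304 sketch)."""
    residual = idctn(rebuilt_dct, norm="ortho")   # reconstructed first residual
    return residual + prediction_block            # first reconstructed block

pred = np.full((4, 4), 10.0)                      # hypothetical first prediction block
orig = pred + np.eye(4)                           # hypothetical source block
residual_dct = dctn(orig - pred, norm="ortho")    # what the encoder transforms
recon = reconstruct_block(residual_dct, pred)     # lossless here: no quantization
```

With quantization applied, `recon` would only approximate `orig`; the example is lossless purely to keep the round trip checkable.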
The processes of inverse-quantizing the coefficients of the quantized DCT coefficient matrix and of applying the inverse DCT transform to the rebuilt matrix may follow prior-art implementations, and are not repeated here.
Specifically, the step of obtaining the second prediction block corresponding to the first reconstructed block from the preset reference frame may include:
performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block.
In inter-frame prediction coding, the current frame is correlated to some degree with the scene content of nearby reference frames. The current frame can therefore be divided into blocks or macroblocks, and for each block or macroblock a matching position is searched for in the reference frame image. The relative displacement between the two spatial positions is the motion vector, and the process of obtaining the motion vector is called motion estimation.
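The search described above can be sketched as an exhaustive full-pel block-matching routine with the sum of absolute differences (SAD) as the matching cost. The function name, the SAD cost, and the search range are illustrative assumptions; the patent leaves the concrete search method open (any of Fig. 2(a) to Fig. 2(d)).

```python
import numpy as np

def full_search_me(block, ref_frame, y0, x0, search_range=2):
    """Exhaustive full-pel block matching: slide the block over a window of
    the reference frame and keep the displacement (dy, dx) with the
    smallest SAD.  (dy, dx) is the motion vector; illustrative sketch."""
    h, w = block.shape
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue  # candidate window falls outside the reference frame
            sad = np.abs(ref_frame[y:y + h, x:x + w] - block).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

ref = np.zeros((8, 8))
ref[3:5, 4:6] = 7.0              # feature located at (3, 4) in the reference
blk = ref[3:5, 4:6].copy()       # current block matches that feature exactly
mv, sad = full_search_me(blk, ref, y0=2, x0=3)
```

Here the best match sits one pixel down and one pixel right of the block's own position, so the returned motion vector is (1, 1) with zero residual cost.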
In this embodiment of the present invention, motion estimation is performed on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block, together with the motion vector between the first reconstructed block and the second prediction block. The specific motion estimation method may be any of those shown in Fig. 2(a) to Fig. 2(d), and may be set by those skilled in the art according to actual demand; the present invention places no restriction on it.
Further, the step of performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block may include:
using a predetermined interpolation filter, performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block;
or,
using a preset tap filter, performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block.
Specifically, a predetermined interpolation filter or a preset tap filter may be used to refine the motion estimation of the first reconstructed block in the preset reference frame to sub-pixel precision, thereby obtaining the second prediction block corresponding to the first reconstructed block. Motion estimation has an associated pixel precision, and different pixel precisions yield motion vectors of different accuracy. In practice, the predetermined interpolation filter and the preset tap filter can be set by those skilled in the art according to actual demand.
For example, when performing motion estimation on the first reconstructed block in the preset reference frame, a prior-art implementation would use quarter-pixel precision, whereas the motion estimation using the predetermined interpolation filter in this embodiment of the present invention refines the estimate to one-sixteenth-pixel precision, obtaining a more accurate second prediction block. The predetermined interpolation filter may be a bilinear interpolation filter or another interpolation filter; the present application places no restriction on it.
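A bilinear interpolation filter, one of the candidates named above, can be sketched as a sampler at fractional pixel coordinates; sub-pixel motion estimation then evaluates the matching cost at such fractional displacements (one-sixteenth-pel precision simply means fractional steps of 1/16). The frame values and function name are illustrative.

```python
import numpy as np

def bilinear_sample(frame, y, x):
    """Sample a frame at fractional coordinates (y, x) by bilinear
    interpolation over the four surrounding integer-pel neighbours."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0                     # fractional offsets in [0, 1)
    p = frame[y0:y0 + 2, x0:x0 + 2]             # 2x2 neighbourhood
    return ((1 - fy) * (1 - fx) * p[0, 0] + (1 - fy) * fx * p[0, 1]
            + fy * (1 - fx) * p[1, 0] + fy * fx * p[1, 1])

frame = np.array([[0.0, 4.0], [8.0, 12.0]])
mid = bilinear_sample(frame, 0.5, 0.5)          # average of the four neighbours
```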
Alternatively, a preset tap filter may be used, for example an 8-tap filter in each of the horizontal and vertical directions. Eight unknowns are set for each direction, motion estimation is performed on the first reconstructed block in the preset reference frame, the pixels of the prediction block in the reference frame are matched against the pixels of the first reconstructed block, and the optimal horizontal and vertical 8-tap filters are obtained by least-squares fitting. The fitted horizontal and vertical 8-tap filters are then used to refine the motion estimation of the first reconstructed block in the preset reference frame to sub-pixel precision, yielding a more accurate second prediction block.
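The least-squares fitting step can be illustrated in one dimension: each observation pairs a window of reference pixels with the reconstructed pixel it should predict, and `np.linalg.lstsq` recovers the taps that minimise the squared error. The separable horizontal/vertical structure, the noiseless synthetic data, and the particular tap values are all illustrative simplifications of the patent's 8-tap fitting.

```python
import numpy as np

def fit_taps(ref_windows, target):
    """Least-squares fit of filter taps: each row of ref_windows holds the
    reference pixels that should combine into one target pixel."""
    taps, *_ = np.linalg.lstsq(ref_windows, target, rcond=None)
    return taps

rng = np.random.default_rng(0)
true_taps = np.array([-1, 2, -4, 9, 9, -4, 2, -1], dtype=float) / 12.0
windows = rng.normal(size=(64, 8))      # 64 observations of 8 reference pixels
targets = windows @ true_taps           # noiseless, so exact recovery is expected
fitted = fit_taps(windows, targets)
```

In the patent's setting the targets would be pixels of the first reconstructed block and the windows would come from the reference frame, with one such fit per direction.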
Further, the step of performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block may also include:
when there is one preset reference frame, performing motion estimation on the first reconstructed block in that preset reference frame to obtain the second prediction block corresponding to the first reconstructed block;
when there are multiple preset reference frames, performing motion estimation on the first reconstructed block in each of the multiple preset reference frames to obtain multiple prediction blocks corresponding to the first reconstructed block, calculating the error between the first reconstructed block and each prediction block, and taking the prediction block with the smallest error relative to the first reconstructed block as the second prediction block corresponding to the first reconstructed block.
In this embodiment of the present invention, neither the type of the image frame to be encoded nor the number of reference frames it references is restricted.
Thus, when there is a single preset reference frame, motion estimation on the first reconstructed block need only be performed in that one frame, directly yielding the second prediction block corresponding to the first reconstructed block.
When there are multiple preset reference frames, motion estimation on the first reconstructed block is performed in each of them, so that one prediction block is obtained per reference frame. The errors between the first reconstructed block and these prediction blocks are then calculated separately, and the prediction block with the smallest error is taken as the second prediction block corresponding to the first reconstructed block, so that the selected second prediction block is, among all candidates, the one that best matches the first reconstructed block.
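The multi-reference selection rule above reduces to an argmin over per-frame prediction errors. A sketch assuming a sum-of-squared-differences error measure (the patent does not fix the error measure) and illustrative names:

```python
import numpy as np

def best_prediction(recon_block, predictions):
    """Among the candidate prediction blocks found in multiple reference
    frames, pick the one with the smallest error against the first
    reconstructed block; also return the index of its reference frame."""
    errors = [float(np.sum((recon_block - p) ** 2)) for p in predictions]
    idx = int(np.argmin(errors))
    return predictions[idx], idx

recon = np.ones((2, 2))
preds = [np.zeros((2, 2)), np.ones((2, 2)) * 0.9, np.ones((2, 2))]
best, ref_idx = best_prediction(recon, preds)   # third candidate matches exactly
```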
S305: calculate the second residual between the data block to be encoded and the second prediction block, and perform a DCT transform on the second residual to obtain a DCT coefficient matrix.
After the second prediction block corresponding to the first reconstructed block is obtained, the second residual between the data block to be encoded and the second prediction block is calculated, and a DCT transform is performed on it to obtain a DCT coefficient matrix, thereby realizing the second prediction of the data block to be encoded.
S306: judge whether the coefficients corresponding to at least one class of coordinate positions in the obtained DCT coefficient matrix have not yet been quantized; if so, return to step S303; if not, execute step S307.
After the new DCT coefficient matrix is obtained, it is judged whether the coefficients corresponding to at least one class of coordinate positions remain unquantized. If such a class exists, the procedure returns to step S303. Otherwise, the coefficients of every coordinate-position class in the obtained DCT coefficient matrix have been quantized, and step S307 is executed.
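The S303 to S306 loop can be outlined as below. The two-class parity split, the uniform quantizer, and the trivial stand-in `zero_predictor` (replacing real motion estimation) are all assumptions for illustration; the sketch shows only the structure of the iteration, with one coordinate class quantized per pass until the check of step S306 finds no unquantized class.

```python
import numpy as np
from scipy.fft import dctn, idctn

def parity_quantize(coeffs, parity, qstep):
    """Quantize the class whose coordinate sum x + y has `parity`; zero the rest."""
    ys, xs = np.indices(coeffs.shape)
    mask = ((xs + ys) % 2) == parity
    out = np.zeros_like(coeffs)
    out[mask] = np.round(coeffs[mask] / qstep) * qstep
    return out

def iterative_coding(block, predictor, qstep=2.0):
    """Per pass: transform the current residual, quantize the target class,
    rebuild a reference block, and ask the predictor for a refined
    prediction.  Stops when both classes have been quantized (S306)."""
    prediction = predictor(block)               # first prediction block (S302 analogue)
    q_coeffs = None
    for parity in (0, 1):                       # the two coordinate classes
        residual = block - prediction
        q_coeffs = parity_quantize(dctn(residual, norm="ortho"), parity, qstep)
        recon = idctn(q_coeffs, norm="ortho") + prediction   # first reconstructed block
        prediction = predictor(recon)           # refined (second) prediction block
    # The last pass's q_coeffs, prediction, and motion data would go to the
    # entropy coder (step S307).
    return q_coeffs, prediction

block = np.arange(16, dtype=float).reshape(4, 4)
zero_predictor = lambda b: np.zeros_like(b)     # trivial stand-in predictor
coeffs, pred = iterative_coding(block, zero_predictor)
```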
S307: input into the entropy coder, for encoding, the quantized DCT coefficient matrix of the last quantization, the motion vector between the target second prediction block and the data block to be encoded, the number of the data block to be encoded, and the number of the preset reference frame containing the target second prediction block; the target second prediction block is the second prediction block corresponding to the last quantization.
That is, once the coefficients of every coordinate-position class in the obtained DCT coefficient matrix have been quantized, the quantized DCT coefficient matrix of the final quantization, the motion vector between the data block to be encoded and the second prediction block of the final quantization, the number of the data block to be encoded, and the number of the preset reference frame containing that second prediction block are input into the entropy coder for encoding.
In the inter-frame prediction coding method provided by this embodiment of the present invention, the first prediction block corresponding to the data block to be encoded is obtained from a preset reference frame; a DCT transform is applied to the residual between the data block and the prediction block; the resulting DCT coefficient matrix is classified; and, based on this classification, multiple iterations of inter-frame prediction are applied to the data block to obtain a second prediction block that matches it more closely. The quantized DCT coefficient matrix of the final quantization in the iterative process, the motion vector between the second prediction block and the data block, the number of the data block, and the number of the preset reference frame containing the second prediction block are then input into the entropy coder for encoding. Because the iteratively predicted second prediction block matches the data block more closely, the residual is smaller, which improves the accuracy of inter-frame prediction coding; and the smaller residual between the data block and the second prediction block reduces the coding bit rate, which can further improve the coding efficiency of the video image.
Corresponding to the above method embodiment, an embodiment of the present invention provides an inter-frame prediction coding device. As shown in Fig. 4, the device may include:
a first acquisition module 401, configured to obtain, from a preset reference frame, a first prediction block corresponding to a data block to be encoded;
a first computing module 402, configured to calculate a first residual between the data block to be encoded and the first prediction block, perform a discrete cosine transform (DCT) on the first residual to obtain a DCT coefficient matrix, and classify the current DCT coefficient matrix by coordinate position;
a coefficient quantization module 403, configured to quantize the coefficients in the current DCT coefficient matrix corresponding to the coordinate positions of a target category, obtaining a quantized DCT coefficient matrix, wherein the coefficients corresponding to the coordinate positions of the target category have not been quantized;
a second acquisition module 404, configured to determine, based on the quantized DCT coefficient matrix, a first reconstructed block of the data block to be encoded, and obtain, from a preset reference frame, a second prediction block corresponding to the first reconstructed block;
a second computing module 405, configured to calculate a second residual between the data block to be encoded and the second prediction block, and perform a DCT transform on the second residual to obtain a DCT coefficient matrix; and
an execution module 406, configured to trigger the coefficient quantization module 403 if coefficients corresponding to at least one class of coordinate positions in the obtained DCT coefficient matrix have not been quantized, and otherwise to input into an entropy coder, for encoding, the quantized DCT coefficient matrix of the last quantization, the motion vector between a target second prediction block and the data block to be encoded, the number of the data block to be encoded, and the number of the preset reference frame containing the target second prediction block, wherein the target second prediction block is the second prediction block corresponding to the last quantization.
With the inter-frame prediction coding device provided by this embodiment of the present invention, the first prediction block corresponding to the data block to be encoded is obtained from a preset reference frame; a DCT transform is applied to the residual between the data block and the prediction block; the resulting DCT coefficient matrix is classified; and, based on this classification, multiple iterations of inter-frame prediction are applied to the data block to obtain a second prediction block that matches it more closely. The quantized DCT coefficient matrix of the final quantization in the iterative process, the motion vector between the second prediction block and the data block, the number of the data block, and the number of the preset reference frame containing the second prediction block are then input into the entropy coder for encoding. Because the iteratively predicted second prediction block matches the data block more closely, the residual is smaller, which improves the accuracy of inter-frame prediction coding; and the smaller residual between the data block and the second prediction block reduces the coding bit rate, which can further improve the coding efficiency of the video image.
It should be noted that the device of this embodiment of the present invention corresponds to the inter-frame prediction coding method shown in Fig. 3; all embodiments of the method shown in Fig. 3 are applicable to the device and achieve the same advantageous effects.
The second acquisition module described above is specifically configured to perform motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block.
The second acquisition module is further specifically configured to: using a predetermined interpolation filter, perform motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block; or, using a preset tap filter, perform motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block.
The second acquisition module is further specifically configured to: when there is one preset reference frame, perform motion estimation on the first reconstructed block in that preset reference frame to obtain the second prediction block corresponding to the first reconstructed block; and when there are multiple preset reference frames, perform motion estimation on the first reconstructed block in each of the multiple preset reference frames to obtain multiple prediction blocks corresponding to the first reconstructed block, calculate the error between the first reconstructed block and each prediction block, and take the prediction block with the smallest error relative to the first reconstructed block as the second prediction block corresponding to the first reconstructed block.
The coefficient quantization module is specifically configured to quantize the coefficients in the current DCT coefficient matrix corresponding to the coordinate positions of the target category, and set the target coefficients in the current DCT coefficient matrix to zero, obtaining the quantized DCT coefficient matrix, where the target coefficients are the coefficients other than those corresponding to the coordinate positions of the target category.
An embodiment of the present invention further provides an electronic device. As shown in Fig. 5, it includes a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 communicate with one another via the communication bus 504.
The memory 503 is configured to store a computer program.
The processor 501 is configured to implement the method provided by the embodiments of the present invention when executing the program stored in the memory 503.
With the electronic device provided by this embodiment of the present invention, the first prediction block corresponding to the data block to be encoded is obtained from a preset reference frame; a DCT transform is applied to the residual between the data block and the prediction block; the resulting DCT coefficient matrix is classified; and, based on this classification, multiple iterations of inter-frame prediction are applied to the data block to obtain a second prediction block that matches it more closely. The quantized DCT coefficient matrix of the final quantization in the iterative process, the motion vector between the second prediction block and the data block, the number of the data block, and the number of the preset reference frame containing the second prediction block are then input into the entropy coder for encoding. Because the iteratively predicted second prediction block matches the data block more closely, the residual is smaller, which improves the accuracy of inter-frame prediction coding; and the smaller residual between the data block and the second prediction block reduces the coding bit rate, which can further improve the coding efficiency of the video image.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, which does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include random access memory (RAM) and may also include non-volatile memory, for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment provided by the present invention, a computer-readable storage medium is further provided. Instructions are stored in the computer-readable storage medium which, when run on a computer, cause the computer to execute any of the inter-frame prediction coding methods of the above embodiments, achieving the same technical effect.
In another embodiment provided by the present invention, a computer program product including instructions is further provided which, when run on a computer, causes the computer to execute any of the inter-frame prediction coding methods of the above embodiments, achieving the same technical effect.
The above embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions described in the embodiments of the present invention are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another by wire (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium that a computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state disk (SSD)).
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes it.
Each embodiment in this specification is described in a related manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the device embodiment is substantially similar to the method embodiment, its description is relatively brief, and the relevant parts may refer to the description of the method embodiment.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit its scope. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention is included within its scope of protection.
Claims (11)
1. An inter-frame prediction coding method, characterized by comprising:
obtaining, from a preset reference frame, a first prediction block corresponding to a data block to be encoded;
calculating a first residual between the data block to be encoded and the first prediction block, performing a discrete cosine transform (DCT) on the first residual to obtain a DCT coefficient matrix, and classifying the current DCT coefficient matrix by coordinate position;
quantizing the coefficients in the current DCT coefficient matrix corresponding to the coordinate positions of a target category, to obtain a quantized DCT coefficient matrix, wherein the coefficients corresponding to the coordinate positions of the target category have not been quantized;
determining, based on the quantized DCT coefficient matrix, a first reconstructed block of the data block to be encoded, and obtaining, from a preset reference frame, a second prediction block corresponding to the first reconstructed block;
calculating a second residual between the data block to be encoded and the second prediction block, and performing a DCT transform on the second residual to obtain a DCT coefficient matrix;
if coefficients corresponding to at least one class of coordinate positions in the obtained DCT coefficient matrix have not been quantized, executing the above step of quantizing the coefficients in the current DCT coefficient matrix corresponding to the coordinate positions of the target category; otherwise,
inputting into an entropy coder, for encoding, the quantized DCT coefficient matrix of the last quantization, a motion vector between a target second prediction block and the data block to be encoded, a number of the data block to be encoded, and a number of the preset reference frame containing the target second prediction block, wherein the target second prediction block is the second prediction block corresponding to the last quantization.
2. The method according to claim 1, characterized in that the step of obtaining, from the preset reference frame, the second prediction block corresponding to the first reconstructed block comprises:
performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block.
3. The method according to claim 2, characterized in that the step of performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block comprises:
using a predetermined interpolation filter, performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block;
or,
using a preset tap filter, performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block.
4. The method according to claim 2, characterized in that the step of performing motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block comprises:
when there is one preset reference frame, performing motion estimation on the first reconstructed block in said preset reference frame to obtain the second prediction block corresponding to the first reconstructed block;
when there are multiple preset reference frames, performing motion estimation on the first reconstructed block in each of the multiple preset reference frames to obtain multiple prediction blocks corresponding to the first reconstructed block, calculating the error between the first reconstructed block and each prediction block, and taking the prediction block with the smallest error relative to the first reconstructed block as the second prediction block corresponding to the first reconstructed block.
5. The method according to any one of claims 1 to 4, characterized in that the step of quantizing the coefficients in the current DCT coefficient matrix corresponding to the coordinate positions of the target category to obtain the quantized DCT coefficient matrix comprises:
quantizing the coefficients in the current DCT coefficient matrix corresponding to the coordinate positions of the target category, and setting the target coefficients in the current DCT coefficient matrix to zero, to obtain the quantized DCT coefficient matrix, wherein the target coefficients are the coefficients other than those corresponding to the coordinate positions of the target category.
6. An inter-frame prediction coding device, characterized by comprising:
a first acquisition module, configured to obtain, from a preset reference frame, a first prediction block corresponding to a data block to be encoded;
a first computing module, configured to calculate a first residual between the data block to be encoded and the first prediction block, perform a discrete cosine transform (DCT) on the first residual to obtain a DCT coefficient matrix, and classify the current DCT coefficient matrix by coordinate position;
a coefficient quantization module, configured to quantize the coefficients in the current DCT coefficient matrix corresponding to the coordinate positions of a target category, obtaining a quantized DCT coefficient matrix, wherein the coefficients corresponding to the coordinate positions of the target category have not been quantized;
a second acquisition module, configured to determine, based on the quantized DCT coefficient matrix, a first reconstructed block of the data block to be encoded, and obtain, from a preset reference frame, a second prediction block corresponding to the first reconstructed block;
a second computing module, configured to calculate a second residual between the data block to be encoded and the second prediction block, and perform a DCT transform on the second residual to obtain a DCT coefficient matrix; and
an execution module, configured to trigger the coefficient quantization module if coefficients corresponding to at least one class of coordinate positions in the obtained DCT coefficient matrix have not been quantized, and otherwise to input into an entropy coder, for encoding, the quantized DCT coefficient matrix of the last quantization, a motion vector between a target second prediction block and the data block to be encoded, a number of the data block to be encoded, and a number of the preset reference frame containing the target second prediction block, wherein the target second prediction block is the second prediction block corresponding to the last quantization.
7. The device according to claim 6, characterized in that the second acquisition module is specifically configured to:
perform motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block.
8. The device according to claim 7, characterized in that the second acquisition module is specifically configured to:
using a predetermined interpolation filter, perform motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block;
or,
using a preset tap filter, perform motion estimation on the first reconstructed block in the preset reference frame to obtain the second prediction block corresponding to the first reconstructed block.
9. The device according to claim 7, characterized in that the second acquisition module is specifically configured to:
when there is one preset reference frame, perform motion estimation on the first reconstructed block in said preset reference frame to obtain the second prediction block corresponding to the first reconstructed block;
when there are multiple preset reference frames, perform motion estimation on the first reconstructed block in each of the multiple preset reference frames to obtain multiple prediction blocks corresponding to the first reconstructed block, calculate the error between the first reconstructed block and each prediction block, and take the prediction block with the smallest error relative to the first reconstructed block as the second prediction block corresponding to the first reconstructed block.
10. The device according to any one of claims 6-9, wherein the coefficient quantization module is specifically configured to:
quantize the coefficients at the coordinate positions belonging to the target category in the current DCT coefficient matrix, and set the target coefficients in the current DCT coefficient matrix to zero, to obtain the quantized DCT coefficient matrix; the target coefficients are the coefficients other than those at the coordinate positions belonging to the target category.
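Claim 10's quantization can be sketched as masking the DCT matrix: positions in the target category are divided by a quantization step and rounded, while every other position is forced to zero. The mask shape (a low-frequency top-left corner) and the step value below are illustrative assumptions, not specified by the claim:

```python
import numpy as np

def quantize_target_category(dct, target_mask, qstep):
    """Quantize only the coefficients whose coordinates belong to the
    target category (mask == True); zero every other coefficient."""
    out = np.zeros_like(dct, dtype=np.int64)
    out[target_mask] = np.round(dct[target_mask] / qstep).astype(np.int64)
    return out

# Example: treat the top-left 2x2 low-frequency corner of a 4x4 block
# as the target category (an assumed choice for illustration).
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
```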
11. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program;
the processor is configured, when executing the program stored in the memory, to implement the method steps of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810609309.9A CN108737836A (en) | 2018-06-13 | 2018-06-13 | A kind of interframe prediction encoding method, device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108737836A true CN108737836A (en) | 2018-11-02 |
Family
ID=63929492
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810609309.9A Pending CN108737836A (en) | 2018-06-13 | 2018-06-13 | A kind of interframe prediction encoding method, device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108737836A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101783957A (en) * | 2010-03-12 | 2010-07-21 | 清华大学 | Method and device for predictive encoding of video |
CN104702962A (en) * | 2015-03-03 | 2015-06-10 | 华为技术有限公司 | Intra-frame coding and decoding method, coder and decoder |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112422964A (en) * | 2020-10-30 | 2021-02-26 | 西安万像电子科技有限公司 | Progressive coding method and device |
CN112422964B (en) * | 2020-10-30 | 2024-05-17 | 西安万像电子科技有限公司 | Progressive coding method and device |
CN117490002A (en) * | 2023-12-28 | 2024-02-02 | 成都同飞科技有限责任公司 | Water supply network flow prediction method and system based on flow monitoring data |
CN117490002B (en) * | 2023-12-28 | 2024-03-08 | 成都同飞科技有限责任公司 | Water supply network flow prediction method and system based on flow monitoring data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104539966B (en) | Image prediction method and relevant apparatus | |
US10404993B2 (en) | Picture prediction method and related apparatus | |
CN104661031B (en) | For encoding video pictures and method, encoding device and the decoding device of decoding | |
KR102073638B1 (en) | Picture prediction method and picture prediction device | |
CN103931187B (en) | Image encoding apparatus, method for encoding images, image decoding apparatus, picture decoding method | |
CN109005408A (en) | A kind of intra-frame prediction method, device and electronic equipment | |
CN107113425A (en) | Method for video coding and equipment and video encoding/decoding method and equipment | |
CN106375764A (en) | Directional intra prediction and block copy prediction combined video intra coding method | |
CN108848381A (en) | Method for video coding, coding/decoding method, device, computer equipment and storage medium | |
CN108769681A (en) | Video coding, coding/decoding method, device, computer equipment and storage medium | |
CN104539949B (en) | The method and device of quick partitioning based on edge direction in HEVC screen codings | |
CN108134939A (en) | A kind of method for estimating and device | |
CN109660800A (en) | Method for estimating, device, electronic equipment and computer readable storage medium | |
CN103108188A (en) | Video steganalysis method based on partial cost non-optimal statistics | |
Vajgl et al. | Advanced F‐Transform‐Based Image Fusion | |
JP2019515606A (en) | Intra prediction video coding method and apparatus | |
CN106034236A (en) | Method, Apparatus and coder for selecting optimal reference frame in HEVC | |
CN108737836A (en) | A kind of interframe prediction encoding method, device and electronic equipment | |
CN110213576A (en) | Method for video coding, video coding apparatus, electronic equipment and storage medium | |
CN108574849A (en) | DCT inverse transformation methods, inverter, electronic equipment and storage medium | |
CN112261413B (en) | Video encoding method, encoding device, electronic device, and storage medium | |
CN109688411B (en) | Video coding rate distortion cost estimation method and device | |
CN109660806A (en) | A kind of coding method and device | |
CN110351560A (en) | A kind of coding method, system and electronic equipment and storage medium | |
CN110035288A (en) | Method, code device and the storage medium that video sequence is encoded |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181102 |