CN109510985B - Video coding method and device thereof - Google Patents

Video coding method and device thereof

Info

Publication number
CN109510985B
Authority
CN
China
Prior art keywords
coded
coding
macro block
coding method
residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811260535.7A
Other languages
Chinese (zh)
Other versions
CN109510985A (en)
Inventor
田林海
岳庆冬
李雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Lianhai Network Technology Co.,Ltd.
Original Assignee
Hangzhou Lianhai Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Lianhai Network Technology Co ltd filed Critical Hangzhou Lianhai Network Technology Co ltd
Priority to CN201811260535.7A priority Critical patent/CN109510985B/en
Publication of CN109510985A publication Critical patent/CN109510985A/en
Application granted granted Critical
Publication of CN109510985B publication Critical patent/CN109510985B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to a video coding method and an apparatus thereof. The method comprises: coding a macroblock to be coded according to at least two groups of coding methods respectively to obtain at least two coded streams; operating on the at least two coded streams respectively to form coding information corresponding to each coded stream; and obtaining the final coding method of the macroblock to be coded according to the coding information respectively corresponding to the at least two coded streams. Compared with the prior art, the invention codes the macroblock with a plurality of coding methods, which can improve the image coding compression ratio for image coding blocks of different scenes and further reduces the theoretical limit entropy of compression.

Description

Video coding method and device thereof
Technical Field
The present invention relates to the field of compression technologies, and in particular, to a video encoding method and apparatus.
Background
With the rapid development of internet technology and the growing richness of material and cultural life, demand for video over the internet, especially high-definition video, keeps increasing. The data volume of high-definition video is very large, so compressing and coding it is the first problem that must be solved if high-definition video is to be transmitted over the bandwidth-limited internet. At present, two international organizations specialize in formulating video coding standards: the Moving Picture Experts Group (MPEG) under the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), and the Video Coding Experts Group (VCEG) under the International Telecommunication Union Telecommunication Standardization Sector (ITU-T). MPEG, established in 1986, is specifically responsible for standards in the multimedia field, mainly for storage, broadcast television, and streaming media over the internet or wireless networks. ITU-T mainly defines video coding standards for real-time video communication, such as video telephony and video conferencing.
Over the past few decades, video coding standards for various applications have been established internationally, mainly including: the MPEG-1 standard for the Video Compact Disc (VCD), the MPEG-2 standard for the Digital Versatile Disc (DVD) and Digital Video Broadcasting (DVB), the H.261 standard for video conferencing, the H.263 and H.264 standards, the MPEG-4 standard that allows coding of objects of arbitrary shape, and the latest High Efficiency Video Coding (HEVC) standard.
The original video signal carries a huge amount of data and contains a large amount of redundant information, including spatial redundancy, temporal redundancy, data redundancy and visual redundancy. The purpose of compression is to remove the various kinds of redundant information present in the video signal. In current coding practice, a single coding method is applied to every macroblock to be coded, and a single coding method cannot adapt to the different scenes in which macroblocks to be coded occur.
Disclosure of Invention
Therefore, to solve the technical defects and shortcomings of the prior art, the present invention provides a video encoding method and apparatus.
Specifically, an embodiment of the present invention provides a video encoding method, including: respectively coding a macro block to be coded according to at least two groups of coding methods to obtain at least two coding streams;
respectively operating according to the at least two coded streams to form coded information corresponding to each coded stream;
and acquiring the final coding method of the macro block to be coded according to the coding information respectively corresponding to the at least two coding streams.
In an embodiment of the present invention, each of the encoded streams includes an encoding method adopted by the macroblock to be encoded and a prediction residual of each pixel component in the macroblock to be encoded.
In an embodiment of the present invention, the performing an operation according to the at least two encoded streams to form the encoded information corresponding to each encoded stream includes:
and calculating according to the prediction residual of each pixel component in each coding stream to obtain residual subjective sum so as to form coding information corresponding to each coding stream.
In an embodiment of the present invention, the method for obtaining a final coding method of the macroblock to be coded according to coding information respectively corresponding to the at least two coding streams includes:
selecting the minimum value of the coding information respectively corresponding to the at least two coding streams;
determining the coding method in the coding stream corresponding to the minimum value as the final coding method of the macro block to be coded.
Another embodiment of the present invention provides a video encoding apparatus, including:
the encoding module is used for respectively encoding the macro blocks to be encoded according to at least two groups of encoding methods to obtain at least two encoding streams;
the operation module is connected with the coding module and is used for respectively performing operation according to the at least two coding streams to form coding information corresponding to each coding stream;
and the acquisition module is connected with the operation module and used for selecting the final coding method of the macro block to be coded according to the coding information respectively corresponding to the at least two coding streams.
In an embodiment of the present invention, each of the encoded streams includes an encoding method adopted by the macroblock to be encoded and a prediction residual of each pixel component in the macroblock to be encoded.
In an embodiment of the present invention, the operation module is specifically configured to: and calculating according to the prediction residual of each pixel component in each coding stream to obtain residual subjective sum so as to form coding information corresponding to each coding stream.
In one embodiment of the present invention, the obtaining module includes:
a selecting unit, configured to select a minimum value of the coding information corresponding to the at least two coding streams;
and the determining unit is used for determining the coding method in the coding stream corresponding to the minimum value as the final coding method of the macro block to be coded.
Based on this, the invention has the following advantages:
the invention encodes the coding macro block by a plurality of coding methods to form coding stream, selects the optimal coding method according to the coding stream, can improve the image coding compression ratio for the image coding blocks of different scenes, and further reduces the theoretical limit entropy of the compression.
Other aspects and features of the present invention will become apparent from the following detailed description, which proceeds with reference to the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims. It should be further understood that the drawings are not necessarily drawn to scale and that, unless otherwise indicated, they are merely intended to conceptually illustrate the structures and procedures described herein.
Drawings
The following detailed description of embodiments of the invention will be made with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a video encoding method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a sampling manner of a texture gradient coding method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a prediction method of a texture gradient coding method according to an embodiment of the present invention;
fig. 4 is a reference diagram of a pixel component for prediction reconstruction in a texture adaptive coding method according to an embodiment of the present invention;
FIG. 5 is a reference diagram of a predicted reconstructed pixel component of another texture adaptive coding method according to an embodiment of the present invention;
FIG. 6 is a reference diagram of a pixel component for prediction reconstruction in another texture adaptive coding method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a video encoding apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a video encoding method according to an embodiment of the present invention; this embodiment describes a video encoding method provided by the present invention in detail, and the method includes the following steps:
step 1, coding a macro block to be coded according to at least two groups of coding methods respectively to obtain at least two coding streams;
step 2, respectively operating according to the at least two coded streams to form coded information corresponding to each coded stream;
and 3, acquiring the final coding method of the macro block to be coded according to the coding information respectively corresponding to the at least two coding streams.
Wherein, in step 1, each coded stream includes a coding method adopted by the macroblock to be coded and a prediction residual of each pixel component in the macroblock to be coded.
Wherein, step 2 may include:
and calculating according to the prediction residual of each pixel component in each coding stream to obtain residual subjective sum so as to form coding information corresponding to each coding stream.
Wherein, step 3 may include:
step 31, selecting the minimum value of the coding information respectively corresponding to the at least two coding streams;
and step 32, determining the coding method in the coding stream corresponding to the minimum value as the final coding method of the macro block to be coded.
In a specific embodiment, the performing an operation according to the prediction residual of each pixel component in each encoded stream to obtain a residual subjective sum to form encoded information corresponding to each encoded stream includes:
the Sum of Absolute Differences (SAD) and standard deviation E of the residual of the macroblock to be coded under each coding method are calculated from the coded stream, as described in the following formula.
SAD = ABS(Res0) + ABS(Res1) + ... + ABS(Res(m×n-1))
AVE = (Res0 + Res1 + ... + Res(m×n-1)) / (m×n)
E = sqrt(((Res0 - AVE)^2 + (Res1 - AVE)^2 + ... + (Res(m×n-1) - AVE)^2) / (m×n))
Where Res is the prediction residual; i is the serial number of the pixel component in the current macro block to be coded; m × n is the number of pixel components in the current macroblock to be coded; ABS is an absolute value; AVE is the average residual.
Finally, weighting coefficients a1 and a2 are allocated according to the scene, and the subjective difference (SUBD) is calculated from SAD and E as shown in the following formula:
SUBD = a1×SAD + a2×E
If the scene consists of continuous multiple frames with a propagation (conduction) effect, such as reference compression in H.264, a2 is set larger and a1 smaller; conversely, a1 is set larger and a2 smaller. Further, a1 + a2 may be set to 1.
The SUBD is coding information corresponding to a coded stream of a macroblock to be coded.
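To make the selection procedure of this embodiment concrete, the following Python sketch computes SAD, E and SUBD for each candidate coded stream and keeps the coding method with the smallest SUBD. It is an illustrative reconstruction only, not text from the patent; the function names, the dictionary layout of the candidate streams and the default weighting coefficients are assumptions.

import math

def subjective_difference(residuals, a1=0.5, a2=0.5):
    # SUBD = a1*SAD + a2*E over the prediction residuals of one coded stream
    n = len(residuals)                                          # m*n pixel components
    sad = sum(abs(r) for r in residuals)                        # sum of absolute residuals (SAD)
    ave = sum(residuals) / n                                    # average residual (AVE)
    e = math.sqrt(sum((r - ave) ** 2 for r in residuals) / n)   # standard deviation (E)
    return a1 * sad + a2 * e

def select_final_method(coded_streams, a1=0.5, a2=0.5):
    # coded_streams maps a coding-method name to the residuals of its coded stream;
    # the method whose stream has the smallest SUBD becomes the final coding method
    subd = {name: subjective_difference(res, a1, a2) for name, res in coded_streams.items()}
    return min(subd, key=subd.get)

# Example: two candidate methods for one 16x1 macroblock; a2 is weighted up for
# a continuous multi-frame scene with a propagation effect (a1 + a2 = 1)
streams = {
    "texture_gradient": [1, -2, 0, 3, 1, 0, -1, 2, 0, 1, -1, 0, 2, 1, 0, -1],
    "texture_adaptive": [0, 0, 1, -1, 0, 2, 0, 0, 1, 0, -1, 0, 0, 1, 0, 0],
}
print(select_final_method(streams, a1=0.3, a2=0.7))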
Example two
The present embodiment describes a set of encoding methods proposed by the present invention in detail on the basis of the above embodiments. The coding method is a texture gradient coding method and comprises the following steps:
step 1, defining the size of the macro block to be coded
Defining the size of the macro block to be coded as m × n pixel components, where m ≥ 1 and n ≥ 1;
preferably, the size of the macroblock to be coded may be defined as 8 × 1, 16 × 1, 32 × 1 or 64 × 1 pixel components; in this embodiment, the size of the macroblock to be coded is 16 × 1 pixel components, and macroblocks of other sizes are handled in the same way. The pixel components in the macroblock to be coded are arranged from left to right with sequence numbers 0 to 15, each sequence number corresponding to one pixel component.
Step 2, defining sampling mode
Owing to the texture correlation within the macroblock to be coded, the closer two pixels in the macroblock are, the higher the probability that the texture gradation between them is consistent; conversely, the farther apart they are, the lower that probability. Accordingly, the pixel components in the macroblock to be coded are sampled at non-equal intervals, and various non-equidistant sampling modes can be selected.
Preferably, as shown in fig. 2, fig. 2 is a schematic diagram of a sampling manner of a texture gradient coding method according to an embodiment of the present invention. This embodiment performs non-equidistant sampling on the 16 × 1 pixel components of the macroblock to be coded, illustrated with three non-equidistant sampling modes, sampling mode 1, sampling mode 2 and sampling mode 3; other non-equidistant sampling modes are handled in the same way, wherein,
sampling mode 1 samples the 3 pixel components at positions 0, 4 and 15 in the macroblock to be coded;
sampling mode 2 samples the 4 pixel components at positions 0, 5, 10 and 15 in the macroblock to be coded;
sampling mode 3 samples the 3 pixel components at positions 0, 11 and 15 in the macroblock to be coded.
Step 3, the multiple non-equidistant sampling modes selected in step 2 are processed to obtain the prediction residuals.
In this embodiment, the processing of one non-equidistant sampling mode is taken as an example; the processing of the other non-equidistant sampling modes is the same. The specific steps are as follows:
step 31, as shown in fig. 3, fig. 3 is a schematic diagram of a prediction method of a texture gradient coding method according to an embodiment of the present invention. For sample1, respectively predicting 45-degree pixel component points, 90-degree pixel component points and 135-degree pixel component points of a sampling point in a macro block to be coded adjacent to the top of the current macro block to be coded, namely, the prediction modes are 135-degree prediction, 45-degree prediction and 90-degree prediction, solving prediction residuals of all the sampling points in three angle prediction modes, respectively calculating the absolute values of the prediction residuals of all the sampling points in each prediction mode, and specifically, respectively subtracting the 45-degree pixel component points, 90-degree pixel component points and 135-degree pixel component points corresponding to the sampling point in the macro block to be coded adjacent to the top of the current macro block to be coded from the sampling point in the macro block to be coded to obtain the prediction residuals; and respectively adding absolute values of the prediction residual errors of each sampling point under each prediction mode to obtain the sum of absolute values of the prediction residual errors, and selecting one prediction mode corresponding to the absolute value and the minimum value of the prediction residual errors as the prediction mode of the current sampling point of the macro block to be coded.
Preferably, the prediction mode can be any combination of 135-degree prediction, 45-degree prediction and 90-degree prediction.
Step 32, the prediction residuals of the sampling points of the current macroblock to be coded are obtained under the prediction mode selected in step 31, and the prediction residual of each non-sampled point is obtained with the following formula (see the sketch at the end of this embodiment):
Resi = (sample1 - sample0) * (i + 1) / (num + 1)
where sample0 and sample1 are the reconstructed pixel component values of two consecutive sampling points in the current macroblock to be coded, i is the index of a non-sampled point between them, and num is the number of non-sampled points between them.
Further, the pixel component reconstruction value may refer to the pixel component value reconstructed at the decoding end from the compressed coded macroblock.
And finally, obtaining the prediction residual of all pixel component points of the current macroblock to be coded in the prediction mode selected in the step 31, and calculating the sum of absolute values of the prediction residuals of all pixel components of the current macroblock to be coded.
Step 33, steps 31 to 32 are repeated to obtain the prediction residuals of all pixel component points of the current macroblock to be coded under sampling mode 2 and sampling mode 3, and the sum of the absolute values of the prediction residuals of all pixel components of the current macroblock to be coded is calculated for each sampling mode; the sampling mode with the smallest sum of absolute prediction residuals is selected as the sampling mode of the current macroblock to be coded, and the prediction mode determined under that sampling mode is adopted as the prediction mode of the current macroblock to be coded.
Step 4, the sampling mode, the prediction residuals and the prediction mode of the sampling points of the current macroblock to be coded are written into the code stream to form the coded stream of the current macroblock to be coded.
In this embodiment, the prediction residuals and the sum of their absolute values are calculated by defining the sampling mode of the macroblock to be coded and the reference manner of pixel component prediction. Compared with the prior art, when the texture of the image to be coded is relatively complex, the prediction residual of a macroblock located at a texture boundary of the current image is obtained from the texture characteristics of the macroblock itself according to the principle of texture gradation, without depending on the surrounding macroblocks; this improves the precision of the prediction residuals in complex texture regions, further reduces the theoretical limit entropy of compression, and increases the video compression ratio.
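As a concrete illustration of step 32 above, the following Python sketch fills in the residuals of the non-sampled points between consecutive sampling points with Resi = (sample1 - sample0) * (i + 1) / (num + 1). It is a sketch under assumptions: the function name, the use of a position list to describe the sampling mode, and the example values are not from the patent.

def interpolate_non_sampled_residuals(sample_positions, recon_values):
    # For one sampling mode over a 16x1 macroblock, assign each non-sampled point
    # between two consecutive sampling points the residual
    # Resi = (sample1 - sample0) * (i + 1) / (num + 1)
    residuals = {}
    for p0, p1 in zip(sample_positions, sample_positions[1:]):
        sample0, sample1 = recon_values[p0], recon_values[p1]
        num = p1 - p0 - 1                    # number of non-sampled points in between
        for i in range(num):                 # i = 0 .. num-1 indexes those points
            residuals[p0 + 1 + i] = (sample1 - sample0) * (i + 1) / (num + 1)
    return residuals

# Example: sampling mode 2 of this embodiment samples positions 0, 5, 10 and 15
recon = [10, 11, 12, 12, 13, 14, 15, 15, 16, 17, 18, 18, 19, 20, 21, 22]
print(interpolate_non_sampled_residuals([0, 5, 10, 15], recon))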
EXAMPLE III
The present embodiment describes another set of encoding methods proposed by the present invention in detail on the basis of the above embodiments. The coding method is a texture self-adaptive coding method and comprises the following steps:
step 1, defining the reconstructed pixel component of a macro block to be coded;
fig. 4 is a reference diagram of a predicted reconstructed pixel component of a texture adaptive coding method according to an embodiment of the present invention, as shown in fig. 4. Defining the current pixel component of a macro block to be coded as Cij, selecting K encoded reconstruction pixel components around the current pixel component, and numbering the K encoded reconstruction pixel components, wherein the numbering sequence can be specified, and K is more than or equal to 1.
Preferably, the index of the current pixel component is set as Cij; for the reconstructed pixel components on the left of the current pixel component Cij, the index i decreases from right to left and the index j decreases from bottom to top; for the reconstructed pixel components directly above the current pixel component Cij, the index j decreases from bottom to top while the index i is unchanged; for the reconstructed pixel components on the right of the current pixel component Cij, the index i decreases from left to right and the index j decreases from bottom to top.
Step 2, calculating a first weight;
step 201, respectively calculating the differences between the current pixel component and the K coded reconstructed pixel components to obtain K difference weights DIFij;
step 202, locating the K coded reconstructed pixel components around the current pixel component, and setting different weight values according to their different positions to obtain K position weights POSij;
step 203, calculating the weight of each reconstructed pixel component according to a first weight calculation formula, i.e. a first weight, where the first weight calculation formula is:
Wij=a*DIFij+b*POSij
wherein a and b are weighting values satisfying a + b = 1; the standard case is a = 0.5 and b = 0.5, and they can also be adjusted flexibly; DIF is the difference weight, i.e. the difference between the current pixel component and a surrounding reconstructed pixel component; POS is the position weight, i.e. the spatial distance between the current pixel component and a surrounding reconstructed pixel component; ij is the index of the K reconstructed pixel components and takes natural-number values from 1 to K; and W is the first weight.
Step 3, calculating a second weight;
setting each pixel to contain N pixel components, K × N weights can be obtained. And finally calculating the weight of each reconstructed pixel component, namely a second weight by using a formula:
Mijn=p1*Wij1+p2*Wij2+p3*Wij3+...+pN*WijN
wherein p is the component weighting value, N is the number of pixel components per pixel, and M is the second weight. Further, the values pn are selected such that p1 + p2 + … + pN = 1; they may be distributed evenly or configured according to empirical values. Empirically, the closer a reconstructed pixel component is to the current pixel component, the larger its weight should be, so the value of pn may be assigned according to the distance between the reconstructed pixel component and the current pixel component: the closer, the larger pn; the farther, the smaller pn (see the sketch at the end of this embodiment).
Step 4, calculating a prediction residual error;
step 401, selecting a reconstructed pixel component corresponding to the optimal value of Mijn as a reference pixel of the current pixel component according to the calculated second weight Mijn;
preferably, the optimal value may be the minimum value of Mijn.
Step 402, calculating the difference between the pixel value of the current pixel component and the pixel value of the reference pixel, and solving the prediction residual error.
Step 5, information coding;
specifically, the coding method of the macro block to be predicted, the position information of the reference pixel and the prediction residual of the current pixel component are written into the code stream to form a coded stream.
Preferably, the prediction residual may be selectively encoded.
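The following Python sketch illustrates the weighting of this embodiment: the first weight Wij = a*DIFij + b*POSij per pixel component, the second weight Mijn as a pn-weighted combination over the N components, and selection of the reconstructed pixel with the optimal (smallest) second weight as the reference pixel, from which the prediction residuals are taken. It is an illustrative sketch only; the absolute difference used for DIF, the caller-supplied position weights, and all names are assumptions rather than patent text.

def first_weight(cur_comp, recon_comp, pos_weight, a=0.5, b=0.5):
    # Wij = a*DIFij + b*POSij; DIF taken as the absolute difference between the
    # current component and the reconstructed component, POS supplied by the caller
    return a * abs(cur_comp - recon_comp) + b * pos_weight

def second_weight(cur_pixel, recon_pixel, pos_weight, p=None, a=0.5, b=0.5):
    # Mijn = p1*Wij1 + ... + pN*WijN over the N components of one candidate pixel,
    # with p1 + ... + pN = 1 (evenly distributed here by default)
    n = len(cur_pixel)
    p = p if p is not None else [1.0 / n] * n
    return sum(pk * first_weight(c, r, pos_weight, a, b)
               for pk, c, r in zip(p, cur_pixel, recon_pixel))

def select_reference_pixel(cur_pixel, candidates):
    # candidates: list of (reconstructed components, position weight) for K coded pixels;
    # the candidate with the smallest second weight is the reference pixel
    weights = [second_weight(cur_pixel, rc, pw) for rc, pw in candidates]
    return min(range(len(weights)), key=weights.__getitem__)

# Example: current pixel (3 components) and three coded neighbours
cur = (120, 60, 200)
cands = [((118, 61, 199), 1.0), ((90, 70, 180), 2.0), ((121, 60, 201), 1.5)]
ref = select_reference_pixel(cur, cands)
residuals = [c - r for c, r in zip(cur, cands[ref][0])]   # prediction residuals
print(ref, residuals)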
Example four
On the basis of the above embodiments, the present invention exemplifies a texture adaptive encoding method, which includes the following steps:
step 1, defining the reconstructed pixel component of a macro block to be coded;
defining the current pixel component of a macro block to be coded as Cij, selecting K encoded reconstruction pixel components around the current pixel component Cij, and numbering the K encoded reconstruction pixel components, wherein the numbering sequence can be specified, and K is more than or equal to 1.
Preferably, the encoded K reconstructed pixel components are numbered, with the numbering being ordered sequentially from top to bottom and from left to right, with the numbering being arranged from 0 to K-1.
In this embodiment, 17 reconstructed pixel components around the current pixel component Cij are taken as an example to explain, and the other reconstructed pixel components with different numbers are the same, and the 17 reconstructed pixel components are sequentially sorted from top to bottom and from left to right, and the sequence numbers are arranged from 0 to 16, as shown in fig. 5, where fig. 5 is a schematic reference diagram of a predicted reconstructed pixel component of another texture adaptive coding method provided in the embodiment of the present invention.
Step 2, calculating the absolute value of the difference degree weight of the reconstructed pixel component;
respectively calculating the differences between the current pixel component and the K coded reconstructed pixel components to obtain the absolute values ABS(DIFij) of the K difference weights, where ABS denotes the absolute-value operation;
preferably, respectively calculating the differences between the current pixel component and the 17 coded reconstructed pixel components to obtain the absolute values ABS(DIFij) of the 17 difference weights;
step 3, calculating a prediction residual error;
step 301, according to the calculated absolute values ABS(DIFij) of the difference weights of the reconstructed pixel components, selecting the minimum of the absolute values ABS(DIFij), and setting the reconstructed pixel component corresponding to that minimum as the reference pixel of the current pixel component (see the sketch at the end of this embodiment).
Preferably, the minimum of the 17 absolute values ABS(DIFij) is selected, and the reconstructed pixel component corresponding to that minimum is the reference pixel of the current pixel component.
Step 302, calculating the difference between the pixel value of the current pixel component and the pixel value of the reference pixel, and solving the prediction residual error.
Step 4, information coding;
and coding the number and the prediction residual error of the current pixel component, and writing the coding method and the number of the macro block to be predicted and the prediction residual error of the current pixel component into a code stream to form a coded stream.
EXAMPLE five
On the basis of the foregoing embodiments, this embodiment exemplifies a texture adaptive encoding method, which includes the following steps:
step 1, defining the reconstructed pixel component of a macro block to be coded;
as shown in fig. 6, fig. 6 is a reference diagram of a predicted reconstructed pixel component of another texture adaptive coding method according to an embodiment of the present invention. Defining the current pixel component of a macro block to be coded as Cij, selecting K encoded reconstruction pixel components around the current pixel component Cij, and numbering the K encoded reconstruction pixel components, wherein the numbering sequence can be specified, and K is more than or equal to 1.
Preferably, the encoded K reconstructed pixel components are numbered, with the numbering being ordered sequentially from top to bottom and from left to right, with the numbering being arranged from 0 to K-1.
In this embodiment, 17 reconstructed pixel components around the current pixel component Cij are taken as an example for explanation, and other reconstructed pixel components with different numbers are similar, and the 17 reconstructed pixel components are sequentially sorted from top to bottom and from left to right, and the serial numbers are arranged from 0 to 16.
Step 2, calculating the absolute value of the difference degree weight of the reconstructed pixel component;
respectively calculating the differences between the current pixel component and the 17 coded reconstructed pixel components to obtain the absolute values ABS(DIFij) of the 17 difference weights;
preferably, respectively solving the differences between the current pixel component and the 17 coded reconstructed pixel components, and finally obtaining the absolute values ABS(DIFij) of the 17 difference weights by calculation;
step 3, calculating the weight of the reconstructed pixel component;
step 301, locating the K coded reconstructed pixel components around the current pixel component, and setting different weight values according to their different positions, finally obtaining K position weights POSij;
step 302, respectively calculating the weight of each reconstructed pixel component according to the weight calculation formula ABS(DIFij) + POSij (see the sketch at the end of this embodiment);
preferably, the weight of each of the 17 reconstructed pixel components is ABS(DIFij) + POSij;
step 4, determining a reference pixel;
and selecting the minimum value of the weights in the K reconstructed pixel components, and setting the reconstructed pixel component corresponding to the minimum value of the weights as a reference pixel of the current pixel component.
Preferably, the reconstructed pixel component corresponding to the minimum weight value of the 17 reconstructed pixel components is selected as the reference pixel of the current pixel component.
Step 5, information coding;
specifically, the number and the prediction residual of the current pixel component are coded, and the coding method, the number and the prediction residual of the current pixel component of the macro block to be predicted are written into a code stream to form a coded stream.
EXAMPLE six
In this embodiment, a detailed description is given to a video encoding apparatus proposed by the present invention based on the above-mentioned embodiments, as shown in fig. 7, fig. 7 is a schematic diagram of a video encoding apparatus provided by an embodiment of the present invention, and the apparatus includes:
the encoding module 11 is configured to encode macroblocks to be encoded according to at least two groups of encoding methods, respectively, to obtain at least two encoded streams;
the operation module 12 is connected to the encoding module 11, and configured to perform operations respectively according to the at least two encoded streams to form encoding information corresponding to each encoded stream;
and the obtaining module 13 is connected to the operation module 12, and configured to select a final encoding method for the macroblock to be encoded according to the encoding information respectively corresponding to the at least two encoding streams.
Wherein each coded stream includes a coding method adopted by the macroblock to be coded and a prediction residual of each pixel component in the macroblock to be coded.
The operation module 12 is specifically configured to: and calculating according to the prediction residual of each pixel component in each coding stream to obtain residual subjective sum so as to form coding information corresponding to each coding stream.
Wherein the obtaining module 13 includes:
a selecting unit 131, configured to select a minimum value of the coding information corresponding to the at least two coding streams;
a determining unit 132, configured to determine the encoding method in the encoded stream corresponding to the minimum value as the final encoding method of the macroblock to be encoded.
In this embodiment, the weights of the reconstructed pixels are calculated to obtain the reference pixel, the prediction residual of the current pixel is calculated from that reference pixel, and the coded stream of the macroblock to be coded is obtained.
In summary, the present invention has been explained by using specific examples, and the above description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention, and the scope of the present invention should be subject to the appended claims.

Claims (2)

1. A video encoding method, comprising:
respectively coding a macro block to be coded according to at least two groups of coding methods to obtain at least two coding streams; the at least two groups of coding methods comprise a texture gradient coding method and a texture self-adaptive coding method; each coded stream comprises a coding method adopted by the macro block to be coded and a prediction residual error of each pixel component in the macro block to be coded;
calculating, according to the coded stream, the sum of absolute residual values SAD of the macro block to be coded under each coding method, and calculating the residual standard deviation E:
SAD = ABS(Res0) + ABS(Res1) + ... + ABS(Res(m×n-1))
AVE = (Res0 + Res1 + ... + Res(m×n-1)) / (m×n)
E = sqrt(((Res0 - AVE)^2 + (Res1 - AVE)^2 + ... + (Res(m×n-1) - AVE)^2) / (m×n))
wherein Res is a prediction residual; i is the serial number of the pixel component in the current macro block to be coded; m × n is the number of pixel components in the current macro block to be coded, m is greater than or equal to 1, and n is greater than or equal to 1; ABS is an absolute value; AVE is the average residual;
configuring weighting coefficients a1 and a2 according to scenes, wherein a1 + a2 = 1;
calculating the residual subjective sum SUBD = a1×SAD + a2×E, and taking the SUBD as the coding information corresponding to the coded stream of the macro block to be coded;
selecting the minimum value of the residual subjective sums SUBD respectively corresponding to the at least two coded streams; and determining the coding method in the coding stream corresponding to the minimum value as the final coding method of the macro block to be coded.
2. A video encoding apparatus, comprising:
the encoding module is used for respectively encoding the macro blocks to be encoded according to at least two groups of encoding methods to obtain at least two encoding streams; the at least two groups of coding methods comprise a texture gradient coding method and a texture self-adaptive coding method; each coded stream comprises a coding method adopted by the macro block to be coded and a prediction residual error of each pixel component in the macro block to be coded;
and the operation module is connected with the coding module and is used for calculating, according to the coded stream, the sum of absolute residual values SAD of the macro block to be coded under each coding method, and the residual standard deviation E:
SAD = ABS(Res0) + ABS(Res1) + ... + ABS(Res(m×n-1))
AVE = (Res0 + Res1 + ... + Res(m×n-1)) / (m×n)
E = sqrt(((Res0 - AVE)^2 + (Res1 - AVE)^2 + ... + (Res(m×n-1) - AVE)^2) / (m×n))
wherein Res is a prediction residual; i is the serial number of the pixel component in the current macro block to be coded; m × n is the number of pixel components in the current macro block to be coded, m is greater than or equal to 1, and n is greater than or equal to 1; ABS is an absolute value; AVE is the average residual;
configuring weighting coefficients a1 and a2 according to scenes, wherein a1 + a2 = 1;
calculating the residual subjective sum SUBD = a1×SAD + a2×E, and taking the SUBD as the coding information corresponding to the coded stream of the macro block to be coded;
the acquisition module is connected with the operation module and specifically comprises a selection unit used for selecting the minimum value of the residual subjective sums SUBD corresponding to the at least two coded streams; and a determining unit used for determining the coding method in the coding stream corresponding to the minimum value as the final coding method of the macro block to be coded.
CN201811260535.7A 2018-10-26 2018-10-26 Video coding method and device thereof Active CN109510985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811260535.7A CN109510985B (en) 2018-10-26 2018-10-26 Video coding method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811260535.7A CN109510985B (en) 2018-10-26 2018-10-26 Video coding method and device thereof

Publications (2)

Publication Number Publication Date
CN109510985A CN109510985A (en) 2019-03-22
CN109510985B true CN109510985B (en) 2021-01-15

Family

ID=65746784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811260535.7A Active CN109510985B (en) 2018-10-26 2018-10-26 Video coding method and device thereof

Country Status (1)

Country Link
CN (1) CN109510985B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1717675A (en) * 2002-11-26 2006-01-04 高通股份有限公司 System and method for optimizing multimedia compression using plural encoders
CN104410861A (en) * 2014-11-24 2015-03-11 华为技术有限公司 Video encoding method and device
CN108289223A (en) * 2018-02-06 2018-07-17 上海通途半导体科技有限公司 It is a kind of to can be used for the method for compressing image in liquid crystal display over-driving device and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130108948A (en) * 2012-03-26 2013-10-07 한국전자통신연구원 Image encoding method using adaptive preprocessing

Also Published As

Publication number Publication date
CN109510985A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
JP5752809B2 (en) Perform smoothing operations adaptively
US8369404B2 (en) Moving image decoding device and moving image decoding method
US8681873B2 (en) Data compression for video
RU2533196C2 (en) Video coding with large macroblocks
JP2022502961A (en) Motion estimation using 3D auxiliary data
US11743475B2 (en) Advanced video coding method, system, apparatus, and storage medium
US20150312575A1 (en) Advanced video coding method, system, apparatus, and storage medium
CN107409218A (en) The Fast video coding method split using block
CN110999290B (en) Method and apparatus for intra prediction using cross-component linear model
KR20140029383A (en) Image coding device and image decoding device
JP7402884B2 (en) Efficient patch rotation in point cloud coding
JP7267447B2 (en) Coding and decoding patch data units for point cloud coding
CN117897952A (en) Method and system for performing combined inter and intra prediction
KR20170084213A (en) Systems and methods for processing a block of a digital image
KR20220062655A (en) Lossless coding of video data
JP2023542029A (en) Methods, apparatus, and computer programs for cross-component prediction based on low-bit precision neural networks (NN)
CN114762332A (en) Method for constructing merging candidate list
CN109510985B (en) Video coding method and device thereof
CN109587481B (en) Video encoding method and apparatus
CN109302605B (en) Image coding method and device based on multi-core processor
KR20060085003A (en) A temporal error concealment method in the h.264/avc standard
KR20220063272A (en) Motion compensation method for video coding
JP2022548521A (en) Filter for motion-compensated interpolation with reference downsampling
JP2022526770A (en) Conversion unit classification method for video coding
WO2018165917A1 (en) Condensed coding block headers in video coding systems and methods

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20201230

Address after: Room 3101, 22nd floor, Shenzi building, Binhe District, Hangzhou City

Applicant after: Hangzhou Lianhai Network Technology Co.,Ltd.

Address before: 710065 Xi'an new hi tech Zone, Shaanxi, No. 86 Gaoxin Road, No. second, 1 units, 22 stories, 12202 rooms, 51, B block.

Applicant before: Xi'an Cresun Innovation Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Video coding method and device

Effective date of registration: 20210901

Granted publication date: 20210115

Pledgee: Changhe Branch of Hangzhou United Rural Commercial Bank Co.,Ltd.

Pledgor: Hangzhou Lianhai Network Technology Co.,Ltd.

Registration number: Y2021330001327