CN111246219B - Fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame - Google Patents

Fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame

Info

Publication number
CN111246219B
Authority
CN
China
Prior art keywords
target coding
target
coding
preset condition
mode
Prior art date
Legal status
Active
Application number
CN202010053572.1A
Other languages
Chinese (zh)
Other versions
CN111246219A (en)
Inventor
李跃
丁平尖
聂明星
陈灵娜
龚向坚
刘杰
罗凌云
Current Assignee
Shenzhen Aixiesheng Technology Co Ltd
Original Assignee
University of South China
Priority date
Filing date
Publication date
Application filed by University of South China filed Critical University of South China
Priority to CN202010053572.1A priority Critical patent/CN111246219B/en
Publication of CN111246219A publication Critical patent/CN111246219A/en
Application granted granted Critical
Publication of CN111246219B publication Critical patent/CN111246219B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96 Tree coding, e.g. quad-tree coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a method for fast partitioning of CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame, which comprises the following steps: sequentially extracting the CUs to be coded, and obtaining the luminance pixel difference of a target coding CU; if the width and the height of the target coding CU are both larger than a preset threshold, acquiring the further partition mode of the target coding CU; and stopping further partitioning of the target coding CU when the further partition mode of the target coding CU is H_BT and a first preset condition is met, when the further partition mode is V_BT and a second preset condition is met, when the further partition mode is H_TT and a third preset condition is met, or when the further partition mode is V_TT and a fourth preset condition is met. Obviously, this approach can reduce the complexity of partitioning the depth of the target coding CU.

Description

Fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame
Technical Field
The invention relates to the technical field of video coding, and in particular to a fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame.
Background
Because high-definition video brings people a better visual experience, video in this format is widely used in everyday life; however, the rapidly growing volume of video data poses great challenges to video compression coding.
For more efficient compression of video data, VCEG (Video Coding Experts Group) and MPEG (Moving Picture Experts Group) are jointly developing a new-generation video compression standard, VVC (Versatile Video Coding). When VVC compresses a video, a coded frame is usually divided into a sequence of CTUs (Coding Tree Units). To obtain the optimal division of a CTU, the CTU first serves as a root node and is split by QT (quadtree partitioning), and the resulting leaf nodes may be further split recursively by QT, H_BT (horizontal binary tree partitioning), V_BT (vertical binary tree partitioning), H_TT (horizontal ternary tree partitioning) and V_TT (vertical ternary tree partitioning). To simplify the partitioning process of the CTU, once a CU has been split by H_BT, V_BT, H_TT or V_TT, it no longer performs QT. Meanwhile, in order to remove the spatial redundancy of a coded CU more effectively, VVC extends the intra prediction modes of HEVC (High Efficiency Video Coding) from 35 to 67, that is, 65 directional prediction modes and 2 non-directional prediction modes (DC mode and Planar mode). As a result, during the encoding of a CU, not only do the further partitions of all of QT, H_BT, V_BT, H_TT and V_TT need to be traversed, but the CU also needs to evaluate the 67 intra prediction modes, while in the end the CU selects only one partition mode as its final partition mode. Obviously, such CU depth partitioning increases the coding complexity of the CU. At present, there is no effective solution to this problem.
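The following minimal Python sketch (illustrative only, not part of the patent or of any codec implementation; the enum and function names are hypothetical) summarizes the five split types named above and the rule that a CU produced by a binary or ternary split no longer performs QT:

from enum import Enum, auto

class Split(Enum):
    QT = auto()    # quadtree: four equal sub-CUs
    H_BT = auto()  # horizontal binary tree: two sub-CUs stacked top/bottom
    V_BT = auto()  # vertical binary tree: two sub-CUs side by side
    H_TT = auto()  # horizontal ternary tree: 1/4, 1/2, 1/4 horizontal stripes
    V_TT = auto()  # vertical ternary tree: 1/4, 1/2, 1/4 vertical stripes

def candidate_splits(reached_by_bt_or_tt: bool) -> list:
    """Splits a CU may still traverse during the recursive partitioning."""
    mtt = [Split.H_BT, Split.V_BT, Split.H_TT, Split.V_TT]
    # Once a CU has been produced by a BT/TT split, QT is no longer performed.
    return mtt if reached_by_bt_or_tt else [Split.QT] + mtt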
Therefore, how to reduce the coding complexity in the process of partitioning the CU depth is an urgent technical problem to be solved by those skilled in the art.
In view of the above, an object of the present invention is to provide a fast partitioning method for CU depth in a VVC intra frame, so as to reduce the coding complexity in the process of partitioning the CU depth. The specific scheme is as follows:
a method for fast depth division of CUs in a VVC frame comprises the following steps:
sequentially extracting the CUs to be coded, and computing the absolute difference between the original luminance pixel value Ori_{w×h}(i, j) and the predicted luminance pixel value Pre_{w×h}(i, j) of the target coding CU to obtain the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU;
wherein Dif_{w×h}(i, j) = |Ori_{w×h}(i, j) - Pre_{w×h}(i, j)|;
In the formula, w and h are the width and the height of the target coding CU respectively, and i and j are the abscissa and the ordinate of a pixel point of the target coding CU respectively;
judging whether the width and the height of the target coding CU are both larger than a preset threshold value;
if yes, acquiring a further division mode of the target coding CU;
if the further division mode of the target coding CU is H_BT, judging whether the target coding CU meets a first preset condition;
if yes, stopping further dividing the target coding CU;
if the further partition mode of the target coding CU is V_BT, judging whether the target coding CU meets a second preset condition;
if yes, stopping further dividing the target coding CU;
if the further division mode of the target coding CU is H_TT, judging whether the target coding CU meets a third preset condition;
if yes, stopping further dividing the target coding CU;
if the further partition mode of the target coding CU is V_TT, judging whether the target coding CU meets a fourth preset condition;
if yes, stopping further dividing the target coding CU;
wherein the expression of the first preset condition is as follows:
[equation image]
the expression of the second preset condition is as follows:
[equation image]
wherein Th is an adaptive adjustment threshold, and the remaining symbols in the two expressions are the variance values of the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU computed over the corresponding sub-blocks obtained by the horizontal two-equal division and the vertical two-equal division of the CU, wherein w and h are the width and the height of the target coding CU respectively, and i and j are the horizontal coordinate and the vertical coordinate of a pixel point of the target coding CU respectively;
the expression of the third preset condition is as follows:
[equation image]
the expression of the fourth preset condition is as follows:
[equation image]
wherein Th is the adaptive adjustment threshold, and the remaining symbols in the two expressions are the variance values of the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU computed over the corresponding sub-blocks obtained by the horizontal four-equal division and the vertical four-equal division of the CU, wherein w and h are the width and the height of the target coding CU respectively, and i and j are the abscissa and the ordinate of a pixel point of the target coding CU respectively.
Preferably, the preset threshold is specifically 8 px.
Preferably, after the step of obtaining the further partition mode of the target coding CU, the method further includes:
and if the further partitioning mode of the target coding CU is QT, continuing to further partition the target coding CU.
Preferably, after the step of determining whether the target coded CU satisfies the first preset condition, the method further includes:
and if not, further H_BT is carried out on the target coding CU.
Preferably, after the step of determining whether the target coded CU satisfies a second preset condition, the method further includes:
if not, performing further V_BT on the target coding CU.
Preferably, after determining whether the target coding CU satisfies a third preset condition, the method further includes:
if not, further H_TT is carried out on the target coding CU.
Preferably, after the step of determining whether the target coded CU satisfies a fourth preset condition, the method further includes:
if not, performing further V_TT on the target coding CU.
It can be seen that, in the invention, the luminance pixel difference Dif_{w×h}(i, j) of the currently coded CU can reflect which partition mode of the current coding CU is likely to be the optimal partition. On this basis, if the width and the height of the target coding CU are both greater than the preset threshold, the target coding CU is more likely to need further partitioning. In the process of further partitioning the target coding CU, four determination criteria are set to decide whether the target coding CU can end the further QT, H_BT, V_BT, H_TT and V_TT partitioning in advance; that is, whether the target coding CU can terminate the partitioning early is decided by judging whether its further partition satisfies the first, second, third or fourth preset condition, and during the partitioning of the target coding CU a balance between coding time and coding rate can be achieved by adaptively adjusting the threshold Th. Obviously, since the method can terminate in advance the step in which the target coding CU traverses the further partitions of all of QT, H_BT, V_BT, H_TT and V_TT, the fast partitioning method for CU depth in a VVC intra frame provided by the present invention can significantly reduce the coding complexity during the partitioning of the target coding CU depth.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for quickly dividing the depth of a CU in a VVC frame according to an embodiment of the present invention;
fig. 2 is an example of the optimal division of a CTU.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a fast partitioning method for CU depth in a VVC intra frame according to an embodiment of the present invention, where the method includes:
Step S11: sequentially extracting the CUs to be coded, and computing the absolute difference between the original luminance pixel value Ori_{w×h}(i, j) and the predicted luminance pixel value Pre_{w×h}(i, j) of the target coding CU to obtain the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU;
wherein Dif_{w×h}(i, j) = |Ori_{w×h}(i, j) - Pre_{w×h}(i, j)|;
In the formula, w and h are the width and the height of the target coding CU respectively, and i and j are the abscissa and the ordinate of the pixel point of the target coding CU respectively;
step S12: judging whether the width and the height of the target coding CU are both larger than a preset threshold value;
step S13: if yes, acquiring a further division mode of the target coding CU;
it can be understood that, in the prior art, since one CU needs to traverse further partitions of all QTs, H _ BT, V _ BT, H _ TT and V _ TT during depth partitioning, and at the same time, the CU needs to perform 67 intra prediction modes, which significantly increases the complexity during depth partitioning of the CU within the VVC frame. In the actual CU encoding process, a CU only selects one partition mode as the final partition mode, please refer to fig. 2, where fig. 2 is an example of the optimal partition of a CTU, the size of the CTU is 128 × 128, and in fig. 2, each optimal CU partition only belongs to one partition mode, for example, the final partition mode of the CU at the lower left corner 64 × 64 is V _ TT partition, if the partition calculation of H _ TT is performed on the CU at 64, the calculation complexity of encoding is only increased, and the encoding efficiency of the CU is not further improved. Therefore, if it can be predicted in advance and accurately which partition mode of a CU in a CTU may be the optimal partition or which partition mode may not be the optimal partition, thereby terminating and skipping the CU in advance for further computation of the recursive partitioning method of QT, H _ BT, V _ BT, H _ TT, and V _ TT, the coding complexity of the CU can be greatly reduced.
The quick partitioning method for the depth of the CU in the VVC frame provided by this embodiment is to predict in advance and accurately which partitioning mode of the CU in the CTU may not be the optimal partitioning method, so that the CU may omit or skip the tedious steps of traversing the QT, H _ BT, V _ BT, H _ TT, and V _ TT partitioning methods one by one.
Specifically, in this embodiment, CU to be encoded is first sequentially extracted, and original luminance pixel value Ori of target encoded CU is calculatedw×h(i, j) and the predicted luminance pixel value Prew×h(i, j) wherein the predicted luminance pixel value Pre of the target encoded CUw×h(i, j) is calculated according to the 67 intra prediction modes of the target coding CU, that is, the best intra prediction mode of the target coding CU is selected from the 67 intra prediction modes; then, the original luminance pixel value Ori of the target coding CU is obtainedw×h(i, j) and the predicted luminance pixel value Prew×h(i, j) to obtain a difference Dif of luminance pixels of the target encoded CUw×h(i, j). Conceivably, because the luminance pixel difference value Dif of the target coding CUw×h(i, j) can reflect to a large extent which partition mode of the target coding CU is the best partition, so in the present embodiment, the target coding CU depth is partitioned by using the attribute feature of the target coding CU.
When obtaining the difference Dif of the luminance pixels of the target coding CUw×h(i, j), the difference Dif of the luminance pixel of the CU is encoded according to the targetw×h(i, j) judging whether the width and the height of the target coding CU are both larger than a preset threshold value, if so, acquiring a further division mode of the target coding CU, namely, if the original brightness pixel value Ori of the current coding CU is larger than the preset threshold valuew×h(i, j) and the predicted luminance pixel value Prew×h(i, j) is larger, which indicates that the target coding CU needs to be further divided.
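A minimal sketch of steps S11 to S13 in Python with NumPy (illustrative only; the array and function names are hypothetical, and `pre` is assumed to come from the best of the 67 intra prediction modes as described above):

import numpy as np

def luma_residual(ori: np.ndarray, pre: np.ndarray) -> np.ndarray:
    """Dif_{w×h}(i, j) = |Ori_{w×h}(i, j) - Pre_{w×h}(i, j)| of the target CU.

    `ori` is the original luma block and `pre` the prediction obtained with
    the best of the 67 intra modes; both are h-by-w arrays (names are
    illustrative, not from the patent)."""
    return np.abs(ori.astype(np.int32) - pre.astype(np.int32))

def passes_size_gate(w: int, h: int, threshold: int = 8) -> bool:
    """Steps S12/S13: only a CU whose width and height are both larger than
    the preset threshold (8 px in the preferred embodiment) proceeds to the
    early-termination tests of steps S14 to S21."""
    return w > threshold and h > threshold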
Step S14: if the further division mode of the target coding CU is H_BT, judging whether the target coding CU meets a first preset condition;
Step S15: if yes, stopping further dividing the target coding CU;
Step S16: if the further division mode of the target coding CU is V_BT, judging whether the target coding CU meets a second preset condition;
Step S17: if yes, stopping further dividing the target coding CU;
Step S18: if the further division mode of the target coding CU is H_TT, judging whether the target coding CU meets a third preset condition;
Step S19: if yes, stopping further dividing the target coding CU;
Step S20: if the further division mode of the target coding CU is V_TT, judging whether the target coding CU meets a fourth preset condition;
step S21: if yes, stopping further dividing the target coding CU;
wherein the expression of the first preset condition is as follows:
[equation image]
the expression of the second preset condition is as follows:
[equation image]
wherein Th is an adaptive adjustment threshold, and the remaining symbols in the two expressions are the variance values of the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU computed over the corresponding sub-blocks obtained by the horizontal two-equal division and the vertical two-equal division of the CU, wherein w and h are the width and the height of the target coding CU respectively, and i and j are the horizontal coordinate and the vertical coordinate of a pixel point of the target coding CU respectively;
the expression of the third preset condition is as follows:
[equation image]
the expression of the fourth preset condition is as follows:
[equation image]
wherein Th is the adaptive adjustment threshold, and the remaining symbols in the two expressions are the variance values of the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU computed over the corresponding sub-blocks obtained by the horizontal four-equal division and the vertical four-equal division of the CU, wherein w and h are the width and the height of the target coding CU respectively, and i and j are the abscissa and the ordinate of a pixel point of the target coding CU respectively.
After the further partition mode of the target coding CU is obtained, if the further partition mode of the target coding CU is H_BT and the target coding CU meets the first preset condition, further partitioning of the target coding CU is stopped; if the further partition mode of the target coding CU is V_BT and the target coding CU meets the second preset condition, further partitioning of the target coding CU is stopped; if the further partition mode of the target coding CU is H_TT and the target coding CU meets the third preset condition, further partitioning of the target coding CU is stopped; and if the further partition mode of the target coding CU is V_TT and the target coding CU meets the fourth preset condition, further partitioning of the target coding CU is stopped.
Obviously, steps S14 to S21 are equivalent to predicting in advance which partition modes of the target coding CU cannot be the best partition mode, so steps S14 to S21 avoid the tedious step in which the target coding CU has to traverse the further partitions of all of QT, H_BT, V_BT, H_TT and V_TT to complete compression coding. Therefore, the technical solution provided by this embodiment can not only guarantee the coding quality of the target coding CU, but also significantly reduce the coding time complexity in the process of compression coding the target CU.
It can be seen that, in this embodiment, the luminance pixel difference Dif_{w×h}(i, j) of the currently coded CU can reflect which partition mode of the current coding CU is likely to be the optimal partition. On this basis, if the width and the height of the target coding CU are both greater than the preset threshold, the target coding CU is more likely to need further partitioning. In the further partitioning of the target coding CU, four determination criteria are set to decide whether the target coding CU can end the further QT, H_BT, V_BT, H_TT and V_TT partitioning in advance; that is, whether the target coding CU can terminate the partitioning early is decided by judging whether its further partition satisfies the first, second, third or fourth preset condition, and during the partitioning of the target coding CU a balance between coding time and coding rate can be achieved by adaptively adjusting the threshold Th. Obviously, since the method can terminate in advance the step in which the target coding CU traverses the further partitions of all of QT, H_BT, V_BT, H_TT and V_TT, the fast partitioning method for CU depth in a VVC intra frame provided by this embodiment can significantly reduce the coding complexity during the partitioning of the target coding CU depth.
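The decision flow of steps S14 to S21 can be outlined as follows (an illustrative Python sketch, not the patent's implementation: the callable `condition` is a hypothetical stand-in for the four preset conditions, whose exact expressions are the equations referenced above; only the sub-block variances and the adaptive threshold Th that those conditions are built from are computed here):

import numpy as np
from typing import Callable, List

def split_variances(dif: np.ndarray, parts: int, axis: int) -> List[float]:
    """Variances of the luminance residual Dif over `parts` equal sub-blocks;
    axis=0 gives horizontal stripes (top to bottom), axis=1 vertical stripes."""
    return [float(np.var(sub)) for sub in np.array_split(dif, parts, axis=axis)]

def early_terminate(dif: np.ndarray, split_mode: str, th: float,
                    condition: Callable[[str, List[float], List[float], float], bool]) -> bool:
    """Decision flow of steps S14 to S21: True means further splitting stops.

    `condition` is a hypothetical stand-in for the patent's first/second/
    third/fourth preset conditions, which are defined there as equations over
    these sub-block variances and the adaptive threshold Th."""
    if split_mode in ("H_BT", "V_BT"):
        # First/second preset conditions: variances after the horizontal and
        # vertical two-equal divisions of Dif.
        h_var = split_variances(dif, 2, axis=0)
        v_var = split_variances(dif, 2, axis=1)
    elif split_mode in ("H_TT", "V_TT"):
        # Third/fourth preset conditions: variances after the horizontal and
        # vertical four-equal divisions of Dif.
        h_var = split_variances(dif, 4, axis=0)
        v_var = split_variances(dif, 4, axis=1)
    else:
        return False  # QT: the embodiment always continues further partitioning
    return condition(split_mode, h_var, v_var, th)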
Based on the above embodiments, the present embodiment further describes and optimizes the technical solution, and as a preferred implementation, the preset threshold is specifically 8 px.
In the actual operation process, a large number of experiments prove that when the preset threshold is set to 8px, the partitioning efficiency in the process of partitioning the target coding CU can be improved, and the accuracy of the judgment result in the process of judging the further partitioning mode of the target coding CU can also be improved, so that in the embodiment, the preset threshold is set to 8 px. Of course, in the actual operation process, the preset threshold may also be adaptively adjusted according to different actual situations, which is not described herein in detail.
Based on the above embodiments, this embodiment further describes and optimizes the technical solution, specifically, the steps are as follows: after the process of obtaining the further partition mode of the target coding CU, the method further includes:
if the further partition mode of the target coding CU is QT, the target coding CU continues to be further partitioned.
It is to be understood that, in the process of deeply partitioning the target coding CU, in addition to the foregoing situation, a situation that the further partitioning mode of the target coding CU is QT may be encountered, and in this case, it is necessary to continue to further partition the target coding CU and determine the optimal partitioning mode of the target coding CU.
Obviously, the technical solution provided by this embodiment can further ensure the integrity of the target coding CU in the depth partitioning process.
Based on the above embodiments, this embodiment further describes and optimizes the technical solution, and as a preferred implementation, the above steps: after the process of determining whether the target coding CU satisfies the first preset condition, the method further includes:
if not, further H _ BT is performed on the target encoded CU.
In actual operation, if the further partition mode of the target coding CU is H_BT but the target coding CU does not satisfy the first preset condition, this indicates that H_BT may still be the best partition mode of the target coding CU. In this case, further H_BT needs to be performed on the target coding CU so as to find its optimal partition mode.
Based on the above embodiments, this embodiment further describes and optimizes the technical solution, and as a preferred implementation, the above steps: after the process of determining whether the target coding CU satisfies the second preset condition, the method further includes:
if not, a further V _ BT is performed on the target encoded CU.
If the further partition mode of the target coding CU is V_BT but the target coding CU does not satisfy the second preset condition, this indicates that V_BT may still be the optimal partition mode of the target coding CU. In this case, further V_BT needs to be performed on the target coding CU so that the subsequent process steps can continue, thereby ensuring the integrity of the depth partitioning of the target coding CU.
Based on the above embodiments, this embodiment further describes and optimizes the technical solution, and as a preferred implementation, the above steps: after determining whether the target coding CU satisfies the third preset condition, the method further includes:
if not, a further H _ TT is performed on the target coded CU.
Likewise, in actual operation, if the further partition mode of the target coding CU is H_TT but the target coding CU does not satisfy the third preset condition, this indicates that H_TT may still be the optimal partition mode of the target coding CU. In this case, further H_TT needs to be performed on the target coding CU, and the search for its optimal partition mode continues.
Based on the above embodiments, this embodiment further describes and optimizes the technical solution, and as a preferred implementation, the above steps: after the process of determining whether the target coding CU satisfies the fourth preset condition, the method further includes:
if not, a further V _ TT is performed on the target coded CU.
In actual operation, if the further partition mode of the target coding CU is V_TT but the target coding CU does not satisfy the fourth preset condition, this indicates that V_TT may still be the best partition mode of the target coding CU. In this case, further V_TT needs to be performed on the target coding CU, and after V_TT is performed, the optimal partition mode of the target coding CU is determined from the result of its complete partitioning.
Obviously, the technical scheme provided by the embodiment can not only meet the requirement of continuously executing the subsequent flow steps, but also enable the technical scheme provided by the application to be more comprehensive and complete.
Based on the technical content disclosed in the foregoing embodiments, in this embodiment VTM2.0 is used as the test platform, and the CU depth partitioning method disclosed above is run on a PC with an Intel(R) Core(TM) i7-9700 CPU and 16 GB RAM, so as to evaluate the feasibility and effectiveness of the partitioning method.
The test sequences include 5 classes, namely Class_D (BasketballPass, BlowingBubbles, BQSquare, RaceHorses), Class_C (RaceHorsesC, PartyScene, BasketballDrill, BQMall), Class_E (FourPeople, KristenAndSara, Johnny), Class_F (BasketballDrillText, ChinaSpeed, SlideEditing, SlideShow) and Class_B (Cactus, Kimono, ParkScene, BasketballDrive, BQTerrace); the coding quantization parameter (QP) is set to (22, 27, 32, 37), and the coding configuration is AI (All Intra); finally, the performance of the algorithm is measured by the bit-rate change (BD-Rate) and the encoding time saving (TS).
Where TS is defined as:
[equation image]
in the formula, T_0 is the encoding time of the original test model, T_p is the encoding time after the present invention is applied to the original test model, and i represents a different QP value.
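A minimal sketch of this computation (assuming the conventional form in which the relative time saving is averaged over the four QP values and expressed in percent; the example numbers are purely illustrative):

def time_saving(t_orig: dict, t_prop: dict) -> float:
    """TS in percent: the relative encoding-time saving averaged over the QPs.

    `t_orig` and `t_prop` map each QP (22, 27, 32, 37) to the encoding time of
    the original test model and of the proposed method, respectively."""
    qps = sorted(t_orig)
    return 100.0 * sum((t_orig[q] - t_prop[q]) / t_orig[q] for q in qps) / len(qps)

# Example with made-up times: halving the time at every QP gives TS = 50.0 (%).
print(time_saving({22: 100.0, 27: 80.0, 32: 60.0, 37: 40.0},
                  {22: 50.0, 27: 40.0, 32: 30.0, 37: 20.0}))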
Referring to table 1, table 1 shows the performance comparison results of the method provided by the present invention on the VTM2.0 test platform.
Table 1 comparison of the performance of the method of the present invention on VTM2.0 test platform
[table image]
As can be seen from the comparison of coding time and coding rate in Table 1, under adaptive adjustment thresholds Th = 1, Th = 1.2, Th = 1.5 and Th = 2, the method provided by the present invention saves 66.9%, 57.2%, 46.5% and 34.3% of the coding time respectively, while the BD-Rate increases by 2.64%, 1.91%, 1.38% and 0.95% on average. For video sequences of different resolutions, the coding-time saving differs little under the same adaptive adjustment threshold Th; as the adaptive adjustment threshold Th increases, the coding-time saving decreases, and the increase in coding rate also decreases.
Meanwhile, the experimental comparison shows that when a coding application scenario requires a faster coding speed, the adaptive adjustment threshold Th can be set smaller; when the coding application scenario is more sensitive to the increase of the coding rate, the adaptive adjustment threshold Th can be set larger. In summary, within a range of coding-quality reduction acceptable to the human eye, the method provided by the present invention can effectively balance the saving of coding time against the increase of coding rate, and can significantly reduce the coding complexity in the process of depth partitioning of the CU.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above detailed description is provided for the rapid depth partitioning method for intra-frame CU of VVC provided by the present invention, and a specific example is applied in the present document to illustrate the principle and the implementation manner of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (2)

1. A method for fast depth partitioning of CUs in a VVC frame is characterized by comprising the following steps:
sequentially extracting the CUs to be coded, and computing the absolute difference between the original luminance pixel value Ori_{w×h}(i, j) and the predicted luminance pixel value Pre_{w×h}(i, j) of the target coding CU to obtain the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU;
wherein Dif_{w×h}(i, j) = |Ori_{w×h}(i, j) - Pre_{w×h}(i, j)|;
In the formula, w and h are the width and the height of the target coding CU respectively, and i and j are the abscissa and the ordinate of a pixel point of the target coding CU respectively;
judging whether the width and the height of the target coding CU are both larger than a preset threshold value;
if yes, acquiring a further division mode of the target coding CU;
if the further division mode of the target coding CU is H_BT, judging whether the target coding CU meets a first preset condition;
if yes, stopping further dividing the target coding CU;
if the further partition mode of the target coding CU is V_BT, judging whether the target coding CU meets a second preset condition;
if yes, stopping further dividing the target coding CU;
if the further division mode of the target coding CU is H_TT, judging whether the target coding CU meets a third preset condition;
if yes, stopping further dividing the target coding CU;
if the further partition mode of the target coding CU is V_TT, judging whether the target coding CU meets a fourth preset condition;
if yes, stopping further dividing the target coding CU;
after the step of obtaining the further partition mode of the target coding CU, the method further includes:
if the further partitioning mode of the target coding CU is QT, continuing to further partition the target coding CU;
after the step of determining whether the target CU satisfies the first preset condition, the method further includes:
if not, further H_BT is carried out on the target coding CU;
after the step of determining whether the target CU satisfies a second preset condition, the method further includes:
if not, further V_BT is carried out on the target coding CU;
after the determining whether the target coding CU satisfies a third preset condition, the method further includes:
if not, further H_TT is carried out on the target coding CU;
after the step of determining whether the target CU satisfies a fourth preset condition, the method further includes:
if not, performing further V_TT on the target coding CU;
wherein the expression of the first preset condition is as follows:
[equation image]
the expression of the second preset condition is as follows:
[equation image]
wherein Th is an adaptive adjustment threshold, and the remaining symbols in the two expressions are the variance values of the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU computed over the corresponding sub-blocks obtained by the horizontal two-equal division and the vertical two-equal division of the CU, wherein w and h are the width and the height of the target coding CU respectively, and i and j are the horizontal coordinate and the vertical coordinate of a pixel point of the target coding CU respectively;
the expression of the third preset condition is as follows:
[equation image]
the expression of the fourth preset condition is as follows:
[equation image]
wherein Th is the adaptive adjustment threshold, and the remaining symbols in the two expressions are the variance values of the luminance pixel difference Dif_{w×h}(i, j) of the target coding CU computed over the corresponding sub-blocks obtained by the horizontal four-equal division and the vertical four-equal division of the CU, wherein w and h are the width and the height of the target coding CU respectively, and i and j are the abscissa and the ordinate of a pixel point of the target coding CU respectively.
2. The method for rapidly dividing the depth of a CU within a VVC frame according to claim 1, wherein the preset threshold is 8 px.
CN202010053572.1A 2020-01-17 2020-01-17 Fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame Active CN111246219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010053572.1A CN111246219B (en) 2020-01-17 2020-01-17 Fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010053572.1A CN111246219B (en) 2020-01-17 2020-01-17 Fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame

Publications (2)

Publication Number Publication Date
CN111246219A CN111246219A (en) 2020-06-05
CN111246219B true CN111246219B (en) 2021-02-02

Family

ID=70872768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010053572.1A Active CN111246219B (en) 2020-01-17 2020-01-17 Fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame

Country Status (1)

Country Link
CN (1) CN111246219B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153382B (en) * 2020-09-21 2021-07-20 南华大学 Dynamic 3D point cloud compression rapid CU partitioning method and device and storage medium
CN112153381B (en) * 2020-09-21 2023-05-12 南华大学 Method, device and medium for rapidly dividing CU in dynamic 3D point cloud compression frame
CN112714314B (en) * 2020-12-28 2022-07-26 杭州电子科技大学 Multi-type tree structure block partition mode decision-making early termination method
CN113099216B (en) * 2021-03-26 2023-03-24 北京百度网讯科技有限公司 Coding complexity evaluation method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109688414B (en) * 2018-12-19 2022-11-11 同济大学 VVC intra-frame coding unit candidate prediction mode reduction and block division early termination method
CN110087087B (en) * 2019-04-09 2023-05-12 同济大学 VVC inter-frame coding unit prediction mode early decision and block division early termination method

Also Published As

Publication number Publication date
CN111246219A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111246219B (en) Fast partitioning method for CU (Coding Unit) depth in a VVC (Versatile Video Coding) intra frame
CN110087087B (en) VVC inter-frame coding unit prediction mode early decision and block division early termination method
TWI625053B (en) Method of encoding video data in merge mode
US10264254B2 (en) Image coding and decoding method, image data processing method, and devices thereof
CN108322747B (en) Coding unit division optimization method for ultra-high definition video
WO2020207162A1 (en) Intra-frame prediction coding method and apparatus, electronic device and computer storage medium
WO2015180428A1 (en) Video coding method and video coding device for intra-frame prediction coding
KR101516347B1 (en) Method and Apparatus of Intra Coding for HEVC
JP2006014342A5 (en)
KR102271765B1 (en) Intraframe prediction method and apparatus, video encoding apparatus and storage medium
WO2020248715A1 (en) Coding management method and apparatus based on high efficiency video coding
CN111372079B (en) VVC inter-frame CU deep rapid dividing method
KR20230162989A (en) Multimedia data processing methods, apparatus, devices, computer-readable storage media, and computer program products
CN108881905B (en) Probability-based intra-frame encoder optimization method
JP2005348008A (en) Moving picture coding method, moving picture coder, moving picture coding program and computer-readable recording medium with record of the program
CN112153382B (en) Dynamic 3D point cloud compression rapid CU partitioning method and device and storage medium
CN114827606A (en) Quick decision-making method for coding unit division
CN109618152B (en) Depth division coding method and device and electronic equipment
CN112153381A (en) Method, device and medium for rapidly dividing CU (Central Unit) in dynamic 3D point cloud compression frame
CN108012150B (en) Video interframe coding method and device
CN111064968B (en) VVC inter-frame CU deep rapid dividing method
CN113055670B (en) HEVC/H.265-based video coding method and system
TWI826792B (en) Image enhancement methods and apparatuses
KR102232047B1 (en) Device and method for deciding hevc intra prediction mode
CN110958443B (en) Fast encoding method for 360-degree video interframes

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220525

Address after: 518100 Zone D and E, floor 7, building 3, Tingwei Industrial Park, No. 6, Liufang Road, Xin'an street, Bao'an District, Shenzhen, Guangdong

Patentee after: SHENZHEN AIXIESHENG TECHNOLOGY Co.,Ltd.

Address before: 421001 Hunan Province, Hengyang Zhengxiang District Road No. 28 Changsheng

Patentee before: University OF SOUTH CHINA

CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 518100 Zone D and E, floor 7, building 3, Tingwei Industrial Park, No. 6, Liufang Road, Xin'an street, Bao'an District, Shenzhen, Guangdong

Patentee after: Shenzhen Aixiesheng Technology Co.,Ltd.

Address before: 518100 Zone D and E, floor 7, building 3, Tingwei Industrial Park, No. 6, Liufang Road, Xin'an street, Bao'an District, Shenzhen, Guangdong

Patentee before: SHENZHEN AIXIESHENG TECHNOLOGY Co.,Ltd.