CN112055222B - Video encoding and decoding method, electronic device and computer readable storage medium - Google Patents

Video encoding and decoding method, electronic device and computer readable storage medium

Info

Publication number: CN112055222B (granted); earlier publication CN112055222A
Application number: CN202010852602.5A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: 张政腾, 林聚财, 方瑞东, 江东, 陈瑶, 粘春湄
Assignee (original and current): Zhejiang Dahua Technology Co Ltd
Priority application: CN202010852602.5A

Classifications

    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 — characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/513 — using predictive coding involving temporal prediction (motion estimation or motion compensation); processing of motion vectors
    • H04N19/61 — using transform coding in combination with predictive coding


Abstract

The present application discloses a video encoding and decoding method, an electronic device, and a computer readable storage medium. The video encoding and decoding method comprises the following steps: acquiring a video to be encoded and decoded, and parsing preset syntax elements in the video, wherein the preset syntax elements comprise a preset size and a preset encoding and decoding end technology; acquiring the size of the current coding unit of the video; judging whether the size of the current coding unit is smaller than or equal to the preset size; and if so, adopting the preset encoding and decoding end technology to encode and decode the video. In this way, the flexibility of controlling the encoding and decoding end technologies can be improved, and the operational complexity can be reduced.

Description

Video encoding and decoding method, electronic device and computer readable storage medium
Technical Field
The present application relates to the field of video technologies, and in particular, to a video encoding and decoding method, an electronic device, and a computer readable storage medium.
Background
The data volume of video images is relatively large, so video pixel data usually needs to be compressed; the compressed data is called a video code stream, which is transmitted to a user terminal through a wired or wireless network and then decoded for viewing. Inter-frame prediction techniques such as decoder-side motion vector refinement (DMVR), bi-directional optical flow (BIO), and bi-directional gradient correction (BGC) can use the temporal correlation between image frames to compress images, so as to improve the transmission efficiency of video data.
The inventors of the present application have found in long-term research and development that, in the prior art, the DMVR, BIO, and BGC techniques are performed at both the encoding end and the decoding end, so that the operational complexity at the decoding end is high, and decoding devices with low computing capability cannot meet the real-time decoding requirement.
Disclosure of Invention
The embodiments of the present application mainly solve the technical problem of how to improve the flexibility of encoding and decoding end technology control and reduce operational complexity.
In order to solve the technical problems, the application adopts a technical scheme that: there is provided a video encoding and decoding method including: acquiring a video to be encoded and decoded, and analyzing and acquiring preset syntax elements in the video, wherein the preset syntax elements comprise preset sizes and preset encoding and decoding end technologies; acquiring the size of a current coding unit of a video; judging whether the size of the current coding unit is smaller than or equal to a preset size; if yes, adopting a preset encoding and decoding end technology to encode and decode the video.
In order to solve the technical problems, the application adopts a technical scheme that: there is provided an electronic apparatus including: the acquisition module is used for acquiring the video to be encoded and decoded and preset syntax elements in the video; the analysis module is coupled with the acquisition module and is used for analyzing a preset syntax element, wherein the preset syntax element comprises a preset size and a preset encoding and decoding end technology; the acquisition module is further used for acquiring the size of the current coding unit of the video; the judging module is coupled with the analyzing module and is used for judging whether the size of the current coding unit is smaller than or equal to a preset size; the processing module is coupled with the judging module and is used for processing the video by adopting a preset encoding and decoding end technology when the judging module judges that the size of the current encoding unit is smaller than or equal to the preset size.
In order to solve the technical problems, the application adopts a technical scheme that: an electronic device is provided, which includes a processor and a memory coupled to the processor, where the processor is configured to execute program instructions stored in the memory to implement the video encoding and decoding method.
In order to solve the technical problems, the application adopts a technical scheme that: there is provided a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the video encoding and decoding method described above.
The beneficial effects of the application are as follows. Unlike the prior art, the video encoding and decoding method provided by the embodiments of the application comprises: acquiring a video to be encoded and decoded, and parsing preset syntax elements in the video, wherein the preset syntax elements comprise a preset size and a preset encoding and decoding end technology; acquiring the size of the current coding unit of the video; judging whether the size of the current coding unit is smaller than or equal to the preset size; and if so, adopting the preset encoding and decoding end technology to encode and decode the video. In this way, the embodiments set the preset size and the preset encoding and decoding end technology in the form of preset syntax elements, and compare the size of the current coding unit of the video with the preset size before applying the preset encoding and decoding end technology; only when the size of the current coding unit is smaller than or equal to the preset size is the preset encoding and decoding end technology used to encode and decode the video, thereby reducing the operational complexity of the encoding and decoding end. Moreover, the preset size and the preset encoding and decoding end technology can be set according to users' requirements on the video or the performance of the electronic device, so the flexibility of encoding and decoding end technology control can be improved, and the encoding and decoding efficiency can be effectively increased.
Drawings
FIG. 1 is a flow chart of a first embodiment of the video encoding and decoding method of the present application;
FIG. 2 is a schematic diagram of a prediction sub-block in the embodiment of FIG. 1;
FIG. 3 is another schematic diagram of a prediction sub-block in the embodiment of FIG. 1;
FIG. 4 is a schematic diagram of the BGC correction process in the embodiment of FIG. 1;
FIG. 5 is a flow chart of a second embodiment of the video encoding and decoding method of the present application;
FIG. 6 is a flow chart of a third embodiment of the video encoding and decoding method of the present application;
FIG. 7 is a flow chart of a fourth embodiment of the video encoding and decoding method of the present application;
FIG. 8 is a flow chart of a fifth embodiment of the video encoding and decoding method of the present application;
FIG. 9 is a schematic diagram of an embodiment of an electronic device of the present application;
FIG. 10 is a schematic diagram illustrating the structure of one embodiment of a computer-readable storage medium of the present application;
Fig. 11 is a schematic structural view of an embodiment of the electronic device of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to fall within the scope of the present application.
The present application first proposes a video encoding and decoding method, as shown in fig. 1, fig. 1 is a flowchart of a first embodiment of the video encoding and decoding method of the present application. The video encoding and decoding method of the embodiment comprises the following steps:
Step S101: and acquiring the video to be encoded and decoded, and analyzing and acquiring preset syntax elements in the video, wherein the syntax elements comprise preset sizes and preset encoding and decoding end technologies.
Syntax elements are the basic units of data in a video code stream: each syntax element consists of several bits and represents a specific physical meaning, and the code stream is composed of successive syntax elements.
The preset syntax element in this embodiment includes a preset size of a coding unit in the video and a preset codec technology.
Optionally, the preset codec technology at least includes: DMVR technology, BIO technology, or BGC technology.
Video consists of continuous image frames, which are divided into three types, I frames, P frames and B frames. I frames are intra-coded frames, and P frames and B frames are inter-coded frames. In the prediction stage of video, an I frame needs to be subjected to intra-frame prediction, that is, an image is compressed by using spatial correlation in an image frame, and a P frame and a B frame need to be subjected to inter-frame prediction, that is, an image is compressed by using temporal correlation between image frames.
The DMVR, BIO, and BGC techniques are inter-frame prediction techniques, by which motion vector (MV) information of the current image frame can be acquired, so that the motion information of the current image frame is predicted using the motion information of the temporally previous and subsequent image frames.
In the conventional Merge mode, DMVR is used in motion compensation of bi-predictive blocks; the width and height of the block (coding unit) must both be at least 8, the product of width and height must be at least 128, and the bi-directional frame weights must be the same. DMVR further corrects the best bi-directional MV: the corresponding reference block is found in the reference frame using the MV, and then the prediction block with the smallest cost (the difference SAD between the forward and backward reference blocks) is searched for near the reference block, thereby obtaining one best deltaMV to correct the value of the best MV. The method comprises the following steps:
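These eligibility conditions can be sketched as a small check. This is a hypothetical helper, not code from any standard; the names and the way the bi-directional weights are represented are illustrative assumptions:

```python
def dmvr_allowed(width, height, weight_fwd, weight_bwd):
    """Sketch of the Merge-mode DMVR gating conditions described above."""
    return (width >= 8 and height >= 8      # both sides at least 8
            and width * height >= 128       # block area at least 128
            and weight_fwd == weight_bwd)   # equal bi-directional weights

print(dmvr_allowed(8, 16, 1, 1))   # True: 8x16 has 128 samples
print(dmvr_allowed(8, 8, 1, 1))    # False: only 64 samples
```

A coding unit failing any one of the three conditions would simply skip the DMVR refinement and use its unrefined MVs.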
[a] First, the prediction blocks at the forward and backward integer-pixel positions are respectively acquired using the forward and backward MVs.
[b] Then, the forward and backward prediction values of the current block on the reference frames are respectively acquired using the forward and backward MVs; the corresponding forward and backward prediction blocks are pre1 and pre2.
[c] For each M×N sub-block in the current block, the difference SAD between the forward and backward reference blocks is calculated for 25 sets of forward and backward prediction values. By comparing all the SADs, the pair of prediction values with the smallest SAD is selected, and the offset at that point is the integer-pixel deltaMV of the current sub-block; each sub-block has its own deltaMV.
Specifically, the 25 sets of forward and backward prediction values are obtained from the forward and backward prediction blocks pre1 and pre2 in [b], where pre1 and pre2 are also divided into M×N sub-blocks corresponding to the current block. Each forward prediction value of a sub-block is obtained by taking the vertex A of the prediction sub-block in pre1 as the search starting point and traversing deltaMV within a 5×5 pixel area centered on A; the search order is a raster scan from the top-left corner of the 5×5 area, left to right and top to bottom. For each search point, a block of the same size as the sub-block is formed with that point as its vertex, and the corresponding pixel values are taken for the subsequent SAD calculation. Meanwhile, the backward prediction value is obtained by taking the vertex B of the prediction sub-block in pre2 as the starting point and traversing deltaMV within a 5×5 pixel area centered on B; the search order is a raster scan from the bottom-right corner of the 5×5 area, right to left and bottom to top (i.e., exactly opposite to the order in pre1). Again, for each search point, a block of the same size as the sub-block is formed with that point as its vertex, and the corresponding pixel values are taken as the prediction value for the subsequent SAD calculation. In summary, each set of forward and backward MVs is adjusted with the same deltaMV, but in opposite directions. Taking a prediction sub-block in pre1 as an example, the search area is shown in fig. 2: the solid-line box represents the current prediction sub-block in pre1, and the dashed-line box represents the region from which the prediction value is obtained when the search traversal reaches point C.
Further, to reduce the search complexity, an ET (early termination) algorithm is used in the integer-pixel search stage instead of a full search over the 25 positions. Specifically, as shown in fig. 3, a first round of search is performed: the SADs of 5 points (the Center and P1 to P4) are compared; if the SAD of the Center position is the smallest, the integer-pixel offset search stage is terminated; otherwise, the next step is performed. A fifth position P5 is then checked according to the minimum-SAD position point (among P1 to P4) obtained in the previous step, and the position with the smallest SAD among P1 to P5 is taken as the new center point for a second round of search; the second round proceeds in the same way as the first, and the SADs already calculated in the first round can be reused.
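One round of this early-termination search can be sketched as follows. This is a rough illustration rather than the normative algorithm: `cost` stands in for the SAD of a candidate offset, and the diagonal P5 refinement and SAD reuse are omitted:

```python
def et_round(cost, center):
    """One early-termination round: compare the center against its 4 cross
    neighbours; terminate when the center already has the smallest cost."""
    cx, cy = center
    candidates = [(cx, cy), (cx - 1, cy), (cx + 1, cy), (cx, cy - 1), (cx, cy + 1)]
    best = min(candidates, key=lambda p: cost(*p))
    return best, best == (cx, cy)   # (best offset, terminated?)

# Toy SAD surface whose minimum lies to the right of the center:
sad = {(0, 0): 40, (-1, 0): 50, (1, 0): 10, (0, -1): 45, (0, 1): 60}
best, done = et_round(lambda x, y: sad.get((x, y), 99), (0, 0))
print(best, done)   # (1, 0) False -> a second round around (1, 0) follows
```

When `done` is False, the real algorithm continues with the P5 check and a second round centered on the new best point.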
[d] A sub-pixel motion search process is then carried out, in which the sub-pixel deltaMV is calculated from the SAD values of the integer pixel points, following a parametric error-surface fit:

deltaMV_x = (SAD(-1,0) − SAD(1,0)) / (2 × (SAD(-1,0) + SAD(1,0) − 2 × SAD(0,0)))
deltaMV_y = (SAD(0,-1) − SAD(0,1)) / (2 × (SAD(0,-1) + SAD(0,1) − 2 × SAD(0,0)))

where SAD(dx,dy) denotes the cost at integer offset (dx,dy) around the best integer position.
[e] The corrected MV is obtained from the optimal forward/backward deltaMV (integer-pixel deltaMV + sub-pixel deltaMV) for each sub-block:
MV0=MV0+deltaMV
MV1=MV1-deltaMV
The forward and backward MVs then respectively perform unidirectional motion compensation for each sub-block, and the prediction value of each sub-block is obtained directly as the average of the forward and backward prediction values, thereby obtaining the optimal prediction value of the current block.
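Step [e] and the final averaging can be sketched in a few lines. The names are illustrative, and integer per-sample averaging is an assumption of this sketch:

```python
def refine_mvs(mv0, mv1, delta):
    """Apply the same deltaMV in opposite directions: MV0 + d, MV1 - d."""
    refined0 = (mv0[0] + delta[0], mv0[1] + delta[1])
    refined1 = (mv1[0] - delta[0], mv1[1] - delta[1])
    return refined0, refined1

def bi_average(pred_fwd, pred_bwd):
    """Per-sample average of the forward and backward predictions."""
    return [(a + b) // 2 for a, b in zip(pred_fwd, pred_bwd)]

mv0, mv1 = refine_mvs((4, -2), (-3, 1), (1, 1))
print(mv0, mv1)                            # (5, -1) (-4, 0)
print(bi_average([100, 102], [104, 98]))   # [102, 100]
```

The opposite signs mirror the mirrored search directions in pre1 and pre2 described above.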
In another implementation, the DMVR technique under the AVS3 standard may be used instead of the Merge-mode DMVR described above. DMVR under AVS3 is substantially the same as the DMVR process described above, except that it performs the integer-pixel search differently: the SAD cost of only 21 offset points is computed, the 4 corner positions (top-left, top-right, bottom-left, and bottom-right) of the 5×5 grid (the 25 sets of offset points) having been removed.
In the AVS3 video coding standard, the BIO technical process is as follows:
[a] First, normal unidirectional motion compensation is performed using the forward MV and the backward MV, respectively, obtaining the forward prediction value I(0) and the backward prediction value I(1).
[b] Then the gradient values corresponding to the forward and backward directions are respectively obtained. For each pixel, the gradients of the image in the x and y (horizontal and vertical) directions, ∂I(k)/∂x and ∂I(k)/∂y, are calculated from the differences of neighbouring prediction samples:

∂I(k)/∂x(i,j) = (I(k)(i+1,j) − I(k)(i−1,j)) >> shift
∂I(k)/∂y(i,j) = (I(k)(i,j+1) − I(k)(i,j−1)) >> shift

where shift is a fixed precision constant and k is 0 (forward) or 1 (backward).
[c] According to the obtained gradient values and the difference between the forward and backward prediction values, 5 variables are obtained for each 4×4 sub-block in the current block: S1, S2, S3, S5, S6. Using these variables, a motion displacement (vx, vy) (a vector field) is obtained for each 4×4 block; the mathematical relationship of the motion displacement is as follows:
where th_BIO and r are fixed parameter thresholds in the formula. The calculation factors S1 to S6 of (vx, vy) are the autocorrelations and cross-correlations of the calculated gradient directions:
where

θ(i,j) = I(1)(i,j) − I(0)(i,j)

and, writing ψx = ∂I(1)/∂x + ∂I(0)/∂x and ψy = ∂I(1)/∂y + ∂I(0)/∂y, the factors are S1 = Σ ψx·ψx, S2 = Σ ψx·ψy, S3 = Σ θ·ψx, S5 = Σ ψy·ψy, and S6 = Σ θ·ψy, the sums running over the samples of each 4×4 block.
[D] final way of BIO predictor adjustment:
predBIO(x,y)=(I(0)(x,y)+I(1)(x,y)+b+1)>>1
where I(0)(x,y) and I(1)(x,y) are the prediction values at the corresponding positions of the two reference blocks in the forward and backward directions, and b is the correction value of the bi-directional optical flow adjustment. All bit widths in the BIO calculation are limited to within 32 bits.
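The final combination predBIO(x,y) = (I(0)(x,y) + I(1)(x,y) + b + 1) >> 1 can be sketched per sample. In this illustrative sketch the correction term b is passed in directly rather than derived from the gradients:

```python
def bio_predict(i0, i1, b):
    """Combine forward/backward predictions with an optical-flow correction b,
    with +1 for rounding before the right shift by one (divide by two)."""
    return [(p0 + p1 + corr + 1) >> 1 for p0, p1, corr in zip(i0, i1, b)]

print(bio_predict([100, 120], [104, 118], [2, -3]))   # [103, 118]
```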
In the BGC technique, let the unidirectional luma (Y component) prediction values obtained in the two directions be pred0 and pred1, and let the bi-directional prediction value before correction be predBI; predBI is the average of pred0 and pred1, or, if BIO is on, predBI is the current block prediction value predBIO after BIO correction.
Syntax elements for BGC: bgc_flag is a binary variable; bgc_flag equal to 0 indicates that no gradient correction is performed, and bgc_flag equal to 1 indicates that gradient correction is performed. bgc_idx is a binary variable: when bgc_idx is 0, Pred = predBI + ((pred1 − pred0) >> k); when bgc_idx is 1, Pred = predBI + ((pred0 − pred1) >> k), where k represents the correction strength and Pred is the corrected prediction value:
where k is set to a fixed value of 3. In fig. 4, the V1 formula corresponds to bgc_flag = 1, bgc_idx = 0; the V2 formula corresponds to bgc_flag = 0; and the V3 formula corresponds to bgc_flag = 1, bgc_idx = 1.
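A per-sample sketch of the BGC correction with k = 3, following the bgc_flag / bgc_idx semantics above. The rounded average used here for predBI is an assumption of this sketch:

```python
K = 3  # fixed correction strength from the text

def bgc_correct(pred0, pred1, bgc_flag, bgc_idx):
    pred_bi = (pred0 + pred1 + 1) >> 1   # bi-prediction average (assumed rounding)
    if bgc_flag == 0:
        return pred_bi                   # no gradient correction
    if bgc_idx == 0:
        return pred_bi + ((pred1 - pred0) >> K)
    return pred_bi + ((pred0 - pred1) >> K)

print(bgc_correct(96, 128, 0, 0))   # 112: plain average
print(bgc_correct(96, 128, 1, 0))   # 116: 112 + (32 >> 3)
```

The two bgc_idx branches differ only in the sign of the gradient term, matching the V1 and V3 formulas of fig. 4.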
At present, BGC is only performed on the luma Y component. When the prediction mode is the bi-directional inter mode of advanced motion vector prediction (AMVP), Affine, or symmetric MVD mode (SMVD), BGC adjustment is performed after the motion compensation of these three prediction modes; for AMVP and SMVD, the BIO process is performed after motion compensation, and BGC is performed afterwards.
In an application scenario, the syntax element in the embodiment of the present application may take the form of log2_max_xxxx_size_minus4, where "xxxx" represents a preset codec technology, such as DMVR technology, BIO technology, BGC technology, and so on.
For example, the syntax element log2_max_dmvr_size_minus4 is an unsigned integer with a value ranging from 0 to 3; it represents the maximum coding unit size for which the preset codec-end technique DMVR is allowed to be used, i.e., the preset size is 2^(log2_max_dmvr_size_minus4+4).
For example, the syntax element log2_max_bio_size_minus4 is an unsigned integer with a value ranging from 0 to 3; it represents the maximum coding unit size for which the preset codec-end technique BIO is allowed to be used, i.e., the preset size is 2^(log2_max_bio_size_minus4+4).
For example, the syntax element log2_max_bgc_size_minus4 is an unsigned integer with a value ranging from 0 to 3; it represents the maximum coding unit size for which the preset codec-end technique BGC is allowed to be used, i.e., the preset size is 2^(log2_max_bgc_size_minus4+4).
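All three syntax elements follow the same pattern: a 2-bit value v in [0, 3] encodes a maximum coding-unit size of 2^(v+4). A minimal sketch of the decoding:

```python
def preset_size(log2_max_size_minus4):
    """Decode a log2_max_xxxx_size_minus4 value into the preset size."""
    if not 0 <= log2_max_size_minus4 <= 3:
        raise ValueError("syntax element value must be in [0, 3]")
    return 1 << (log2_max_size_minus4 + 4)   # 2 ** (v + 4)

print([preset_size(v) for v in range(4)])   # [16, 32, 64, 128]
```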
Step S102: the size of the current coding unit of the video is obtained.
The coding unit is a prediction block of the video described above.
Step S103: and judging whether the size of the current coding unit is smaller than or equal to a preset size.
Step S104: if yes, adopting a preset encoding and decoding end technology to encode and decode the video.
Further, if the size of the current coding unit is larger than the preset size, the video does not need to be coded and decoded by adopting a preset coding and decoding end technology.
When a related codec-end technique is to be performed on the current coding unit, the size restriction condition on the current coding unit, e.g. 2^(log2_max_xxxx_size_minus4+4), is parsed from the syntax elements; the current coding unit then decides, according to this restriction condition, whether to skip the current preset encoding and decoding end technology process, so that the complexity of the encoding and decoding end is controlled and reduced.
The embodiment of the application sets the preset size and the preset encoding and decoding end technology in the form of preset syntax elements, and compares the size of the current coding unit of the video with the preset size before applying the preset encoding and decoding end technology; only when the size of the current coding unit is smaller than or equal to the preset size is the preset encoding and decoding end technology adopted to encode and decode the video, which can reduce the operational complexity of the encoding and decoding end. Moreover, the preset size and the preset encoding and decoding end technology can be set according to users' requirements on the video or the performance of the electronic device, so the flexibility of encoding and decoding end technology control can be improved, and the encoding and decoding efficiency can be effectively increased.
The present application further proposes a video encoding and decoding method of the second embodiment, as shown in fig. 5, the video encoding and decoding method of the present embodiment includes the following steps:
step S201: and acquiring the video to be encoded and decoded, and analyzing and acquiring preset syntax elements in the video, wherein the syntax elements comprise preset sizes and preset encoding and decoding end technologies.
Step S202: acquiring the enabling data of the preset encoding and decoding end technology.
The enabling data is used to indicate whether the corresponding preset encoding and decoding end technology is used, and it can be set in the syntax semantics through configuration file information.
Step S203: if the enabling data is 1, the size of the current coding unit of the video is obtained.
If the enabling data is 0, the size of the current coding unit of the video does not need to be acquired, nor do step S204 and step S205 need to be performed.
For example, as shown in Table 1-1, when the enabling data dmvr_enable_flag is 1, the size of the current coding unit of the video is acquired; when dmvr_enable_flag is 0, the size of the current coding unit does not need to be acquired, and the subsequent steps are not performed.
TABLE 1-1

Sequence header definition              Descriptor
sequence_header(){
  if(profile_id==0x32){
    dmvr_enable_flag                    u(1)
For example, as shown in Table 1-2, when the enabling data bio_enable_flag is 1, the size of the current coding unit of the video is acquired; when bio_enable_flag is 0, the size of the current coding unit does not need to be acquired, and the subsequent steps are not performed.
TABLE 1-2

Sequence header definition              Descriptor
sequence_header(){
  if(profile_id==0x32){
    bio_enable_flag                     u(1)
Step S204: and judging whether the size of the current coding unit is smaller than or equal to a preset size.
Step S205: if yes, adopting a preset encoding and decoding end technology to encode and decode the video.
Further, if the size of the current coding unit is larger than the preset size, the video does not need to be coded and decoded by adopting a preset coding and decoding end technology.
Step S204 and step S205 are similar to step S103 and step S104 above, and are not described here.
On the basis of the above embodiment, this embodiment sets enabling data for the preset codec-end technology, and the subsequent encoding and decoding steps are performed only when the enabling data is 1, so the operational complexity of the codec can be further reduced; the enabling data can be set according to user requirements and the performance of the encoding and decoding device, so the flexibility of encoding and decoding end technology control can be further improved.
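The control flow of this embodiment can be sketched as a simple gate. The names are illustrative; `cu_size` stands for the size compared in step S204:

```python
def should_apply(enable_flag, cu_size, preset_size):
    """Apply the preset codec-end technique only when enabled and small enough."""
    if enable_flag != 1:             # disabled: skip without even fetching sizes
        return False
    return cu_size <= preset_size    # otherwise apply the size gate of step S204

print(should_apply(0, 8, 16))    # False: disabled by the flag
print(should_apply(1, 8, 16))    # True:  enabled and within the preset size
print(should_apply(1, 32, 16))   # False: enabled but the CU is too large
```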
The present application further proposes a video encoding and decoding method of a third embodiment, as shown in fig. 6, the video encoding and decoding method of the present embodiment includes the following steps:
Step S301: configuration information of the video is defined to add syntax elements in the syntax definition of the video.
In an application scenario, under the AVS3 video coding standard, the syntax elements of log2_max_xxxx_size_minus4 are set by defining the configuration information of the sequence header, as shown in tables 2-1, 2-2, and 2-3.
TABLE 2-1

Sequence header definition              Descriptor
sequence_header(){
  if(profile_id==0x32){
    dmvr_enable_flag                    u(1)
    if(dmvr_enable_flag){
      log2_max_dmvr_size_minus4         u(2)
    }
TABLE 2-2

Sequence header definition              Descriptor
sequence_header(){
  if(profile_id==0x32){
    bgc_enable_flag                     u(1)
    if(bgc_enable_flag){
      log2_max_bgc_size_minus4          u(2)
    }
The syntax definition in the present application may further include any of a sequence header definition, a picture header definition, a slice definition, or a coding tree unit definition. The syntax elements of the above embodiments are defined in the sequence header of the video and can control the codec complexity of an entire sequence. Of course, in other embodiments, the syntax definition may be another of the definitions above, or a combination of them, so that syntax elements are defined in the picture header, slice, coding tree unit, or coding unit, to control whether each coding unit uses the preset codec-end technology at the picture level, slice level, coding-tree-unit level, or coding-unit level, respectively.
TABLE 2-3

Sequence header definition              Descriptor
sequence_header(){
  if(profile_id==0x32){
    bio_enable_flag                     u(1)
    if(bio_enable_flag){
      log2_max_bio_size_minus4          u(2)
    }
Step S302: and acquiring the video to be encoded and decoded, and analyzing and acquiring preset syntax elements in the video, wherein the syntax elements comprise preset sizes and preset encoding and decoding end technologies.
Step S303: the size of the current coding unit of the video is obtained.
Step S304: and judging whether the size of the current coding unit is smaller than or equal to a preset size.
Step S305: if yes, adopting a preset encoding and decoding end technology to encode and decode the video.
Step S302 to step S305 of the present embodiment are similar to step S101 to step S104 of the above embodiment, and are not repeated here.
Further, if the size of the current coding unit is larger than the preset size, the video does not need to be coded and decoded by adopting a preset coding and decoding end technology.
Specifically, when a related codec-end technique is to be performed on the current coding unit, the syntax elements of the video are acquired and parsed to obtain the size constraint condition on the current coding unit, e.g. 2^(log2_max_xxxx_size_minus4+4); the current coding unit then decides, according to the constraint condition, whether to skip the preset codec-end technique process indicated by the syntax element, so that the complexity of the codec is controlled and reduced.
For example, if the sequence header defines log2_max_xxxx_size_minus4 as 0, the related codec-end technique process is performed only when both the width and the height of the current coding unit are at most 16; otherwise, the current coding unit skips this process at the codec end.
If the sequence header defines log2_max_xxxx_size_minus4 as 1, the related codec-end technique process is performed only when both the width and the height of the current coding unit are at most 32; otherwise, the current coding unit skips this process at the codec end.
If the sequence header defines log2_max_xxxx_size_minus4 as 2, the related codec-end technique process is performed only when both the width and the height of the current coding unit are at most 64; otherwise, the current coding unit skips this process at the codec end.
If the sequence header defines log2_max_xxxx_size_minus4 as 3, the related codec-end technique process is performed only when both the width and the height of the current coding unit are at most 128; otherwise, the current coding unit skips this process at the codec end.
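The four cases above reduce to a single rule: the coding unit runs the technique only when both its width and height are within 2^(log2_max_xxxx_size_minus4+4). A minimal sketch:

```python
def run_technique(width, height, log2_max_size_minus4):
    """True when the CU is within the limit encoded by the syntax element."""
    limit = 1 << (log2_max_size_minus4 + 4)
    return width <= limit and height <= limit

print(run_technique(16, 16, 0))    # True:  limit 16
print(run_technique(32, 16, 0))    # False: limit 16, width 32
print(run_technique(128, 128, 3))  # True:  limit 128
```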
The present application further proposes a video encoding and decoding method of a fourth embodiment, as shown in fig. 7, the video encoding and decoding method of the present embodiment includes the following steps:
Step S401: acquiring the video to be encoded and decoded, and parsing the video to obtain preset syntax elements, where the preset syntax elements include a preset size and a preset encoding and decoding end technology.
Step S402: the size of the current coding unit of the video is obtained.
Step S403: judging whether the size of the current coding unit is smaller than or equal to the preset size. If yes, step S404 is executed; if no, step S405 is executed directly.
Step S404: performing frame prediction on the video by adopting the preset encoding and decoding end technology, so as to acquire first video data.
If the size of the current coding unit is greater than the preset size, the video does not need to undergo the above frame prediction, and step S405 is performed directly.
The preset encoding and decoding end technology in this embodiment is an inter-frame prediction technology, such as the DMVR technology, the BIO technology, or the BGC technology. Of course, intra prediction and the like are also required in the frame prediction stage.
In other embodiments, the preset codec technology may also be an intra-frame prediction technology or other codec technologies.
Step S405: sequentially performing transform, quantization and entropy coding on the first video data to obtain a code stream of the video.
In this embodiment, the video is subjected to frame prediction, transform, quantization and entropy coding in sequence through the above method, so that the video can be encoded into a code stream, which facilitates channel transmission.
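The encoding flow of steps S401 to S405 can be sketched as below. All helper callables (`predict_with_preset_tool`, `predict_plain`, `transform`, `quantize`, `entropy_code`) are hypothetical stand-ins for codec-specific stages and are passed in rather than implemented here.

```python
def encode_video(coding_units, preset_size,
                 predict_with_preset_tool, predict_plain,
                 transform, quantize, entropy_code):
    """coding_units: iterable of (width, height, samples) tuples."""
    bitstream = []
    for width, height, samples in coding_units:
        # Step S403: compare the current coding unit against the preset size.
        if width <= preset_size and height <= preset_size:
            first_video_data = predict_with_preset_tool(samples)  # step S404
        else:
            first_video_data = predict_plain(samples)  # preset tool skipped
        # Step S405: transform -> quantization -> entropy coding.
        bitstream.append(entropy_code(quantize(transform(first_video_data))))
    return bitstream
```

Only coding units within the preset size pass through the preset prediction tool; all units still go through transform, quantization, and entropy coding to produce the code stream.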
The present application further proposes a video encoding and decoding method of a fifth embodiment, as shown in fig. 8, the video encoding and decoding method of the present embodiment includes the following steps:
Step S501: acquiring the video to be encoded and decoded, and parsing the video to obtain preset syntax elements, where the preset syntax elements include a preset size and a preset encoding and decoding end technology.
Step S502: the size of the current coding unit of the video is obtained.
Step S503: judging whether the size of the current coding unit is smaller than or equal to the preset size. If yes, step S504 is executed; if no, step S505 is executed directly.
Step S504: performing frame prediction on the video by adopting the preset encoding and decoding end technology, so as to acquire first video data.
Step S505: sequentially performing transform, quantization and entropy coding on the first video data to obtain a code stream of the video.
Steps S501 to S505 are similar to steps S401 to S405 described above, and are not repeated here.
Step S506: sequentially performing entropy decoding, inverse quantization and inverse transform on the code stream to obtain second video data.
Step S508: performing frame prediction on the second video data by adopting the preset encoding and decoding end technology, so as to obtain the decoded video.
On the basis of the above embodiment, this embodiment sequentially performs entropy decoding, inverse quantization, inverse transform and frame prediction on the code stream received from the channel, so as to convert the code stream into a video that is convenient for the user to use.
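The decoding side of steps S506 and S508 mirrors the encoder and can be sketched as follows. The helper callables are hypothetical stand-ins; a real decoder would additionally parse the syntax elements from the code stream to recover the preset size and tool selection.

```python
def decode_bitstream(bitstream, entropy_decode, dequantize,
                     inverse_transform, predict_with_preset_tool):
    """Reverse of the encoding pipeline: each encoded chunk is entropy
    decoded, dequantized, inverse transformed, then reconstructed via
    frame prediction with the preset encoding and decoding end technology."""
    decoded_video = []
    for chunk in bitstream:
        # Step S506: entropy decoding -> inverse quantization -> inverse transform.
        second_video_data = inverse_transform(dequantize(entropy_decode(chunk)))
        # Step S508: frame prediction to obtain the decoded coding unit.
        decoded_video.append(predict_with_preset_tool(second_video_data))
    return decoded_video
```

Each stage undoes its encoder counterpart in reverse order, which is why the pipeline is listed as entropy decoding first and prediction last.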
The above embodiments perform the processing of the described method on the encoding side, or on both the encoding side and the decoding side; of course, in other embodiments, the processing of the method may be performed only on the decoding side.
The application further provides an electronic device, as shown in fig. 9, and fig. 9 is a schematic structural diagram of an embodiment of the electronic device of the application. The electronic device 901 of the present embodiment includes a processor 902 and a memory 903 coupled to the processor 902, where the processor 902 is configured to execute program instructions stored in the memory 903 to implement the video encoding and decoding methods described above.
The electronic device 901 may be an encoder, a decoder, or a codec; a mobile terminal such as a notebook computer, a palmtop computer, a personal digital assistant, a portable media player, a navigation device, a wearable device, or a pedometer; or a fixed terminal such as a digital television, a desktop computer, or a server.
Compared with the prior art, this embodiment sets the preset size and the preset encoding and decoding end technology in the form of preset syntax elements, and compares the size of the current coding unit of the video with the preset size before encoding and decoding the video with the preset encoding and decoding end technology. Only when the size of the current coding unit is smaller than or equal to the preset size is the preset encoding and decoding end technology used to encode and decode the video, thereby reducing the operation complexity of the encoding and decoding end. Moreover, the preset size and the preset encoding and decoding end technology can be set according to the user's requirements on the video or the performance of the electronic device, which improves the flexibility of controlling the encoding and decoding end technology and effectively improves the encoding and decoding efficiency.
The application further provides an electronic device, as shown in fig. 11; fig. 11 is a schematic structural diagram of an embodiment of the electronic device of the application. The electronic device includes: an obtaining module 1102, configured to obtain a video to be encoded and decoded and preset syntax elements in the video; a parsing module 1103, coupled to the obtaining module 1102 and configured to parse the preset syntax elements in the video, where the preset syntax elements include a preset size and a preset encoding and decoding end technology; the obtaining module 1102 is further configured to obtain the size of the current coding unit of the video; a judging module 1104, coupled to the parsing module 1103 and configured to judge whether the size of the current coding unit is smaller than or equal to the preset size; and a processing module 1105, coupled to the judging module 1104 and configured to process the video with the preset encoding and decoding end technology when the judging module 1104 judges that the size of the current coding unit is smaller than or equal to the preset size.
The present application further proposes a computer readable storage medium, as shown in fig. 10, and fig. 10 is a schematic structural diagram of an embodiment of the computer readable storage medium of the present application. The computer-readable storage medium 1001 has stored thereon program instructions 1002, which when executed by a processor (not shown) implement the video codec method described above.
The computer-readable storage medium 1001 of this embodiment may be, but is not limited to, a USB disk, an SD card, a PD optical drive, a mobile hard disk, a high-capacity floppy drive, a flash memory, a multimedia memory card, a server, etc.
The video encoding and decoding method provided by the embodiments of the application includes the following steps: acquiring a video to be encoded and decoded, and parsing the video to obtain preset syntax elements, where the preset syntax elements include a preset size and a preset encoding and decoding end technology; acquiring the size of the current coding unit of the video; judging whether the size of the current coding unit is smaller than or equal to the preset size; and if yes, adopting the preset encoding and decoding end technology to encode and decode the video. In this way, the embodiments of the application set the preset size and the preset encoding and decoding end technology in the form of preset syntax elements, and compare the size of the current coding unit of the video with the preset size before encoding and decoding the video with the preset encoding and decoding end technology. Only when the size of the current coding unit is smaller than or equal to the preset size is the preset encoding and decoding end technology used to encode and decode the video, thereby reducing the operation complexity of the encoding and decoding end. Moreover, the preset size and the preset encoding and decoding end technology can be set according to the user's requirements on the video or the performance of the electronic device, which improves the flexibility of controlling the encoding and decoding end technology and effectively improves the encoding and decoding efficiency.
In addition, the functions described above, if implemented in the form of software functional units and sold or used as an independent product, may be stored in a storage medium readable by a mobile terminal. That is, the present application also provides a storage device storing program data that can be executed to implement the methods of the above embodiments; the storage device may be, for example, a USB disk, an optical disk, or a server. In other words, the present application may be embodied in the form of a software product that includes instructions for causing a smart terminal to perform all or part of the steps of the methods described in the various embodiments.
In the description of the present application, a description of the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Moreover, the scope of the preferred embodiments of the present application includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art to which the present application pertains.
Logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered listing of executable instructions for implementing logical functions, and can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (which can be a personal computer, server, network device, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application or directly or indirectly applied to other related technical fields are included in the scope of the present application.

Claims (10)

1. A video encoding and decoding method, comprising:
acquiring a video to be encoded and decoded, and analyzing and acquiring preset syntax elements in the video, wherein the preset syntax elements comprise preset sizes and preset encoding and decoding end technologies;
acquiring the size of a current coding unit of the video;
Judging whether the size of the current coding unit is smaller than or equal to the preset size;
if yes, adopting the preset encoding and decoding end technology to encode and decode the video;
The preset encoding and decoding end technology comprises DMVR technology, BIO technology and BGC technology.
2. The video encoding and decoding method according to claim 1, wherein after the step of parsing and acquiring preset syntax elements in the video and before the step of acquiring the size of the current coding unit of the video, the method further comprises:
Acquiring enabling data of the preset encoding and decoding end technology;
And if the enabling data is 1, executing the step of acquiring the size of the current coding unit of the video.
3. The video encoding and decoding method according to claim 1, wherein the preset size is 2^(A+4), where A is an unsigned integer ranging from 0 to 3.
4. The video encoding and decoding method according to claim 1, wherein the step of encoding and decoding the video using the preset encoding and decoding side technique includes:
Performing frame prediction on the video by adopting the preset encoding and decoding end technology to acquire first video data;
The video encoding and decoding method further comprises:
And sequentially performing transform, quantization and entropy coding on the first video data to obtain a code stream of the video.
5. The video coding method of claim 4, wherein the video coding method further comprises:
And if the size of the current coding unit is larger than the preset size, executing the steps of sequentially performing transform, quantization and entropy coding on the first video data to obtain the code stream of the video.
6. The video coding method of claim 4, wherein the video coding method further comprises:
Sequentially performing entropy decoding, inverse quantization and inverse transform on the code stream to obtain second video data;
The step of performing the encoding and decoding processing on the video by adopting the preset encoding and decoding end technology further comprises the following steps:
and carrying out frame prediction on the second video data by adopting the preset encoding and decoding end technology so as to obtain a decoded video.
7. The video coding method of claim 1, wherein the video coding method further comprises:
Defining configuration information of the video to add the syntax element in a syntax definition of the video;
wherein, the grammar definition at least comprises: any of a sequence header definition, an image header definition, a slice definition, or a coding tree unit definition.
8. An electronic device, the electronic device comprising:
The acquisition module is used for acquiring the video to be encoded and decoded and the preset syntax elements in the video;
The analysis module is coupled with the acquisition module and is used for analyzing preset syntax elements in the video, wherein the preset syntax elements comprise preset sizes and preset encoding and decoding end technologies;
the acquisition module is further used for acquiring the size of the current coding unit of the video;
the judging module is coupled with the analyzing module and is used for judging whether the size of the current coding unit is smaller than or equal to the preset size;
The processing module is coupled with the judging module and is used for processing the video by adopting the preset encoding and decoding end technology when the judging module judges that the size of the current encoding unit is smaller than or equal to the preset size; the preset encoding and decoding end technology comprises DMVR technology, BIO technology and BGC technology.
9. An electronic device comprising a processor and a memory coupled to the processor, the processor configured to execute program instructions stored in the memory to implement the video codec method of any one of claims 1-7.
10. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the video codec method of any one of claims 1 to 7.
CN202010852602.5A 2020-08-21 2020-08-21 Video encoding and decoding method, electronic device and computer readable storage medium Active CN112055222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010852602.5A CN112055222B (en) 2020-08-21 2020-08-21 Video encoding and decoding method, electronic device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112055222A CN112055222A (en) 2020-12-08
CN112055222B 2024-05-07

Family

ID=73599568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010852602.5A Active CN112055222B (en) 2020-08-21 2020-08-21 Video encoding and decoding method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112055222B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204567A (en) * 2016-07-05 2016-12-07 华南理工大学 A kind of natural background video matting method
WO2018169099A1 * 2017-03-13 2018-09-20 LG Electronics Inc. Method for processing inter prediction mode-based image and device therefor
WO2020103870A1 (en) * 2018-11-20 2020-05-28 Beijing Bytedance Network Technology Co., Ltd. Inter prediction with refinement in video processing
CN111294598A (en) * 2019-02-08 2020-06-16 北京达佳互联信息技术有限公司 Video coding and decoding method and device
JP2020096329A (en) * 2018-12-14 2020-06-18 シャープ株式会社 Prediction image generation device, moving image decoding device, and moving image encoding device
CN111436226A (en) * 2018-11-12 2020-07-21 北京字节跳动网络技术有限公司 Motion vector storage for inter prediction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kyohei Unno et al., "CE9: Block size restriction for DMVR (test 9.2.6)", JVET, 2019, full text. *
Benjamin Bross et al., "Versatile Video Coding (Draft 4)", JVET, pp. 29, 66. *
Wang Yang, "Research on Video Coding Technology Based on Deep Learning", CNKI, full text. *

Also Published As

Publication number Publication date
CN112055222A (en) 2020-12-08

Similar Documents

Publication Publication Date Title
US10142654B2 (en) Method for encoding/decoding video by oblong intra prediction
CN111971962B (en) Video encoding and decoding device and method
US7426308B2 (en) Intraframe and interframe interlace coding and decoding
US10542284B2 (en) Method and arrangement for video coding
US20200374514A1 (en) Video encoding and decoding method and device, computer device, and storage medium
US11206405B2 (en) Video encoding method and apparatus, video decoding method and apparatus, computer device, and storage medium
US11496732B2 (en) Video image encoding and decoding method, apparatus, and device
CN101313587B (en) Mode selection techniques for multimedia coding
CN102017615B (en) Boundary artifact correction within video units
US20070076795A1 (en) Method and apparatus for determining inter-mode in video encoding
US20110206113A1 (en) Data Compression for Video
CN102883159A (en) High precision edge prediction for intracoding
CN101946516A (en) The decision of macro block increment quantization parameter fast
US7822123B2 (en) Efficient repeat padding for hybrid video sequence with arbitrary video resolution
JP2003209848A (en) Apparatus of motion estimation and mode decision and method thereof
US11212536B2 (en) Negative region-of-interest video coding
US7839933B2 (en) Adaptive vertical macroblock alignment for mixed frame video sequences
US20070133689A1 (en) Low-cost motion estimation apparatus and method thereof
US20050089098A1 (en) Data processing apparatus and method and encoding device of same
JP2011015117A (en) Image coding apparatus, image coding method and video camera
CN108401185B (en) Reference frame selection method, video transcoding method, electronic device and storage medium
CN1457196A (en) Video encoding method based on prediction time and space domain conerent movement vectors
CN112055222B (en) Video encoding and decoding method, electronic device and computer readable storage medium
US20120163462A1 (en) Motion estimation apparatus and method using prediction algorithm between macroblocks
TW201338553A (en) Methods, systems, and computer program products for assessing a macroblock candidate for conversion to a skipped macroblock

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant