CN109874020B - Inseparable lifting wavelet transformation method with gradable quality and complexity - Google Patents

Inseparable lifting wavelet transformation method with gradable quality and complexity

Info

Publication number
CN109874020B
Authority
CN
China
Prior art keywords
block
pixel
coefficient value
current
prediction coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910042890.5A
Other languages
Chinese (zh)
Other versions
CN109874020A (en)
Inventor
宋传鸣
王相海
刘丹
葛明博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Normal University
Original Assignee
Liaoning Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Normal University filed Critical Liaoning Normal University
Priority to CN201910042890.5A priority Critical patent/CN109874020B/en
Publication of CN109874020A publication Critical patent/CN109874020A/en
Application granted granted Critical
Publication of CN109874020B publication Critical patent/CN109874020B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The invention discloses a quality- and complexity-scalable non-separable lifting wavelet transform method. First, the method processes the image signal as 2 × 2 blocks rather than as separate rows and columns, realizing a true 2D transform with better anisotropy, greater design freedom and better filtering properties than a separable 2D wavelet transform. Second, the wavelet transform is applied to the input image bit plane by bit plane, from the highest bit plane to the lowest, so it can be combined directly with embedded coding algorithms based on successive-approximation quantization, making quality-scalable coding of images and video easy to realize. Finally, the transform can be stopped at any bit plane according to the target bit-rate requirement, avoiding the transform of the lower pixel bit planes and achieving scalable computational complexity. Experimental results verify the quality- and complexity-scalable characteristics of the invention.

Description

Inseparable lifting wavelet transformation method with gradable quality and complexity
Technical Field
The invention relates to the field of scalable coding of images and videos, in particular to a non-separable lifting wavelet transformation method with anisotropy, high operation speed, and scalable quality and complexity.
Background
Scalable image and video coding algorithms adapt well to changes in network bandwidth, but this adaptability usually comes at the expense of coding complexity or coding efficiency. For example, the efficiency of fine-granularity scalable (FGS) coding is lower than that of non-scalable MPEG-4 coding, while Progressive FGS improves the coding efficiency of FGS to a certain extent at the cost of correspondingly higher computational complexity, which places higher demands on the processing capability of the decoding terminal. In fact, the application environment of multimedia communication is very complex, especially for video communication between heterogeneous terminals. For this reason, Part 7 of the MPEG-21 standard defines a set of description tools, Digital Item Adaptation (DIA), which describes user characteristics, network characteristics, terminal capabilities and so on in multimedia applications; this is the first time that a definition of terminal computing capability has been written into a multimedia application standard, indicating that terminal computing capability has become one of the important factors that must be considered in image and video communication applications. To meet this demand, low-complexity image and video coding has begun to receive considerable attention from researchers.
Although low-complexity coding algorithms enable multimedia communication on a specific platform, it is not practical to customize a coding scheme with the corresponding complexity for every application platform, so it is necessary to study coding algorithms with scalable computational complexity. "Computational complexity scalable" means that an image or video encoder can adaptively trade off between the available computational resources and the decoded reconstruction quality depending on the application environment. Such algorithms can be well applied to video communication among terminals with different computing capabilities over heterogeneous networks.
Orthogonal transforms (such as the discrete cosine transform and the wavelet transform) are an indispensable stage in image and video coding: they map pixel values in the spatial domain to coefficient values in the frequency domain, concentrating the spatial-domain energy while removing the correlation between adjacent pixels. However, because a large number of floating-point operations are involved, the orthogonal transform accounts for the largest share of computation in the entire encoding process apart from inter prediction. Therefore, designing a transform method with scalable complexity is of great significance for image and video coding with scalable computational complexity.
Richardson et al. proposed a discrete cosine transform (DCT) complexity management model with a feedback mechanism, which controls the complexity of the transform by adjusting the threshold of the coefficient truncation position during encoding; Zhang Dongming et al. designed a butterfly algorithm for simplified blocks by establishing a coefficient distribution model of three classes of simplified blocks, and thereby provided a complexity-scalable DCT method; Zhang Shufang et al. introduced the concept of high-frequency coefficient truncation and realized complexity scalability of the DCT by adjusting a complexity control parameter. Although the above DCT methods achieve complexity scalability to some extent, the inherent characteristics of the DCT make it difficult to achieve efficient quality-scalable and spatially scalable coding of images and video.
With the development of multi-scale analysis theory such as wavelets, it has been found that a multi-scale transform can decompose an image into sub-band signals with different spatial resolutions and different frequency and direction characteristics; its nonlinear approximation efficiency is superior to that of the DCT, and its multi-resolution property makes flexible scalable coding easier to realize than with the DCT. Wavelet transforms are therefore widely used in scalable image and video coding, for example in the EZW, SPIHT, SLCCA and EBCOT algorithms for scalable still-image coding and the VidWav and MC-EZBC algorithms for scalable video coding. However, little related research on complexity-scalable wavelet transforms has been reported, and reports on complexity-scalable 2D non-separable wavelet transforms are especially scarce.
Disclosure of Invention
The invention aims to solve the technical problems in the prior art and provides an inseparable lifting wavelet transformation method which has anisotropy, high operation speed, and gradable quality and complexity.
The technical solution of the invention is as follows: a quality and complexity scalable inseparable lifting wavelet forward transform method is characterized by comprising the following steps:
step 1, inputting an image I to be processed, and setting the height of the image I as h pixels and the width as w pixels;
step 2, input the number of transform levels L_max and the number N_bp of bit planes to be processed at each transform level, and let the transform level L ← 1, where "←" denotes the assignment operation;
step 3, find the maximum absolute pixel value C_max of the image I, and calculate the highest bit plane N_max_bp according to equation (1):
N_max_bp = ⌊log2(C_max)⌋ (1)
step 4, let the current bit plane N_current ← N_max_bp;
step 5, splitting: divide the image I into non-overlapping blocks of 2 × 2 pixels, where for each pixel block the coordinates of the upper-left pixel are (2i,2j), of the upper-right pixel (2i,2j+1), of the lower-left pixel (2i+1,2j) and of the lower-right pixel (2i+1,2j+1); i and j are integers with 0 ≤ i < h/2 and 0 ≤ j < w/2;
step 6, initialize the wavelet transform coefficient values of each pixel block to 0, i.e. let W(2i,2j) ← 0, W(2i,2j+1) ← 0, W(2i+1,2j) ← 0 and W(2i+1,2j+1) ← 0, where W(2i,2j), W(2i,2j+1), W(2i+1,2j) and W(2i+1,2j+1) denote the wavelet transform coefficient values of the upper-left, upper-right, lower-left and lower-right pixels of the pixel block, respectively;
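Before the prediction and lifting stages, steps 3 to 6 only set up the bit-plane range, the 2 × 2 block grid and a zeroed coefficient array. The following minimal Python sketch illustrates this setup; it assumes formula (1) is the usual N_max_bp = ⌊log2(C_max)⌋ rule of successive-approximation bit-plane coding and that h and w are even, and the helper name is illustrative rather than taken from the patent:

```python
import numpy as np

def init_transform(img: np.ndarray):
    """Sketch of steps 3-6: highest bit plane, block grid, zeroed coefficients."""
    c_max = int(np.abs(img).max())                                  # step 3: C_max
    n_max_bp = int(np.floor(np.log2(c_max))) if c_max > 0 else 0    # assumed formula (1)
    n_current = n_max_bp                                            # step 4
    h, w = img.shape                                                # h, w assumed even
    W = np.zeros_like(img)                                          # step 6: W(...) <- 0
    blocks = [(2 * i, 2 * j)                                        # step 5: top-left corner of each 2x2 block
              for i in range(h // 2) for j in range(w // 2)]
    return n_max_bp, n_current, W, blocks

img = np.array([[200, 13, -7, 64],
                [  5, -2,  9,  1],
                [ 30,  0, -8,  4],
                [  6,  7,  2, -3]], dtype=np.int32)
n_max_bp, n_current, W, blocks = init_transform(img)   # n_max_bp == 7, four 2x2 blocks
```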
step 7, prediction stage: following steps 7.1 to 7.5, calculate the prediction coefficient values of the N_current-th bit plane of the non-separable wavelet transform block by block; by convention, the block currently being processed is called the current block;
step 7.1, let b ← (1 << N_current), where "<<" denotes the arithmetic left-shift operation;
step 7.2 calculates the prediction coefficient value T (2i,2j) of the top left pixel in each 2 × 2 pixel block on a block-by-block basis, according to equation (2):
T(2i,2j)=sgn(I(2i,2j))×[abs(I(2i,2j))&b] (2)
where sgn(·) denotes the sign function, I(2i,2j) denotes the pixel value at coordinate (2i,2j) in the current block, abs(·) denotes the absolute-value function, and "&" denotes the bitwise AND operation;
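Formula (2) simply extracts the N_current-th bit of a pixel while preserving its sign. A small Python sketch of this masking step (the function name is illustrative, not from the patent):

```python
def bitplane_mask(x: int, n_current: int) -> int:
    """Formula (2): sgn(x) * (abs(x) & b) with b = 1 << n_current."""
    b = 1 << n_current                     # step 7.1
    sign = 1 if x >= 0 else -1
    return sign * (abs(x) & b)

# the pixel value -200 contributes -128 on bit plane 7, -64 on bit plane 6 and 0 on bit plane 5
print(bitplane_mask(-200, 7), bitplane_mask(-200, 6), bitplane_mask(-200, 5))   # -128 -64 0
```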
step 7.3 calculates, block by block, the prediction coefficient value T (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block, according to formula (3):
[Formula (3) is given in the source only as an image and is not reproduced here.]
where I(2i+1,2j) denotes the pixel value at coordinate (2i+1,2j) in the current block, I(2i+2,2j) denotes the upper-left pixel value of the neighboring block below the current block, and ">>" denotes the arithmetic right-shift operation;
step 7.4 calculates the prediction coefficient value T (2i +1,2j +1) of the bottom right pixel in each 2 × 2 pixel block, block by block, according to formula (4):
[Formula (4) is given in the source only as an image and is not reproduced here.]
where I(2i+1,2j+1) denotes the pixel value at coordinate (2i+1,2j+1) in the current block, I(2i,2j+1) denotes the pixel value at coordinate (2i,2j+1) in the current block, I(2i+2,2j+1) denotes the upper-right pixel value of the neighboring block below the current block, T(2i+1,2j) denotes the prediction coefficient value of the lower-left pixel in the current block, and T(2i+1,2j+2) denotes the prediction coefficient value of the lower-left pixel in the neighboring block to the right of the current block;
step 7.5 calculates the prediction coefficient value T (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to equation (5):
[Formula (5) is given in the source only as an image and is not reproduced here.]
where I(2i,2j+1) denotes the pixel value at coordinate (2i,2j+1) in the current block, and I(2i,2j+2) denotes the upper-left pixel value of the neighboring block to the right of the current block;
step 8, lifting stage: following steps 8.1 to 8.4, calculate the lifting coefficient values of the N_current-th bit plane of the non-separable wavelet transform block by block;
step 8.1, keeping the prediction coefficient value T (2i +1,2j +1) of the lower right corner pixel in each 2 x 2 pixel block unchanged, and taking the prediction coefficient value T (2i +1,2j +1) as the lifting coefficient value U (2i +1,2j + 1);
step 8.2, according to the formula (6), calculating the lifting coefficient value U (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block by block:
U(2i,2j+1)=T(2i,2j+1)+[T(2i+1,2j+1)+T(2i-1,2j+1)]>>2 (6)
the T (2i,2j +1) represents the prediction coefficient value of the upper-right pixel in the current block, the T (2i +1,2j +1) represents the prediction coefficient value of the lower-right pixel in the current block, and the T (2i-1,2j +1) represents the prediction coefficient value of the lower-right pixel of the adjacent block above the current block;
step 8.3 calculates the lifting coefficient value U (2i,2j) of the top left pixel in each 2 × 2 pixel block on a block-by-block basis, according to the definition of formula (7):
U(2i,2j)=T(2i,2j)+[T(2i,2j-1)+T(2i,2j+1)+T(2i-1,2j)+T(2i+1,2j)]>>2 (7)
the T (2i,2j) represents the prediction coefficient value of the upper left pixel in the current block, T (2i,2j +1) represents the prediction coefficient value of the upper right pixel in the current block, T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i,2j-1) represents the prediction coefficient value of the upper right pixel in the left adjacent block of the current block, and T (2i-1,2j) represents the prediction coefficient value of the lower left pixel in the adjacent block above the current block;
step 8.4, according to formula (8), calculating the lifting coefficient value U (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block by block:
U(2i+1,2j)=T(2i+1,2j)+[T(2i+1,2j-1)+T(2i+1,2j+1)]>>2 (8)
the T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i +1,2j-1) represents the prediction coefficient value of the lower right pixel in the adjacent block on the left side of the current block, and T (2i +1,2j +1) represents the prediction coefficient value of the lower right pixel in the current block;
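Once the prediction coefficients T of the current bit plane are available, the lifting (update) stage of steps 8.1 to 8.4 needs only integer additions and arithmetic right shifts. A Python sketch of formulas (6)-(8), restricted to interior blocks (the text does not spell out its boundary handling, so skipping the border blocks here is an assumption):

```python
import numpy as np

def lifting_stage(T: np.ndarray) -> np.ndarray:
    """Steps 8.1-8.4 / formulas (6)-(8): update stage on interior 2x2 blocks."""
    h, w = T.shape
    U = T.copy()                          # step 8.1: U(2i+1,2j+1) = T(2i+1,2j+1)
    for i in range(1, h // 2 - 1):        # interior blocks only (assumption)
        for j in range(1, w // 2 - 1):
            r, c = 2 * i, 2 * j
            # formula (6): upper-right coefficient
            U[r, c + 1] = T[r, c + 1] + ((T[r + 1, c + 1] + T[r - 1, c + 1]) >> 2)
            # formula (7): upper-left coefficient
            U[r, c] = T[r, c] + ((T[r, c - 1] + T[r, c + 1]
                                  + T[r - 1, c] + T[r + 1, c]) >> 2)
            # formula (8): lower-left coefficient
            U[r + 1, c] = T[r + 1, c] + ((T[r + 1, c - 1] + T[r + 1, c + 1]) >> 2)
    return U
```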
step 9, according to formulas (9) to (12), accumulate, block by block, the lifting coefficient values of the N_current-th bit plane into the wavelet transform coefficient values accumulated from the higher bit planes:
W(2i,2j)←W(2i,2j)+U(2i,2j) (9)
W(2i,2j+1)←W(2i,2j+1)+U(2i,2j+1) (10)
W(2i+1,2j)←W(2i+1,2j)+U(2i+1,2j) (11)
W(2i+1,2j+1)←W(2i+1,2j+1)+U(2i+1,2j+1) (12)
step 10, let N_current ← N_current - 1; if N_current ≥ N_max_bp - N_bp + 1 and N_current ≥ 0, go to step 7; otherwise, go to step 11;
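Steps 9 and 10 form the bit-plane loop that gives the method its complexity scalability: only the N_bp highest bit planes are ever transformed. A control-flow sketch in Python, in which the per-bit-plane prediction and lifting stages of steps 7-8 are replaced by a stub (the stub is a placeholder, not the patented formulas):

```python
import numpy as np

def predict_and_lift(img: np.ndarray, n_current: int) -> np.ndarray:
    """Placeholder for steps 7-8 of one bit plane (stand-in only)."""
    return np.zeros_like(img)

def transform_bitplanes(img: np.ndarray, n_max_bp: int, n_bp: int) -> np.ndarray:
    W = np.zeros_like(img)                       # step 6
    n_current = n_max_bp                         # step 4
    # step 10: stop after n_bp bit planes, or once bit plane 0 has been processed
    while n_current >= max(n_max_bp - n_bp + 1, 0):
        U = predict_and_lift(img, n_current)     # steps 7-8
        W += U                                   # step 9, formulas (9)-(12)
        n_current -= 1
    return W
```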
step 11, reorganize the wavelet transform coefficients of each 2 × 2 pixel block according to formulas (13) to (16) to form the LL_L, LH_L, HL_L and HH_L sub-bands;
LLL(i,j)←W(2i,2j) (13)
HLL(i,j)←W(2i,2j+1) (14)
LHL(i,j)←W(2i+1,2j) (15)
HHL(i,j)←W(2i+1,2j+1) (16)
where LL_L, LH_L, HL_L and HH_L denote the LL, LH, HL and HH sub-bands of the L-th transform level, respectively;
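Formulas (13)-(16) amount to de-interleaving the coefficient array into the four sub-bands; in Python this is plain strided slicing:

```python
import numpy as np

def reorganize_subbands(W: np.ndarray):
    """Step 11 / formulas (13)-(16): de-interleave W into LL, HL, LH, HH."""
    LL = W[0::2, 0::2]    # (13): W(2i, 2j)
    HL = W[0::2, 1::2]    # (14): W(2i, 2j+1)
    LH = W[1::2, 0::2]    # (15): W(2i+1, 2j)
    HH = W[1::2, 1::2]    # (16): W(2i+1, 2j+1)
    return LL, HL, LH, HH

W = np.arange(16, dtype=np.int32).reshape(4, 4)
LL, HL, LH, HH = reorganize_subbands(W)   # each sub-band is 2x2
```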
step 12, let L ← L + 1; if L < L_max, let I ← LL_(L-1), h ← h/2, w ← w/2, and go to step 3; otherwise, output LL_(L_max) and HL_k, LH_k, HH_k for 1 ≤ k ≤ L_max, and the non-separable lifting wavelet forward transform ends.
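The multi-level control flow of step 12 can be summarized as follows. This sketch only shows how the LL sub-band is fed back and the image size is halved at each level; the single-level transform is replaced by a stub (hypothetical helper, not the patented formulas), and the loop simply runs L_max levels rather than reproducing the exact condition of step 12:

```python
import numpy as np

def single_level_forward(img: np.ndarray, n_bp: int):
    """Stub for steps 3-11 of one transform level (not the patented formulas)."""
    W = np.zeros_like(img)
    LL = img[0::2, 0::2].copy()                # stand-in LL so the next level has input
    HL, LH, HH = W[0::2, 1::2], W[1::2, 0::2], W[1::2, 1::2]
    return LL, HL, LH, HH

def forward_transform(img: np.ndarray, l_max: int, n_bp: int):
    """Step 12 control flow: re-feed the LL sub-band and halve h, w at each level."""
    I, subbands = img, []
    for level in range(1, l_max + 1):          # sketch: exactly l_max levels
        LL, HL, LH, HH = single_level_forward(I, n_bp)
        subbands.append((HL, LH, HH))          # HL_k, LH_k, HH_k
        I = LL                                 # image fed to the next level
    return LL, subbands                        # LL_(L_max) plus all detail sub-bands

LL, details = forward_transform(np.zeros((8, 8), dtype=np.int32), l_max=3, n_bp=8)
```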
An inverse non-separable lifting wavelet transform method corresponding to the forward non-separable lifting wavelet transform method with scalable quality and complexity is characterized by comprising the following steps:
step 1, inputting a wavelet transformation coefficient matrix M, setting the height of the matrix M as h rows and the width as w columns;
step 2, input the number of transform levels L_max and the number N_bp of bit planes to be processed at each transform level, and let the transform level L ← L_max, where "←" denotes the assignment operation;
step 3, find the coefficient C_max with the largest absolute value in the low-frequency sub-band of M, and calculate the highest bit plane N_max_bp according to equation (17):
N_max_bp = ⌊log2(C_max)⌋ (17)
step 4, let the current bit plane N_current ← N_max_bp;
step 5, according to formulas (18) to (21), organize the wavelet transform coefficients into non-overlapping blocks of 2 × 2 pixels and store them in a matrix W of size (h/2^(L-1)) × (w/2^(L-1)):
W(2i,2j) ← M(i,j) (18)
W(2i,2j+1) ← M(i,j+w/2^L) (19)
W(2i+1,2j) ← M(i+h/2^L,j) (20)
W(2i+1,2j+1) ← M(i+h/2^L,j+w/2^L) (21)
where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
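Formulas (18)-(21) gather the four level-L sub-bands, which occupy the top-left quadrants of M, back into interleaved 2 × 2 blocks. A Python sketch of this gathering (it also performs the clearing required by step 6, which follows immediately below):

```python
import numpy as np

def gather_blocks(M: np.ndarray, h: int, w: int, L: int) -> np.ndarray:
    """Formulas (18)-(21): interleave the level-L sub-bands of M into W."""
    hs, ws = h >> L, w >> L                       # sub-band height and width at level L
    W = np.zeros((2 * hs, 2 * ws), dtype=M.dtype)
    W[0::2, 0::2] = M[:hs, :ws]                   # (18): LL -> W(2i, 2j)
    W[0::2, 1::2] = M[:hs, ws:2 * ws]             # (19): HL -> W(2i, 2j+1)
    W[1::2, 0::2] = M[hs:2 * hs, :ws]             # (20): LH -> W(2i+1, 2j)
    W[1::2, 1::2] = M[hs:2 * hs, ws:2 * ws]       # (21): HH -> W(2i+1, 2j+1)
    M[:2 * hs, :2 * ws] = 0                       # step 6 (next step) clears these entries
    return W

M = np.arange(64, dtype=np.int32).reshape(8, 8)
W = gather_blocks(M, h=8, w=8, L=1)               # W has size 8 x 8 = (h/2^(L-1)) x (w/2^(L-1))
```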
step 6, clear M(i,j), M(i,j+w/2^L), M(i+h/2^L,j) and M(i+h/2^L,j+w/2^L) to zero, where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
step 7, let b ← (1 << N_current), where "<<" denotes the arithmetic left-shift operation;
step 8, inverse lifting stage: following steps 8.1 to 8.4, calculate the prediction coefficient values of the N_current-th bit plane of the non-separable inverse wavelet transform block by block; by convention, the block currently being processed is called the current block;
step 8.1 calculates, block by block, the prediction coefficient value T (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block, according to formula (22):
[Formula (22) is given in the source only as an image and is not reproduced here.]
where sgn(·) denotes the sign function, abs(·) denotes the absolute-value function, "&" denotes the bitwise AND operation, ">>" denotes the arithmetic right-shift operation, W(2i+1,2j) denotes the wavelet transform coefficient value of the lower-left pixel in the current block, W(2i+1,2j-1) denotes the wavelet transform coefficient value of the lower-right pixel in the neighboring block to the left of the current block, and W(2i+1,2j+1) denotes the wavelet transform coefficient value of the lower-right pixel in the current block;
step 8.2 calculates the prediction coefficient value T (2i,2j) of the top left pixel in each 2 x 2 pixel block on a block-by-block basis, according to equation (23):
[Formula (23) is given in the source only as an image and is not reproduced here.]
w (2i,2j) represents the wavelet transform coefficient value of the upper left pixel in the current block, W (2i,2j +1) represents the wavelet transform coefficient value of the upper right pixel in the current block, W (2i,2j-1) represents the wavelet transform coefficient value of the upper right pixel in the left adjacent block of the current block, T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, and T (2i-1,2j) represents the prediction coefficient value of the lower left pixel in the adjacent block above the current block;
step 8.3 calculates the prediction coefficient value T (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to formula (24):
[Formula (24) is given in the source only as an image and is not reproduced here.]
w (2i,2j +1) represents the wavelet transform coefficient value of the upper-right pixel in the current block, W (2i +1,2j +1) represents the wavelet transform coefficient value of the lower-right pixel in the current block, and W (2i-1,2j +1) represents the wavelet transform coefficient value of the lower-right pixel in the adjacent block above the current block;
step 8.4 calculates, block by block, the prediction coefficient value T (2i +1,2j +1) of the bottom right pixel in each 2 × 2 pixel block, according to formula (25):
T(2i+1,2j+1)=sgn(W(2i+1,2j+1))×[abs(W(2i+1,2j+1))&b] (25)
w (2i +1,2j +1) represents the wavelet transform coefficient value of the bottom right pixel in the current block;
step 9, inverse prediction stage: following steps 9.1 to 9.4, calculate the pixel values of the N_current-th bit plane of the non-separable inverse wavelet transform block by block;
step 9.1 calculates the pixel value M' (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to formula (26):
M′(2i,2j+1)=[T(2i,2j)+T(2i,2j+2)]>>1 (26)
the T (2i,2j) represents the prediction coefficient value of the pixel at the upper left corner in the current block, and the T (2i,2j +2) represents the prediction coefficient value of the pixel at the upper left corner in the adjacent block at the right side of the current block;
step 9.2 calculates, block by block, the pixel value M' (2i +1,2j +1) of the lower right pixel in each 2 × 2 pixel block, according to formula (27):
M′(2i+1,2j+1)=[T(2i+1,2j)+T(2i+1,2j+2)+T(2i,2j+1)+T(2i+2,2j+1)]>>1 (27)
the T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i +1,2j +2) represents the prediction coefficient value of the lower left pixel in the adjacent block on the right side of the current block, T (2i,2j +1) represents the prediction coefficient value of the upper right pixel in the current block, and T (2i +2,2j +1) represents the prediction coefficient value of the upper right pixel in the adjacent block below the current block;
step 9.3 calculates, block by block, the pixel value M' (2i +1,2j) of the lower left corner pixel in each 2 × 2 pixel block, according to formula (28):
M′(2i+1,2j)=[T(2i,2j)+T(2i+2,2j)]>>1 (28)
the T (2i +2,2j) represents the prediction coefficient value of the upper left pixel in the adjacent block below the current block;
step 9.4 keeps the prediction coefficient value T (2i,2j) of the top left pixel in each 2 × 2 pixel block unchanged as its pixel value M' (2i,2 j);
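The inverse prediction stage of steps 9.1 to 9.4, as printed in formulas (26)-(28), again needs only additions and shifts. A Python sketch over the prediction-coefficient array T (interior blocks only; the text does not specify boundary handling, so the border is skipped here as an assumption):

```python
import numpy as np

def inverse_prediction(T: np.ndarray) -> np.ndarray:
    """Steps 9.1-9.4 / formulas (26)-(28): recover one bit plane of pixel values."""
    h, w = T.shape
    Mp = np.zeros_like(T)                 # M'(...) of the current bit plane
    for i in range(h // 2 - 1):           # interior blocks only (assumption)
        for j in range(w // 2 - 1):
            r, c = 2 * i, 2 * j
            Mp[r, c] = T[r, c]                                          # step 9.4
            Mp[r, c + 1] = (T[r, c] + T[r, c + 2]) >> 1                 # formula (26)
            Mp[r + 1, c] = (T[r, c] + T[r + 2, c]) >> 1                 # formula (28)
            Mp[r + 1, c + 1] = (T[r + 1, c] + T[r + 1, c + 2]
                                + T[r, c + 1] + T[r + 2, c + 1]) >> 1   # formula (27)
    return Mp
```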
step 10, according to formulas (29) to (32), accumulate, block by block, the pixel values of the N_current-th bit plane into the pixel values accumulated from the higher bit planes:
M(2i,2j)←M(2i,2j)+M′(2i,2j) (29)
M(2i,2j+1)←M(2i,2j+1)+M′(2i,2j+1) (30)
M(2i+1,2j)←M(2i+1,2j)+M′(2i+1,2j) (31)
M(2i+1,2j+1)←M(2i+1,2j+1)+M′(2i+1,2j+1) (32)
where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
Step 11, let Ncurrent←Ncurrent1, if Ncurrent≥Nmax_bp-Nbp+1 and NcurrentIf the value is more than or equal to 0, the step 7 is carried out, otherwise, the step 12 is carried out;
step 12, let L ← L - 1; if L > 0, go to step 3; otherwise, output the matrix M, and the non-separable inverse lifting wavelet transform ends.
Compared with the prior art, the invention has four distinguishing features. First, the non-separable lifting wavelet transform of the invention processes the image signal as blocks rather than as individual rows and columns; it therefore has better anisotropy, greater design freedom and better filtering properties (such as higher-order vanishing moments) than a separable 2D wavelet transform, and can provide a frequency-resolution capability that better matches the characteristics of the human visual system. Second, the invention applies the wavelet transform to the input image from the highest bit plane to the lowest, so it can be combined directly with EZW, SPIHT and other embedded coding algorithms based on successive-approximation quantization, making quality-scalable coding of images and video easy to realize. Third, the transform can be stopped at any bit plane according to the target bit-rate requirement, saving the computation needed to transform the lower pixel bit planes and achieving scalable computational complexity. Finally, arithmetic shift operations replace the multiplications of the conventional wavelet transform, no floating-point arithmetic is needed at all, and the overall computational complexity of the transform is lower than that of the conventional DCT. Therefore, the invention combines anisotropy, high operating speed, and scalable quality and complexity.
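The fixed-point claim rests on the fact that an arithmetic right shift by k is exactly a floor division by 2^k, so the 1/2 and 1/4 scalings in formulas (6)-(8) and (26)-(28) never need floating-point arithmetic. A quick check in Python, whose ">>" on integers is an arithmetic shift:

```python
for s in (13, -13, 200, -200):
    assert s >> 2 == s // 4 and s >> 1 == s // 2   # shift equals floor division by 4 and 2
print("arithmetic shifts reproduce the /2 and /4 scalings exactly")
```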
Drawings
Fig. 1 is an original test image.
Fig. 2 is the result of a 3-level non-separable lifting wavelet forward transform on Lena images using the present invention.
FIG. 3 is the result of a 2-level non-separable lifting wavelet forward transform on a Peppers image using the present invention.
Fig. 4 is the result of a 2-level non-separable lifting wavelet forward transform on a Baboon image using the present invention.
Fig. 5 is the result of a 3-level non-separable lifting wavelet inverse transform applied to Lena images using the present invention.
FIG. 6 is the result of a 2-level non-separable lifting wavelet inverse transform applied to a Peppers image using the present invention.
Fig. 7 is the result of a 2-level non-separable lifting wavelet inverse transform applied to a Baboon image using the present invention.
Detailed Description
The quality and complexity scalable inseparable lifting wavelet forward transformation method is characterized by comprising the following steps of:
step 1, inputting an image I to be processed, and setting the height of the image I as h pixels and the width as w pixels;
step 2, input the number of transform levels L_max and the number N_bp of bit planes to be processed at each transform level, and let the transform level L ← 1, where "←" denotes the assignment operation;
step 3, find the maximum absolute pixel value C_max of the image I, and calculate the highest bit plane N_max_bp according to equation (1):
N_max_bp = ⌊log2(C_max)⌋ (1)
step 4, let the current bit plane N_current ← N_max_bp;
step 5, splitting: divide the image I into non-overlapping blocks of 2 × 2 pixels, where for each pixel block the coordinates of the upper-left pixel are (2i,2j), of the upper-right pixel (2i,2j+1), of the lower-left pixel (2i+1,2j) and of the lower-right pixel (2i+1,2j+1); i and j are integers with 0 ≤ i < h/2 and 0 ≤ j < w/2;
step 6, initialize the wavelet transform coefficient values of each pixel block to 0, i.e. let W(2i,2j) ← 0, W(2i,2j+1) ← 0, W(2i+1,2j) ← 0 and W(2i+1,2j+1) ← 0, where W(2i,2j), W(2i,2j+1), W(2i+1,2j) and W(2i+1,2j+1) denote the wavelet transform coefficient values of the upper-left, upper-right, lower-left and lower-right pixels of the pixel block, respectively;
step 7, prediction stage: following steps 7.1 to 7.5, calculate the prediction coefficient values of the N_current-th bit plane of the non-separable wavelet transform block by block; by convention, the block currently being processed is called the current block;
step 7.1, let b ← (1 << N_current), where "<<" denotes the arithmetic left-shift operation;
step 7.2 calculates the prediction coefficient value T (2i,2j) of the top left pixel in each 2 × 2 pixel block on a block-by-block basis, according to equation (2):
T(2i,2j)=sgn(I(2i,2j))×[abs(I(2i,2j))&b] (2)
where sgn(·) denotes the sign function, I(2i,2j) denotes the pixel value at coordinate (2i,2j) in the current block, abs(·) denotes the absolute-value function, and "&" denotes the bitwise AND operation;
step 7.3 calculates, block by block, the prediction coefficient value T (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block, according to formula (3):
[Formula (3) is given in the source only as an image and is not reproduced here.]
where I(2i+1,2j) denotes the pixel value at coordinate (2i+1,2j) in the current block, I(2i+2,2j) denotes the upper-left pixel value of the neighboring block below the current block, and ">>" denotes the arithmetic right-shift operation;
step 7.4 calculates the prediction coefficient value T (2i +1,2j +1) of the bottom right pixel in each 2 × 2 pixel block, block by block, according to formula (4):
[Formula (4) is given in the source only as an image and is not reproduced here.]
where I(2i+1,2j+1) denotes the pixel value at coordinate (2i+1,2j+1) in the current block, I(2i,2j+1) denotes the pixel value at coordinate (2i,2j+1) in the current block, I(2i+2,2j+1) denotes the upper-right pixel value of the neighboring block below the current block, T(2i+1,2j) denotes the prediction coefficient value of the lower-left pixel in the current block, and T(2i+1,2j+2) denotes the prediction coefficient value of the lower-left pixel in the neighboring block to the right of the current block;
step 7.5 calculates the prediction coefficient value T (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to equation (5):
[Formula (5) is given in the source only as an image and is not reproduced here.]
where I(2i,2j+1) denotes the pixel value at coordinate (2i,2j+1) in the current block, and I(2i,2j+2) denotes the upper-left pixel value of the neighboring block to the right of the current block;
step 8, lifting stage: following steps 8.1 to 8.4, calculate the lifting coefficient values of the N_current-th bit plane of the non-separable wavelet transform block by block;
step 8.1, keeping the prediction coefficient value T (2i +1,2j +1) of the lower right corner pixel in each 2 x 2 pixel block unchanged, and taking the prediction coefficient value T (2i +1,2j +1) as the lifting coefficient value U (2i +1,2j + 1);
step 8.2, according to the formula (6), calculating the lifting coefficient value U (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block by block:
U(2i,2j+1)=T(2i,2j+1)+[T(2i+1,2j+1)+T(2i-1,2j+1)]>>2 (6)
the T (2i,2j +1) represents the prediction coefficient value of the upper-right pixel in the current block, the T (2i +1,2j +1) represents the prediction coefficient value of the lower-right pixel in the current block, and the T (2i-1,2j +1) represents the prediction coefficient value of the lower-right pixel of the adjacent block above the current block;
step 8.3 calculates the lifting coefficient value U (2i,2j) of the top left pixel in each 2 × 2 pixel block on a block-by-block basis, according to the definition of formula (7):
U(2i,2j)=T(2i,2j)+[T(2i,2j-1)+T(2i,2j+1)+T(2i-1,2j)+T(2i+1,2j)]>>2 (7)
the T (2i,2j) represents the prediction coefficient value of the upper left pixel in the current block, T (2i,2j +1) represents the prediction coefficient value of the upper right pixel in the current block, T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i,2j-1) represents the prediction coefficient value of the upper right pixel in the left adjacent block of the current block, and T (2i-1,2j) represents the prediction coefficient value of the lower left pixel in the adjacent block above the current block;
step 8.4, according to formula (8), calculating the lifting coefficient value U (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block by block:
U(2i+1,2j)=T(2i+1,2j)+[T(2i+1,2j-1)+T(2i+1,2j+1)]>>2 (8)
the T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i +1,2j-1) represents the prediction coefficient value of the lower right pixel in the adjacent block on the left side of the current block, and T (2i +1,2j +1) represents the prediction coefficient value of the lower right pixel in the current block;
step 9, according to formulas (9) to (12), accumulate, block by block, the lifting coefficient values of the N_current-th bit plane into the wavelet transform coefficient values accumulated from the higher bit planes:
W(2i,2j)←W(2i,2j)+U(2i,2j) (9)
W(2i,2j+1)←W(2i,2j+1)+U(2i,2j+1) (10)
W(2i+1,2j)←W(2i+1,2j)+U(2i+1,2j) (11)
W(2i+1,2j+1)←W(2i+1,2j+1)+U(2i+1,2j+1) (12)
step 10, let N_current ← N_current - 1; if N_current ≥ N_max_bp - N_bp + 1 and N_current ≥ 0, go to step 7; otherwise, go to step 11;
step 11, reorganize the wavelet transform coefficients of each 2 × 2 pixel block according to formulas (13) to (16) to form the LL_L, LH_L, HL_L and HH_L sub-bands;
LLL(i,j)←W(2i,2j) (13)
HLL(i,j)←W(2i,2j+1) (14)
LHL(i,j)←W(2i+1,2j) (15)
HHL(i,j)←W(2i+1,2j+1) (16)
where LL_L, LH_L, HL_L and HH_L denote the LL, LH, HL and HH sub-bands of the L-th transform level, respectively;
step 12, let L ← L + 1; if L < L_max, let I ← LL_(L-1), h ← h/2, w ← w/2, and go to step 3; otherwise, output LL_(L_max) and HL_k, LH_k, HH_k for 1 ≤ k ≤ L_max, and the non-separable lifting wavelet forward transform ends.
The inverse non-separable lifting wavelet transform method corresponding to the forward non-separable lifting wavelet transform method with gradable quality and complexity is characterized by comprising the following steps of:
step 1, inputting a wavelet transformation coefficient matrix M, setting the height of the matrix M as h rows and the width as w columns;
step 2, input the number of transform levels L_max and the number N_bp of bit planes to be processed at each transform level, and let the transform level L ← L_max, where "←" denotes the assignment operation;
step 3, find the coefficient C_max with the largest absolute value in the low-frequency sub-band of M, and calculate the highest bit plane N_max_bp according to equation (17):
N_max_bp = ⌊log2(C_max)⌋ (17)
step 4, let the current bit plane N_current ← N_max_bp;
step 5, according to formulas (18) to (21), organize the wavelet transform coefficients into non-overlapping blocks of 2 × 2 pixels and store them in a matrix W of size (h/2^(L-1)) × (w/2^(L-1)):
W(2i,2j) ← M(i,j) (18)
W(2i,2j+1) ← M(i,j+w/2^L) (19)
W(2i+1,2j) ← M(i+h/2^L,j) (20)
W(2i+1,2j+1) ← M(i+h/2^L,j+w/2^L) (21)
where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
step 6, clear M(i,j), M(i,j+w/2^L), M(i+h/2^L,j) and M(i+h/2^L,j+w/2^L) to zero, where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
step 7, let b ← (1 << N_current), where "<<" denotes the arithmetic left-shift operation;
step 8, inverse lifting stage: following steps 8.1 to 8.4, calculate the prediction coefficient values of the N_current-th bit plane of the non-separable inverse wavelet transform block by block; by convention, the block currently being processed is called the current block;
step 8.1 calculates, block by block, the prediction coefficient value T (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block, according to formula (22):
[Formula (22) is given in the source only as an image and is not reproduced here.]
where sgn(·) denotes the sign function, abs(·) denotes the absolute-value function, "&" denotes the bitwise AND operation, ">>" denotes the arithmetic right-shift operation, W(2i+1,2j) denotes the wavelet transform coefficient value of the lower-left pixel in the current block, W(2i+1,2j-1) denotes the wavelet transform coefficient value of the lower-right pixel in the neighboring block to the left of the current block, and W(2i+1,2j+1) denotes the wavelet transform coefficient value of the lower-right pixel in the current block;
step 8.2 calculates the prediction coefficient value T (2i,2j) of the top left pixel in each 2 x 2 pixel block on a block-by-block basis, according to equation (23):
[Formula (23) is given in the source only as an image and is not reproduced here.]
w (2i,2j) represents the wavelet transform coefficient value of the upper left pixel in the current block, W (2i,2j +1) represents the wavelet transform coefficient value of the upper right pixel in the current block, W (2i,2j-1) represents the wavelet transform coefficient value of the upper right pixel in the left adjacent block of the current block, T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, and T (2i-1,2j) represents the prediction coefficient value of the lower left pixel in the adjacent block above the current block;
step 8.3 calculates the prediction coefficient value T (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to formula (24):
[Formula (24) is given in the source only as an image and is not reproduced here.]
w (2i,2j +1) represents the wavelet transform coefficient value of the upper-right pixel in the current block, W (2i +1,2j +1) represents the wavelet transform coefficient value of the lower-right pixel in the current block, and W (2i-1,2j +1) represents the wavelet transform coefficient value of the lower-right pixel in the adjacent block above the current block;
step 8.4 calculates, block by block, the prediction coefficient value T (2i +1,2j +1) of the bottom right pixel in each 2 × 2 pixel block, according to formula (25):
T(2i+1,2j+1)=sgn(W(2i+1,2j+1))×[abs(W(2i+1,2j+1))&b] (25)
w (2i +1,2j +1) represents the wavelet transform coefficient value of the bottom right pixel in the current block;
step 9, inverse prediction stage: following steps 9.1 to 9.4, calculate the pixel values of the N_current-th bit plane of the non-separable inverse wavelet transform block by block;
step 9.1 calculates the pixel value M' (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to formula (26):
M′(2i,2j+1)=[T(2i,2j)+T(2i,2j+2)]>>1 (26)
the T (2i,2j) represents the prediction coefficient value of the pixel at the upper left corner in the current block, and the T (2i,2j +2) represents the prediction coefficient value of the pixel at the upper left corner in the adjacent block at the right side of the current block;
step 9.2 calculates, block by block, the pixel value M' (2i +1,2j +1) of the lower right pixel in each 2 × 2 pixel block, according to formula (27):
M′(2i+1,2j+1)=[T(2i+1,2j)+T(2i+1,2j+2)+T(2i,2j+1)+T(2i+2,2j+1)]>>1 (27)
the T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i +1,2j +2) represents the prediction coefficient value of the lower left pixel in the adjacent block on the right side of the current block, T (2i,2j +1) represents the prediction coefficient value of the upper right pixel in the current block, and T (2i +2,2j +1) represents the prediction coefficient value of the upper right pixel in the adjacent block below the current block;
step 9.3 calculates, block by block, the pixel value M' (2i +1,2j) of the lower left corner pixel in each 2 × 2 pixel block, according to formula (28):
M′(2i+1,2j)=[T(2i,2j)+T(2i+2,2j)]>>1 (28)
the T (2i +2,2j) represents the prediction coefficient value of the upper left pixel in the adjacent block below the current block;
step 9.4 keeps the prediction coefficient value T (2i,2j) of the top left pixel in each 2 × 2 pixel block unchanged as its pixel value M' (2i,2 j);
step 10, according to formulas (29) to (32), accumulate, block by block, the pixel values of the N_current-th bit plane into the pixel values accumulated from the higher bit planes:
M(2i,2j)←M(2i,2j)+M′(2i,2j) (29)
M(2i,2j+1)←M(2i,2j+1)+M′(2i,2j+1) (30)
M(2i+1,2j)←M(2i+1,2j)+M′(2i+1,2j) (31)
M(2i+1,2j+1)←M(2i+1,2j+1)+M′(2i+1,2j+1) (32)
where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
Step 11, let Ncurrent←Ncurrent1, if Ncurrent≥Nmax_bp-Nbp+1 and NcurrentIf the value is more than or equal to 0, the step 7 is carried out, otherwise, the step 12 is carried out;
step 12, let L ← L - 1; if L > 0, go to step 3; otherwise, output the matrix M, and the non-separable inverse lifting wavelet transform ends.
In FIG. 1, (a) to (c) are 3 original test images, Lena, Peppers and Baboon, respectively.
The results of performing 3-level wavelet transform on the top 8 bit planes, the top 4 bit planes and the top 2 bit planes of the Lena image by using the present invention are shown in fig. 2(a) - (c), respectively.
The results of performing 2-level wavelet transform on the highest 8 bit planes, the highest 4 bit planes and the highest 2 bit planes of the Peppers image by using the method are respectively shown in fig. 3(a) to (c).
The results of performing 2-level wavelet transform on the highest 8 bit planes, the highest 4 bit planes and the highest 2 bit planes of the Baboon image by using the method are shown in fig. 4(a) to (c), respectively.
The results of performing 3-level wavelet forward transform on the highest 8 bit planes of the Lena image and then performing 3-level wavelet inverse transform on the highest 2 bit planes, the highest 4 bit planes and the highest 8 bit planes of the wavelet transform coefficients are respectively shown in fig. 5(a) - (c).
The results of performing 2-level wavelet forward transform on the highest 8 bit planes of the Peppers image and then performing 2-level wavelet inverse transform on the highest 2 bit planes, the highest 4 bit planes and the highest 8 bit planes of the wavelet transform coefficients are shown in fig. 6(a) - (c), respectively.
The results of performing 2-level wavelet forward transform on the highest 8 bit planes of the Baboon image and then performing 2-level wavelet inverse transform on the highest 2 bit planes, the highest 4 bit planes and the highest 8 bit planes of the wavelet transform coefficients are respectively shown in fig. 7(a) - (c).
As can be seen from Fig. 2 to Fig. 7, as the number of processed bit planes increases, the quality of the frequency-domain decomposition and of the reconstruction gradually improves, so the invention exhibits a clear quality-scalability characteristic. At the same time, because the number of bit planes participating in the transform differs, the computational complexity of the method is likewise scalable.

Claims (2)

1. A quality and complexity scalable inseparable lifting wavelet forward transform method is characterized by comprising the following steps:
step 1, inputting an image I to be processed, and setting the height of the image I as h pixels and the width as w pixels;
step 2, input the number of transform levels L_max and the number N_bp of bit planes to be processed at each transform level, and let the transform level L ← 1, where "←" denotes the assignment operation;
step 3, find the maximum absolute pixel value C_max of the image I, and calculate the highest bit plane N_max_bp according to equation (1):
N_max_bp = ⌊log2(C_max)⌋ (1)
step 4, let the current bit plane N_current ← N_max_bp;
step 5, splitting: divide the image I into non-overlapping blocks of 2 × 2 pixels, where for each pixel block the coordinates of the upper-left pixel are (2i,2j), of the upper-right pixel (2i,2j+1), of the lower-left pixel (2i+1,2j) and of the lower-right pixel (2i+1,2j+1); i and j are integers with 0 ≤ i < h/2 and 0 ≤ j < w/2;
step 6, initialize the wavelet transform coefficient values of each pixel block to 0, i.e. let W(2i,2j) ← 0, W(2i,2j+1) ← 0, W(2i+1,2j) ← 0 and W(2i+1,2j+1) ← 0, where W(2i,2j), W(2i,2j+1), W(2i+1,2j) and W(2i+1,2j+1) denote the wavelet transform coefficient values of the upper-left, upper-right, lower-left and lower-right pixels of the pixel block, respectively;
step 7, prediction stage: following steps 7.1 to 7.5, calculate the prediction coefficient values of the N_current-th bit plane of the non-separable wavelet transform block by block; by convention, the block currently being processed is called the current block;
step 7.1, let b ← (1 << N_current), where "<<" denotes the arithmetic left-shift operation;
step 7.2 calculates the prediction coefficient value T (2i,2j) of the top left pixel in each 2 × 2 pixel block on a block-by-block basis, according to equation (2):
T(2i,2j)=sgn(I(2i,2j))×[abs(I(2i,2j))&b] (2)
where sgn(·) denotes the sign function, I(2i,2j) denotes the pixel value at coordinate (2i,2j) in the current block, abs(·) denotes the absolute-value function, and "&" denotes the bitwise AND operation;
step 7.3 calculates, block by block, the prediction coefficient value T (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block, according to formula (3):
[Formula (3) is given in the source only as an image and is not reproduced here.]
where I(2i+1,2j) denotes the pixel value at coordinate (2i+1,2j) in the current block, I(2i+2,2j) denotes the upper-left pixel value of the neighboring block below the current block, and ">>" denotes the arithmetic right-shift operation;
step 7.4 calculates the prediction coefficient value T (2i +1,2j +1) of the bottom right pixel in each 2 × 2 pixel block, block by block, according to formula (4):
[Formula (4) is given in the source only as an image and is not reproduced here.]
where I(2i+1,2j+1) denotes the pixel value at coordinate (2i+1,2j+1) in the current block, I(2i,2j+1) denotes the pixel value at coordinate (2i,2j+1) in the current block, I(2i+2,2j+1) denotes the upper-right pixel value of the neighboring block below the current block, T(2i+1,2j) denotes the prediction coefficient value of the lower-left pixel in the current block, and T(2i+1,2j+2) denotes the prediction coefficient value of the lower-left pixel in the neighboring block to the right of the current block;
step 7.5 calculates the prediction coefficient value T (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to equation (5):
[Formula (5) is given in the source only as an image and is not reproduced here.]
where I(2i,2j+1) denotes the pixel value at coordinate (2i,2j+1) in the current block, and I(2i,2j+2) denotes the upper-left pixel value of the neighboring block to the right of the current block;
step 8, lifting stage: following steps 8.1 to 8.4, calculate the lifting coefficient values of the N_current-th bit plane of the non-separable wavelet transform block by block;
step 8.1, keeping the prediction coefficient value T (2i +1,2j +1) of the lower right corner pixel in each 2 x 2 pixel block unchanged, and taking the prediction coefficient value T (2i +1,2j +1) as the lifting coefficient value U (2i +1,2j + 1);
step 8.2, according to the formula (6), calculating the lifting coefficient value U (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block by block:
U(2i,2j+1)=T(2i,2j+1)+[T(2i+1,2j+1)+T(2i-1,2j+1)]>>2 (6)
the T (2i,2j +1) represents the prediction coefficient value of the upper-right pixel in the current block, the T (2i +1,2j +1) represents the prediction coefficient value of the lower-right pixel in the current block, and the T (2i-1,2j +1) represents the prediction coefficient value of the lower-right pixel of the adjacent block above the current block;
step 8.3 calculates the lifting coefficient value U (2i,2j) of the top left pixel in each 2 × 2 pixel block on a block-by-block basis, according to the definition of formula (7):
U(2i,2j)=T(2i,2j)+[T(2i,2j-1)+T(2i,2j+1)+T(2i-1,2j)+T(2i+1,2j)]>>2 (7)
the T (2i,2j) represents the prediction coefficient value of the upper left pixel in the current block, T (2i,2j +1) represents the prediction coefficient value of the upper right pixel in the current block, T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i,2j-1) represents the prediction coefficient value of the upper right pixel in the left adjacent block of the current block, and T (2i-1,2j) represents the prediction coefficient value of the lower left pixel in the adjacent block above the current block;
step 8.4, according to formula (8), calculating the lifting coefficient value U (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block by block:
U(2i+1,2j)=T(2i+1,2j)+[T(2i+1,2j-1)+T(2i+1,2j+1)]>>2 (8)
the T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i +1,2j-1) represents the prediction coefficient value of the lower right pixel in the adjacent block on the left side of the current block, and T (2i +1,2j +1) represents the prediction coefficient value of the lower right pixel in the current block;
step 9, according to formulas (9) to (12), accumulate, block by block, the lifting coefficient values of the N_current-th bit plane into the wavelet transform coefficient values accumulated from the higher bit planes:
W(2i,2j)←W(2i,2j)+U(2i,2j) (9)
W(2i,2j+1)←W(2i,2j+1)+U(2i,2j+1) (10)
W(2i+1,2j)←W(2i+1,2j)+U(2i+1,2j) (11)
W(2i+1,2j+1)←W(2i+1,2j+1)+U(2i+1,2j+1) (12)
step 10, let N_current ← N_current - 1; if N_current ≥ N_max_bp - N_bp + 1 and N_current ≥ 0, go to step 7; otherwise, go to step 11;
step 11, reorganize the wavelet transform coefficients of each 2 × 2 pixel block according to formulas (13) to (16) to form the LL_L, LH_L, HL_L and HH_L sub-bands;
LLL(i,j)←W(2i,2j) (13)
HLL(i,j)←W(2i,2j+1) (14)
LHL(i,j)←W(2i+1,2j) (15)
HHL(i,j)←W(2i+1,2j+1) (16)
where LL_L, LH_L, HL_L and HH_L denote the LL, LH, HL and HH sub-bands of the L-th transform level, respectively;
step 12, let L ← L + 1; if L < L_max, let I ← LL_(L-1), h ← h/2, w ← w/2, and go to step 3; otherwise, output LL_(L_max) and HL_k, LH_k, HH_k for 1 ≤ k ≤ L_max, and the non-separable lifting wavelet forward transform ends.
2. A non-separable lifting wavelet inverse transform method corresponding to the quality and complexity scalable non-separable lifting wavelet forward transform method of claim 1, characterized by the steps of:
step 1, inputting a wavelet transformation coefficient matrix M, setting the height of the matrix M as h rows and the width as w columns;
step 2, input the number of transform levels L_max and the number N_bp of bit planes to be processed at each transform level, and let the transform level L ← L_max, where "←" denotes the assignment operation;
step 3, find the coefficient C_max with the largest absolute value in the low-frequency sub-band of M, and calculate the highest bit plane N_max_bp according to equation (17):
N_max_bp = ⌊log2(C_max)⌋ (17)
step 4, let the current bit plane N_current ← N_max_bp;
step 5, according to formulas (18) to (21), organize the wavelet transform coefficients into non-overlapping blocks of 2 × 2 pixels and store them in a matrix W of size (h/2^(L-1)) × (w/2^(L-1)):
W(2i,2j) ← M(i,j) (18)
W(2i,2j+1) ← M(i,j+w/2^L) (19)
W(2i+1,2j) ← M(i+h/2^L,j) (20)
W(2i+1,2j+1) ← M(i+h/2^L,j+w/2^L) (21)
where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
step 6, clear M(i,j), M(i,j+w/2^L), M(i+h/2^L,j) and M(i+h/2^L,j+w/2^L) to zero, where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
step 7, let b ← (1 << N_current), where "<<" denotes the arithmetic left-shift operation;
step 8, inverse lifting stage: following steps 8.1 to 8.4, calculate the prediction coefficient values of the N_current-th bit plane of the non-separable inverse wavelet transform block by block; by convention, the block currently being processed is called the current block;
step 8.1 calculates, block by block, the prediction coefficient value T (2i +1,2j) of the lower left pixel in each 2 × 2 pixel block, according to formula (22):
[Formula (22) is given in the source only as an image and is not reproduced here.]
where sgn(·) denotes the sign function, abs(·) denotes the absolute-value function, "&" denotes the bitwise AND operation, ">>" denotes the arithmetic right-shift operation, W(2i+1,2j) denotes the wavelet transform coefficient value of the lower-left pixel in the current block, W(2i+1,2j-1) denotes the wavelet transform coefficient value of the lower-right pixel in the neighboring block to the left of the current block, and W(2i+1,2j+1) denotes the wavelet transform coefficient value of the lower-right pixel in the current block;
step 8.2 calculates the prediction coefficient value T (2i,2j) of the top left pixel in each 2 x 2 pixel block on a block-by-block basis, according to equation (23):
[Formula (23) is given in the source only as an image and is not reproduced here.]
w (2i,2j) represents the wavelet transform coefficient value of the upper left pixel in the current block, W (2i,2j +1) represents the wavelet transform coefficient value of the upper right pixel in the current block, W (2i,2j-1) represents the wavelet transform coefficient value of the upper right pixel in the left adjacent block of the current block, T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, and T (2i-1,2j) represents the prediction coefficient value of the lower left pixel in the adjacent block above the current block;
step 8.3 calculates the prediction coefficient value T (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to formula (24):
[Formula (24) is given in the source only as an image and is not reproduced here.]
w (2i,2j +1) represents the wavelet transform coefficient value of the upper-right pixel in the current block, W (2i +1,2j +1) represents the wavelet transform coefficient value of the lower-right pixel in the current block, and W (2i-1,2j +1) represents the wavelet transform coefficient value of the lower-right pixel in the adjacent block above the current block;
step 8.4 calculates, block by block, the prediction coefficient value T (2i +1,2j +1) of the bottom right pixel in each 2 × 2 pixel block, according to formula (25):
T(2i+1,2j+1)=sgn(W(2i+1,2j+1))×[abs(W(2i+1,2j+1))&b] (25)
w (2i +1,2j +1) represents the wavelet transform coefficient value of the bottom right pixel in the current block;
step 9, inverse prediction stage: following steps 9.1 to 9.4, calculate the pixel values of the N_current-th bit plane of the non-separable inverse wavelet transform block by block;
step 9.1 calculates the pixel value M' (2i,2j +1) of the top-right pixel in each 2 × 2 pixel block on a block-by-block basis, according to formula (26):
M′(2i,2j+1)=[T(2i,2j)+T(2i,2j+2)]>>1 (26)
the T (2i,2j) represents the prediction coefficient value of the pixel at the upper left corner in the current block, and the T (2i,2j +2) represents the prediction coefficient value of the pixel at the upper left corner in the adjacent block at the right side of the current block;
step 9.2 calculates, block by block, the pixel value M' (2i +1,2j +1) of the lower right pixel in each 2 × 2 pixel block, according to formula (27):
M′(2i+1,2j+1)=[T(2i+1,2j)+T(2i+1,2j+2)+T(2i,2j+1)+T(2i+2,2j+1)]>>1 (27)
the T (2i +1,2j) represents the prediction coefficient value of the lower left pixel in the current block, T (2i +1,2j +2) represents the prediction coefficient value of the lower left pixel in the adjacent block on the right side of the current block, T (2i,2j +1) represents the prediction coefficient value of the upper right pixel in the current block, and T (2i +2,2j +1) represents the prediction coefficient value of the upper right pixel in the adjacent block below the current block;
step 9.3 calculates, block by block, the pixel value M' (2i +1,2j) of the lower left corner pixel in each 2 × 2 pixel block, according to formula (28):
M′(2i+1,2j)=[T(2i,2j)+T(2i+2,2j)]>>1 (28)
the T (2i +2,2j) represents the prediction coefficient value of the upper left pixel in the adjacent block below the current block;
step 9.4 keeps the prediction coefficient value T (2i,2j) of the top left pixel in each 2 × 2 pixel block unchanged as its pixel value M' (2i,2 j);
step 10, according to formulas (29) to (32), accumulate, block by block, the pixel values of the N_current-th bit plane into the pixel values accumulated from the higher bit planes:
M(2i,2j)←M(2i,2j)+M′(2i,2j) (29)
M(2i,2j+1)←M(2i,2j+1)+M′(2i,2j+1) (30)
M(2i+1,2j)←M(2i+1,2j)+M′(2i+1,2j) (31)
M(2i+1,2j+1)←M(2i+1,2j+1)+M′(2i+1,2j+1) (32)
where i and j are integers with 0 ≤ i < h/2^L and 0 ≤ j < w/2^L;
Step 11, let Ncurrent←Ncurrent1, if Ncurrent≥Nmax_bp-Nbp+1 and NcurrentIf the value is more than or equal to 0, the step 7 is carried out, otherwise, the step 12 is carried out;
step 12, let L ← L - 1; if L > 0, go to step 3; otherwise, output the matrix M, and the non-separable inverse lifting wavelet transform ends.
CN201910042890.5A 2019-01-17 2019-01-17 Inseparable lifting wavelet transformation method with gradable quality and complexity Active CN109874020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910042890.5A CN109874020B (en) 2019-01-17 2019-01-17 Inseparable lifting wavelet transformation method with gradable quality and complexity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910042890.5A CN109874020B (en) 2019-01-17 2019-01-17 Inseparable lifting wavelet transformation method with gradable quality and complexity

Publications (2)

Publication Number Publication Date
CN109874020A CN109874020A (en) 2019-06-11
CN109874020B true CN109874020B (en) 2021-03-30

Family

ID=66917831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910042890.5A Active CN109874020B (en) 2019-01-17 2019-01-17 Inseparable lifting wavelet transformation method with gradable quality and complexity

Country Status (1)

Country Link
CN (1) CN109874020B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101015125A (en) * 2004-06-07 2007-08-08 新加坡科技研究局 System and method for scalable encoding and decoding data
CN101088295A (en) * 2004-12-22 2007-12-12 皇家飞利浦电子股份有限公司 Scalable coding
CN101420618A (en) * 2008-12-02 2009-04-29 西安交通大学 Adaptive telescopic video encoding and decoding construction design method based on interest zone
CN101583032A (en) * 2009-06-26 2009-11-18 西安电子科技大学 Adaptive down-sampling and lapped transform-based image compression method
KR20120040519A (en) * 2010-10-19 2012-04-27 한국전자통신연구원 Adaptive multimedia decoding device and method for scalable satellite broadcasting
CN103347187A (en) * 2013-07-23 2013-10-09 北京师范大学 Remote-sensing image compression method for discrete wavelet transform based on self-adaptation direction prediction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Novel Scalable Motion Estimation Scheme for Wavelet-Domain Video; 宋传鸣; Chinese Journal of Computers (计算机学报); 2006-12-31; Vol. 29, No. 12; pp. 2112-2118 *
Research Progress on Scalable Image Coding; 王相海; Journal of Image and Graphics (中国图象图形学报); 2006-08-31; Vol. 11, No. 8; pp. 1051-1061 *

Also Published As

Publication number Publication date
CN109874020A (en) 2019-06-11

Similar Documents

Publication Publication Date Title
CN108293138B (en) Efficient and scalable intra video/image coding
Liu et al. Data-driven sparsity-based restoration of JPEG-compressed images in dual transform-pixel domain
JP4254017B2 (en) Image coding apparatus and method
US20080123750A1 (en) Parallel deblocking filter for H.264 video codec
EP1397774A1 (en) Method and system for achieving coding gains in wavelet-based image codecs
JP2006074733A (en) System and method for coding image employing combination of hybrid directional prediction and wavelet lifting
WO2000072602A1 (en) Multi-dimensional data compression
WO2008145039A1 (en) Methods, systems and devices for generating upsample filter and downsample filter and for performing encoding
Chew et al. Low–memory video compression architecture using strip–based processing for implementation in wireless multimedia sensor networks
Chang et al. Direction-adaptive discrete wavelet transform via directional lifting and bandeletization
CN102333223A (en) Video data coding method, decoding method, coding system and decoding system
CN1878308A (en) Orthogonal transformation method for image and video compression
CN109874020B (en) Inseparable lifting wavelet transformation method with gradable quality and complexity
Puttaraju et al. FPGA implementation of 5/3 integer dwt for image compression
CN109035350B (en) Improved SPIHT image encoding and decoding method based on energy leakage and amplification
JP4118049B2 (en) Image processing apparatus and method
Iwahashi et al. Reversible 2D 9-7 DWT based on non-separable 2D lifting structure compatible with irreversible DWT
Pandey et al. Hybrid image compression based on fuzzy logic technology
Sultan Image compression by using walsh and framelet transform
SV et al. Fractal Image Compression of Satellite Color Imageries Using Variable Size of Range Block
Chen et al. Region of interest determined by perceptual-quality and rate-distortion optimization in JPEG 2000
Ahmad et al. Low bitrate image coding based on dual tree complex wavelet transform
Naheed et al. Intelligent reversible digital watermarking technique using interpolation errors
Lee et al. Improvement of image sensor performance through implementation of JPEG2000 H/W for optimal DWT decomposition level
CN104602014B (en) A kind of quantization suitable for HEVC standard and inverse quantization hardware multiplexing algorithm and hardware configuration

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant