CN113643399B - Color image self-adaptive reconstruction method based on tensor chain rank - Google Patents

Color image self-adaptive reconstruction method based on tensor chain rank

Info

Publication number
CN113643399B
CN113643399B CN202110940296.5A CN202110940296A CN113643399B
Authority
CN
China
Prior art keywords
tensor
image
reconstructed
order
rank
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110940296.5A
Other languages
Chinese (zh)
Other versions
CN113643399A (en
Inventor
何静飞 (He Jingfei)
郑绪南 (Zheng Xunan)
张潇月 (Zhang Xiaoyue)
高鹏 (Gao Peng)
周亚同 (Zhou Yatong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN202110940296.5A priority Critical patent/CN113643399B/en
Publication of CN113643399A publication Critical patent/CN113643399A/en
Application granted granted Critical
Publication of CN113643399B publication Critical patent/CN113643399B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a color image self-adaptive reconstruction method based on tensor chain rank, which comprises the following steps. Step one: the image to be reconstructed is processed to generate a reference image and a high-order tensor to be completed. Step two: different offsets are added to the first-order tensor chain rank of the initialized tensor chain rank of the high-order tensor to be completed, generating m candidate tensor chain ranks. Step three: rank minimization is carried out for each candidate tensor chain rank to obtain m reconstructed tensors, and each reconstructed tensor is converted into a reconstructed image. Step four: according to the structural similarity between each reconstructed image and the reference image, the candidate tensor chain rank corresponding to the reconstructed image with the highest structural similarity is selected as the first-order tensor chain rank. Step five: steps two to four are repeated until the structural similarity between the reconstructed image and the reference image exceeds 0.95, yielding the optimal reconstructed image. The method can reconstruct an image with missing data without resorting to the original data structure.

Description

Color image self-adaptive reconstruction method based on tensor chain rank
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a color image self-adaptive reconstruction method based on tensor chain rank.
Background
As digital image processing is widely used in fields such as communication, medicine and aerospace, image restoration, an important research branch of digital image processing, has gradually become a research hotspot. Since a color image is naturally a tensor, the reconstruction of a color image can be regarded as a tensor completion problem.
The tensor chain (tensor train, TT) decomposition model uses a more balanced matricization scheme, which overcomes the low efficiency of rank-minimization schemes in capturing the global information of a tensor caused by the unbalanced unfolding matrices of traditional tensor decomposition models, and it has become a research hotspot in recent years. The tensor chain decomposition model can fully capture the correlation between different dimensions of the data and has been applied successfully in digital signal processing, image processing and other fields. By converting the image data into a high-order tensor with a tensor enhancement technique, high-quality reconstruction of color images can be achieved with a tensor chain based image completion method.
At present, color image reconstruction based on the TT method requires the rank information of the complete image data to be known in advance, and the parameters must be tuned repeatedly by comparing the experimental results with the original image in order to reach the optimal reconstruction effect. In practical problems, the data of the image to be reconstructed is incomplete and the rank information of the complete image data cannot be obtained, so current TT-based methods are difficult to apply in practice. In addition, existing TT-based methods treat the reconstruction quality of the known data, obtained with a randomly chosen initial rank, as the reconstruction quality of the entire data; the resulting artifacts seriously degrade the reconstruction quality of the known data, and when the missing rate of the image data is too high, the reconstruction quality of the known data cannot represent the reconstruction quality of the whole image.
In summary, the present application provides a color image self-adaptive reconstruction method based on tensor chain rank, which is applicable to images with all types of missing data.
Disclosure of Invention
In view of the deficiencies of the prior art, the object of the present invention is to provide a color image self-adaptive reconstruction method based on tensor chain rank.
The technical solution adopted by the present invention to solve this technical problem is as follows:
the color image self-adaptive reconstruction method based on tensor chain rank is characterized by comprising the following steps:
in the first step, the image to be reconstructed is processed to generate a reference image and a high-order tensor to be completed;
in the second step, different offsets are added to the first-order tensor chain rank of the initialized tensor chain rank of the high-order tensor to be completed, generating m candidate tensor chain ranks, and the m candidate tensor chain ranks form a first-order candidate tensor chain rank list;
in the third step, rank minimization is carried out for each candidate tensor chain rank in the first-order candidate tensor chain rank list and the high-order tensor to be completed is reconstructed, yielding m reconstructed tensors; each reconstructed tensor is converted into an image with the same size as the original data of the image to be reconstructed, yielding m reconstructed images;
in the fourth step, the structural similarity between each reconstructed image and the reference image is calculated, and the candidate tensor chain rank corresponding to the reconstructed image with the highest structural similarity is selected as the first-order tensor chain rank, completing the selection of the first-order tensor chain rank and the update of the initialized tensor chain rank of the high-order tensor to be completed;
in the fifth step, the operations of the second to fourth steps are repeated for each order of the initialized tensor chain rank of the high-order tensor to be completed obtained in the fourth step; once the structural similarity between the reconstructed image and the reference image exceeds 0.95, the selection of the tensor chain rank stops, the iteration of the image to be reconstructed stops, and the optimal reconstructed image is obtained.
In the first step, the image to be reconstructed is completed with a Delaunay triangulation interpolation method to obtain the reference image, and the high-order tensor to be completed is generated with a tensor enhancement method.
In the third step, the inverse operation of the tensor enhancement method of the first step is applied to each of the m reconstructed tensors, converting the reconstructed tensors into images with the same size as the original data of the image to be reconstructed.
The initialized tensor chain rank of the high-order tensor to be completed is $r_1 = r_2 = \dots = r_{N-1} = 1$, where $N$ denotes the total order of the high-order tensor to be completed.
Compared with the prior art, the invention has the beneficial effects that:
firstly, the image to be reconstructed is completed with a Delaunay triangulation interpolation method, which effectively recovers most of its structural information. The interpolated image is used as the reference image for computing structural similarity, and its structural information guides the reconstruction direction of the tensor chain rank minimization of the high-order tensor to be completed. The missing image can therefore be reconstructed without resorting to the original data structure of the image to be reconstructed, and the method is applicable to images with all types of missing data.
secondly, with the original complete data of the image to be reconstructed unknown, a backtracking strategy is applied to each order of the tensor chain rank to generate candidate tensor chain ranks, and the candidate tensor chain rank corresponding to the reconstructed image with the highest structural similarity to the reference image is selected as the tensor chain rank of that order. The image to be reconstructed is iterated successively to obtain the optimal reconstructed image, which solves the problem that existing tensor chain based completion methods are difficult to apply in practice.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a comparison of the structural similarity between the reconstructed image and the original image to be reconstructed for the present invention and for prior-art methods;
FIG. 3 is a comparison of the peak signal-to-noise ratio between the reconstructed image and the original image to be reconstructed for the present invention and for prior-art methods.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the specific drawings and embodiments, which are not intended to limit the protection scope of the present application.
The invention relates to a color image self-adaptive reconstruction method based on tensor chain rank (hereinafter referred to as the method), which comprises the following steps:
In the first step, the reference image and the high-order tensor to be completed are generated.
The image to be reconstructed is an image with part of its data missing. Denote the standard (complete) color image by $\mathcal{X} \in \mathbb{R}^{2^p \times 2^p \times 3}$, where $\mathbb{R}$ is the real number domain and $p$ is a positive integer. The image to be reconstructed, denoted $\mathcal{X}_\Omega$, is obtained by applying the random deletion operator $\Omega$ to the standard color image $\mathcal{X}$. The positions of the known data in the image to be reconstructed are the index positions, and the gray values at all positions other than the index positions are 0. The image to be reconstructed is completed with the Delaunay triangulation interpolation method so as to effectively recover its structural information, giving the reference image $\mathcal{X}_{\mathrm{ref}} = \Delta(\mathcal{X}_\Omega)$, where $\Delta(\cdot)$ denotes the interpolation operator based on Delaunay triangulation.
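For illustration only (not part of the claimed method), the following is a minimal sketch of how such a reference image could be generated, assuming a NumPy/SciPy environment: scipy.interpolate.griddata with method='linear' interpolates on a Delaunay triangulation of the known pixel positions. The function name, the fallback for pixels outside the convex hull, and the assumption of a float image are illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy.interpolate import griddata  # 'linear' mode triangulates the known points (Delaunay)

def make_reference_image(img_obs, mask):
    """Sketch: fill missing pixels of a color image by piecewise-linear
    interpolation on a Delaunay triangulation of the known pixels.
    img_obs : (H, W, 3) float array, zeros where data is missing
    mask    : (H, W) bool array, True at the index (known) positions
    """
    h, w, _ = img_obs.shape
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    known_pts = np.stack([grid_y[mask], grid_x[mask]], axis=1)
    ref = np.empty_like(img_obs)
    for ch in range(3):
        vals = img_obs[:, :, ch][mask]
        interp = griddata(known_pts, vals, (grid_y, grid_x), method='linear')
        # pixels outside the convex hull of the known points are not triangulated;
        # fall back to nearest-neighbour interpolation there
        nearest = griddata(known_pts, vals, (grid_y, grid_x), method='nearest')
        ref[:, :, ch] = np.where(np.isnan(interp), nearest, interp)
    return ref
```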
The data of the image to be reconstructed $\mathcal{X}_\Omega$ are then rearranged with a tensor enhancement method, without changing the data structure, converting the image to be reconstructed into the high-order tensor to be completed $\mathcal{C} \in \mathbb{R}^{4 \times 4 \times \cdots \times 4 \times 3}$ of order $N = p + 1$. The specific process of tensor enhancement is as follows. First, initial data blocks are constructed, and the data of the image to be reconstructed are rearranged into a number of initial data blocks. Denoting the pixel position within an initial data block by $i_1$, an initial data block is represented as

$$\mathcal{C}[2\times 2] = \sum_{i_1=1}^{4}\sum_{j=1}^{3} c_{i_1 j}\, \mathbf{e}_{i_1} \otimes \mathbf{u}_j \qquad (1)$$

In formula (1), $c_{i_1 j}$ denotes the pixel value of each point for color $j$; $\mathbf{e}_{i_1}$ denotes the orthogonal basis vector corresponding to pixel position $i_1$, with $\mathbf{e}_1=(1,0,0,0)$, $\mathbf{e}_2=(0,1,0,0)$, $\mathbf{e}_3=(0,0,1,0)$, $\mathbf{e}_4=(0,0,0,1)$; $\mathbf{u}_j$ denotes the orthogonal basis vector corresponding to color $j$, with $\mathbf{u}_1=(1,0,0)$, $\mathbf{u}_2=(0,1,0)$, $\mathbf{u}_3=(0,0,1)$; and $\otimes$ denotes the tensor product.

The initial data blocks are then expanded into data blocks two, each containing $4\times 4$ pixels, and the data of the image to be reconstructed are rearranged into a number of data blocks two. A data block two is represented as

$$\mathcal{C}[4\times 4] = \sum_{i_1=1}^{4}\sum_{i_2=1}^{4}\sum_{j=1}^{3} c_{i_1 i_2 j}\, \mathbf{e}_{i_1} \otimes \mathbf{e}_{i_2} \otimes \mathbf{u}_j \qquad (2)$$

In formula (2), $i_2$ denotes the position of an initial data block within the data block two, $c_{i_1 i_2 j}$ denotes the pixel value at position $i_1$ of the $i_2$-th initial data block for color $j$, and $\mathbf{e}_{i_2}$ denotes the orthogonal basis vector corresponding to position $i_2$.

Similarly, data blocks containing more and more pixels are constructed in turn; by continuing this addressing and rearrangement process, the image to be reconstructed $\mathcal{X}_\Omega$ is rearranged into the high-order tensor to be completed $\mathcal{C}$, which is represented as

$$\mathcal{C} = \sum_{i_1=1}^{4}\cdots\sum_{i_p=1}^{4}\sum_{j=1}^{3} c_{i_1 i_2 \cdots i_p j}\, \mathbf{e}_{i_1}\otimes \mathbf{e}_{i_2}\otimes\cdots\otimes \mathbf{e}_{i_p} \otimes \mathbf{u}_j \qquad (3)$$

In formula (3), $i_p$ denotes the block position at the $p$-th (coarsest) level of the high-order tensor to be completed $\mathcal{C}$, $c_{i_1 i_2 \cdots i_p j}$ denotes the pixel value of each data block for color $j$, and $\mathbf{e}_{i_p}$ denotes the orthogonal basis vector corresponding to position $i_p$.
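As an illustration of this tensor enhancement (ket augmentation) step, the sketch below reshapes a $2^p \times 2^p \times 3$ color image into a tensor of order $p+1$ with $p$ modes of size 4 and one color mode of size 3, grouping pixels into nested 2×2 blocks as in formulas (1)–(3). The exact ordering convention of the block indices (finest level first here) and the helper names are illustrative assumptions.

```python
import numpy as np

def tensor_enhancement(img):
    """Sketch: rearrange a (2**p, 2**p, 3) image into a (4, ..., 4, 3) tensor
    of order p+1 by recursively grouping 2x2 pixel blocks (ket augmentation)."""
    h, w, c = img.shape
    p = int(round(np.log2(h)))
    assert h == w == 2 ** p and c == 3, "image must be 2^p x 2^p x 3"
    # split each spatial axis into p binary levels (axis 0 = coarsest bit)
    t = img.reshape((2,) * p + (2,) * p + (3,))
    # pair up the row/column bits of each level, finest 2x2 block level first
    order = [ax for k in range(p - 1, -1, -1) for ax in (k, p + k)] + [2 * p]
    t = t.transpose(order)
    # merge each (row bit, column bit) pair into one base-4 block index
    return t.reshape((4,) * p + (3,))

def inverse_tensor_enhancement(tensor):
    """Sketch: inverse of tensor_enhancement, recovering the (2**p, 2**p, 3) image."""
    p = tensor.ndim - 1
    t = tensor.reshape((2, 2) * p + (3,))
    # undo the forward transpose by applying the inverse permutation
    order = [ax for k in range(p - 1, -1, -1) for ax in (k, p + k)] + [2 * p]
    t = t.transpose(np.argsort(order))
    return t.reshape(2 ** p, 2 ** p, 3)
```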
In the second step, different offsets are added to the first-order tensor chain rank $r_1$ of the initialized tensor chain rank of the high-order tensor to be completed $\mathcal{C}$, generating $m$ candidate tensor chain ranks, and the $m$ candidate tensor chain ranks form the first-order candidate tensor chain rank list. The initialized tensor chain rank of $\mathcal{C}$ is denoted $L = (r_1, r_2, \ldots, r_k, \ldots, r_{N-1})$ with $r_1 = r_2 = \cdots = r_{N-1} = 1$, where $k = 1, 2, \ldots, N-1$ and $N$ denotes the total order of the high-order tensor to be completed $\mathcal{C}$. The first-order candidate tensor chain rank list is $\{r_1^{(1)}, r_1^{(2)}, \ldots, r_1^{(d)}, \ldots, r_1^{(m)}\}$ with $r_1^{(d)} = r_1 + \delta_d$, where $r_1^{(d)}$ denotes the $d$-th candidate for the first-order tensor chain rank and $\delta_d = 2^d$ denotes the $d$-th offset in the offset list $\delta = [2^1, 2^2, \ldots, 2^d, \ldots, 2^m]$.
In the third step, each candidate tensor chain rank in the first-order candidate tensor chain rank list is minimized with formula (4), and the high-order tensor to be completed $\mathcal{C}$ is reconstructed, yielding $m$ reconstructed tensors:

$$\min_{\mathbf{U}_k,\,\mathbf{V}_k,\,\mathcal{C}}\ \sum_{k=1}^{N-1} \frac{\alpha_k}{2}\,\big\| \mathbf{C}_{[k]} - \mathbf{U}_k \mathbf{V}_k \big\|_F^2 \quad \text{s.t.}\quad \Omega(\mathcal{C}) = \mathcal{C}_\Omega \qquad (4)$$

In formula (4), $\|\cdot\|_F$ denotes the Frobenius norm operator; $\mathbf{C}_{[k]}$ denotes the $k$-th tensor chain matricization of the high-order tensor to be completed $\mathcal{C}$; $\mathbf{U}_k$ and $\mathbf{V}_k$ are the factor matrices of the decomposition of $\mathbf{C}_{[k]}$, with $\mathbf{V}_k \in \mathbb{R}^{r_k^{(d)} \times n}$, where $n$ denotes the number of columns of the matrix $\mathbf{C}_{[k]}$; $r_k^{(d)}$ denotes the $d$-th candidate for the $k$-th order tensor chain rank of $\mathcal{C}$; $\alpha_k$ is a constant weighting coefficient; and $\mathcal{C}_\Omega$ denotes the high-order tensor to be completed after the action of the random deletion operator $\Omega$, so the constraint keeps the observed entries fixed.

The inverse operation of the tensor enhancement method of the first step is applied to each of the $m$ reconstructed tensors, converting each reconstructed tensor into an image with the same size as the original data of the image to be reconstructed and yielding $m$ reconstructed images.
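The sketch below is a simplified alternating scheme in the spirit of formula (4): each TT unfolding $\mathbf{C}_{[k]}$ is approximated by a rank-constrained product $\mathbf{U}_k\mathbf{V}_k$, the mode-wise reconstructions are combined with the weights $\alpha_k$, and the observed entries are re-imposed after every sweep. It is not the patent's exact solver (the update order, the iteration count and the choice of $\alpha_k$ are assumptions), but it illustrates the constrained factorization being minimized.

```python
import numpy as np

def tt_factorization_completion(c_obs, mask, ranks, alphas, n_iter=100):
    """Sketch of the weighted TT-unfolding factorization model of formula (4).
    c_obs  : observed high-order tensor (zeros at missing entries)
    mask   : boolean tensor, True at observed entries
    ranks  : candidate TT rank (r_1, ..., r_{N-1})
    alphas : mode weights alpha_k summing to 1
    """
    c = c_obs.copy()
    dims = c.shape
    N = c.ndim
    rng = np.random.default_rng(0)
    U = [rng.standard_normal((int(np.prod(dims[:k + 1])), ranks[k])) for k in range(N - 1)]
    V = [rng.standard_normal((ranks[k], int(np.prod(dims[k + 1:])))) for k in range(N - 1)]
    for _ in range(n_iter):
        c_new = np.zeros_like(c)
        for k in range(N - 1):
            Ck = c.reshape(int(np.prod(dims[:k + 1])), -1)   # k-th TT unfolding
            U[k] = Ck @ np.linalg.pinv(V[k])                 # alternating least-squares updates
            V[k] = np.linalg.pinv(U[k]) @ Ck
            c_new += alphas[k] * (U[k] @ V[k]).reshape(dims)
        c = c_new
        c[mask] = c_obs[mask]                                # constraint: keep observed data
    return c
```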
In the fourth step, the structural similarity between each reconstructed image and the reference image $\mathcal{X}_{\mathrm{ref}}$ is calculated; the structural similarity between the $d$-th reconstructed image $R^{(d)}$ and the reference image is $\mathrm{SSIM}(R^{(d)}, \mathcal{X}_{\mathrm{ref}})$, $d = 1, 2, \ldots, m$. The candidate tensor chain rank corresponding to the reconstructed image with the highest structural similarity is selected as the first-order tensor chain rank, completing the selection of the first-order tensor chain rank, and the initialized tensor chain rank of the high-order tensor to be completed $\mathcal{C}$ is updated accordingly. At this point the image to be reconstructed has completed its first reconstruction iteration.
In the fifth step, the operations of the second to fourth steps are repeated for each order of the tensor chain rank of the high-order tensor to be completed $\mathcal{C}$; the orders of the tensor chain rank are selected one by one, and each selection of an order completes one reconstruction iteration of the image to be reconstructed. Once the structural similarity between the reconstructed image and the reference image exceeds 0.95, the selection of the tensor chain rank stops, the iteration of the image to be reconstructed stops, and the optimal reconstructed image is finally obtained.
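Putting the pieces together, the following sketch runs the adaptive rank-selection loop of the second to fifth steps, using the illustrative helpers sketched above (make_reference_image, tensor_enhancement, inverse_tensor_enhancement, candidate_tt_ranks, tt_factorization_completion — all assumed names) and scikit-image's structural_similarity for the SSIM score. The uniform weights $\alpha_k$, the SSIM parameters, and the float image range are assumptions; the 0.95 stopping threshold follows the fifth step.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def adaptive_reconstruction(img_obs, mask2d, m=5, ssim_stop=0.95):
    """Sketch of the full adaptive reconstruction loop (steps one to five)."""
    # step one: reference image and high-order tensor to be completed
    ref_img = make_reference_image(img_obs, mask2d)
    c_obs = tensor_enhancement(img_obs)
    c_mask = tensor_enhancement(np.repeat(mask2d[:, :, None], 3, axis=2).astype(float)) > 0.5
    N = c_obs.ndim
    ranks = [1] * (N - 1)                      # initialized TT rank r_1 = ... = r_{N-1} = 1
    alphas = [1.0 / (N - 1)] * (N - 1)         # assumed uniform weighting coefficients
    best_img = ref_img
    for k in range(N - 1):                     # steps two to four, one mode per iteration
        scored = []
        for cand in candidate_tt_ranks(ranks, k, m):
            rec = tt_factorization_completion(c_obs, c_mask, cand, alphas)
            img = inverse_tensor_enhancement(rec)
            score = ssim(img, ref_img, channel_axis=2, data_range=1.0)
            scored.append((score, cand, img))
        best_score, ranks, best_img = max(scored, key=lambda t: t[0])
        if best_score > ssim_stop:             # step five: stopping criterion
            break
    return best_img, ranks
```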
Fig. 2 and Fig. 3 compare the quality of reconstructing the image Lena with the method of the present invention and with existing methods; the reconstruction quality is measured by structural similarity and by peak signal-to-noise ratio, respectively, and a higher value indicates a better reconstruction effect. As can be seen from the figures, reconstruction using the Delaunay triangulation interpolation method alone gives the worst result, because there is not enough known data for interpolation-based reconstruction when the missing rate of the image to be reconstructed is high or when a contiguous block region is missing. It is worth noting that, although the reconstruction effect of the existing TT-based method is relatively close to that of the present invention, the existing TT-based method requires the complete tensor chain rank information of the image to be reconstructed and its parameters must be tuned repeatedly against the experimental results to achieve the optimal reconstruction effect; in practice, the complete tensor chain rank information of the image to be reconstructed cannot be obtained, so the existing TT-based method is difficult to apply. The method of the present invention places no restriction on the missing-data structure of the image to be reconstructed, obtains good reconstruction results even when the complete data is unknown, and outperforms existing methods overall at different sampling rates; it solves the problem that existing methods cannot be applied in practical situations and therefore has important practical application value.
Matters not described in detail herein are applicable to the prior art.

Claims (4)

1. A color image self-adaptive reconstruction method based on tensor chain rank, characterized by comprising the following steps:
a first step of processing the image to be reconstructed to generate a reference image and a high-order tensor to be completed;
a second step of adding different offsets to the first-order tensor chain rank of the initialized tensor chain rank of the high-order tensor to be completed to generate m candidate tensor chain ranks, the m candidate tensor chain ranks forming a first-order candidate tensor chain rank list;
a third step of performing rank minimization for each candidate tensor chain rank in the first-order candidate tensor chain rank list and reconstructing the high-order tensor to be completed to obtain m reconstructed tensors, and converting each reconstructed tensor into an image with the same size as the original data of the image to be reconstructed to obtain m reconstructed images;
a fourth step of respectively calculating the structural similarity between each reconstructed image and the reference image, and selecting the candidate tensor chain rank corresponding to the reconstructed image with the highest structural similarity as the first-order tensor chain rank, thereby completing the selection of the first-order tensor chain rank and the update of the initialized tensor chain rank of the high-order tensor to be completed; and
a fifth step of repeating the operations of the second to fourth steps for each order of the initialized tensor chain rank of the high-order tensor to be completed obtained in the fourth step, stopping the selection of the tensor chain rank and the iteration of the image to be reconstructed once the structural similarity between the reconstructed image and the reference image exceeds 0.95, and obtaining the optimal reconstructed image.
2. The color image self-adaptive reconstruction method based on tensor chain rank according to claim 1, wherein the image to be reconstructed is completed using a Delaunay triangulation interpolation method to obtain the reference image, and the high-order tensor to be completed is generated using a tensor enhancement method.
3. The method according to claim 2, wherein in the third step the inverse operation of the tensor enhancement method of the first step is applied to each of the m reconstructed tensors, so as to convert the reconstructed tensors into images with the same size as the original data of the image to be reconstructed.
4. The color image self-adaptive reconstruction method based on tensor chain rank according to any one of claims 1-3, wherein the initialized tensor chain rank of the high-order tensor to be completed is $r_1 = r_2 = \dots = r_{N-1} = 1$, where $N$ denotes the total order of the high-order tensor to be completed.
CN202110940296.5A 2021-08-17 2021-08-17 Color image self-adaptive reconstruction method based on tensor chain rank Active CN113643399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110940296.5A CN113643399B (en) 2021-08-17 2021-08-17 Color image self-adaptive reconstruction method based on tensor chain rank

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110940296.5A CN113643399B (en) 2021-08-17 2021-08-17 Color image self-adaptive reconstruction method based on tensor chain rank

Publications (2)

Publication Number Publication Date
CN113643399A CN113643399A (en) 2021-11-12
CN113643399B true CN113643399B (en) 2023-06-16

Family

ID=78422323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110940296.5A Active CN113643399B (en) 2021-08-17 2021-08-17 Color image self-adaptive reconstruction method based on tensor chain rank

Country Status (1)

Country Link
CN (1) CN113643399B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012053811A2 (en) * 2010-10-18 2012-04-26 전남대학교산학협력단 Tensor-voting-based color-clustering system and method
KR102055411B1 (en) * 2018-09-18 2019-12-12 인하대학교 산학협력단 Demosaicking with adaptive reference range selection
CN113077403A (en) * 2021-04-22 2021-07-06 河北工业大学 Color image reconstruction method based on local data block tensor enhancement technology

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6901164B2 (en) * 2000-04-14 2005-05-31 Trusight Ltd. Method for automated high speed improvement of digital color images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012053811A2 (en) * 2010-10-18 2012-04-26 전남대학교산학협력단 Tensor-voting-based color-clustering system and method
KR102055411B1 (en) * 2018-09-18 2019-12-12 인하대학교 산학협력단 Demosaicking with adaptive reference range selection
CN113077403A (en) * 2021-04-22 2021-07-06 河北工业大学 Color image reconstruction method based on local data block tensor enhancement technology

Also Published As

Publication number Publication date
CN113643399A (en) 2021-11-12

Similar Documents

Publication Publication Date Title
CN114140353B (en) Swin-Transformer image denoising method and system based on channel attention
CN112884851B (en) Construction method of deep compressed sensing network based on expansion iteration optimization algorithm
CN105631807B (en) The single-frame image super-resolution reconstruction method chosen based on sparse domain
CN107590779B (en) Image denoising and deblurring method based on image block clustering dictionary training
CN107341776B (en) Single-frame super-resolution reconstruction method based on sparse coding and combined mapping
CN111369466B (en) Image distortion correction enhancement method of convolutional neural network based on deformable convolution
CN107154064B (en) Natural image compressed sensing method for reconstructing based on depth sparse coding
CN110136060B (en) Image super-resolution reconstruction method based on shallow dense connection network
CN110634105A (en) Video high-space-time resolution signal processing method combining optical flow method and deep network
CN105427264A (en) Image reconstruction method based on group sparsity coefficient estimation
CN111768340B (en) Super-resolution image reconstruction method and system based on dense multipath network
CN112150354A (en) Single image super-resolution method combining contour enhancement and denoising statistical prior
CN112288632A (en) Single image super-resolution method and system based on simplified ESRGAN
CN115829876A (en) Real degraded image blind restoration method based on cross attention mechanism
CN114757828A (en) Transformer-based video space-time super-resolution method
CN111986092A (en) Image super-resolution reconstruction method and system based on dual networks
CN112541965A (en) Compressed sensing image and video recovery based on tensor approximation and space-time correlation
CN110728728B (en) Compressed sensing network image reconstruction method based on non-local regularization
CN106296583B (en) Based on image block group sparse coding and the noisy high spectrum image ultra-resolution ratio reconstructing method that in pairs maps
Feng et al. Real-world non-homogeneous haze removal by sliding self-attention wavelet network
CN113643399B (en) Color image self-adaptive reconstruction method based on tensor chain rank
CN117911302A (en) Underwater low-illumination image enhancement method based on conditional diffusion model
Guo et al. Joint demosaicking and denoising benefits from a two-stage training strategy
CN117557476A (en) Image reconstruction method and system based on FCTFT
CN111275620B (en) Image super-resolution method based on Stacking integrated learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant