CN113506212B - Improved hyperspectral image super-resolution reconstruction method based on POCS - Google Patents

Improved hyperspectral image super-resolution reconstruction method based on POCS

Info

Publication number
CN113506212B
CN113506212B (application CN202110558431.XA)
Authority
CN
China
Prior art keywords
image
resolution
iteration
images
mean square
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110558431.XA
Other languages
Chinese (zh)
Other versions
CN113506212A (en)
Inventor
王玉磊
贺昕昕
宋梅萍
于浩洋
张建祎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202110558431.XA
Publication of CN113506212A
Application granted
Publication of CN113506212B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an improved POCS-based hyperspectral image super-resolution reconstruction method. First, one gray-level image is randomly selected from the gray-level images of the first band of a sequence of low-resolution hyperspectral images and an initial reference frame is obtained by bicubic interpolation, which relieves the edge-blurring problem of the reconstructed image to a certain extent. The remaining gray-level images of the first band are then used to correct the reference frame according to a projection formula into which a relaxation operator is introduced, so that burrs in smooth areas of the reconstructed image are suppressed. After at least two iterations, whether the mean square error between the images reconstructed in consecutive iterations is smaller than a set threshold is used as the condition for exiting the iteration, which makes the iteration process adaptive and avoids the subjectivity of manually setting the number of iterations. Finally, the process is repeated for the gray-level image of each band of the hyperspectral image to obtain a hyperspectral image with improved spatial resolution. The method can serve as an effective means of improving the spatial resolution of hyperspectral images.

Description

Improved hyperspectral image super-resolution reconstruction method based on POCS
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a POCS-based hyperspectral image super-resolution reconstruction method.
Background
The spectral resolution of hyperspectral images is extremely high, but this comes at the expense of spatial resolution. When a hyperspectral image is captured, the camera exposure time is fixed and so is the number of photons collected; the higher the spectral resolution, the fewer photons are allocated to each band for imaging, which results in a lower spatial resolution. This mutual constraint between spectral and spatial resolution limits the application of hyperspectral images, so improving the spatial resolution of hyperspectral images is an urgent problem.
Since improving the hyperspectral camera itself from the hardware side is constrained by both cost and manufacturing process, the current way to address the low spatial resolution of hyperspectral images is, from the software side, to reconstruct the image with a super-resolution reconstruction algorithm and thereby improve its spatial resolution.
Among spatial-domain super-resolution reconstruction methods for image sequences, the projection onto convex sets (POCS) algorithm is favored by many researchers for its simple principle and good reconstruction quality. The POCS algorithm is grounded in set theory: each piece of prior information about the image defines a closed convex set, and the intersection of these closed convex sets is the solution space of POCS super-resolution reconstruction. However, images reconstructed by this algorithm suffer from blurred edges, burrs in smooth areas, and an overly subjective choice of the number of iterations.
Disclosure of Invention
The invention discloses an improved POCS-based hyperspectral image super-resolution reconstruction method, which specifically comprises the following steps of:
s1: selecting a gray level image of a certain wave band from the sequence low-resolution hyperspectral image to obtain a sequence low-resolution gray level image of the wave band;
s2: randomly selecting one of the sequence low-resolution gray images and interpolating it by bicubic interpolation to obtain an initial reference frame;
s3: calculating a gradient map of the image, performing Gaussian filtering treatment on the gradient map, acquiring a relaxation operator according to the gradient map, and adding the relaxation operator into a projection formula to obtain an improved projection formula;
s4: selecting one of the rest low-resolution images, and finding out the corresponding position of the pixel point on the low-resolution image on the initial reference frame through motion estimation;
s5: simulating the degradation process with the point spread function PSF, reducing the initial reference frame to the same size as the low-resolution image, computing the residual between the degraded image and the low-resolution image, and correcting and optimizing the initial reference frame of the band with the improved projection formula according to the residual;
s6: judging whether all the sequence low-resolution images are used for optimizing the initial reference frame, if so, entering S7, and if not, returning to S4;
s7: solving the mean square error of the high-resolution image reconstructed by the iteration and the high-resolution image reconstructed by the previous iteration, comparing the mean square error with a set threshold, entering S8 if the mean square error is smaller than the set threshold, and returning to S4 if the mean square error is larger than the set threshold;
s8: s1 to S7 are repeated until the gray scale images of all the bands of the hyperspectral image are reconstructed.
S2 specifically adopts the following mode: the gray values of the 16 points around the point to be sampled are weighted and superposed, so that both the gray-level influence of the 4 directly adjacent points and the influence of the rate of change of the gray value between adjacent points are taken into account. The method specifically comprises the following steps:
s21: calculating the corresponding positions (x+u, y+v) of the points (X, Y) in the enlarged image in the original image according to the multiple k of the original image A and the enlarged image B;
s22: finding 16 pixel points closest to (x+u, y+v) in the original image;
s23: and calculating the horizontal weight and the vertical weight of the 16 pixel points according to the bicubic interpolation basis function, wherein the bicubic interpolation basis function has the following calculation formula:
$$W(x)=\begin{cases}(a+2)|x|^{3}-(a+3)|x|^{2}+1, & |x|\le 1\\ a|x|^{3}-5a|x|^{2}+8a|x|-4a, & 1<|x|<2\\ 0, & \text{otherwise}\end{cases}$$
s24: the pixel values of the 16 pixel points are combined with the horizontal weight and the vertical weight to carry out weighted superposition, so as to obtain the pixel values of the points (X, Y) in the amplified image, and the calculation formula is as follows:
$$B(X,Y)=\sum_{i=0}^{3}\sum_{j=0}^{3}a(i,j)\,W(u+1-i)\,W(v+1-j)$$
S3 specifically adopts the following mode: the gradient map measures the degree of difference between the current pixel and its surrounding pixels; the larger the gradient, the larger the difference, and the smaller the gradient, the smaller the difference;
the gradient map is obtained by the following steps:
Let the original image be $F=[f(m_{1},m_{2})]_{M_{1}\times M_{2}}$, of size $M_{1}\times M_{2}$. According to the neighborhood distribution of the current pixel, the weighted gradient of the image is defined as shown in equation (2).
(Equation (2), the weighted gradient, is reproduced only as an image in the original publication.)
where $g_{1}$, $g_{2}$, $g_{3}$ and $g_{4}$ are given by equations (3), (4), (5) and (6), respectively;
(Equations (3)–(6) are reproduced only as images in the original publication.)
where $g_{1}$, $g_{2}$, $g_{3}$ and $g_{4}$ are the weighted gradient values of the current pixel point in the respective directions;
performing one-time Gaussian filtering treatment on the gradient map obtained by calculation;
The relaxation operator calculated from the gradient map has the same character as the gradient map: its value is large near strong edges and small near weak edges. Adding the relaxation operator to the projection formula yields an improved projection formula that, when correcting the initial reference frame, distinguishes edge regions from smooth regions well and therefore applies corrections of different degrees.
The relaxation operator is obtained in the following way:
(The relaxation operator formula is reproduced only as an image in the original publication.)
where min(g) represents the minimum value of the gradient map and k is an adjustment coefficient with value range [-1, 1];
the improved projection formula is obtained by adopting the following modes:
$$x(i,j)=\begin{cases}x(i,j)+\lambda\,\dfrac{h(i,j)\left(r^{(y)}-\delta_{0}\right)}{\sum_{m,n}h^{2}(m,n)}, & r^{(y)}>\delta_{0}\\ x(i,j), & \left|r^{(y)}\right|\le\delta_{0}\\ x(i,j)+\lambda\,\dfrac{h(i,j)\left(r^{(y)}+\delta_{0}\right)}{\sum_{m,n}h^{2}(m,n)}, & r^{(y)}<-\delta_{0}\end{cases}$$
where $x$ is the initial reference frame, $\lambda$ is the relaxation operator, $r^{(y)}$ is the residual, $h$ is the PSF template, and $\delta_{0}$ is a noise-related quantity.
S7 specifically adopts the following method: the mean square error between the high-resolution images reconstructed in two consecutive iterations measures the similarity of the two images. If the mean square error is small enough, the images reconstructed in the two adjacent iterations are considered very similar; the current iteration then differs little from the previous one, there is no need to continue iterating, and the algorithm has converged. Otherwise the algorithm has not converged and iteration must continue. The specific method is as follows:
the mean square error of the reconstructed image of the two previous and subsequent iterations is calculated according to the following:
$$\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[x_{t}(i,j)-x_{t-1}(i,j)\right]^{2}$$
where $x_{t}$ and $x_{t-1}$ are the high-resolution images reconstructed in the current and previous iterations and $M\times N$ is the image size.
The mean square error is then compared with the set threshold; if it is smaller than the threshold, the iteration is exited, otherwise the method returns to step S4.
Based on the above technical scheme, the invention provides an improved POCS-based hyperspectral image super-resolution reconstruction method. The method randomly selects one gray-level image of the first band of the sequence low-resolution hyperspectral image and obtains an initial reference frame through bicubic interpolation, which relieves the edge-blurring problem of the reconstructed image to a certain extent. The remaining gray-level images of the first band are then used to correct the reference frame according to a projection formula into which a relaxation operator is introduced, so that burrs in smooth areas of the reconstructed image are suppressed. After at least two iterations, whether the mean square error between the images reconstructed in consecutive iterations is smaller than a set threshold is used as the condition for exiting the iteration, which makes the iteration process adaptive and avoids the subjectivity of manually setting the number of iterations. Finally, the process is repeated for the gray-level image of each band of the hyperspectral image to obtain a hyperspectral image with improved spatial resolution. The method can serve as an effective means of improving the spatial resolution of hyperspectral images; since high spatial resolution is a prerequisite for applying hyperspectral images to tasks such as classification and detection, the method has important application value.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an improved POCS-based hyperspectral image super-resolution reconstruction method provided by the present invention;
FIGS. 2 a-2 b are diagrams of the results after reconstruction of the low resolution AVIRIS Indian Pines dataset used in the present invention;
fig. 3 a-3 b are diagrams of low resolution ROSIS University of Pavia data sets and super resolution reconstruction results used in the present invention.
Fig. 4 is a schematic diagram illustrating pixel dot layout in an embodiment of the method.
Detailed Description
In order to make the technical scheme and advantages of the present invention more clear, the technical scheme in the embodiment of the present invention is clearly and completely described below with reference to the accompanying drawings in the embodiment of the present invention:
the improved POCS-based hyperspectral image super-resolution reconstruction method shown in fig. 1 specifically comprises the following steps:
assume that an original hyperspectral image is noted as
Figure BDA0003078181580000051
Wherein->
Figure BDA0003078181580000052
b represents the total number of bands and N represents the total number of image pixels.
Step 101: selecting a gray level image of a certain wave band in the sequence low-resolution hyperspectral image to obtain a sequence low-resolution gray level image of the wave band;
step 102: randomly selecting any one of the low-resolution images of the sequence, and obtaining an initial reference frame through bicubic interpolation;
Assume that the original image A is of size m×n and that the target image B is obtained by magnifying A by a factor of k, so that B is of size M×N with $M=km$ and $N=kn$.
To find the value of the pixel (X, Y) in the target image B, first find the corresponding point P in the original image A, then locate the 16 pixels in A nearest to P and compute their weights with the bicubic interpolation basis function; the pixel value at (X, Y) in B equals the weighted sum of these 16 pixels. The point P in A may also be called the point to be sampled. The bicubic interpolation basis function is calculated as follows:
$$W(x)=\begin{cases}(a+2)|x|^{3}-(a+3)|x|^{2}+1, & |x|\le 1\\ a|x|^{3}-5a|x|^{2}+8a|x|-4a, & 1<|x|<2\\ 0, & \text{otherwise}\end{cases}$$
where a is generally taken as -0.5 or -0.75.
From the correspondence between points in images A and B, the scale relation $\frac{m}{M}=\frac{n}{N}=\frac{1}{k}$ is obtained, and the position of the point P in A corresponding to the coordinate (X, Y) follows as $P=\left(X\cdot\frac{m}{M},\;Y\cdot\frac{n}{N}\right)=\left(\frac{X}{k},\;\frac{Y}{k}\right)$.
The coordinates of P are usually fractional; write P = (x+u, y+v), where x and y are the integer parts and u and v are the fractional parts. The positions of the 16 pixel points nearest to P are denoted a(i, j) (i, j = 0, 1, 2, 3), as shown in fig. 4.
After the coordinates of the 16 points nearest to P are obtained, the next step is to determine the argument of the bicubic interpolation basis function so as to obtain the weight W for each of the 16 points. The basis function is one-dimensional while the image is two-dimensional, so the pixel coordinates are split into rows and columns and handled separately. The argument represents the distance from a pixel point to P; for example, a(0, 0) lies at distance (1+u, 1+v) from P(x+u, y+v), so its horizontal weight is W(1+u), its vertical weight is W(1+v), and its contribution to B(X, Y) is a(0, 0)×W(1+u)×W(1+v). It follows that the horizontal weights of the points a(0, i) are W(1+u), W(u), W(1-u), W(2-u), respectively, and the vertical weights of the points a(j, 0) are W(1+v), W(v), W(1-v), W(2-v), respectively. The pixel value B(X, Y) is:
$$B(X,Y)=\sum_{i=0}^{3}\sum_{j=0}^{3}a(i,j)\,W(u+1-i)\,W(v+1-j)$$
where W(u+1-i) is the horizontal weight and W(v+1-j) is the vertical weight.
And obtaining the values of all points in the bicubic interpolation image according to the above formula.
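The bicubic step can also be summarized in code. The sketch below is a minimal Python illustration of the 16-point weighted superposition described above; the function names (`bicubic_kernel`, `bicubic_upscale`) and the edge-replication padding are illustrative choices, not part of the patent.

```python
import numpy as np

def bicubic_kernel(x, a=-0.5):
    """Bicubic interpolation basis function W(x); a is commonly -0.5 or -0.75."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_upscale(img, k):
    """Magnify a 2-D grayscale image by an integer factor k using 4x4 bicubic weights."""
    m, n = img.shape
    M, N = m * k, n * k
    # Pad by 2 pixels (edge replication) so the 4x4 neighborhood always exists.
    padded = np.pad(img.astype(np.float64), 2, mode="edge")
    out = np.zeros((M, N))
    for X in range(M):
        for Y in range(N):
            px, py = X / k, Y / k                    # corresponding point P in the original image
            x, y = int(np.floor(px)), int(np.floor(py))
            u, v = px - x, py - y                    # fractional parts
            val = 0.0
            for i in range(4):                       # 4x4 neighborhood a(i, j) around P
                wx = bicubic_kernel(u + 1 - i)
                for j in range(4):
                    wy = bicubic_kernel(v + 1 - j)
                    val += padded[x + 1 + i, y + 1 + j] * wx * wy
            out[X, Y] = val
    return out
```

A vectorized or library-based resampler would normally be used in practice; the explicit loops above simply mirror the weight derivation in the text.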
Step 103: calculating a gradient map of the image, performing Gaussian filtering on the gradient map to remove noise, and adding a relaxation operator into a projection formula;
image gradients are a common measure of image information that represents the difference between the current pixel and surrounding pixels. The larger the gradient is, the larger the difference between the current pixel and surrounding pixels is, which means that the pixel is positioned in the edge area of the image, and the larger the amount of information is contained; the smaller the gradient, the smaller the difference between the current pixel and surrounding pixels, indicating that the pixel contains less information.
A Gaussian gradient map is introduced to measure the importance of each pixel. Let the original image be $F=[f(m_{1},m_{2})]_{M_{1}\times M_{2}}$, of size $M_{1}\times M_{2}$; based on the neighborhood distribution of the current pixel, the weighted gradient of the image can be defined as shown in equation (4).
(Equation (4), the weighted gradient, is reproduced only as an image in the original publication.)
where $g_{1}$, $g_{2}$, $g_{3}$ and $g_{4}$ are given by equations (5), (6), (7) and (8), respectively.
(Equations (5)–(8) are reproduced only as images in the original publication.)
where $g_{1}$, $g_{2}$, $g_{3}$ and $g_{4}$ are the weighted gradient values of the current pixel in the respective directions; the weighting is used to mitigate the influence of noise. At the same time, gradients taken over longer distances receive smaller weights, which also incorporates the idea of a local mean.
A gradient value is calculated for every pixel of the whole frame according to the formula, giving the gradient map. Because noise points also differ strongly from their neighborhood, the calculated gradient map is Gaussian-filtered once to further suppress noise; the points that still have large gradients can then essentially be taken as pixels near edges. The conventional POCS method corrects the initial reference frame indiscriminately: it does not distinguish smooth areas from edge areas, and although the gray-level variation in a smooth area is smaller than in an edge area, the smooth area is corrected to the same degree as the edge area, so burrs appear in smooth areas of the reconstructed image.
The Gaussian gradient map can represent different information content contained in the pixel neighborhood, and the gradient value is large near the strong edge and is small near the weak edge, so that a relaxation operator of the method is defined by the Gaussian gradient map. The relaxation operator is shown in formula (1) above. The relaxation operator is then added to the projection formula.
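The following Python sketch illustrates the structure of this step. Because the weighted-gradient equations and the relaxation-operator formula are reproduced only as images in the original publication, the sketch substitutes a plain gradient magnitude and a simple min-max scaling as stand-ins; only the overall flow (gradient map, one Gaussian filtering pass, a relaxation map that is large at edges and small in smooth regions) follows the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_map(img):
    """Gradient magnitude of a 2-D grayscale image (stand-in for the weighted gradient)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sqrt(gx**2 + gy**2)

def relaxation_operator(grad, k=0.5, sigma=1.0):
    """Relaxation map: large near strong edges, small in smooth regions (stand-in formula)."""
    g = gaussian_filter(grad, sigma=sigma)        # one Gaussian filtering pass to suppress noise
    g_min, g_max = g.min(), g.max()
    scaled = (g - g_min) / (g_max - g_min + 1e-12)
    return 1.0 + k * scaled                       # k in [-1, 1] adjusts how strongly edges are favored
```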
Step 104: and selecting one of the rest low-resolution images, and finding out the corresponding position of the pixel point on the low-resolution image on the initial reference frame through motion estimation.
During imaging, slight jitter of the image sensor and slight movement of the target object both result in sub-pixel level displacement of the same object in both images. Therefore, motion parameter estimation must be performed on the sequence images before super-resolution reconstruction of the sequence images is performed.
Motion estimation of sequential images refers to solving the displacement difference of the same objective object between two images, i.e. the difference in the coordinate position of the object between the two images. The application of motion estimation in the image super-resolution reconstruction process specifically refers to accurately positioning each pixel in the low-resolution image to the coordinate position of the corresponding pixel in the high-resolution image. If the corresponding position coordinates are found inaccurately, the constraint set defined by the POCS algorithm can process the pixels at the wrong positions, and therefore the effect of super-resolution reconstruction of the image cannot be achieved.
There are many methods for motion estimation of image sequences; block matching is used here. Each frame of the current sequence of low-resolution images is divided into blocks, and for each block the best-matching image block is searched for within a search window of the target high-resolution reference frame. The offset between the position of the best-matching block and the position of the block in the original image is called the motion vector. Specifically, if the center pixel of a 3×3 image block has position (m, n) in one frame and the center pixel of the matching block has position (m+i, n+j) in the other frame, the displacement of this block between the two images is (i, j), which is the motion vector. Different criteria can be used to identify the best match; the minimum mean square error criterion is adopted here because it is intuitive to define, relatively simple to compute, and currently the most commonly used matching criterion.
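A minimal block-matching sketch under the minimum mean square error criterion is shown below; the block size, search radius, and function name are illustrative assumptions, and both frames are assumed to lie on the same pixel grid (e.g. after interpolation).

```python
import numpy as np

def block_motion_vector(ref, cur, top, left, block=3, radius=4):
    """Displacement (di, dj) of the block of `cur` at (top, left) when matched inside `ref`."""
    h, w = ref.shape
    patch = cur[top:top + block, left:left + block].astype(np.float64)
    best, best_mv = np.inf, (0, 0)
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            i, j = top + di, left + dj
            if i < 0 or j < 0 or i + block > h or j + block > w:
                continue
            cand = ref[i:i + block, j:j + block].astype(np.float64)
            mse = np.mean((patch - cand) ** 2)    # minimum mean square error criterion
            if mse < best:
                best, best_mv = mse, (di, dj)
    return best_mv
```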
Step 105: simulating a degradation process by using a PSF, reducing an initial reference frame image to be the same size as a low-resolution image, solving residual errors of the initial reference frame image and the low-resolution image, and correcting the initial reference frame according to the residual errors and by using an improved projection formula;
all imaging systems may not be ideal optical imaging systems, and there is always degradation of the image during a particular imaging procedure. A particular imaging point may not be imaged exactly completely in an image to a pixel grid, but rather may create some blurring and some imaging signal may spill over to surrounding imaging grids. This is caused by the point spread function PSF of the imaging system.
In the implementation process of the POCS algorithm, each pixel of the low-resolution image is mapped into the high-resolution imaging grid one by one, and the application range of the PSF is found. And calculating an estimated value of the low-resolution image corresponding to the current pixel according to the PSF and the degradation model of the image, comparing the estimated value with the actual value of the low-resolution image, and if the calculated residual error exceeds a preset range, correcting the relevant pixel point of the current high-resolution reference frame until the residual error is reduced to be within the preset range. According to the principle, the correction process of the POCS algorithm is not one-time correction, but a plurality of iterations are needed to reduce the residual errors calculated by all pixels to be within the allowable range.
In a specific implementation of the POCS algorithm, the point spread function is determined by the particular imaging system; a commonly used Gaussian model h(x, y) can be expressed as:
$$h(x,y)=\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{(x-X_{0})^{2}+(y-Y_{0})^{2}}{2\sigma^{2}}\right),\quad (x,y)\in S_{h}$$
where $X_{0}$ and $Y_{0}$ are the coordinates of the center point of the point spread function, $x$ and $y$ are the abscissa and ordinate of the target image pixel, and $S_{h}$ is the support domain of the point spread function, typically of size 3×3 or 5×5.
The improved projection formula is as follows:
$$x(i,j)=\begin{cases}x(i,j)+\lambda\,\dfrac{h(i,j)\left(r^{(y)}-\delta_{0}\right)}{\sum_{m,n}h^{2}(m,n)}, & r^{(y)}>\delta_{0}\\ x(i,j), & \left|r^{(y)}\right|\le\delta_{0}\\ x(i,j)+\lambda\,\dfrac{h(i,j)\left(r^{(y)}+\delta_{0}\right)}{\sum_{m,n}h^{2}(m,n)}, & r^{(y)}<-\delta_{0}\end{cases}\qquad(10)$$
where $x$ is the initial reference frame, $\lambda$ is the relaxation operator, $r^{(y)}$ is the residual, $h$ is the PSF template, and $\delta_{0}$ is a noise-related quantity. According to expression (10), the image is corrected adaptively according to its gray-level variation characteristics: in smooth regions each correction is small, so burrs are suppressed, while in edge regions each correction is large, which accelerates convergence.
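The sketch below illustrates Step 105 in Python: a Gaussian PSF template, simulation of the degradation for each low-resolution pixel, and a three-branch, relaxation-weighted correction of the covered high-resolution pixels. It follows the textual description of expression (10) (residual compared against the noise threshold, correction normalized by the PSF template and scaled by the relaxation operator), but the integer mapping of low-resolution pixels to high-resolution positions and all parameter values are simplifying assumptions, not the patented implementation.

```python
import numpy as np

def gaussian_psf(size=3, sigma=1.0):
    """Normalized Gaussian PSF template h on a size x size support domain."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    h = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return h / h.sum()

def pocs_correct(hr, lr, k, psf, lam, delta0=1.0):
    """Correct the high-resolution estimate `hr` using one low-resolution frame `lr`."""
    hr = hr.astype(np.float64)
    r = psf.shape[0] // 2
    norm = np.sum(psf**2)
    for m in range(lr.shape[0]):
        for n in range(lr.shape[1]):
            i, j = m * k, n * k                      # assumed integer mapping (no sub-pixel shift)
            if i - r < 0 or j - r < 0 or i + r + 1 > hr.shape[0] or j + r + 1 > hr.shape[1]:
                continue
            window = hr[i - r:i + r + 1, j - r:j + r + 1]
            est = np.sum(window * psf)               # simulated (degraded) low-resolution pixel
            res = lr[m, n] - est                     # residual r(y)
            if res > delta0:
                window += lam[i - r:i + r + 1, j - r:j + r + 1] * psf * (res - delta0) / norm
            elif res < -delta0:
                window += lam[i - r:i + r + 1, j - r:j + r + 1] * psf * (res + delta0) / norm
    return hr
```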
Step 106: judging whether the sequence low resolution images are all used or not;
the sequence of low resolution images is used to guide the correction of the initial reference frame, the more guide images are used, the more spatial information is integrated in the initial reference frame, and the higher the spatial resolution of the high resolution image which is finished by the final correction is, so that the low resolution images should be used as much as possible to guide the correction. If the sequence of low resolution images are all used, then go to the next step; if there are more sequential low resolution images not used, then return to step 104 where the low resolution images are again selected to guide the correction.
Step 107: solving the mean square error of the high-resolution image reconstructed by the iteration and the high-resolution image reconstructed by the previous iteration, and comparing the mean square error with a set threshold value;
the mean square error of the high resolution images reconstructed in the two previous and subsequent iterations can be used to measure the similarity between the two images. If the mean square error is small enough, the two images are considered to be similar, the algorithm is converged, the gray level image of the wave band is reconstructed, and the next step is carried out; otherwise, the algorithm does not converge and should return to step 104 for the next iteration.
Step 108: repeat steps 101 to 107 for each band to finish super-resolution reconstruction of the hyperspectral image.
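For orientation, the sketch below ties Steps 101 to 108 together for a single band, reusing the helper functions sketched in the preceding steps (`bicubic_upscale`, `gradient_map`, `relaxation_operator`, `gaussian_psf`, `pocs_correct`, `converged`). It reproduces only the control flow (initial reference frame, per-frame correction, adaptive exit on the inter-iteration mean square error); motion compensation and the per-band loop over the hyperspectral cube are omitted, and all names and defaults are assumptions.

```python
import numpy as np

def reconstruct_band(lr_frames, k, max_iter=20, tau=1e-3):
    """lr_frames: list of low-resolution grayscale frames of one band; k: magnification factor."""
    ref = bicubic_upscale(lr_frames[0], k)                 # Step 102: initial reference frame
    lam = relaxation_operator(gradient_map(ref))           # Step 103: relaxation operator
    psf = gaussian_psf(size=3, sigma=1.0)
    prev = ref.copy()
    for _ in range(max_iter):
        for lr in lr_frames[1:]:                           # Steps 104-106: use every remaining frame
            ref = pocs_correct(ref, lr, k, psf, lam)       # (motion compensation omitted here)
        if converged(prev, ref, tau):                      # Step 107: adaptive exit condition
            break
        prev = ref.copy()
    return ref
```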
Experiments on real hyperspectral data
Two groups of publicly available real hyperspectral image datasets are used to analyze and evaluate the application effect of the improved POCS-based hyperspectral image super-resolution reconstruction method provided by the invention.
1. Data set and parameter settings
(1) CAVE dataset
Both datasets used for the experiment are from the CAVE database, namely the "face" and "fake and real food" data. The spectral range of the CAVE dataset images is 400 nm to 700 nm over 31 spectral bands, and the image size of each band is 512×512. FIG. 2a shows the grayscale image of the 10th band of the face dataset after 2-fold downsampling; fig. 3a shows the grayscale image of the 10th band of the fake and real food dataset after 2-fold downsampling.
Experimental evaluation index
(1) Mean square error (Mean Square Error, MSE)
The mean square error of an image is the sum of the squared gray-level differences over all corresponding pixels of the two images, divided by the total number of pixels. The smaller the mean square error, the smaller the difference between the two images and the more similar they are. The image mean square error MSE is defined as:
$$\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[x(i,j)-y(i,j)\right]^{2}$$
wherein x and y represent two images respectively, and M and N represent the lengths of the images in the horizontal and vertical directions respectively.
(2) Peak Signal-to-Noise Ratio (PSNR)
The peak signal-to-noise ratio PSNR is defined as:
$$\mathrm{PSNR}=10\log_{10}\left(\frac{255^{2}}{\mathrm{MSE}}\right)$$
(3) Structural similarity (Structural SIMilarity, SSIM)
The structural similarity is an index for measuring the similarity of two images, and the definition form is as follows:
$$\mathrm{SSIM}(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})}$$
where $x$ and $y$ are the two images, $\mu_{x}$ and $\mu_{y}$ are their means, $\sigma_{x}^{2}$ and $\sigma_{y}^{2}$ are their variances, $\sigma_{xy}$ is the covariance of $x$ and $y$, and $C_{1}$ and $C_{2}$ are constants that keep the denominator stable.
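The three indices can be sketched as follows; the PSNR assumes 8-bit data (peak value 255), and the SSIM shown is the single-window global form with the conventional constants $C_{1}=(0.01\times 255)^{2}$ and $C_{2}=(0.03\times 255)^{2}$, a simplification of the windowed SSIM usually reported.

```python
import numpy as np

def mse(x, y):
    """Mean square error between two images of identical shape."""
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming the given peak value."""
    return 10.0 * np.log10(peak**2 / mse(x, y))

def ssim_global(x, y, peak=255.0):
    """Global (single-window) structural similarity index."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```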
2. Analysis and evaluation of experimental results
The results of experiments using two sets of real hyperspectral image data are shown in tables 1-2, and corresponding reconstructed result images are shown in figures 2b and 3 b.
For comparison, the traditional POCS method was also run and its reconstruction results were compared with those of the improved POCS method:
Compared with the traditional POCS method, the improved POCS method alleviates, to a certain extent, the two problems of blurred edges and burrs in smooth areas of the reconstructed image, makes the number of iterations adaptive, and avoids the subjectivity of manually setting the iteration count.
TABLE 1 face dataset super-resolution reconstruction results
(Table 1 is reproduced as an image in the original publication.)
Table 2 fake and real food dataset super-resolution reconstruction results
(Table 2 is reproduced as an image in the original publication.)
Aiming at the problem of the low spatial resolution of hyperspectral images, the invention provides an improved POCS-based super-resolution reconstruction method to improve their spatial resolution. The theoretical basis of the method is set theory: each piece of prior information about the image can be regarded as a closed convex set, and the intersection of these closed convex sets is the solution space of the POCS method. The method randomly selects one gray-level image of the first band of the sequence low-resolution hyperspectral image and obtains an initial reference frame through bicubic interpolation, relieving the edge-blurring problem of the reconstructed image to a certain extent; the remaining gray-level images of the first band are then used to correct the reference frame according to a projection formula into which a relaxation operator is introduced, so that burrs in smooth areas of the reconstructed image are suppressed; after at least two iterations, whether the mean square error between the images reconstructed in consecutive iterations is smaller than a set threshold is used as the condition for exiting the iteration, making the iteration process adaptive and avoiding the subjectivity of setting the number of iterations; finally, the process is repeated for the gray-level image of each band to obtain a hyperspectral image with improved spatial resolution. Experimental results on two publicly available real hyperspectral datasets demonstrate the effectiveness of the improved POCS-based hyperspectral image super-resolution reconstruction method.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art, who is within the scope of the present invention, should make equivalent substitutions or modifications according to the technical scheme of the present invention and the inventive concept thereof, and should be covered by the scope of the present invention.

Claims (4)

1. An improved POCS-based hyperspectral image super-resolution reconstruction method is characterized in that: the method comprises the following steps:
s1: selecting a gray level image of a certain wave band from the sequence low-resolution hyperspectral image to obtain a sequence low-resolution gray level image of the wave band;
s2: randomly selecting any one of the sequence low-resolution gray images and carrying out interpolation processing on the selected sequence low-resolution gray images in a bicubic interpolation mode to obtain an initial reference frame;
s3: calculating a gradient map of the image, performing Gaussian filtering treatment on the gradient map, acquiring a relaxation operator according to the gradient map, and adding the relaxation operator into a projection formula to obtain an improved projection formula;
s4: selecting one of the rest low-resolution images, and finding out the corresponding position of the pixel point on the low-resolution image on the initial reference frame through motion estimation;
s5: simulating a degradation process by adopting a point spread function PSF, reducing an initial reference frame image to the same size as a low resolution image, acquiring residual errors of the degraded image and the original low resolution image, and correcting and optimizing the initial reference frame of the wave band by utilizing an improved projection formula according to the residual errors;
s6: judging whether all the sequence low-resolution images are used for optimizing the initial reference frame, if so, entering S7, and if not, returning to S4;
s7: solving the mean square error of the high-resolution image reconstructed by the iteration and the high-resolution image reconstructed by the previous iteration, comparing the mean square error with a set threshold, entering S8 if the mean square error is smaller than the set threshold, and returning to S4 if the mean square error is larger than the set threshold;
s8: s1 to S7 are repeated until the gray scale images of all the bands of the hyperspectral image are reconstructed.
2. The improved POCS-based hyperspectral image super-resolution reconstruction method of claim 1, further characterized by: the following specific mode is adopted in the S2:
s21: calculating the corresponding positions (x+u, y+v) of the points (X, Y) in the enlarged image in the original image according to the multiple k of the original image A and the enlarged image B;
s22: finding 16 pixel points closest to (x+u, y+v) in the original image;
s23: and calculating the horizontal weight and the vertical weight of the 16 pixel points according to the bicubic interpolation basis function, wherein the bicubic interpolation basis function has the following calculation formula:
$$W(x)=\begin{cases}(a+2)|x|^{3}-(a+3)|x|^{2}+1, & |x|\le 1\\ a|x|^{3}-5a|x|^{2}+8a|x|-4a, & 1<|x|<2\\ 0, & \text{otherwise}\end{cases}$$
s24: the pixel values of the 16 pixel points are combined with the horizontal weight and the vertical weight to carry out weighted superposition, so as to obtain the pixel values of the points (X, Y) in the amplified image, and the calculation formula is as follows:
$$B(X,Y)=\sum_{i=0}^{3}\sum_{j=0}^{3}a(i,j)\,W(u+1-i)\,W(v+1-j)$$
3. an improved POCS-based hyperspectral image super-resolution reconstruction method as claimed in claim 1, further characterized by: the following mode is specifically adopted in the S3:
s31: the weighted gradient value of the current pixel point in each direction is calculated according to the following formula:
(The formulas for the four directional weighted gradients are reproduced only as images in the original publication.)
s32: and calculating the total weighted gradient of each pixel in the image by the following formula to obtain an image gradient map with the self-adaptive region correction characteristic:
(The formula for the total weighted gradient is reproduced only as an image in the original publication.)
s33: performing Gaussian filtering on the obtained gradient map for one time to remove noise;
s34: obtaining a relaxation operator according to the obtained gradient map:
(The relaxation operator formula is reproduced only as an image in the original publication.)
s35: taking the relative size of the residual $r^{(y)}$ and the noise coefficient $\delta_{0}$ as the criterion, using the difference between the residual and the noise coefficient, normalizing by the PSF template $h$, and combining the relaxation operator $\lambda$, each pixel point in the initial reference frame $x$ is corrected adaptively; the process is expressed by the improved projection formula as:
$$x(i,j)=\begin{cases}x(i,j)+\lambda\,\dfrac{h(i,j)\left(r^{(y)}-\delta_{0}\right)}{\sum_{m,n}h^{2}(m,n)}, & r^{(y)}>\delta_{0}\\ x(i,j), & \left|r^{(y)}\right|\le\delta_{0}\\ x(i,j)+\lambda\,\dfrac{h(i,j)\left(r^{(y)}+\delta_{0}\right)}{\sum_{m,n}h^{2}(m,n)}, & r^{(y)}<-\delta_{0}\end{cases}$$
4. an improved POCS-based hyperspectral image super-resolution reconstruction method as claimed in claim 1, further characterized by: the following method is specifically adopted in S7:
the mean square error of the reconstructed image of the two previous and subsequent iterations is calculated according to the following:
$$\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[x_{t}(i,j)-x_{t-1}(i,j)\right]^{2}$$
The mean square error is then compared with the set threshold; if it is smaller than the threshold, the iteration is exited, otherwise the method returns to step S4.
CN202110558431.XA 2021-05-21 2021-05-21 Improved hyperspectral image super-resolution reconstruction method based on POCS Active CN113506212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110558431.XA CN113506212B (en) 2021-05-21 2021-05-21 Improved hyperspectral image super-resolution reconstruction method based on POCS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110558431.XA CN113506212B (en) 2021-05-21 2021-05-21 Improved hyperspectral image super-resolution reconstruction method based on POCS

Publications (2)

Publication Number Publication Date
CN113506212A CN113506212A (en) 2021-10-15
CN113506212B true CN113506212B (en) 2023-05-23

Family

ID=78008481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110558431.XA Active CN113506212B (en) 2021-05-21 2021-05-21 Improved hyperspectral image super-resolution reconstruction method based on POCS

Country Status (1)

Country Link
CN (1) CN113506212B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188305B (en) * 2023-02-16 2023-12-19 长春理工大学 Multispectral image reconstruction method based on weighted guided filtering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005026765A1 (en) * 2003-09-18 2005-03-24 The Institute Of Cancer Research; Royal Cancer Hospital Dynamic mr imaging involving constrained reconstruction using pocs algorithm
CN103136734A (en) * 2013-02-27 2013-06-05 北京工业大学 Restraining method on edge Halo effects during process of resetting projections onto convex sets (POCS) super-resolution image
CN108537728A (en) * 2018-03-05 2018-09-14 中国地质大学(武汉) High spectrum image super-resolution forming method and system based on spectrum fidelity
CN108765288A (en) * 2018-05-25 2018-11-06 杭州电子科技大学 A kind of POCS Image Super-resolution Reconstruction methods kept based on edge


Also Published As

Publication number Publication date
CN113506212A (en) 2021-10-15

Similar Documents

Publication Publication Date Title
CN108734659B (en) Sub-pixel convolution image super-resolution reconstruction method based on multi-scale label
CN106952228B (en) Super-resolution reconstruction method of single image based on image non-local self-similarity
CN107025632B (en) Image super-resolution reconstruction method and system
CN106127688B (en) A kind of super-resolution image reconstruction method and its system
JP2007188493A (en) Method and apparatus for reducing motion blur in motion blur image, and method and apparatus for generating image with reduced motion blur by using a plurality of motion blur images each having its own blur parameter
CN102231204A (en) Sequence image self-adaptive regular super resolution reconstruction method
CN106169174B (en) Image amplification method
CN103020898B (en) Sequence iris image super resolution ratio reconstruction method
CN111783583B (en) SAR image speckle suppression method based on non-local mean algorithm
CN110070539A (en) Image quality evaluating method based on comentropy
US20200364556A1 (en) Signal enhancement and manipulation using a signal-specific deep network
CN112184549B (en) Super-resolution image reconstruction method based on space-time transformation technology
CN104063849A (en) Video super-resolution reconstruction method based on image block self-adaptive registration
CN109327712A (en) The video of fixed scene disappears fluttering method
CN113506212B (en) Improved hyperspectral image super-resolution reconstruction method based on POCS
CN109064402A (en) Based on the single image super resolution ratio reconstruction method for enhancing non local total variation model priori
CN115082336A (en) SAR image speckle suppression method based on machine learning
Liu et al. Image super-resolution based on adaptive joint distribution modeling
CN106920213B (en) Method and system for acquiring high-resolution image
KR101341617B1 (en) Apparatus and method for super-resolution based on error model of single image
CN111986079A (en) Pavement crack image super-resolution reconstruction method and device based on generation countermeasure network
CN108062743B (en) Super-resolution method for noisy image
CN112488920B (en) Image regularization super-resolution reconstruction method based on Gaussian-like fuzzy core
Song et al. Unsupervised denoising for satellite imagery using wavelet subband cyclegan
CN114170087A (en) Cross-scale low-rank constraint-based image blind super-resolution method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant