CN104933678B - A kind of image super-resolution rebuilding method based on image pixel intensities - Google Patents

Publication number
CN104933678B
CN104933678B (application CN201510373726.4A)
Authority
CN
China
Prior art keywords
resolution
image
pixel
neighborhood
low
Prior art date
Legal status
Active
Application number
CN201510373726.4A
Other languages
Chinese (zh)
Other versions
CN104933678A (en
Inventor
王晓峰
曾能亮
周弟东
王姣
徐冰超
Current Assignee
Xi'an Realect Electronic Development Co ltd
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201510373726.4A priority Critical patent/CN104933678B/en
Publication of CN104933678A publication Critical patent/CN104933678A/en
Application granted granted Critical
Publication of CN104933678B publication Critical patent/CN104933678B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution


Abstract

The invention discloses an image super-resolution reconstruction method based on pixel intensity, implemented according to the following steps. First, several low-resolution images are registered, and the registered images are mapped onto a high-resolution grid. For each grid node, a series of nested neighborhoods is constructed and the low-resolution pixels within each neighborhood are found; the intensity plane of each low-resolution pixel is defined, the distance from every low-resolution pixel in the neighborhood to that intensity plane is calculated, and the mean square error of the distance values is computed. Using the reciprocal of the mean square error as the weight, a weighted average estimates the pixel value of the high-resolution grid node. Finally, the high-resolution pixel value is estimated using maximum a posteriori (MAP) estimation. The invention solves the problem in the prior art that reconstruction quality is difficult to improve because of the influence of abnormal data on the estimated value.

Description

Image super-resolution reconstruction method based on pixel intensity
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image super-resolution reconstruction method based on pixel intensity.
Background
With the development of information technology, the demand for high-quality digital images is more and more urgent. However, digital images are limited by imaging equipment (manufacturing process and cost) in the acquisition process and are affected by factors such as motion blur, optical blur, random noise and low sampling rate in the imaging process, and the quality of the images obtained by shooting is not ideal. The super-resolution (SR) technique is a signal processing technique for improving the resolution of an image, and it reconstructs a high-resolution image or an image sequence using a single or multiple low-resolution images, increasing high-frequency information, and removing degradation caused during the imaging process, thereby improving the image quality and improving the visual effect of the image. In recent years, image super-resolution reconstruction has become a research hotspot in the field of digital image processing, has very important theoretical significance and wide application prospect, and is widely applied in the fields of remote sensing imaging, medical imaging, satellite imaging, mode identification, military reconnaissance, safety monitoring, traffic identification, case reconnaissance and the like.
Image super-resolution reconstruction methods fall mainly into three categories: frequency-domain methods, spatial-domain methods, and learning-based methods.
The frequency domain method reconstructs high resolution images mainly using aliasing of low resolution images. The earliest of this class of methods mainly utilized the principles of aliasing and translation properties of fourier transforms. After that, many improvements have been made. The frequency domain method relies on an observation model of an image, is limited to global translational motion and linear space invariant blurring, and is limited in practical application.
To overcome the shortcomings of frequency-domain methods, many spatial-domain methods have been proposed, chiefly: non-uniform interpolation, iterative back-projection, projection onto convex sets, and maximum a posteriori probability.
The theoretical basis of the non-uniform interpolation method is that, after motion estimation, the low-resolution images map onto the high-resolution grid non-uniformly, so the pixel values at the high-resolution grid nodes can be obtained by interpolating the non-uniform samples; the computational cost is small. This type of approach requires that all low-resolution (LR) images have the same noise profile and blur function. The most recent representative of this class is the interpolation- and multi-surface-fitting-based image super-resolution reconstruction method (MFISR) proposed by Fei Zhou. The iterative back-projection method uses an image degradation model to generate a low-resolution image, computes its difference from the observed low-resolution image, projects the residual back into the estimated high-resolution image, and repeats the process until the estimate satisfies the iteration stopping condition. Its advantages are a simple principle and fast computation; its drawbacks are the difficulty of exploiting prior knowledge of the image, the lack of a clear rule for choosing the most appropriate back-projection operator, and a non-unique solution. The projection-onto-convex-sets method is a super-resolution reconstruction method based on set theory; it preserves detail such as image edges well and makes it easy to add prior knowledge, but the reconstruction result is strongly influenced by the initial estimate and the solution is not unique. The maximum a posteriori probability method is a super-resolution reconstruction method based on statistical theory and one of the most promising super-resolution algorithms at present; its principle is to maximize the posterior probability of the high-resolution image given the known low-resolution image sequence. Its main advantage is that prior knowledge of the image can conveniently be added to regularize the ill-posed inverse problem, and it has strong denoising capability; its drawback is a large computational load.
Another class of reconstruction methods is based on machine-learning principles. It was first proposed by Freeman: a learning model is trained on a set of low-resolution images and then used to compute the high-frequency detail of the image. Researchers have also introduced the idea of manifold learning into image super-resolution reconstruction, proposing a neighborhood-embedding method that achieves good reconstruction results. Sparse representation has likewise been applied: by computing the sparse coefficients of a low-resolution image, a consistent sparse representation is sought in a high-resolution image dictionary to obtain the high-resolution image.
Disclosure of Invention
The invention aims to provide an image super-resolution reconstruction method based on pixel intensity, which solves the problem that reconstruction quality is difficult to improve due to the influence of abnormal data on an estimated value in the prior art.
The invention adopts the technical scheme that an image super-resolution reconstruction method based on pixel intensity is implemented according to the following steps:
step 1, carrying out image registration on a low-resolution image;
step 2, carrying out image fusion on the registered image obtained in the step 1;
and 3, carrying out image reconstruction on the image obtained in the step 2 to obtain a final high-resolution pixel value image.
The present invention is also characterized in that,
In step 1, let the low-resolution images be T_i (i = 1, 2, …, N), where N is the number of images.
the step 1 specifically comprises the following steps:
For the N low-resolution images T_i (i = 1, 2, …, N), PSNR and FSIMc quality evaluation is performed, and the image with the largest PSNR and FSIMc values is selected as the reference image, denoted T_1. The SIFT algorithm extracts the feature points of each low-resolution image T_i (i = 1, 2, …, N); the feature point sets are

P_i = {p_i^1, p_i^2, …, p_i^(m_i)}, i = 1, 2, …, N (1)

where m_i is the number of feature points of each low-resolution image T_i. For each feature point of the reference image T_1, the distance to all feature points of the other low-resolution images T_i (i = 2, …, N) is calculated as the Euclidean distance between 128-dimensional SIFT descriptors:

d(p_(t1), p_(t2)) = sqrt( Σ_(j=1..128) (p_(t1)(j) − p_(t2)(j))² )

where j indexes the 128-dimensional SIFT descriptor, t1 = 1, 2, …, m_1 and t2 = 1, 2, …, max(m_2, m_3, …, m_N). The distances are sorted in ascending order, the minimum distance d_1 and the second-smallest distance d_2 are extracted, and the matching ratio r = d_1/d_2 is computed. When r < η (η is a threshold), the feature point of the reference image T_1 is successfully matched with its nearest feature point; when r ≥ η, the match fails. The matching point pairs Q_i between the reference image T_1 and each low-resolution image T_i (i = 2, …, N) are thereby obtained.
In formula (2), m_i is the number of matching point pairs.
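The feature-matching procedure above (descriptor distances, ascending sort, ratio test r = d1/d2 against the threshold η) can be sketched in Python as follows; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def ratio_test_match(desc_ref, desc_other, eta=0.85):
    """Match descriptors of a reference image against another image using
    the minimum / second-minimum distance ratio test of step 1.
    Returns index pairs (ref_idx, other_idx) that satisfy r < eta.
    Names are illustrative, not from the patent."""
    matches = []
    for t1, d_ref in enumerate(desc_ref):
        # Euclidean distance from this reference descriptor to every
        # descriptor in the other image.
        dists = np.sqrt(((desc_other - d_ref) ** 2).sum(axis=1))
        order = np.argsort(dists)          # ascending order
        d1, d2 = dists[order[0]], dists[order[1]]
        r = d1 / d2 if d2 > 0 else 1.0     # matching ratio r = d1/d2
        if r < eta:                        # accept only confident matches
            matches.append((t1, int(order[0])))
    return matches
```

With η = 0.85, points whose two nearest neighbours are almost equally distant (r ≈ 1) are rejected as ambiguous, which is the purpose of the ratio test.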
Affine transformation parameters are obtained from the matched pairs of formula (2): the rotation angle θ_i between the two images and the displacements Δx_i, Δy_i. The affine transformation is expressed as

Q_1 = R_i Q_i + ΔD_i (3)

where R_i is a 2 × 2 rotation matrix and ΔD_i a 2 × 1 translation vector, expressed respectively as

R_i = [cos θ_i, −sin θ_i; sin θ_i, cos θ_i], ΔD_i = [Δx_i; Δy_i] (4)

θ_i is the rotation angle, and Δx_i, Δy_i are the displacements along the abscissa and ordinate, respectively.
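Given the matched point pairs, the parameters of equation (3) can be recovered by a least-squares fit; the sketch below assumes a standard Procrustes-style closed-form solution, which the patent does not spell out:

```python
import numpy as np

def estimate_rigid_transform(Q1, Qi):
    """Recover theta_i and (dx_i, dy_i) of Q_1 = R_i Q_i + Delta D_i from
    matched coordinate pairs by a least-squares (Procrustes-style) fit.
    A sketch, not the patent's exact solver; Q1 and Qi are (m_i, 2)
    arrays of matched point coordinates."""
    c1, ci = Q1.mean(axis=0), Qi.mean(axis=0)    # centroids
    A, B = Q1 - c1, Qi - ci                      # centred point sets
    # 2x2 cross-covariance; the optimal rotation angle follows from it.
    H = B.T @ A
    theta = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    dD = c1 - R @ ci                             # Delta D_i = [dx, dy]
    return theta, dD
```

For noise-free pairs related by a pure rotation and translation, this recovers θ_i and ΔD_i exactly.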
The threshold in step 1 is η = 0.85.
Step 2 is specifically as follows: once step 1 has yielded the rotation angle θ_i and displacement ΔD_i between each image to be registered T_i (i = 2, …, N) and the reference image T_1, the registered low-resolution images are interpolated and enlarged according to the scaling factor; the enlarged images are then registered according to the affine transformation parameters (rotation angle θ_i and displacements Δx_i, Δy_i) and mapped onto the high-resolution grid.
Step 3 is as follows: first, an initial neighborhood is constructed around each high-resolution grid node of step 2 and surface fitting is performed to obtain an estimate; the neighborhood is then enlarged by the step length and surface fitting performed again to obtain further estimates; finally, the high-resolution pixel value is estimated with the MAP method. The specific steps are:
Step (3.1), neighborhood expansion: for the image mapped onto the high-resolution grid in step 2, an initial neighborhood value b_1 and a maximum neighborhood value b_2 are selected for each grid node; the step length is set to 0.1;
Step (3.2), pixel search: starting from b_1 of step (3.1) and ending at b_2, a neighborhood search is performed around each node of the high-resolution grid, yielding K neighborhoods NB_i (i = 1, 2, …, K) according to b_1, b_2 and the step value; the number of low-resolution pixels found in each neighborhood is denoted m_i (i = 1, 2, …, K);
Step (3.3), establishing the pixel intensity plane: taking the XOY plane containing the high-resolution grid nodes as the image plane and the direction perpendicular to it as the Z axis, a coordinate system is constructed. For each low-resolution pixel L_ij (j = 1, 2, …, m_i) in the neighborhood NB_i (i = 1, 2, …, K) of step (3.2), a plane parallel to the XOY coordinate plane is taken at a height equal to the pixel value; this is the pixel intensity plane, denoted the L_ij plane:

S_ij : z = I(L_ij) (6)

where S_ij is the intensity plane of L_ij within the neighborhood NB_i and I(L_ij) is the pixel value of L_ij (S and I are stand-in symbols for the formula images not reproduced in this text);
Step (3.4), pixel intensity calculation: within the neighborhood NB_i, the distance between pixel L_ik and pixel L_ij is defined as the distance from L_ik to the L_ij plane. For each low-resolution pixel of NB_i other than L_ij, the distance to L_ij, i.e. to the L_ij plane, is

d_kj = |I(L_ik) − I(L_ij)|, 1 ≤ k ≤ m_i and k ≠ j (7)
Within the neighborhood NB_i the mean square error σ_j² of the distance values is calculated:

σ_j² = (1/(m_i − 1)) Σ_(k≠j) d_kj² (8)

A larger mean square error σ_j² indicates that the plane has a smaller effect on the high-resolution pixel estimate; therefore let ω_j = 1/σ_j². After all ω_j (j = 1, 2, …, m_i) have been obtained, the pixel value estimated from the corresponding neighborhood NB_i is computed by weighting:

P_i = ( Σ_(j=1..m_i) ω_j I(L_ij) ) / ( Σ_(j=1..m_i) ω_j ) (9)
Step (3.5), maximum a posteriori (MAP) estimation: once all P_i of step (3.4) have been estimated, under the Gaussian assumption the objective takes the form (a reconstruction consistent with the surrounding text; the formula image is not reproduced here)

f(Ph) = Σ_(i=1..K) (Ph − P_i)² + λ (Ph − f_0(Ph))² (10)

where f_0(Ph) is an a priori estimate and λ is an empirical parameter. Setting the gradient of equation (10) to 0, i.e. df/dPh = 0, yields the pixel value of the high-resolution pixel Ph:

Ph = ( Σ_(i=1..K) P_i + λ f_0(Ph) ) / (K + λ) (11)
The final high-resolution pixel-value image is thus obtained.
In step (3.1), a 1 × 1 image block is selected, with initial neighborhood value b_1 = 0.5 and maximum neighborhood value b_2 = 1.5.
λ =0 in step (3.5).
The beneficial effect of the pixel-intensity-based image super-resolution reconstruction method of the invention is that, by using the intensity of each low-resolution pixel, the influence of abnormal data on the estimated value is reduced; the high-resolution pixel value is estimated by maximum a posteriori (MAP) estimation, which suppresses the noise of the reconstructed image; and no iteration is required, which reduces the computational complexity.
Drawings
FIG. 1 is a flow chart of the super-resolution image reconstruction method based on pixel intensity according to the present invention;
FIG. 2 is a schematic diagram of mapping from a low-resolution image to a high-resolution grid in the pixel intensity-based image super-resolution reconstruction method of the present invention;
FIG. 3 is a schematic diagram of neighborhood expansion of the image super-resolution reconstruction method based on pixel intensity according to the present invention;
FIG. 4 is a plane illustration of pixel intensity in the super-resolution image reconstruction method based on pixel intensity according to the present invention;
FIGS. 5 to 10 are the original test images used in the simulation verification experiments of the pixel-intensity-based image super-resolution reconstruction method of the present invention;
FIG. 11 is a PSNR curve for a reconstructed image of the low resolution images of FIGS. 5 and 6 in a simulation verification test in the pixel intensity-based image super-resolution reconstruction method of the present invention;
FIGS. 12 to 17 are the low-resolution images corresponding in sequence to the original test images of FIGS. 5 to 10 in the simulation verification test of the pixel-intensity-based image super-resolution reconstruction method of the present invention;
FIG. 18 is a graph of reconstruction effect of the image super-resolution reconstruction method based on interpolation and multi-face fitting on FIG. 12;
FIG. 19 is a graph of the reconstruction effect of the super-resolution image reconstruction method based on pixel intensity of the present invention on FIG. 12;
FIG. 20 is a graph of reconstruction effect of the image super-resolution reconstruction method based on interpolation and multi-face fitting on FIG. 13;
FIG. 21 is a graph of the reconstruction effect of the super-resolution image reconstruction method based on pixel intensity of the present invention on FIG. 13;
FIG. 22 is a graph of the reconstruction effect of the image super-resolution reconstruction method based on interpolation and multi-face fitting on FIG. 14;
FIG. 23 is a graph of the reconstruction effect of the super-resolution image reconstruction method based on pixel intensity of the present invention on FIG. 14;
FIG. 24 is a graph of the reconstruction effect of the image super-resolution reconstruction method based on interpolation and multi-surface fitting on FIG. 15;
FIG. 25 is a graph of the reconstruction effect of the super-resolution image reconstruction method based on pixel intensity of the present invention on FIG. 15;
FIG. 26 is a graph of the reconstruction effect of the image super-resolution reconstruction method based on interpolation and multi-surface fitting on FIG. 16;
FIG. 27 is a graph of the reconstruction effect of the super-resolution image reconstruction method based on pixel intensity of the present invention on FIG. 16;
FIG. 28 is a graph of the reconstruction effect of the super-resolution image reconstruction method based on interpolation and multi-surface fitting on FIG. 17;
FIG. 29 is a graph of the reconstruction effect of the super-resolution image reconstruction method based on pixel intensity according to the present invention on FIG. 17.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The image super-resolution reconstruction method based on pixel intensity of the present invention registers several low-resolution images and maps the registered images onto a high-resolution grid. For each grid node, a series of nested neighborhoods is first constructed and the low-resolution pixels in each neighborhood are found; the intensity plane of each low-resolution pixel is defined, the distance from every low-resolution pixel in the neighborhood to that intensity plane is calculated, and the mean square error of the distance values is computed. With the reciprocal of the mean square error as the weight, a weighted average estimates the pixel value of the high-resolution grid node; finally, the high-resolution pixel value is estimated by maximum a posteriori (MAP) estimation.
The invention relates to an image super-resolution reconstruction method based on pixel intensity, which is implemented by the following steps:
Step 1: let the low-resolution images be T_i (i = 1, 2, …, N), where N is the number of images.
Image registration is performed on the low-resolution images, specifically:
For the N low-resolution images T_i (i = 1, 2, …, N), PSNR and FSIMc quality evaluation is performed, and the image with the largest PSNR and FSIMc values is selected as the reference image, denoted T_1. The SIFT algorithm extracts the feature points of each low-resolution image T_i (i = 1, 2, …, N); the feature point sets are

P_i = {p_i^1, p_i^2, …, p_i^(m_i)}, i = 1, 2, …, N (1)

where m_i is the number of feature points of each low-resolution image T_i. For each feature point of the reference image T_1, the distance to all feature points of the other low-resolution images T_i (i = 2, …, N) is calculated as the Euclidean distance between 128-dimensional SIFT descriptors:

d(p_(t1), p_(t2)) = sqrt( Σ_(j=1..128) (p_(t1)(j) − p_(t2)(j))² )

where j indexes the 128-dimensional SIFT descriptor, t1 = 1, 2, …, m_1 and t2 = 1, 2, …, max(m_2, m_3, …, m_N). The distances are sorted in ascending order, the minimum distance d_1 and the second-smallest distance d_2 are extracted, and the matching ratio r = d_1/d_2 is computed. When r < η (η = 0.85 is the threshold), the feature point of the reference image T_1 is successfully matched with its nearest feature point; when r ≥ η, the match fails. The matching point pairs Q_i between the reference image T_1 and each low-resolution image T_i (i = 2, …, N) are thereby obtained.
In formula (2), m_i is the number of matching point pairs.
Affine transformation parameters are obtained from the matched pairs of formula (2): the rotation angle θ_i between the two images and the displacements Δx_i, Δy_i. The affine transformation is expressed as

Q_1 = R_i Q_i + ΔD_i (3)

where R_i is a 2 × 2 rotation matrix and ΔD_i a 2 × 1 translation vector, expressed respectively as

R_i = [cos θ_i, −sin θ_i; sin θ_i, cos θ_i], ΔD_i = [Δx_i; Δy_i] (4)

θ_i is the rotation angle, and Δx_i, Δy_i are the displacements along the abscissa and ordinate, respectively;
Step 2: image fusion is performed on the registered images obtained in step 1, specifically: once step 1 has yielded the rotation angle θ_i and displacement ΔD_i between each image to be registered T_i (i = 2, …, N) and the reference image T_1, then, as shown in fig. 2, the registered low-resolution images are interpolated and enlarged according to the magnitude of the scaling factor; the enlarged images are then registered according to the affine transformation parameters (rotation angle θ_i and displacements Δx_i, Δy_i) and mapped onto the high-resolution grid.
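Step 2 (scale by the zoom factor, then rotate and translate onto the high-resolution grid) can be sketched as follows; the names and exact order of operations are illustrative assumptions, not the patent's code:

```python
import numpy as np

def map_to_hr_grid(points, values, theta, dx, dy, scale):
    """Map low-resolution pixel coordinates onto the high-resolution grid:
    scale by the zoom factor, then apply the registration parameters
    (rotation theta, shifts dx, dy).  The mapped coordinates stay
    sub-pixel, i.e. non-uniform on the grid."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    hr_pts = (points * scale) @ R.T + np.array([dx, dy])
    return hr_pts, values   # each value travels with its new position
```

The returned positions are generally non-integer, which is exactly why the nested-neighborhood estimation of step 3 is needed instead of direct assignment to grid nodes.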
Step 3: image reconstruction is performed on the image obtained in step 2 to obtain the final high-resolution pixel values, as shown in fig. 3. Specifically: first, an initial neighborhood is constructed around each high-resolution grid node of step 2 and surface fitting performed to obtain an estimate; the neighborhood is then enlarged by the step length and surface fitting performed again to obtain further estimates; finally, the high-resolution pixel value is estimated with the MAP method. The specific steps are:
Step (3.1), neighborhood expansion: for the image mapped onto the high-resolution grid in step 2, an initial neighborhood value b_1 and a maximum neighborhood value b_2 are selected for each grid node; a 1 × 1 image block is selected, with initial neighborhood value b_1 = 0.5 and maximum neighborhood value b_2 = 1.5. The step length is set to 0.1 according to the degree of degradation of the low-resolution image; hence b_1 and b_2 are both integer multiples of 0.1;
Step (3.2), pixel search: starting from b_1 of step (3.1) and ending at b_2, a neighborhood search is performed around each node of the high-resolution grid, yielding K neighborhoods NB_i (i = 1, 2, …, K) according to b_1, b_2 and the step value; the number of low-resolution pixels found in each neighborhood is denoted m_i (i = 1, 2, …, K);
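The nested neighborhood search of steps (3.1)-(3.2) can be sketched as follows; the Chebyshev (maximum-coordinate) radius is an assumption, since the patent does not fix the norm:

```python
import numpy as np

def nested_neighborhoods(node, lr_points, b1=0.5, b2=1.5, step=0.1):
    """For one high-resolution grid node, build the K nested neighborhoods
    NB_i of radii b1, b1+step, ..., b2 and return, for each radius, the
    indices of the low-resolution pixels falling inside it (so m_i is the
    length of each list).  Chebyshev radius assumed."""
    radii = np.arange(b1, b2 + 1e-9, step)
    d = np.max(np.abs(lr_points - node), axis=1)   # distance to the node
    return [np.nonzero(d <= r)[0] for r in radii]
```

Because the radii are nested, every neighborhood NB_i contains all pixels of NB_(i-1), so each enlargement only adds candidates.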
Step (3.3), establishing the pixel intensity plane: taking the XOY plane containing the high-resolution grid nodes as the image plane and the direction perpendicular to it as the Z axis, a coordinate system is constructed. For each low-resolution pixel L_ij (j = 1, 2, …, m_i) in the neighborhood NB_i (i = 1, 2, …, K) of step (3.2), a plane parallel to the XOY coordinate plane is taken at a height equal to the pixel value; this is the pixel intensity plane, denoted the L_ij plane, as shown in FIG. 4:

S_ij : z = I(L_ij) (6)

where S_ij is the intensity plane of L_ij within the neighborhood NB_i and I(L_ij) is the pixel value of L_ij (S and I are stand-in symbols for the formula images not reproduced in this text);
Step (3.4), pixel intensity calculation: within the neighborhood NB_i, the distance between pixel L_ik and pixel L_ij is defined as the distance from L_ik to the L_ij plane. For each low-resolution pixel of NB_i other than L_ij, the distance to L_ij, i.e. to the L_ij plane, is

d_kj = |I(L_ik) − I(L_ij)|, 1 ≤ k ≤ m_i and k ≠ j (7)
Within the neighborhood NB_i the mean square error σ_j² of the distance values is calculated:

σ_j² = (1/(m_i − 1)) Σ_(k≠j) d_kj² (8)

A larger mean square error σ_j² indicates that the plane has a smaller effect on the high-resolution pixel estimate; therefore let ω_j = 1/σ_j². After all ω_j (j = 1, 2, …, m_i) have been obtained, the pixel value estimated from the corresponding neighborhood NB_i is computed by weighting:

P_i = ( Σ_(j=1..m_i) ω_j I(L_ij) ) / ( Σ_(j=1..m_i) ω_j ) (9)
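For one neighborhood, steps (3.3)-(3.4) reduce to weighting each pixel by the inverse of its mean squared intensity difference to the other pixels; a minimal sketch with illustrative names (it assumes the intensities are not all identical, which would make the weights degenerate):

```python
import numpy as np

def neighborhood_estimate(values):
    """Estimate one neighborhood's pixel value: the distance from pixel
    L_ik to the intensity plane of L_ij is the absolute intensity
    difference; each pixel is weighted by the inverse of its mean squared
    distance, and the estimate is the weighted average.  `values` holds
    the m_i low-resolution intensities of the neighborhood."""
    v = np.asarray(values, dtype=float)
    m = len(v)
    # sigma_j^2: mean squared distance of all other pixels to plane j
    sigma2 = np.array([((np.delete(v, j) - v[j]) ** 2).mean()
                       for j in range(m)])
    w = 1.0 / sigma2                 # inverse mean-square-error weights
    return float((w * v).sum() / w.sum())
```

For intensities [10, 10, 100] the estimate is 28, much closer to the two mutually consistent pixels than the plain mean of 40, which illustrates how abnormal data are down-weighted.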
Step (3.5), maximum a posteriori (MAP) estimation: once all P_i of step (3.4) have been estimated, under the Gaussian assumption the objective takes the form (a reconstruction consistent with the surrounding text; the formula image is not reproduced here)

f(Ph) = Σ_(i=1..K) (Ph − P_i)² + λ (Ph − f_0(Ph))² (10)

where f_0(Ph) is an a priori estimate and λ is an empirical parameter, here λ = 0. Setting the gradient of equation (10) to 0, i.e. df/dPh = 0, yields the pixel value of the high-resolution pixel Ph:

Ph = ( Σ_(i=1..K) P_i + λ f_0(Ph) ) / (K + λ) (11)
The final high-resolution pixel-value image is thus obtained.
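The final MAP combination of step (3.5) has a closed form under a quadratic Gaussian objective with empirical parameter λ (λ = 0 in the patent); this sketch assumes that objective, since the source text does not reproduce the original formula:

```python
import numpy as np

def map_estimate(neigh_estimates, f0=0.0, lam=0.0):
    """Combine the K neighborhood estimates P_i into the final
    high-resolution pixel value by minimising the quadratic objective
    sum_i (Ph - P_i)^2 + lam * (Ph - f0)^2  (an assumed form consistent
    with lam = 0 used in the patent).  Gradient = 0 gives a closed form."""
    P = np.asarray(neigh_estimates, dtype=float)
    K = len(P)
    return float((P.sum() + lam * f0) / (K + lam))
```

With λ = 0 this reduces to the mean of the K neighborhood estimates, so no iteration is needed, consistent with the stated advantage of low computational complexity.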
The simulation experiment result of the invention is as follows:
To illustrate the effectiveness and performance of the method, the visual effect is tested through simulation experiments and the numerical results are analysed. The experimental environment is MATLAB 2010a on a computer with a Pentium(R) Dual-Core CPU at 2.70 GHz and 2.00 GB of memory (1.87 GB available).
FIGS. 5 to 10 are the original test images of the simulation verification test of the pixel-intensity-based image super-resolution reconstruction method of the present invention. The low-resolution images used in the reconstruction are obtained from these originals through a degradation model, as shown in FIGS. 12 to 17: FIG. 12 is the degraded image of FIG. 5, FIG. 13 of FIG. 6, FIG. 14 of FIG. 7, FIG. 15 of FIG. 8, FIG. 16 of FIG. 9, and FIG. 17 of FIG. 10. First, 70 images are obtained by applying translations of different sizes to each original image, and white Gaussian noise with variance 0.01 is added to them; appropriate blurring is then applied to the resulting low-resolution images; finally, down-sampling with a sampling factor of 4 yields the 70 low-resolution images used in the experiment.
The simulation verification is analyzed from two aspects as follows:
(1) Visual effects
(1) PSNR (Peak Signal to Noise Ratio) curve comparison:
Taking FIGS. 5 and 6 as examples, FIG. 11 shows the PSNR curves of the reconstructed image when the number of low-resolution images is 15, 20, 25, 30 and 35. From top to bottom: the first curve is the PSNR of the reconstruction from 20 low-resolution images of FIG. 6 with added Gaussian noise of variance 0.01; the second, from 20 images of FIG. 5 with noise variance 0.01; the third, from 30 images of FIG. 6 with noise variance 0.03; the fourth, from 30 images of FIG. 5 with noise variance 0.03. FIG. 11 shows that 20 low-resolution images give a good reconstruction when the added Gaussian noise variance is 0.01, and 30 low-resolution images when the variance is 0.03.
(2) Visual effect contrast
For the test images of FIGS. 5 to 10, 20 low-resolution images each (FIGS. 12 to 17) were used, with Gaussian noise of variance 0.01 added to the low-resolution images. FIG. 18 is the reconstruction of FIG. 12 by the interpolation- and multi-surface-fitting-based method, and FIG. 19 the reconstruction of FIG. 12 by the method of the present invention; FIGS. 20 and 21 are the corresponding reconstructions of FIG. 13; FIGS. 22 and 23 of FIG. 14; FIGS. 24 and 25 of FIG. 15; FIGS. 26 and 27 of FIG. 16; and FIGS. 28 and 29 of FIG. 17.
In terms of the visual effect of the reconstructed images, the pixel-intensity-based image super-resolution reconstruction method of the present invention obtains clearer reconstructed images.
(2) Numerical analysis: PSNR and FSIMc
Two image quality evaluation standards, PSNR and FSIMc, are adopted as the indices for numerical analysis. FSIM stands for Feature Similarity index; FSIMc is the feature similarity index for color images. The numerical test results are shown in Table 1:
Table 1. PSNR and FSIMc values and their comparison
Comparative analysis of the PSNR and FSIMc values in Table 1 shows that the image super-resolution reconstruction method based on pixel intensity is superior to the image super-resolution reconstruction method based on interpolation and multi-surface fitting under both evaluation standards, and that the reconstruction quality of images with complex backgrounds is greatly improved by the method of the invention. For fig. 5, the PSNR of the reconstructed image is improved by about 2.55 dB and the FSIMc by about 0.05 over the interpolation and multi-surface fitting method; for fig. 6, the PSNR is improved by about 2.4 dB and the FSIMc by about 0.08; for fig. 7, the PSNR is improved by about 0.6 dB and the FSIMc by about 0.02; for fig. 10, the PSNR is improved by about 0.13 dB and the FSIMc by about 0.005.
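The PSNR values compared above can be computed as follows; this is a minimal Python/NumPy sketch, where the function name `psnr` and the peak value of 255 for 8-bit images are illustrative assumptions, not part of the patent:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two same-sized images."""
    ref = np.asarray(reference, dtype=np.float64)
    rec = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((ref - rec) ** 2)  # mean squared error over all pixels
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A difference of a few dB, as in Table 1, corresponds to a substantial reduction in mean squared error, since the scale is logarithmic.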
The image super-resolution reconstruction method based on interpolation and multi-surface fitting is a state-of-the-art method of its kind, published in IEEE Transactions on Image Processing, which fully shows that the proposed method is more advantageous and can obtain clearer high-resolution images.
The image super-resolution reconstruction method based on pixel intensity utilizes the intensity of each low-resolution pixel, which reduces the influence of abnormal data on the estimated value; it estimates the high-resolution pixel value by maximum a posteriori (MAP) estimation, which suppresses noise in the reconstructed image, requires no iteration, and reduces computational complexity.

Claims (6)

1. An image super-resolution reconstruction method based on pixel intensity is characterized by comprising the following steps:
step 1, carrying out image registration on the low-resolution image, specifically comprising the following steps:
for the multiple low-resolution images T_i (i = 1, 2, …, N), quality evaluation of peak signal-to-noise ratio (PSNR) and feature similarity (FSIMc) is performed, and the image with the maximum PSNR and FSIMc values is selected as the reference image, set as T_1; the feature points p_i of each low-resolution image T_i (i = 1, 2, …, N) are extracted by the SIFT algorithm, and the feature point sets P_i are denoted respectively as follows:
P_i = {p_i,1, p_i,2, …, p_i,mi}, i = 1, 2, …, N (1)
in the above formula, m_i in each feature point set P_i is the number of feature points of the corresponding low-resolution image T_i, where i = 1, 2, …, N; for each feature point p_1,t1 in the reference image T_1, the distances from all feature points of the other low-resolution images T_i (i = 2, …, N) to the feature points of the reference image T_1 are calculated, expressed as follows:
d(p_1,t1, p_i,t2) = sqrt( Σ_j (p_1,t1(j) − p_i,t2(j))² )
in the above formula, j denotes the index of the 128-dimensional SIFT feature, t_1 = 1, 2, …, m_1, and t_2 = 1, 2, …, max(m_2, m_3, …, m_N); the distances are arranged in ascending order, the minimum distance d_1 and the second-smallest distance d_2 among them are extracted, and the matching ratio r of the minimum distance d_1 to the second-smallest distance d_2 is found, r = d_1/d_2; when r < η, where η is a threshold value, the feature point of the reference image T_1 is successfully matched with the feature point closest to it; when r > η, the feature point of the reference image T_1 is unsuccessfully matched with the feature point closest to it; the pairs Q_i of matching points between the reference image T_1 and the low-resolution images T_i (i = 2, …, N) are thereby obtained:
Q_i = {q_i,1, q_i,2, …, q_i,mi} (2)
in formula (2), m_i is the number of matching point pairs;
the affine transformation parameters, namely the rotation angle θ_i between the two images and the displacements Δx_i, Δy_i, are obtained from formula (2) by the affine transformation formula, which is expressed as:
Q_1 = R_i Q_i + ΔD_i (3)
in formula (3), R_i is a 2 × 2 rotation matrix and ΔD_i is a 2 × 1 translation vector, expressed respectively as:
R_i = [cos θ_i, −sin θ_i; sin θ_i, cos θ_i] (4)
ΔD_i = [Δx_i; Δy_i] (5)
θ_i is the rotation angle, and Δx_i, Δy_i are the displacements of the abscissa and the ordinate, respectively;
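The ratio-test matching of step 1 above can be sketched as follows; this is a minimal Python/NumPy illustration with synthetic 128-dimensional descriptors (SIFT descriptor extraction itself is omitted, and the function name `ratio_test_matches` is illustrative, not from the patent):

```python
import numpy as np

def ratio_test_matches(ref_desc, other_desc, eta=0.85):
    """For each 128-D descriptor of the reference image, find the nearest and
    second-nearest descriptors in the other image; accept the match only when
    the ratio r = d1 / d2 of the two distances is below the threshold eta."""
    matches = []
    for i, d in enumerate(ref_desc):
        dists = np.linalg.norm(other_desc - d, axis=1)  # distances to all candidates
        order = np.argsort(dists)
        d1, d2 = dists[order[0]], dists[order[1]]
        if d2 > 0 and d1 / d2 < eta:  # ambiguous matches (r >= eta) are rejected
            matches.append((i, int(order[0])))
    return matches
```

The accepted pairs then play the role of the matching point pairs Q_i from which the affine parameters θ_i, Δx_i, Δy_i are estimated.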
step 2, performing image fusion on the registered images obtained in step 1;
step 3, performing image reconstruction on the image obtained in step 2 to obtain the final image of high-resolution pixel values, specifically: first, an initial neighborhood is constructed for each high-resolution grid node of step 2 and surface fitting is performed to obtain an estimated value; the neighborhood range is then expanded according to the step size and surface fitting is performed again to obtain further estimated values; finally, the high-resolution pixel value is estimated by the MAP method, which specifically comprises the following steps:
step (3.1), neighborhood expansion: for the image mapped onto the high-resolution grid in step 2, an initial neighborhood value b_1 and a maximum neighborhood value b_2 are selected for each grid node, and the step size is set to 0.1;
step (3.2), pixel searching: starting from b_1 of step (3.1) and expanding to b_2, neighborhood search is performed on the nodes of the high-resolution grid to obtain K neighborhoods NB_i (i = 1, 2, …, K); according to the values of b_1, b_2 and the step size, the number of low-resolution pixels searched in the corresponding neighborhood is recorded as m_i (i = 1, 2, …, K);
step (3.3), establishing the pixel intensity plane: a coordinate system is constructed by taking the XOY plane in which the high-resolution grid nodes lie as the image plane and the direction perpendicular to the XOY plane as the Z axis; for each low-resolution pixel L_ij (j = 1, 2, …, m_i) of the neighborhood NB_i (i = 1, 2, …, K) in step (3.2), the plane parallel to the XOY coordinate plane at a height equal to its pixel value is taken as the pixel intensity plane, recorded as the L_ij plane, as follows:
z = f(L_ij) (6)
z = f(L_ij) is the intensity plane of L_ij within the neighborhood NB_i, and f(L_ij) is the pixel value of L_ij;
step (3.4), calculating the pixel intensity: the distance between a pixel L_it and a pixel L_ij within the neighborhood NB_i is defined as the distance from L_it to the L_ij plane; for each low-resolution pixel in NB_i other than L_ij, the distance to L_ij, i.e. to the L_ij plane, is calculated:
d_kj = |f(L_ik) − f(L_ij)|, 1 ≤ k ≤ m_i and k ≠ j (7)
within the neighborhood NB_i, the mean square error e_j of these distances is calculated; the mean square error e_j is expressed as follows:
e_j = (1/(m_i − 1)) Σ_k (d_kj)², 1 ≤ k ≤ m_i and k ≠ j (8)
a larger mean square error e_j indicates that the L_ij plane has a smaller effect on the high-resolution pixel estimation; therefore, let ω_j = (1/e_j) / Σ_t (1/e_t), t = 1, 2, …, m_i; after all ω_j (j = 1, 2, …, m_i) are obtained, one estimated pixel value of the corresponding neighborhood NB_i is then calculated by weighting:
f_i(Ph) = Σ_j ω_j f(L_ij), j = 1, 2, …, m_i (9)
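The distance-based weighting of step (3.4) can be sketched as follows; this is a minimal Python/NumPy illustration under the assumption that each pixel's weight is the normalized reciprocal of its mean square plane distance (the normalization is an illustrative assumption, and `neighborhood_estimate` is not a name from the patent):

```python
import numpy as np

def neighborhood_estimate(pixel_values):
    """Estimate one high-resolution pixel from the low-resolution pixels of a
    neighborhood: pixels whose intensity planes lie far from the others
    (large mean square distance) receive small weights."""
    f = np.asarray(pixel_values, dtype=np.float64)
    m = len(f)
    # mean square distance from every other intensity plane to plane j: |f_k - f_j|
    mse = np.array([np.mean((np.delete(f, j) - f[j]) ** 2) for j in range(m)])
    inv = 1.0 / np.maximum(mse, 1e-12)  # guard against a zero mean square error
    w = inv / inv.sum()                 # normalized weights
    return float(np.dot(w, f))
```

For the values [10, 10, 10, 100], the outlier's weight falls from 0.25 to 0.1 and the estimate is 19.0 instead of the plain average 32.5, illustrating how the influence of abnormal data is reduced.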
step (3.5), maximum a posteriori (MAP) estimation: after all the estimates f_i(Ph) (i = 1, 2, …, K) of step (3.4) have been obtained, according to the Gaussian assumption there is:
E(f(Ph)) = Σ_i (f(Ph) − f_i(Ph))² + λ (f(Ph) − f_0(Ph))², i = 1, 2, …, K (10)
in formula (10), f(Ph) is the pixel value of the high-resolution pixel Ph, f_0(Ph) is an a priori estimate of f(Ph), and λ is an empirical parameter; setting the gradient of formula (10) to 0, i.e. ∂E/∂f(Ph) = 0, the pixel value of the high-resolution pixel Ph is obtained:
f(Ph) = (Σ_i f_i(Ph) + λ f_0(Ph)) / (K + λ), i = 1, 2, …, K (11)
i.e. the resulting image of high resolution pixel values.
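The closed-form MAP fusion of step (3.5) can be sketched as follows; this is a minimal Python/NumPy illustration assuming a quadratic data term plus a quadratic prior, so that setting the gradient to zero gives the estimate directly without iteration (`map_fuse` is an illustrative name, not from the patent):

```python
import numpy as np

def map_fuse(estimates, prior=None, lam=0.0):
    """Fuse the K per-neighborhood estimates f_i(Ph) in closed form by
    minimizing sum_i (f - f_i)^2 + lam * (f - f0)^2 over f: the minimizer
    is f = (sum_i f_i + lam * f0) / (K + lam), so no iteration is needed."""
    f = np.asarray(estimates, dtype=np.float64)
    if lam == 0.0 or prior is None:
        return float(f.mean())  # lam = 0 reduces to the plain average
    return float((f.sum() + lam * prior) / (f.size + lam))
```

With λ = 0, as in claim 6, the fusion reduces to the plain average of the K neighborhood estimates.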
2. The method for reconstructing super-resolution images based on pixel intensity as claimed in claim 1, wherein the low-resolution images in step 1 are set to T_i (i = 1, 2, …, N), N being the number of images.
3. The method for reconstructing super-resolution images based on pixel intensity as claimed in claim 1, wherein the threshold η =0.85 in step 1.
4. The method for reconstructing super-resolution images based on pixel intensity as claimed in claim 1, wherein step 2 specifically comprises: after the rotation angle θ_i and the displacement ΔD_i between each image to be registered T_i (i = 2, …, N) and the reference image T_1 have been obtained in step 1, the registered low-resolution images are interpolated and magnified according to the size of the scaling factor; the magnified images are then registered according to the matched affine transformation parameters, namely the rotation angle θ_i and the displacements Δx_i, Δy_i, and mapped onto the high-resolution grid.
5. The method for reconstructing super-resolution images based on pixel intensity as claimed in claim 1, wherein in step (3.1) a 1 × 1 image block is selected, the initial neighborhood value b_1 = 0.5, and the maximum neighborhood value b_2 = 1.5.
6. The method for reconstructing super-resolution images based on pixel intensity as claimed in claim 1, wherein λ =0 in step (3.5).
CN201510373726.4A 2015-06-30 2015-06-30 A kind of image super-resolution rebuilding method based on image pixel intensities Active CN104933678B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510373726.4A CN104933678B (en) 2015-06-30 2015-06-30 A kind of image super-resolution rebuilding method based on image pixel intensities


Publications (2)

Publication Number Publication Date
CN104933678A CN104933678A (en) 2015-09-23
CN104933678B true CN104933678B (en) 2018-04-10

Family

ID=54120833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510373726.4A Active CN104933678B (en) 2015-06-30 2015-06-30 A kind of image super-resolution rebuilding method based on image pixel intensities

Country Status (1)

Country Link
CN (1) CN104933678B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931210B (en) * 2016-04-15 2019-10-15 中国航空工业集团公司洛阳电光设备研究所 A kind of high resolution image reconstruction method
CN106204440A (en) * 2016-06-29 2016-12-07 北京互信互通信息技术有限公司 A kind of multiframe super resolution image reconstruction method and system
CN107967473B (en) * 2016-10-20 2021-09-24 南京万云信息技术有限公司 Robot autonomous positioning and navigation based on image-text recognition and semantics
US10438408B2 (en) * 2017-07-28 2019-10-08 The Boeing Company Resolution adaptive mesh for performing 3-D metrology of an object
CN107977931A (en) * 2017-12-14 2018-05-01 元橡科技(北京)有限公司 Utilize the method for calibrated more mesh cameras generation super-resolution image
CN109308683A (en) * 2018-07-23 2019-02-05 华南理工大学 A kind of method of flexible integration circuit substrate image super-resolution rebuilding
CN110415242B (en) * 2019-08-02 2020-05-19 中国人民解放军军事科学院国防科技创新研究院 Super-resolution magnification evaluation method based on reference image
CN110660022A (en) * 2019-09-10 2020-01-07 中国人民解放军国防科技大学 Image super-resolution reconstruction method based on surface fitting
CN113487487B (en) * 2021-07-28 2024-03-19 国电南京自动化股份有限公司 Super-resolution reconstruction method and system for heterogeneous stereo image
CN113674157B (en) * 2021-10-21 2022-02-22 广东唯仁医疗科技有限公司 Fundus image stitching method, computer device and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101719266A (en) * 2009-12-25 2010-06-02 西安交通大学 Affine transformation-based frontal face image super-resolution reconstruction method
CN102243711A (en) * 2011-06-24 2011-11-16 南京航空航天大学 Neighbor embedding-based image super-resolution reconstruction method
KR20140081481A (en) * 2012-12-21 2014-07-01 서울시립대학교 산학협력단 Block based image Registration for Super Resolution Image Reconstruction Method and Apparatus
CN104008539A (en) * 2014-05-29 2014-08-27 西安理工大学 Image super-resolution rebuilding method based on multiscale geometric analysis


Non-Patent Citations (1)

Title
An MAP-based image super-resolution reconstruction algorithm; Hong Yifei et al.; Video Engineering; 2014-07-31; Vol. 38, No. 7; pp. 20-25 *


Similar Documents

Publication Publication Date Title
CN104933678B (en) A kind of image super-resolution rebuilding method based on image pixel intensities
CN107301661B (en) High-resolution remote sensing image registration method based on edge point features
Pan et al. Super-resolution based on compressive sensing and structural self-similarity for remote sensing images
CN103310453B (en) A kind of fast image registration method based on subimage Corner Feature
CN102800071B (en) Method for reconstructing super resolution of sequence image POCS
CN102122359B (en) Image registration method and device
CN108932699B (en) Three-dimensional matching harmonic filtering image denoising method based on transform domain
CN111340702A (en) Sparse reconstruction method for high-frequency ultrasonic microscopic imaging of tiny defects based on blind estimation
CN106157240B (en) Remote sensing image super-resolution method based on dictionary learning
CN106934398B (en) Image de-noising method based on super-pixel cluster and rarefaction representation
CN103903239B (en) A kind of video super-resolution method for reconstructing and its system
CN112837220B (en) Method for improving resolution of infrared image and application thereof
CN112614170B (en) Fourier power spectrum-based single particle image registration method for cryoelectron microscope
Zou et al. Iterative denoiser and noise estimator for self-supervised image denoising
Niu An overview of image super-resolution reconstruction algorithm
CN112435211B (en) Method for describing and matching dense contour feature points in endoscope image sequence
Zin et al. Local image denoising using RAISR
CN112927169B (en) Remote sensing image denoising method based on wavelet transformation and improved weighted kernel norm minimization
CN107767342B (en) Wavelet transform super-resolution image reconstruction method based on integral adjustment model
CN110660022A (en) Image super-resolution reconstruction method based on surface fitting
CN115953299A (en) Infrared remote sensing image super-resolution method and system
CN112686814B (en) Affine low-rank based image denoising method
CN106570911B (en) Method for synthesizing facial cartoon based on daisy descriptor
CN112215942B (en) Method and system for reconstructing partial tomographic three-dimensional image of refrigeration electron microscope
CN103824286A (en) Singular value decomposition-random sample consensus (SVD-RANSAC) sub-pixel phase correlation matching method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200407

Address after: 7th floor, Building 3, Construction Technology Base, No. 299, Xi'an, Shaanxi 710100

Patentee after: XI'AN REALECT ELECTRONIC DEVELOPMENT Co.,Ltd.

Address before: No. 5 Jinhua Road, Xi'an, Shaanxi 710048

Patentee before: XI'AN UNIVERSITY OF TECHNOLOGY