CN116645300A - Simple lens point spread function estimation method - Google Patents
Info
- Publication number
- CN116645300A CN116645300A CN202310528965.7A CN202310528965A CN116645300A CN 116645300 A CN116645300 A CN 116645300A CN 202310528965 A CN202310528965 A CN 202310528965A CN 116645300 A CN116645300 A CN 116645300A
- Authority
- CN
- China
- Prior art keywords
- point spread
- spread function
- psf
- single lens
- generated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 38
- 238000012549 training Methods 0.000 claims abstract description 19
- 238000013528 artificial neural network Methods 0.000 claims abstract description 18
- 230000009466 transformation Effects 0.000 claims abstract description 4
- 230000006870 function Effects 0.000 claims description 101
- 238000003384 imaging method Methods 0.000 claims description 31
- 238000004364 calculation method Methods 0.000 claims description 11
- 238000005457 optimization Methods 0.000 claims description 10
- 238000001228 spectrum Methods 0.000 claims description 9
- 230000008569 process Effects 0.000 claims description 7
- 238000009826 distribution Methods 0.000 claims description 6
- 230000003595 spectral effect Effects 0.000 claims description 6
- 238000009827 uniform distribution Methods 0.000 claims description 5
- 239000006185 dispersion Substances 0.000 claims description 3
- 238000011478 gradient descent method Methods 0.000 claims description 3
- 238000012216 screening Methods 0.000 claims description 3
- 238000009499 grossing Methods 0.000 claims 1
- 238000003062 neural network model Methods 0.000 abstract description 5
- 238000012545 processing Methods 0.000 abstract description 4
- 238000010586 diagram Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 230000004075 alteration Effects 0.000 description 2
- 238000004590 computer program Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 230000000903 blocking effect Effects 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 238000012804 iterative process Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012634 optical imaging Methods 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20056—Discrete and fast Fourier transform, [DFT, FFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The application discloses a simple lens point spread function estimation method, which comprises the following steps: a point spread function generated according to the characteristics of a single lens is applied to a clear image to obtain a simulated single-lens blurred image; the single-lens blurred image is Fourier-transformed and used as the input of an end-to-end neural network, which generates an estimated point spread function; a loss function is computed between the estimated and the generated point spread functions, driving the end-to-end neural network parameters to update over training iterations; finally, the estimated point spread function is obtained. The application performs accurate parametric modeling and generation of the single-lens point spread function, so that the generated point spread function better matches the characteristics of a single lens, provides a series of image-processing steps, and finally uses a generative adversarial neural network model to obtain a more accurate point spread function.
Description
Technical Field
The application relates to the field of digital image processing, in particular to a point spread function (Point Spread Function, PSF) estimation method applied to simple lens imaging.
Background
In recent years, with the continuous development of computational photography and optical design, simple-lens computational imaging has become a new research direction. Simple lens imaging refers to light passing through a single lens to form a real image at the focal plane. Compared with traditional complex-lens imaging, simple-lens imaging can significantly reduce the economic cost of an imaging system, but because the light rays passing through a simple lens cannot be converged to a single point at the focal plane, the image appears blurred. The point spread function is the functional representation of how light emitted by a point source images, through a lens or lens group, onto the focal plane. Estimating the point spread function of a simple lens at different imaging-block positions from a blurred image of a scene obtained through the lens is one of the steps preceding restoration of the blurred image.
Regarding single-lens computational imaging, different methods have been proposed to estimate the PSF of a single-lens optical imaging system. Patent ZL201410064041.7 proposes a fast PSF calibration method for single-lens imaging, which averages N measured PSFs into a PSF template for a given type of single lens and uses that template as the initial value of an iterative PSF refinement, so that the PSF of the single lens can be estimated quickly. Patent ZL2015122290.9 proposes a symmetry-based fast PSF calibration method for single-lens computational imaging, which exploits the spatial symmetry of the single-lens PSF: the PSF estimated for one image block serves as the initial value for PSF estimation of the symmetric image block, reducing the iterations and the time required for PSF estimation. Patent ZL2015197305.2 proposes a sparse-representation-based PSF estimation method for single-lens computational imaging, in which the clear image in the objective function is expressed as the product of an overcomplete dictionary and sparse coefficients; the sparse coefficients are constrained, and the blur kernel, the overcomplete dictionary, and the sparse coefficients are then estimated alternately by an iterative optimization algorithm, thereby estimating the PSF of the single lens.
With continuing research into single-lens computational imaging and rising image-quality requirements, existing methods can estimate the PSF of a single lens, but the estimated PSF is not accurate enough for practical single-lens computational imaging. Simple lens imaging is generally characterized by small aberrations in the central region of the image and large aberrations in the peripheral region. The entire simple-lens image therefore cannot be modeled with a single point spread function; instead, the image is divided into blocks that are modeled one by one according to the distance of each imaging region from the image center.
Disclosure of Invention
The application aims to solve the problems in the prior art and provides a simple lens point spread function estimation method so as to solve the problem that the existing PSF estimation method is not accurate enough.
The aim of the application is achieved by adopting the following technical scheme:
a simple lens point spread function estimation method comprising the steps of:
step 1, a point spread function generated according to the characteristics of a single lens acts on a clear image to obtain a simulated single lens blurred image;
step 2, the single-lens blurred image is Fourier-transformed to the frequency domain and used as the input of an end-to-end neural network to generate an estimated point spread function;
step 3, calculating a loss function by the estimated point spread function and the generated point spread function, and pushing the end-to-end neural network parameters to update along with training iteration;
and 4, obtaining a final estimated point spread function.
Specifically, the generating process of the point spread function in the step 1 includes the following steps:
parametric modeling and generation are carried out on the single lens point spread function;
let the number of angles forming the star polygon be n, the outer diameter D_o ∈ R^n, the inner diameter D_i ∈ R^n, and the off angle A_r ∈ R^n, the k-th dimensions of the outer and inner diameters being denoted D_o^k and D_i^k; the outer diameter is randomly initialized with a uniform distribution; the inner diameter is initialized as a random fraction of the outer diameter, where r ~ U(0.1, 0.3) is randomly generated; the off angle is likewise initialized with a uniform distribution;
calculating the star-polygon vertex coordinates {P_k = (x_k, y_k) | k = 1, …, 2n} according to the parameters, x_k representing the abscissa and y_k the ordinate of the k-th vertex; recording the initial polar angles of the star-polygon vertices as A^0 ∈ R^{2n+1}, the 0th dimension being the initial polar angle, and the generated polar angles as A ∈ R^{2n+1}, where A_k, the k-th dimension of A, differs from the corresponding initial polar angle by a randomly generated offset; the generated vertex coordinates are computed from the polar angle A_k together with the outer diameter when k is an even number and the inner diameter when k is an odd number;
screening out the set of points G = {(i, j) | (i, j) ∈ S_P} inside the star polygon S_P, i representing the abscissa and j the ordinate of points inside the star polygon;
modeling the point energy: E(i, j) = α·exp(−β‖i² + j²‖), where α and β are undetermined constants and ‖·‖ denotes the absolute value, the point energies satisfying the constraint that the energy sum is 1, namely Σ_{(i,j)∈G} E(i, j) = 1;
whereby α = 1/Σ_{(i,j)∈G} exp(−β‖i² + j²‖); the undetermined constant β characterizes the dispersion of the point-spread-function energy;
after the undetermined constants are determined, the energy distribution of the point spread function inside the star polygon is calculated, giving the generated point spread function {E(i, j) | i, j = 1, …, m}, where m ∈ N₊ is the size of the point spread function.
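A minimal sketch of this normalization in Python with NumPy. The disc-shaped interior set is only a stand-in for the screened star-polygon points, and ‖i² + j²‖ is read as i² + j²:

```python
import numpy as np

def point_energy(G, beta):
    """Normalized point energies over the interior point set G.

    Implements E(i, j) = alpha * exp(-beta * (i^2 + j^2)), with alpha
    chosen so the energies sum to 1 (the unit-energy constraint).
    """
    G = np.asarray(G, dtype=float)                      # rows of (i, j), centred coordinates
    w = np.exp(-beta * (G[:, 0] ** 2 + G[:, 1] ** 2))
    alpha = 1.0 / w.sum()                               # the normalisation constant alpha
    return alpha * w

# a small disc around the origin stands in for the star-polygon interior
pts = [(i, j) for i in range(-3, 4) for j in range(-3, 4) if i * i + j * j <= 9]
E = point_energy(pts, beta=0.006)
```

Because α is computed from the same interior set, the returned energies sum to 1 regardless of the shape of G.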
Preferably, the generated point spread function is the convolution of the point-spread-function energy distribution E with a smoothing convolution kernel κ: PSF = E ∗ κ.
Preferably, a smoothing convolution kernel κ is adopted.
Preferably, the undetermined coefficient β ~ U(0.004, 0.008).
Specifically, when the point spread function generated according to the characteristics of the single lens is applied to the clear image, the clear image block y is convolved with the generated point spread function PSF to obtain the simulated single-lens blurred image block x, namely x = y ∗ PSF. When the single-lens blurred image is Fourier-transformed to the frequency domain, the blurred image block x is first converted into a frequency-domain image by the two-dimensional discrete Fourier transform, and the zero-frequency point is moved to the center of the spectrum: X = fftshift(fft2(x)), where fft2(·) denotes the two-dimensional discrete Fourier transform and fftshift(·) moves the zero-frequency point to the center of the spectrum. The power spectral density is then calculated as S = real(X)² + imag(X)², where the functions real(·) and imag(·) extract the real and imaginary parts of the frequency-domain image, respectively.
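The blur-simulation and spectrum steps above can be sketched with NumPy's FFT routines; circular convolution (rather than any boundary handling the patent may intend) and a toy uniform PSF are assumptions here:

```python
import numpy as np

def simulate_and_psd(y, psf):
    """Blur a clear block y with the PSF, then compute the power spectral
    density of the blurred block: S = real(X)^2 + imag(X)^2 with
    X = fftshift(fft2(x)). Circular convolution is assumed."""
    pad = np.zeros_like(y)
    pad[:psf.shape[0], :psf.shape[1]] = psf              # embed PSF in a full-size array
    x = np.real(np.fft.ifft2(np.fft.fft2(y) * np.fft.fft2(pad)))  # x = y * PSF (circular)
    X = np.fft.fftshift(np.fft.fft2(x))                  # zero frequency moved to centre
    S = np.real(X) ** 2 + np.imag(X) ** 2                # power spectral density
    return x, S

rng = np.random.default_rng(0)
y = rng.random((64, 64))                                 # stand-in clear image block
psf = np.ones((5, 5)) / 25.0                             # toy uniform PSF, sums to 1
x, S = simulate_and_psd(y, psf)
```

Since the toy PSF sums to 1, the blurred block preserves the mean intensity of the clear block.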
Specifically, the input of the end-to-end neural network is the power spectral density of a frequency-domain blurred image block, and the output is the estimated point spread function of the corresponding block of the single lens;
the end-to-end neural network is trained in a generative-adversarial manner; the generator adopts a U-Net structure, denoted G_θ(·), where θ is the set of model parameters, and the loss function for training the generator is:
L_G = λ‖G_θ(S) − PSF‖_1 + PSF·(G_θ(S) − PSF)^2 + ‖D_φ(G_θ(S)) − 1‖_2
where the first term is a fidelity term, λ is a coefficient, G_θ(S) denotes the output of the U-Net with parameter set θ, and ‖·‖_1 denotes the 1-norm; the second term is a shape prior that emphasizes the influence of the ground-truth PSF on the model output, PSF denoting the generated point spread function; the third term is an authenticity term, which measures, through the squared difference between the discriminator output D_φ(·) and 1, the gap in realism between the generator's result and the ground-truth PSF, ‖·‖_2 denoting the 2-norm and φ being the model parameter set of the discriminator;
the discriminator D_φ(·) is implemented with the residual classification network ResNet-34, and its training loss is:
L_D = ‖D_φ(PSF) − 1‖_2 + ‖D_φ(G_θ(S)) − 0‖_2
where the first term drives the discriminator to identify the ground-truth PSF as real, and the second term drives it to identify the PSF produced by the generator G_θ as fake.
Preferably, λ=100.
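The two adversarial losses can be sketched numerically; treating the discriminator output as a scalar score and reducing each term to a scalar sum are assumptions, since the patent's reduction is not spelled out:

```python
import numpy as np

def generator_loss(psf_hat, psf, d_of_fake, lam=100.0):
    """L_G = lam*||G(S)-PSF||_1 + sum(PSF*(G(S)-PSF)^2) + (D(G(S))-1)^2.

    psf_hat is the generator output, psf the generated ground truth,
    d_of_fake the scalar discriminator score on psf_hat."""
    fidelity = lam * np.abs(psf_hat - psf).sum()          # 1-norm fidelity term
    shape_prior = (psf * (psf_hat - psf) ** 2).sum()      # error weighted by true PSF
    realism = (d_of_fake - 1.0) ** 2                      # authenticity term
    return fidelity + shape_prior + realism

def discriminator_loss(d_of_real, d_of_fake):
    """L_D = (D(PSF)-1)^2 + (D(G(S))-0)^2 for scalar scores."""
    return (d_of_real - 1.0) ** 2 + (d_of_fake - 0.0) ** 2

psf = np.ones((3, 3)) / 9.0
loss_perfect = generator_loss(psf, psf, d_of_fake=1.0)    # exact match that fools D
```

A generator output equal to the ground truth that the discriminator scores as real incurs zero loss, which is the intended optimum of both objectives.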
Specifically, the end-to-end neural network is trained with stochastic gradient descent with a batch size of 8; the Adam optimizer is used with an initial learning rate of 1e-4, and the learning rate is reduced to 1/10 of its current value every 20,000 iterations until it reaches 1e-7; the total number of training iterations is 200,000. In each optimization iteration, the generator loss function is computed first and the generator parameters θ are updated by the optimizer's gradient step; the discriminator loss function is then computed and the discriminator parameters φ are updated by the optimizer's gradient step.
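The step-decay schedule described here is easy to state as a function of the iteration index; clamping at the 1e-7 floor (rather than stopping decay) is an assumption:

```python
def learning_rate(iteration, base=1e-4, step=20_000, floor=1e-7):
    """Step decay from the text: start at 1e-4, divide by 10 every
    20,000 iterations, never dropping below 1e-7."""
    lr = base / (10 ** (iteration // step))
    return max(lr, floor)
```

Over the full 200,000 iterations the schedule visits 1e-4, 1e-5, 1e-6, and then stays at 1e-7.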
Further, the blocked single-lens image is converted to grayscale, and the power spectral density of each grayscale block is fed into the trained generator G_θ(·); the output is the estimated point spread function of each sub-block of the single-lens image.
Drawings
FIG. 1 shows a schematic flow diagram of an embodiment of the present application;
FIG. 2 shows a single lens structure diagram of an embodiment of the application;
FIG. 3 shows a single lens imaging block partition map of an embodiment of the application;
fig. 4 shows a schematic diagram of a flow chart for estimating a point spread function in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
As shown in fig. 1, a simple lens point spread function estimation method includes the steps of:
step 1, a point spread function generated according to the characteristics of a single lens acts on a clear image to obtain a simulated single lens blurred image;
step 2, the single-lens blurred image is Fourier-transformed to the frequency domain and used as the input of an end-to-end neural network to generate an estimated point spread function;
step 3, calculating a loss function by the estimated point spread function and the generated point spread function, and pushing the end-to-end neural network parameters to update along with training iteration;
and 4, obtaining a final estimated point spread function.
As shown in fig. 2, simple lens imaging can significantly reduce the economic cost of an imaging system, but because the light rays passing through a simple lens cannot be converged to a single point at the focal plane, the image appears blurred. Estimating the point spread function of a simple lens at different imaging-block positions from a blurred image of a scene obtained through the lens is one of the steps preceding restoration of the blurred image.
The image dividing method of the present embodiment is shown in fig. 3. The single-lens image with a 4:3 aspect ratio is divided into 12×9 blocks, and blur-kernel estimation is then carried out block by block.
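The 12×9 blocking can be sketched as follows; the 960×720 frame size (so that the blocks divide evenly) is an assumption for illustration:

```python
import numpy as np

def split_blocks(img, rows=9, cols=12):
    """Split a 4:3 single-lens image into 12x9 blocks (cols x rows) for
    block-wise PSF estimation; assumes the dimensions divide evenly."""
    h, w = img.shape[:2]
    bh, bw = h // rows, w // cols
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

img = np.zeros((720, 960))      # 4:3 toy frame, height x width
blocks = split_blocks(img)      # 108 blocks of 80 x 80 pixels
```

Each block is then processed independently, since the PSF varies with the distance from the image center.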
The single-lens point-spread-function estimation flow proposed in this embodiment is shown in fig. 4. The point spread function generated according to the characteristics of the single lens is applied to a clear image to obtain a simulated single-lens blurred image. The image is Fourier-transformed to the frequency domain and its power spectral density is extracted as the input of the end-to-end neural network, which generates an estimated point spread function. A loss function is computed between the estimated and the generated point spread functions, driving the neural-network parameters to update over training iterations.
A point spread function (PSF) is generated. Owing to machining accuracy, the actually measured block-wise point spread function of a simple lens presents an irregular "quadrangle star" shape. The single-lens point spread function is therefore modeled and generated parametrically. Let the number of angles forming the star polygon be n, the outer diameter D_o ∈ R^n, the inner diameter D_i ∈ R^n, and the off angle A_r ∈ R^n, and denote the k-th dimensions of the outer and inner diameters by D_o^k and D_i^k. The outer diameter is randomly initialized with the uniform distribution of formula (1).
The inner diameter is initialized according to formula (2), where r ~ U(0.1, 0.3) is randomly generated. The off angle A_r is initialized, like the outer diameter, with the uniform distribution of formula (1).
The star-polygon vertex coordinates {P_k = (x_k, y_k) | k = 1, …, 2n} are calculated from these parameters. The initial polar angles of the star-polygon vertices are recorded as A^0 ∈ R^{2n+1}, satisfying formula (3), and the generated polar angles as A ∈ R^{2n+1}, satisfying formula (4).
The generated star-polygon vertex coordinates are calculated using formula (5) when k is even, and using formula (6) when k is odd.
The point spread function {E(i, j) | i, j = 1, …, m} is generated from the star-polygon vertex coordinates, where m ∈ N₊ is the size of the point spread function. First, the set of points G = {(i, j) | (i, j) ∈ S_P} inside the star polygon S_P is screened out; second, the point energy is modeled as:
E(i, j) = α·exp(−β‖i² + j²‖). (7)
where α and β are undetermined constants. The point energies should satisfy the constraint that the energy sum is 1, i.e.
Σ_{(i,j)∈G} E(i, j) = 1. (8)
Substituting formula (7) into (8) gives
α = 1/Σ_{(i,j)∈G} exp(−β‖i² + j²‖). (9)
The undetermined constant β characterizes the dispersion of the point-spread-function energy; typically β ~ U(0.004, 0.008). Once β is determined, the constant α can be calculated by formula (9), and the energy distribution of the point spread function inside the star polygon then follows from formula (7).
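The screening of the interior point set G is not specified; a plain even-odd ray-casting test is assumed in this sketch, demonstrated on a toy square rather than a generated star polygon:

```python
def inside(poly, i, j):
    """Even-odd ray-casting test: is point (i, j) inside the polygon given
    as a list of (x, y) vertices? Boundary behaviour is edge-dependent."""
    hit = False
    m = len(poly)
    for a in range(m):
        (x1, y1), (x2, y2) = poly[a], poly[(a + 1) % m]
        if (y1 > j) != (y2 > j):                       # edge crosses the scanline y = j
            x_cross = x1 + (j - y1) * (x2 - x1) / (y2 - y1)
            if i < x_cross:                            # crossing lies to the right
                hit = not hit
    return hit

# screen the integer grid against a toy square "polygon"
square = [(-2, -2), (2, -2), (2, 2), (-2, 2)]
G = [(i, j) for i in range(-4, 5) for j in range(-4, 5) if inside(square, i, j)]
```

With the real star-polygon vertices in place of the square, G is exactly the set over which the energies E(i, j) are computed and normalized.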
The point spread function is smoothed by convolution. The PSF generated by the above steps may suffer from edge discontinuities; a smoothing convolution kernel κ is therefore constructed, and the final point spread function PSF is the convolution of the energy distribution E with the smoothing kernel κ:
PSF = E ∗ κ, (10)
where the convolution kernel κ is taken as in formula (11).
Input-data processing for the neural network model. First, the clear image block y is convolved with the point spread function PSF generated in step S3 to obtain a blurred image block x simulating single-lens imaging, namely
x = y ∗ PSF. (12)
Second, the power spectral density of the single-lens blurred image in the frequency domain is correlated with the form of the single-lens point spread function. The blurred image block x is therefore first converted into a frequency-domain image by the two-dimensional discrete Fourier transform (fft2), and the zero-frequency point is moved to the center of the spectrum (fftshift):
X=fftshift(fft2(x)) (13)
Its power spectral density is then calculated as S = real(X)² + imag(X)², (14)
Wherein the functions real (·) and imag (·) represent the real and imaginary parts of the extracted frequency domain image, respectively.
S5, neural network model structure and training method. The input of the neural network model is the power spectral density S of a frequency-domain blurred image block; the output is the estimated point spread function of the corresponding single-lens block. The model is trained in a generative-adversarial manner: the generator adopts a U-Net structure, denoted G_θ(·), where θ is the set of model parameters. The loss function for training the generator U-Net model is
L_G = λ‖G_θ(S) − PSF‖_1 + PSF·(G_θ(S) − PSF)^2 + ‖D_φ(G_θ(S)) − 1‖_2, (15)
where the first term is a fidelity term with coefficient λ = 100; the second term is a shape prior that emphasizes the influence of the ground-truth PSF on the model output; the third term is an authenticity term, which measures, through the squared difference between the discriminator output D_φ(·) and 1, the gap in realism between the network's result and the ground-truth PSF; φ is the model parameter set of the discriminator. The discriminator D_φ(·) is implemented with the residual classification network ResNet-34, and its training loss function is
L_D = ‖D_φ(PSF) − 1‖_2 + ‖D_φ(G_θ(S)) − 0‖_2. (16)
The first term drives the model to identify the ground-truth PSF as real, and the second term drives it to identify the PSF produced by the generator G_θ as fake.
Model training and parameter optimization use stochastic gradient descent with a batch size of 8; the Adam optimizer is used with an initial learning rate of 1e-4, and the learning rate is reduced to 1/10 of its current value every 20,000 iterations until it reaches 1e-7; the total number of training iterations is 200,000. In each optimization iteration, the generator loss is first computed via formula (15) and the generator parameters θ are updated through the optimizer's gradient step; the discriminator loss is then computed via formula (16) and the discriminator parameters φ are updated through the optimizer's gradient step.
Single-lens point-spread-function estimation. The blocked single-lens image is converted to grayscale, and the power spectral density of each grayscale block is fed into the neural network G_θ; the network output is the estimated point spread function of each sub-block of the single-lens image.
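The grayscale conversion step is not specified in the text; the common ITU-R BT.601 luma weighting is assumed in this sketch:

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 RGB block to grayscale with BT.601 luma
    weights (an assumption; the patent does not name the conversion)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

rgb = np.zeros((4, 4, 3))
rgb[..., 0] = 1.0                 # pure red toy block
g = to_gray(rgb)
```

Each grayscale block would then go through the same power-spectral-density computation as in training before being passed to the generator.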
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Claims (10)
1. A method for estimating a simple lens point spread function, comprising the steps of:
step 1, a point spread function generated according to the characteristics of a single lens acts on a clear image to obtain a simulated single lens blurred image;
step 2, the single-lens blurred image is Fourier-transformed to the frequency domain and used as the input of an end-to-end neural network to generate an estimated point spread function;
step 3, calculating a loss function by the estimated point spread function and the generated point spread function, and pushing the end-to-end neural network parameters to update along with training iteration;
and 4, obtaining a final estimated point spread function.
2. The method for estimating a point spread function of a simple lens according to claim 1, wherein the generating process of the point spread function in step 1 comprises the steps of:
parametric modeling and generation are carried out on the single lens point spread function;
let the number of angles forming the star polygon be n, the outer diameter D_o ∈ R^n, the inner diameter D_i ∈ R^n, and the off angle A_r ∈ R^n, the k-th dimensions of the outer and inner diameters being denoted D_o^k and D_i^k; the outer diameter is randomly initialized with a uniform distribution; the inner diameter is initialized as a random fraction of the outer diameter, where r ~ U(0.1, 0.3) is randomly generated; the off angle is initialized with a uniform distribution;
calculating the star-polygon vertex coordinates {P_k = (x_k, y_k) | k = 1, …, 2n} according to the parameters, x_k representing the abscissa and y_k the ordinate of the k-th vertex; recording the initial polar angles of the star-polygon vertices as A^0 ∈ R^{2n+1}, the 0th dimension being the initial polar angle, and the generated polar angles as A ∈ R^{2n+1}, where A_k, the k-th dimension of A, differs from the corresponding initial polar angle by a randomly generated offset; the generated star-polygon vertex coordinates are computed from the polar angle A_k together with the outer diameter when k is an even number and the inner diameter when k is an odd number;
screening out the set of points inside the star polygon S_P: G = {(i, j) | (i, j) ∈ S_P}, where i denotes the abscissa and j the ordinate of a point inside the star polygon;
modeling the point energy: E(i, j) = α · exp(−β‖i² + j²‖), where α and β are undetermined constants and ‖·‖ denotes the absolute value; the point energies satisfy the constraint that they sum to 1, i.e. Σ_{(i,j)∈G} E(i, j) = 1;
this is achieved by setting α = 1 / Σ_{(i,j)∈G} exp(−β‖i² + j²‖); the undetermined constant β represents the degree of dispersion of the point spread function energy;
after the undetermined constants are determined, the energy distribution of the point spread function inside the star polygon is calculated, yielding the generated point spread function.
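The claim-2 generation procedure can be sketched in NumPy as follows. The patent fixes only r ~ U(0.1, 0.3) and (in claim 5) the range of β; the uniform range of the outer radii, the angular-offset range, and the grid size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def points_in_polygon(px, py, vx, vy):
    """Even-odd-rule point-in-polygon test, vectorized over a pixel grid."""
    inside = np.zeros(px.shape, dtype=bool)
    j = len(vx) - 1
    with np.errstate(divide="ignore", invalid="ignore"):
        for i in range(len(vx)):
            crosses = (vy[i] > py) != (vy[j] > py)
            xint = (vx[j] - vx[i]) * (py - vy[i]) / (vy[j] - vy[i]) + vx[i]
            inside ^= crosses & (px < xint)
            j = i
    return inside

def star_polygon_psf(n=5, size=64, beta=0.006):
    """Random star-polygon PSF with normalized Gaussian point energy."""
    # Outer radii: uniform random init (range is an assumption);
    # inner radii: a random fraction r ~ U(0.1, 0.3) of the outer radii.
    R_o = rng.uniform(0.5, 1.0, n) * (size / 2 - 1)
    r = rng.uniform(0.1, 0.3)
    R_i = r * R_o
    # 2n vertices alternate outer/inner radius; polar angles are evenly
    # spaced plus a random offset (offset range is an assumption).
    ang = np.linspace(0, 2 * np.pi, 2 * n, endpoint=False)
    ang = ang + rng.uniform(-np.pi / (4 * n), np.pi / (4 * n), 2 * n)
    rad = np.empty(2 * n)
    rad[0::2], rad[1::2] = R_o, R_i
    vx, vy = rad * np.cos(ang), rad * np.sin(ang)
    # Centered pixel grid; keep only points inside the star polygon.
    coords = np.arange(size) - size // 2
    ii, jj = np.meshgrid(coords, coords, indexing="ij")
    mask = points_in_polygon(ii, jj, vx, vy)
    # Point energy E(i,j) = alpha*exp(-beta*(i^2+j^2)), normalized so the
    # energies inside the polygon sum to 1 (this normalization realizes alpha).
    E = np.exp(-beta * (ii ** 2 + jj ** 2)) * mask
    return E / E.sum()
```

Dividing by `E.sum()` implements the closed-form α of claim 2 implicitly, since α is exactly the reciprocal of the unnormalized energy sum over G.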
3. The method for estimating a point spread function of a simple lens according to claim 2, wherein the generated point spread function is the convolution of the point spread function energy distribution E with a smoothing convolution kernel κ: PSF = E ∗ κ.
4. A simple lens point spread function estimating method according to claim 3, characterized by the smoothing convolution kernel κ.
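A minimal sketch of the smoothing step PSF = E ∗ κ from claims 3 and 4. The actual kernel κ is not recoverable from this text, so the 3×3 averaging kernel below is an assumption; the FFT-based "same"-size convolution and the re-normalization to unit energy are implementation choices:

```python
import numpy as np

def smooth_psf(E, kappa):
    """Convolve the energy map E with smoothing kernel kappa (zero-padded
    FFT convolution, cropped to E's size), then renormalize to unit energy."""
    H, W = E.shape
    kh, kw = kappa.shape
    F = np.fft.fft2(E, (H + kh - 1, W + kw - 1))
    K = np.fft.fft2(kappa, (H + kh - 1, W + kw - 1))
    full = np.real(np.fft.ifft2(F * K))          # full linear convolution
    top, left = kh // 2, kw // 2
    psf = full[top:top + H, left:left + W]       # 'same' crop
    return psf / psf.sum()                       # keep the energy sum at 1

# Assumed example kernel: 3x3 box average.
kappa = np.full((3, 3), 1.0 / 9.0)
```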
5. A simple lens point spread function estimating method according to claim 2, characterized in that the undetermined constant β ~ U(0.004, 0.008).
6. The method for estimating a point spread function of a simple lens according to claim 1, wherein, in applying the point spread function generated according to the characteristics of the single lens to the clear image, a clear image block y is convolved with the generated point spread function PSF to obtain a simulated single-lens blurred image block x, namely: x = y ∗ PSF; in transforming the single-lens blurred image to the frequency domain, the blurred image block x is first converted to a frequency-domain image by a two-dimensional discrete Fourier transform and the zero-frequency point is moved to the center of the spectrum: X = fftshift(fft2(x)), where fft2(·) denotes the two-dimensional discrete Fourier transform and fftshift(·) denotes moving the zero-frequency point to the center of the spectrum; the power spectral density is then calculated as S = real(X)² + imag(X)², where the functions real(·) and imag(·) extract the real and imaginary parts of the frequency-domain image, respectively.
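The claim-6 simulation and frequency-domain preprocessing can be sketched as follows; the use of circular (FFT-based) convolution for x = y ∗ PSF and the plain real² + imag² form of the power spectral density are assumptions:

```python
import numpy as np

def blur_and_psd(y, psf):
    """Simulate a single-lens blurred block x = y * PSF, then compute
    X = fftshift(fft2(x)) and the power spectral density S."""
    # Circular convolution via the FFT (boundary handling is an assumption).
    x = np.real(np.fft.ifft2(np.fft.fft2(y) * np.fft.fft2(psf, y.shape)))
    # Move the zero-frequency point to the center of the spectrum.
    X = np.fft.fftshift(np.fft.fft2(x))
    # Power spectral density from real and imaginary parts.
    S = np.real(X) ** 2 + np.imag(X) ** 2
    return x, S
```

S is the quantity fed to the network input described in claim 7.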
7. The method for estimating a point spread function of a simple lens according to claim 6, wherein the input of the end-to-end neural network is the power spectral density of a frequency-domain blurred image block, and the output is the point spread function estimated for that block of the single lens;
the end-to-end neural network is trained in a generative adversarial fashion, where the generator adopts a U-Net structure, denoted G_θ(·), θ being the set of model parameters; the generator's training loss function is:
L_G = λ‖G_θ(S) − PSF‖₁ + PSF · (G_θ(S) − PSF)² + ‖D_φ(G_θ(S)) − 1‖²
wherein the first term is a fidelity term, λ is a coefficient, G_θ(S) denotes the output of the U-Net with parameter set θ, and ‖·‖₁ denotes the 1-norm; the second term is a shape prior that emphasizes the influence of the true PSF on the model output, PSF denoting the generated point spread function; the third term is a realism term, which measures the difference between the generator's result and the true PSF by the squared difference between the discriminator output D_φ(·) and 1, with ‖·‖₂ denoting the 2-norm; φ is the set of model parameters of the discriminator;
the discriminator D_φ(·) adopts the residual classification network ResNet-34, and its training loss function is:
L_D = ‖D_φ(PSF) − 1‖² + ‖D_φ(G_θ(S)) − 0‖²
the first term drives the discriminator to judge the true PSF as real, and the second term drives it to judge the PSF produced by the generator G_θ as fake.
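Assuming the per-term reductions are plain sums and the discriminator outputs a scalar score (neither is specified in the claim), the two loss functions of claim 7 can be written as:

```python
import numpy as np

def generator_loss(g_out, psf_true, d_on_fake, lam=100.0):
    """L_G = λ‖G(S)−PSF‖₁ + PSF·(G(S)−PSF)² + ‖D(G(S))−1‖².
    g_out and psf_true are PSF arrays; d_on_fake is the discriminator's
    scalar score on the generated PSF."""
    fidelity = lam * np.abs(g_out - psf_true).sum()     # fidelity term
    shape = (psf_true * (g_out - psf_true) ** 2).sum()  # shape prior
    realism = (d_on_fake - 1.0) ** 2                    # realism term
    return fidelity + shape + realism

def discriminator_loss(d_on_real, d_on_fake):
    """L_D = ‖D(PSF)−1‖² + ‖D(G(S))−0‖²: push real → 1, fake → 0."""
    return (d_on_real - 1.0) ** 2 + d_on_fake ** 2
```

A perfect generator output with a fooled discriminator drives L_G to zero; a perfect discriminator drives L_D to zero.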
8. A simple lens point spread function estimating method according to claim 7, characterized in that λ=100.
9. The method for estimating a point spread function of a simple lens according to claim 7, wherein the model training of the end-to-end neural network adopts stochastic gradient descent with a batch size of 8; the optimizer is Adam with an initial learning rate of 1e-4, and the learning rate is reduced to 1/10 of its current value every 20,000 iterations until it reaches 1e-7; the total number of training iterations is 200,000; at each optimization step, the generator loss function is computed first and the generator parameters θ are updated via the optimizer's gradients; the discriminator loss function is then computed and the discriminator parameters φ are updated via the optimizer's gradients.
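The learning-rate schedule of claim 9 (initial 1e-4, divided by 10 every 20,000 iterations, floored at 1e-7) can be sketched as:

```python
def learning_rate(step, base=1e-4, decay_every=20_000, floor=1e-7):
    """Learning rate at a given training iteration: starts at `base`,
    divided by 10 every `decay_every` iterations, never below `floor`."""
    return max(base * 0.1 ** (step // decay_every), floor)
```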
10. The method for estimating a point spread function of a simple lens according to claim 1, wherein the segmented single-lens imaging image is converted into a grayscale image, the power spectral density of the grayscale image is input to the trained generator G_θ(·), and the output is the estimated point spread function for each sub-block of the single-lens imaging map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310528965.7A CN116645300A (en) | 2023-05-11 | 2023-05-11 | Simple lens point spread function estimation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116645300A true CN116645300A (en) | 2023-08-25 |
Family
ID=87639133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310528965.7A Pending CN116645300A (en) | 2023-05-11 | 2023-05-11 | Simple lens point spread function estimation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116645300A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117876720A (en) * | 2024-03-11 | 2024-04-12 | 中国科学院长春光学精密机械与物理研究所 | Method for evaluating PSF image similarity |
CN117876720B (en) * | 2024-03-11 | 2024-06-07 | 中国科学院长春光学精密机械与物理研究所 | Method for evaluating PSF image similarity |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Bako et al. | Kernel-predicting convolutional networks for denoising Monte Carlo renderings. | |
Yin et al. | Highly accurate image reconstruction for multimodal noise suppression using semisupervised learning on big data | |
CN110378844B (en) | Image blind motion blur removing method based on cyclic multi-scale generation countermeasure network | |
CN113487739A (en) | Three-dimensional reconstruction method and device, electronic equipment and storage medium | |
CN116645300A (en) | Simple lens point spread function estimation method | |
CN113450396A (en) | Three-dimensional/two-dimensional image registration method and device based on bone features | |
CN113095333A (en) | Unsupervised feature point detection method and unsupervised feature point detection device | |
CN109410158B (en) | Multi-focus image fusion method based on convolutional neural network | |
CN111179333B (en) | Defocus blur kernel estimation method based on binocular stereo vision | |
Dinesh et al. | 3D point cloud color denoising using convex graph-signal smoothness priors | |
Kollem et al. | Image denoising by using modified SGHP algorithm | |
CN114283058A (en) | Image super-resolution reconstruction method based on countermeasure network and maximum mutual information optimization | |
US20100322472A1 (en) | Object tracking in computer vision | |
Shabanian et al. | A novel factor graph-based optimization technique for stereo correspondence estimation | |
CN112767269B (en) | Panoramic image defogging method and device | |
CN112750156B (en) | Light field imaging system, processing method and device | |
CN115439669A (en) | Feature point detection network based on deep learning and cross-resolution image matching method | |
Zhang et al. | Steganography with Generated Images: Leveraging Volatility to Enhance Security | |
Tian et al. | A modeling method for face image deblurring | |
CN114764746A (en) | Super-resolution method and device for laser radar, electronic device and storage medium | |
CN108256633B (en) | Method for testing stability of deep neural network | |
CN113066165A (en) | Three-dimensional reconstruction method and device for multi-stage unsupervised learning and electronic equipment | |
Sahragard et al. | Image restoration by variable splitting based on total variant regularizer | |
Rai et al. | Learning to generate atmospheric turbulent images | |
CN113822823B (en) | Point neighbor restoration method and system for aerodynamic optical effect image space-variant fuzzy core |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||