CN110675333B - Microscopic imaging processing method based on neural network super-resolution technology - Google Patents


Info

Publication number
CN110675333B
CN110675333B (application CN201910790869.3A)
Authority
CN
China
Prior art keywords: neural network, convolution, matrix, pictures, output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910790869.3A
Other languages
Chinese (zh)
Other versions
CN110675333A (en)
Inventor
李歧强 (Li Qiqiang)
张中豪 (Zhang Zhonghao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201910790869.3A priority Critical patent/CN110675333B/en
Publication of CN110675333A publication Critical patent/CN110675333A/en
Application granted granted Critical
Publication of CN110675333B publication Critical patent/CN110675333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a microscopic imaging processing method based on a neural network super-resolution technology, comprising the following steps: training a fully convolutional neural network, deploying the trained network M on the computer that controls the microscope, controlling the microscope to take pictures, and compensating the captured pictures in real time to obtain sharp pictures. The disclosed method greatly increases shooting speed, improves picture quality, and suppresses defocus blur, especially when many pictures are taken of one sample; it can even replace autofocus, eliminating the motor that moves the lens up and down and thereby simplifying the optical detection system.

Description

Microscopic imaging processing method based on neural network super-resolution technology
Technical Field
The invention relates to an image processing method, in particular to a microscopic imaging processing method based on a neural network super-resolution technology.
Background
Microscopic imaging is an effective means of observing cells with high spatial and temporal resolution. Before observing a sample, the researcher focuses the microscope once; but when the sample area is large and must be observed continuously, this single focusing becomes a problem: if the sample surface is curved by more than the depth of field of the optical detection system used, parts of the image will be sharp while other parts are blurred. This seriously affects subsequent analysis and judgment.
Super-resolution refers to the process of restoring a high-resolution image from a given low-resolution image using specific algorithms and processing pipelines, drawing on digital image processing, computer vision, and related fields. It aims to overcome or compensate for image blur, low quality, and indistinct regions of interest caused by limitations of the image acquisition system or acquisition environment. Super-resolution is usually aimed at the resolution degradation introduced by bicubic downsampling, but in principle it can also be applied to overcome defocus blur.
In the prior art, the imaging system scans by moving the lens up and down; three consecutive images share a small overlap region, and the focal-plane position is determined by computing the blur level of that overlap region. This is essentially equivalent to sampling at many positions and is slow and time-consuming.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a microscopic imaging processing method based on a neural network super-resolution technology, with the aims of greatly increasing shooting speed, improving picture quality, and suppressing defocus blur.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a microscopic imaging processing method based on neural network super-resolution technology comprises the following steps:
(1) Training of the full convolution neural network:
A group of sharp pictures Y is taken with the microscope, then Gaussian filtering is applied to Y to obtain the corresponding defocus-blurred pictures X. The image data X and Y are converted into numpy arrays and normalized; the normalized arrays are denoted X_norm and Y_norm. X_norm and Y_norm are two arrays of identical shape, uniformly written L×W×H×c, where L is the number of sharp pictures taken, W is the number of rows of a picture matrix, H is the number of columns, and c is 1 for grayscale pictures or 3 for color pictures;
With X_norm as the input of the network and Y_norm as its target output, the learning rate is set to 3e-4 and the network is trained with the Adam optimizer, yielding the fully convolutional neural network M;
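The data-preparation step above can be sketched as follows. This is an illustrative outline, not the patent's actual code: the 5×5 kernel, σ = 1.5, and the random stand-in pictures are all assumptions for the example.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """2-D Gaussian kernel used to simulate defocus blur (size/sigma assumed)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, kernel):
    """Convolve one grayscale image with the kernel, zero-padding the border."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Y: L sharp grayscale pictures of shape L*W*H*c (here L=4, W=H=32, c=1),
# standing in for the microscope shots; X: their Gaussian-blurred copies.
Y = np.random.randint(0, 256, (4, 32, 32, 1)).astype(np.float64)
X = np.stack([gaussian_blur(y[..., 0], gaussian_kernel())[..., None] for y in Y])

# Normalisation to [0, 1] before training.
X_norm, Y_norm = X / 255.0, Y / 255.0
```

The pair (X_norm, Y_norm) then serves as the network's input and target output during training.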
(2) Microscopic imaging treatment:
The trained fully convolutional neural network M is deployed on the computer that controls the microscope. The microscope is controlled to take pictures while M compensates each captured picture in real time: the picture is first normalized into a 1×W×H×c array, that array is fed to the network to obtain an output of shape 1×W×H×c, the output is renormalized, and the pixel values are mapped to 0-255 to obtain a sharp picture.
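A minimal sketch of this real-time compensation step; the `identity` callable is a hypothetical stand-in for the trained network M, which is not reproduced here.

```python
import numpy as np

def compensate(picture, model):
    """Run one captured frame through the network and map it back to 0-255."""
    x = picture.astype(np.float32) / 255.0           # normalise the captured picture
    x = x[None, ...]                                 # shape 1*W*H*c, as in the patent
    y = model(x)                                     # network output, shape 1*W*H*c
    y = (y - y.min()) / (y.max() - y.min() + 1e-8)   # renormalise the output
    return np.round(y[0] * 255).astype(np.uint8)     # pixel values in 0-255

identity = lambda x: x   # stand-in for the trained network M (assumption)
frame = np.random.randint(0, 256, (32, 32, 1), dtype=np.uint8)
sharp = compensate(frame, identity)
```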
In the above scheme, the fully convolutional neural network convolves the input of the network. Given a two-dimensional matrix A and a matrix B, their convolution result matrix C is computed as:

C(j, k) = Σ_p Σ_q A(p, q) · B(j − p + 1, k − q + 1)

where p and q are the row and column indices of matrix A, j and k are the row and column indices of matrix C, and any element whose index falls outside the matrix boundary is taken as 0.
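The convolution formula can be checked numerically. The sketch below uses 0-based indices (so B(j−p+1, k−q+1) becomes B[j−p, k−q]) and treats out-of-range elements as 0, as the text specifies; the 2×2 test matrices are illustrative.

```python
import numpy as np

def conv2d_full(A, B):
    """C(j,k) = sum over p,q of A(p,q) * B(j-p+1, k-q+1), out-of-range terms = 0."""
    ra, ca = A.shape
    rb, cb = B.shape
    C = np.zeros((ra + rb - 1, ca + cb - 1))
    for j in range(C.shape[0]):
        for k in range(C.shape[1]):
            for p in range(ra):
                for q in range(ca):
                    jb, kb = j - p, k - q          # 0-based index shift
                    if 0 <= jb < rb and 0 <= kb < cb:
                        C[j, k] += A[p, q] * B[jb, kb]
    return C

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
C = conv2d_full(A, B)   # 3x3 "full" convolution result
```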
In a further technical scheme, when the fully convolutional neural network performs convolution, the input, a matrix of shape W_input×H_input×c, is convolved with n convolution kernels of size W_filter×H_filter×c, giving an output of shape W_output×H_output×n, where W is the number of rows of a matrix, H the number of columns, the subscript indicates which matrix it refers to, and c is the number of feature channels;
The output is regarded as the features extracted by the convolution layer. Once a loss function is specified and ground truth is given, the parameters in the convolution kernels are updated along the direction of steepest gradient descent of the loss, where

W_output = (W_input − W_filter + 2P)/S + 1
H_output = (H_input − H_filter + 2P)/S + 1

P is the padding size and S is the stride.
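The two size formulas translate directly into code. The example values use the kernel and stride sizes quoted elsewhere in the document (3×3 convolutions, 2×2 pooling with stride 2); the 256-pixel input size is an assumption.

```python
def conv_output_size(n_in, n_filter, padding, stride):
    """W_out = (W_in - W_filter + 2P)/S + 1; the same formula holds for H."""
    return (n_in - n_filter + 2 * padding) // stride + 1

# A 3x3 convolution with padding 1 and stride 1 preserves the spatial size:
same = conv_output_size(256, 3, padding=1, stride=1)      # 256
# A 2x2 max-pool with stride 2 halves it:
halved = conv_output_size(256, 2, padding=0, stride=2)    # 128
```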
In a further technical solution, the loss function is:

Loss(Ŷ, Y) = ‖Ŷ − Y‖₁

where Loss is the loss function of the network, Ŷ is the output of the network, Y is the ground truth, and ‖·‖₁ is the L1 norm.
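An L1 loss of this form can be sketched as follows. Whether the patent sums or averages over pixels is not stated; the mean form used by most frameworks is assumed here, and the sample arrays are illustrative.

```python
import numpy as np

def l1_loss(y_pred, y_true):
    """Mean absolute error between network output and ground truth."""
    return float(np.mean(np.abs(y_pred - y_true)))

loss = l1_loss(np.array([0.2, 0.9]), np.array([0.0, 1.0]))  # (0.2 + 0.1) / 2
```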
In a further technical scheme, when the fully convolutional neural network performs convolution, the number of convolution kernels in each up-sampling and down-sampling stage is 32. Each down-sampling stage comprises two convolution layers and one max-pooling layer; the convolution kernel size is 3×3 and the max-pooling stride is 2×2. A convolution layer with kernel size 3×3 is added after each tensor obtained by skip connection to fuse features from different layers, followed by up-sampling. Up-sampling uses deconvolution with kernel size 2×2 and stride 2×2.
Through the above technical scheme, the microscopic imaging processing method based on the neural network super-resolution technology compensates image blur with super-resolution. The technique avoids focusing at every shooting position and greatly increases shooting speed, especially when many pictures are taken of one sample; it can even replace autofocus, eliminating the motor that moves the lens up and down and simplifying the optical detection system. During shooting, the method applies the fully convolutional neural network to each captured picture and compensates it in real time, thereby obtaining a sharp picture.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic diagram of a full convolution neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a convolution process according to an embodiment of the present invention;
FIG. 3 is an image before processing as disclosed in an embodiment of the present invention;
FIG. 4 is the image after processing as disclosed in an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The invention provides a microscopic imaging processing method based on a neural network super-resolution technology, which can improve the picture quality and inhibit defocusing blur.
A microscopic imaging processing method based on a neural network super-resolution technology comprises the following steps:
(1) Training of the full convolution neural network:
A group of sharp pictures Y is taken with the microscope, then Gaussian filtering is applied to Y to obtain the corresponding defocus-blurred pictures X. The image data X and Y are converted into numpy arrays and normalized; the normalized arrays are denoted X_norm and Y_norm. X_norm and Y_norm are two arrays of identical shape, written L×W×H×c, where L is the number of sharp pictures taken, W is the number of rows, H is the number of columns, and c is 1 for grayscale pictures or 3 for color pictures;
With X_norm as the input of the network and Y_norm as its target output, the learning rate is set to 3e-4 and the network is trained with the Adam optimizer to obtain the fully convolutional neural network M.
The fully convolutional neural network M is shown in FIG. 1. The input X_{0,0} is successively down-sampled to obtain X_{1,0}, X_{2,0}, X_{3,0}, X_{4,0}. Down-sampling increases robustness to small perturbations of the input image, such as translation and rotation, reduces the risk of over-fitting, reduces computation, and enlarges the receptive field. X_{1,0}, X_{2,0}, X_{3,0}, X_{4,0} are then up-sampled, which re-decodes the abstract features back to the size of the original image: X_{1,0} is up-sampled to X_{0,1}; X_{2,0} is up-sampled in turn to X_{1,1}, X_{0,2}; X_{3,0} in turn to X_{2,1}, X_{1,2}, X_{0,3}; and X_{4,0} in turn to X_{3,1}, X_{2,2}, X_{1,3}, X_{0,4}. In addition, to integrate features of different layers, many skip connections are added to the network; for example, X_{0,0} has skip connections to X_{0,1}, X_{0,2}, X_{0,3}, X_{0,4}. Finally, to help the network converge, a deep-supervision strategy is added: X_{0,1}, X_{0,2}, X_{0,3}, X_{0,4} are each compared with the ground truth and participate in the computation of the loss function.
The invention adopts a convolution-convolution pattern. To reduce memory usage and speed up computation, the number of convolution kernels in each down-sampling layer is fixed at 32; each down-sampling stage comprises two convolution layers and one max-pooling layer, with kernel size 3×3 and max-pooling stride 2×2. A convolution layer with kernel size 3×3 is added after each tensor obtained by skip connection to fuse features from different layers, followed by up-sampling. Up-sampling uses deconvolution with kernel size 2×2 and stride 2×2, and the number of kernels in each up-sampling stage is likewise fixed at 32. To reduce memory usage, skip connections use the add mode. Finally, the loss function Loss is defined as the L1 norm:

Loss(Ŷ, Y) = ‖Ŷ − Y‖₁

where Ŷ is the output of the network, Y is the ground truth, and ‖·‖₁ is the L1 norm.
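Under the configuration just described (size-preserving 3×3 convolutions and 2×2 max-pooling with stride 2), the spatial sizes of the encoder features X_{0,0} through X_{4,0} can be traced; the 256×256 input size is an assumption for illustration.

```python
def pool(n):
    # 2x2 max-pool with stride 2: (n - 2) // 2 + 1
    return (n - 2) // 2 + 1

sizes = [256]                  # X_{0,0} for an assumed 256x256 input
for _ in range(4):             # X_{0,0} -> X_{1,0} -> X_{2,0} -> X_{3,0} -> X_{4,0}
    sizes.append(pool(sizes[-1]))
# each 2x2 deconvolution with stride 2 then doubles the size back up
```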
The fully convolutional neural network convolves the input of the network. Given a two-dimensional matrix A and a matrix B, their convolution result matrix C is computed as:

C(j, k) = Σ_p Σ_q A(p, q) · B(j − p + 1, k − q + 1)

where p and q are the row and column indices of matrix A, j and k are the row and column indices of matrix C, and any element whose index falls outside the matrix boundary is taken as 0.
When the fully convolutional neural network performs convolution, as shown in FIG. 2, the input, a matrix of shape W_input×H_input×c, is convolved with n convolution kernels of size W_filter×H_filter×c, giving an output of shape W_output×H_output×n, where W is the number of rows of a matrix, H the number of columns, the subscript indicates which matrix it refers to, and c is the number of feature channels;
The output is regarded as the features extracted by the convolution layer. Once a loss function is specified and ground truth is given, the parameters in the convolution kernels are updated along the direction of steepest gradient descent of the loss, where

W_output = (W_input − W_filter + 2P)/S + 1
H_output = (H_input − H_filter + 2P)/S + 1

P is the padding size and S is the stride.
(2) Microscopic imaging treatment:
The trained fully convolutional neural network M is deployed on the computer that controls the microscope. The microscope is controlled to take pictures, obtaining images such as the one shown in FIG. 3, while M compensates each captured picture in real time: the picture is first normalized into a 1×W×H×c array, the array is fed to the network to obtain an output of shape 1×W×H×c, the output is renormalized, and the pixel values are mapped to 0-255, obtaining a sharp picture such as the one shown in FIG. 4.
To further save processing time, two threads or two processes can be opened in the CPU of the computer: one is responsible for controlling the microscope to take pictures, and the other for compensating the captured pictures in real time and improving their quality, so that compensation does not add to the shooting time.
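The two-thread arrangement can be sketched with Python's standard library. The queue-based hand-off, the identity stand-in for the network M, and the random frames are illustrative assumptions, not the patent's implementation.

```python
import queue
import threading

import numpy as np

frames = queue.Queue()    # raw pictures from the camera thread
results = queue.Queue()   # compensated pictures

def capture(n_pictures):
    """Stand-in for the thread that controls the microscope and takes pictures."""
    for _ in range(n_pictures):
        frames.put(np.random.randint(0, 256, (8, 8, 1), dtype=np.uint8))
    frames.put(None)      # sentinel: shooting is finished

def compensate_worker():
    """Stand-in for the thread that compensates each picture in real time."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.put(frame)   # a real system would push M(frame) here

shooter = threading.Thread(target=capture, args=(3,))
worker = threading.Thread(target=compensate_worker)
shooter.start(); worker.start()
shooter.join(); worker.join()
```

Because the compensation worker consumes frames as they arrive, capture and compensation overlap and the total time is bounded by the slower of the two stages.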
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (3)

1. A microscopic imaging processing method based on a neural network super-resolution technology is characterized by comprising the following steps:
(1) Training of the full convolution neural network:
taking a group of sharp pictures Y with the microscope, then applying Gaussian filtering to Y to obtain the corresponding defocus-blurred pictures X; converting the image data X and Y into numpy arrays and normalizing them, the normalized arrays being denoted X_norm and Y_norm; wherein X_norm and Y_norm are two arrays of identical shape, uniformly written L×W×H×c, L being the number of sharp pictures taken, W the number of rows, H the number of columns, and c being 1 for grayscale pictures or 3 for color pictures;
with X_norm as the input of the network and Y_norm as its target output, setting the learning rate to 3e-4 and training the network with the Adam optimizer to obtain the fully convolutional neural network M;
(2) Microscopic imaging treatment:
deploying the trained fully convolutional neural network M on a computer that controls a microscope, controlling the microscope to take pictures, and simultaneously using M to compensate the captured pictures in real time: first normalizing a captured picture into a 1×W×H×c array, feeding the array to the network to obtain an output of shape 1×W×H×c, renormalizing the output, and mapping pixel values to 0-255 to obtain a sharp picture;
the full convolution neural network is used for performing convolution on the input of the network, and is provided with a two-dimensional matrix A and a matrix B, and the calculation formula of a convolution result matrix C of the two-dimensional matrix A and the matrix B is as follows:
C(j,k)=∑ pq A(p,q)B(j-p+1,k-q+1)
wherein, p and q are respectively the abscissa and the ordinate of the matrix A, j and k are respectively the abscissa and the ordinate of the matrix C, and if the matrix elements exceed the boundary, the values are replaced by 0;
when the fully convolutional neural network performs convolution, the input, a matrix of shape W_input×H_input×c, is convolved with n convolution kernels of size W_filter×H_filter×c, giving an output of shape W_output×H_output×n, wherein W is the number of rows of a matrix, H the number of columns, the subscript indicates which matrix it refers to, and c is the number of feature channels of the matrix;
the output is regarded as the features extracted by the convolution layer; once a loss function is specified and ground truth is given, the parameters in the convolution kernels are updated along the direction of steepest gradient descent of the loss, wherein

W_output = (W_input − W_filter + 2P)/S + 1
H_output = (H_input − H_filter + 2P)/S + 1

P is the padding size and S is the stride.
2. The microscopic imaging processing method based on the neural network super-resolution technology of claim 1, wherein the loss function is:

Loss(Ŷ, Y) = ‖Ŷ − Y‖₁

wherein Loss is the loss function of the network, Ŷ is the output of the network, Y is the ground truth, and ‖·‖₁ is the L1 norm.
3. The microscopic imaging processing method based on the neural network super-resolution technology as claimed in claim 1, wherein when the fully convolutional neural network performs convolution, the number of convolution kernels in each up-sampling and down-sampling stage is 32; each down-sampling stage comprises two convolution layers and one max-pooling layer, with kernel size 3×3 and max-pooling stride 2×2; a convolution layer with kernel size 3×3 is added after each tensor obtained by skip connection to fuse features from different layers, followed by up-sampling; up-sampling uses deconvolution with kernel size 2×2 and stride 2×2.
CN201910790869.3A 2019-08-26 2019-08-26 Microscopic imaging processing method based on neural network super-resolution technology Active CN110675333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910790869.3A CN110675333B (en) 2019-08-26 2019-08-26 Microscopic imaging processing method based on neural network super-resolution technology

Publications (2)

Publication Number Publication Date
CN110675333A CN110675333A (en) 2020-01-10
CN110675333B true CN110675333B (en) 2023-04-07

Family

ID=69075569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910790869.3A Active CN110675333B (en) 2019-08-26 2019-08-26 Microscopic imaging processing method based on neural network super-resolution technology

Country Status (1)

Country Link
CN (1) CN110675333B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311522B (en) * 2020-03-26 2023-08-08 重庆大学 Neural network-based two-photon fluorescence microscopic image restoration method and storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN106846463A (en) * 2017-01-13 2017-06-13 清华大学 Micro-image three-dimensional rebuilding method and system based on deep learning neutral net
CN108549892A (en) * 2018-06-12 2018-09-18 东南大学 A kind of license plate image clarification method based on convolutional neural networks
CN109035146A (en) * 2018-08-09 2018-12-18 复旦大学 A kind of low-quality image oversubscription method based on deep learning
CN109087247A (en) * 2018-08-17 2018-12-25 复旦大学 The method that a kind of pair of stereo-picture carries out oversubscription
CN109345449A (en) * 2018-07-17 2019-02-15 西安交通大学 A kind of image super-resolution based on converged network and remove non-homogeneous blur method
CN109636733A (en) * 2018-10-26 2019-04-16 华中科技大学 Fluorescent image deconvolution method and system based on deep neural network
CN109801215A (en) * 2018-12-12 2019-05-24 天津津航技术物理研究所 The infrared super-resolution imaging method of network is generated based on confrontation
CN110163800A (en) * 2019-05-13 2019-08-23 南京大学 A kind of micro- phase recovery method and apparatus of chip based on multiple image super-resolution


Non-Patent Citations (6)

Title
Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction;Daniele Ravì等;《International Journal of Computer Assisted Radiology and Surgery》;20180423;第917-924页 *
U-Net: Convolutional Networks for Biomedical Image Segmentation;Olaf Ronneberger等;《arXiv:1505.04597v1》;20150518;第1-8页 *
UNet++: A Nested U-Net Architecture for Medical Image Segmentation;Zongwei Zhou等;《arXiv:1807.10165v1》;20180718;第1-8页 *
Single-image super-resolution reconstruction based on deep convolutional networks; Wei Yujing et al.; Journal of Minjiang University; 2019-03-31; Vol. 40, No. 2, pp. 70-75 *
Research on neural-network-based restoration of three-dimensional wide-field microscopic images; Chen Hua et al.; Acta Photonica Sinica; 2006-03-31; Vol. 35, No. 3, pp. 473-476 *
Research on super-resolution reconstruction methods for lensless digital holographic microscopic images and their applications; Li Zhe; China Master's Theses Full-text Database, Information Science and Technology; 2017-05-15; No. 5; I138-1069 *


Similar Documents

Publication Publication Date Title
CN109360171B (en) Real-time deblurring method for video image based on neural network
CN108549892B (en) License plate image sharpening method based on convolutional neural network
CN109801215B (en) Infrared super-resolution imaging method based on countermeasure generation network
CN109636733B (en) Fluorescence image deconvolution method and system based on deep neural network
CN116071243B (en) Infrared image super-resolution reconstruction method based on edge enhancement
CN110770784A (en) Image processing apparatus, imaging apparatus, image processing method, program, and storage medium
CN111127336A (en) Image signal processing method based on self-adaptive selection module
CN111091503A (en) Image out-of-focus blur removing method based on deep learning
CN112164011A (en) Motion image deblurring method based on self-adaptive residual error and recursive cross attention
Min et al. Blind deblurring via a novel recursive deep CNN improved by wavelet transform
CN114998141A (en) Space environment high dynamic range imaging method based on multi-branch network
CN111932452B (en) Infrared image convolution neural network super-resolution method based on visible image enhancement
Fu et al. Ad2Attack: Adaptive adversarial attack on real-time UAV tracking
CN116847209B (en) Log-Gabor and wavelet-based light field full-focusing image generation method and system
CN114596233A (en) Attention-guiding and multi-scale feature fusion-based low-illumination image enhancement method
Zhou et al. High dynamic range imaging with context-aware transformer
CN110675333B (en) Microscopic imaging processing method based on neural network super-resolution technology
CN114463196B (en) Image correction method based on deep learning
CN117952883A (en) Backlight image enhancement method based on bilateral grid and significance guidance
CN113096032A (en) Non-uniform blur removing method based on image area division
Oh et al. Residual dilated u-net with spatially adaptive normalization for the restoration of under display camera images
CN116523790A (en) SAR image denoising optimization method, system and storage medium
CN110675320A (en) Method for sharpening target image under spatial parameter change and complex scene
WO2022089917A1 (en) Method and image processing device for improving signal-to-noise ratio of image frame sequences
CN108665412B (en) Method for performing multi-frame image super-resolution reconstruction by using natural image priori knowledge

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant