CN112884677B - Image restoration processing method based on optical synthetic aperture imaging - Google Patents
- Publication number
- CN112884677B (application CN202110308146.2A)
- Authority
- CN
- China
- Prior art keywords
- network
- data set
- training
- processing
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/00 — Image enhancement or restoration
- G06T5/70 — Denoising; Smoothing
- G06T5/73 — Deblurring; Sharpening
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
Abstract
The invention discloses an image restoration processing method based on optical synthetic aperture imaging, relates to the field of image restoration, and solves the problems of degraded imaging clarity caused by system noise, the need for prior information, noise interference, time-consuming restoration and unstable restoration results. In step B, the image data set obtained in step A is used to train the RestoreNet-Plus network stage by stage with an Adam optimizer. When the DenoiseNet network is trained first, random noise is added to the data set input to form the training input of the DenoiseNet network, and the clean data set input serves as the training label of the DenoiseNet network. When the DeblurNet network is trained, the data set input is first passed through the DenoiseNet network to suppress noise; the output of the DenoiseNet network serves as the training input of the DeblurNet network, and the data set label serves as the training label of the DeblurNet network. The method requires no prior information, overcomes the noise of the imaging system, restores quickly, and produces stable restoration results.
Description
Technical Field
The invention relates to the field of image restoration, in particular to an image restoration processing method based on optical synthetic aperture imaging.
Background
Because of the diffraction limit, the imaging resolution of a general optical imaging system in a given working waveband can be improved only by enlarging the system aperture. In optical synthetic aperture imaging, a precisely positioned sub-aperture lens array can replace the traditional large-aperture imaging lens.
A sub-aperture lens array that satisfies the co-phasing requirement converges the light beams and forms an interference image on the imaging plane, effectively improving the resolving power of the system. However, the clear aperture area of an optical synthetic aperture imaging system (OSAI) is still smaller than that of the equivalent single-aperture imaging system. The limited system aperture cuts off the spatial frequency components beyond the system cutoff frequency, so they do not take part in imaging; as a result, the actual point spread function (PSF) of the system broadens, and the mid-frequency response of the modulation transfer function (MTF) drops or is even missing. At the same time, imaging system noise seriously degrades the imaging clarity, so a high-resolution image has to be obtained with the help of image restoration techniques.
Conventional image restoration techniques mainly include the Wiener filter, the Lucy-Richardson algorithm and the hyper-Laplacian priors algorithm. However, conventional techniques require characteristic analysis of the disturbances in the imaging process and rely heavily on hand-designed information extractors. When the imaging process is unknown, the causes of image blur and distortion are often hard to determine, which increases the difficulty of restoration and defeats non-blind restoration algorithms based on the point spread function of the imaging system. For images of different scenes, conventional techniques need expert parameter tuning to reach their best effect; for different images of the same scene, the restoration quality is easily disturbed by the target type and by external factors. Each conventional technique has its own shortcomings, and they usually have to be used together to achieve the best result.
The conventional image restoration methods therefore have the following defects: 1. they are non-blind and require prior information about the imaging system; 2. noise generated in the imaging process interferes strongly; 3. every image needs its own parameter tuning, and the effect is hard to guarantee; 4. restoration is time-consuming and unstable, and the time cost is hard to estimate.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: system noise seriously degrades the clarity of the imaging result, and existing methods require prior information, suffer from noise, take a long time to restore and give unstable restoration results. The invention provides an image restoration processing method based on optical synthetic aperture imaging that solves these problems.
The invention is realized by the following technical scheme:
the image restoration processing method based on the optical synthetic aperture imaging comprises the following steps:
Step A: construct an image data set comprising data set inputs and data set labels. The target is photographed with a visible-spectrum camera through optical synthetic aperture (interference) imaging to obtain blurred images as the data set inputs, thereby acquiring image data of the target; the same target is photographed with the visible-spectrum camera through large-aperture imaging to obtain clear images as the data set labels;
Step B: establish the network model. Using a U-shaped convolutional neural network model, construct the deep convolutional neural network DenoiseNet based on a residual structure and dilated (hole) convolution; then construct the deep convolutional neural network DeblurNet by combining an attention mechanism, residual modules and a multi-scale module;
the DenoiseNet network is connected in series with the DeblurNet network: DenoiseNet suppresses the noise in the image data, and DeblurNet performs image restoration against the blur inherent to optical synthetic aperture imaging;
the DenoiseNet network and the DeblurNet network in series form the RestoreNet-Plus network;
Step C: network training. Using the image data set obtained in step A, train the RestoreNet-Plus network stage by stage with an Adam optimizer. When training the DenoiseNet network first, random noise is added to the data set input to form the training input of the DenoiseNet network, and the clean data set input serves as the training label of the DenoiseNet network. When training the DeblurNet network, the data set input is first passed through the DenoiseNet network to suppress noise; the output of the DenoiseNet network serves as the training input of the DeblurNet network, and the data set label serves as the training label of the DeblurNet network;
Step D: restore the image data of the target from step A with the RestoreNet-Plus network trained in step C.
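The serial arrangement in steps B to D, in which DenoiseNet feeds DeblurNet to form RestoreNet-Plus, can be sketched in PyTorch (the framework named later in the description). The sub-network bodies below are minimal placeholders, not the patent's actual layer configurations:

```python
import torch
import torch.nn as nn

class RestoreNetPlus(nn.Module):
    """Sketch of the serial RestoreNet-Plus pipeline: DenoiseNet -> DeblurNet.
    The two sub-networks passed in are stand-ins; the patent's own designs
    (U-shaped DenoiseNet, attention/multi-scale DeblurNet) are described
    in the text."""
    def __init__(self, denoise_net: nn.Module, deblur_net: nn.Module):
        super().__init__()
        self.denoise_net = denoise_net
        self.deblur_net = deblur_net

    def forward(self, x):
        x = self.denoise_net(x)    # stage 1: suppress noise
        return self.deblur_net(x)  # stage 2: restore the blur

# Minimal placeholder sub-networks so the pipeline can be exercised end to end.
denoise = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.LeakyReLU(0.2))
deblur = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
model = RestoreNetPlus(denoise, deblur)
out = model(torch.randn(1, 3, 256, 256))
print(tuple(out.shape))  # (1, 3, 256, 256)
```

Per step C, the denoising stage would be trained first on noise-added inputs, and DeblurNet would then be trained on DenoiseNet's outputs.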
Further, the data set input comprises a training set part and a test set part, and the data set labels serve as the verification set part;
an image data set is constructed for the target images to be restored, and the image data set is divided into a training set part, a verification set part and a test set part;
the training set part and the verification set part are used in the training process of step C; the test set part serves as the data set input of step D and is fed directly into the RestoreNet-Plus network trained in step C for inference, and the RestoreNet-Plus network directly outputs the restored clear image.
Further, in step C the image data set obtained in step A is used to train the RestoreNet-Plus network stage by stage with an Adam optimizer. The learning rate is set to 10⁻⁴, the exponential decay rates used for the first-order and second-order moment estimates are set to 0.9 and 0.999 respectively, and the weight decay follows an exponential schedule with a decay rate of 0.9.
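These optimizer settings map directly onto PyTorch's Adam. One sketch follows; note that reading the exponential "weight decay" with rate 0.9 as an exponential learning-rate schedule (ExponentialLR) is an assumption on our part:

```python
import torch

# A single hypothetical parameter stands in for the network weights.
params = [torch.nn.Parameter(torch.zeros(4, 4))]

# Patent settings: learning rate 1e-4, moment-estimate decay rates 0.9 / 0.999.
optimizer = torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999))

# Exponential decay with rate 0.9, applied here once per epoch (assumption).
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(3):   # placeholder loop; no real loss or gradients
    optimizer.step()
    scheduler.step()

print(scheduler.get_last_lr()[0])  # 1e-4 scaled by 0.9 three times
```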
Further, the U-shaped convolutional neural network model in step B is a neural network model whose multiple convolutional layers are arranged in a U shape.
Further, in step B the deep convolutional neural network DenoiseNet is constructed based on a residual structure and dilated convolution;
the input data undergo several rounds of modular processing, each module consisting of a spatial convolution, batch normalization and a Leaky ReLU activation in sequence; the result of the repeated modules is combined with the result of the first module and output.
Further, the DenoiseNet network uses four rounds of modular processing: the data input to the DenoiseNet network first passes through one module and then splits into two paths, one path applying three further modules in succession to the result of the first module, and the other path being combined with the result of the third of these modules and output.
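One plausible reading of this four-module layout is sketched below, with the modular unit (spatial convolution, batch normalization, Leaky ReLU) and a residual combination of the two paths. The dilation values and channel count are assumptions; the patent does not give them:

```python
import torch
import torch.nn as nn

def conv_block(ch: int, dilation: int = 1) -> nn.Sequential:
    """One 'modular processing' unit: spatial convolution, batch
    normalization, then Leaky ReLU. dilation > 1 gives the dilated (hole)
    convolution the DenoiseNet design calls for."""
    return nn.Sequential(
        nn.Conv2d(ch, ch, kernel_size=3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class DenoiseResidualGroup(nn.Module):
    """Assumed reading of the four-module group: after the first module,
    one path runs three further modules and the other path is a skip
    connection; the two are summed on output."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.first = conv_block(ch)
        self.rest = nn.Sequential(conv_block(ch, 2), conv_block(ch, 2), conv_block(ch))

    def forward(self, x):
        f1 = self.first(x)
        return f1 + self.rest(f1)  # residual combination of the two paths

block = DenoiseResidualGroup(32)
y = block(torch.randn(1, 32, 64, 64))
print(tuple(y.shape))  # (1, 32, 64, 64)
```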
Further, the multi-scale module in step B applies three parallel processes to the data input to the module: a 3 × 3 spatial convolution with batch normalization and Leaky ReLU, a 5 × 5 spatial convolution with batch normalization and Leaky ReLU, and a 7 × 7 spatial convolution with batch normalization and Leaky ReLU;
among the three, the result of the 3 × 3 convolution path is added to the result of the 5 × 5 convolution path, and the result of the 5 × 5 convolution path is added to the result of the 7 × 7 convolution path.
Further, the final result of the multi-scale module is output to a residual attention module, which applies multi-layer feature processing to the input data based on a mask branch and a trunk branch, performs downsampling together with backward skip connections, and outputs the data.
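The cascaded additions of the multi-scale module can be sketched as follows; treating the final sum as the module output (rather than, say, a concatenation of the branches) is an assumption:

```python
import torch
import torch.nn as nn

class MultiScaleModule(nn.Module):
    """Sketch of the three-branch multi-scale module: parallel 3x3, 5x5 and
    7x7 convolutions (each followed by batch normalization and Leaky ReLU),
    with the 3x3 branch added into the 5x5 branch and that sum added into
    the 7x7 branch."""
    def __init__(self, ch: int = 32):
        super().__init__()
        def branch(k: int) -> nn.Sequential:
            # padding = k // 2 keeps the spatial size unchanged for odd k
            return nn.Sequential(
                nn.Conv2d(ch, ch, k, padding=k // 2),
                nn.BatchNorm2d(ch),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.b3, self.b5, self.b7 = branch(3), branch(5), branch(7)

    def forward(self, x):
        f3 = self.b3(x)
        f5 = self.b5(x) + f3   # 3x3 result added to the 5x5 result
        f7 = self.b7(x) + f5   # that sum added to the 7x7 result
        return f7

m = MultiScaleModule(16)
out = m(torch.randn(2, 16, 32, 32))
print(tuple(out.shape))  # (2, 16, 32, 32)
```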
The invention builds the RestoreNet-Plus network with a stage-by-stage scheme; the noise problem and the blur problem in the image restoration process are handled as follows:
for noise suppression, a U-shaped convolutional neural network model is adopted, and the deep convolutional neural network DenoiseNet is built by combining a residual structure with dilated convolution;
for deblurring, the deep convolutional neural network DeblurNet is built by combining an attention mechanism, residual modules and a multi-scale module;
software and hardware devices for implementing the present invention include, but are not limited to, the following:
the software and hardware equipment adopted for training is as follows: 1 GeForce 2080Ti graphics card, an Ubuntu 18.04.3 operating system, a Python 3.7 programming language, a PyTorch 1.2.0 deep learning framework and a PyCharm compiling environment;
wherein the version of the display card is at least above the GeForce 2080Ti display card.
The invention has the following advantages and beneficial effects:
the invention overcomes the defects of prior information requirement, noise influence, recovery time consumption, unstable recovery effect and the like in the prior art, and provides a new image recovery method of an optical synthetic aperture interference imaging system;
the method has no prior information requirement, overcomes the noise influence of an imaging system, and has the advantages of less restoration time consumption and stable restoration effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
fig. 1 is a training flowchart of an image restoration processing method according to an embodiment of the present invention.
Fig. 2 is a test flowchart of an image restoration processing method according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a U-shaped convolutional neural network according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a residual structure and a hole convolution structure of the denoiser net according to the embodiment of the present invention.
Fig. 5 is a schematic comparison of restoration results for six different types of typical targets photographed under a weak-noise environment, as provided by the embodiment of the invention: the clear images, the blurred images, the images restored by the Wiener filter and the Lucy-Richardson algorithm, and the images restored by the RestoreNet-Plus network built with the scheme of the invention.
Detailed Description
Hereinafter, the terms "comprising" or "may include" used in various embodiments of the invention indicate the presence of the disclosed function, operation or element and do not limit the addition of one or more further functions, operations or elements. Furthermore, the terms "comprises", "comprising", "includes", "including", "has", "having" and their derivatives indicate the presence of the specified features, numbers, steps, operations, elements, components or combinations thereof, and should not be construed as excluding the presence or addition of one or more other features, numbers, steps, operations, elements, components or combinations thereof.
In various embodiments of the invention, the expression "or" or "at least one of A or/and B" includes any and all combinations of the words listed together. For example, the expression "A or B" or "at least one of A or/and B" may include A, may include B, or may include both A and B.
Expressions (such as "first", "second", and the like) used in various embodiments of the present invention may modify various constituent elements in various embodiments, but may not limit the respective constituent elements. For example, the above description does not limit the order and/or importance of the elements described. The above description is only intended to distinguish one element from another. For example, the first user device and the second user device indicate different user devices, although both are user devices. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of various embodiments of the present invention.
It should be noted that: if it is described that one constituent element is "connected" to another constituent element, the first constituent element may be directly connected to the second constituent element, and a third constituent element may be "connected" between the first constituent element and the second constituent element. In contrast, when one constituent element is "directly connected" to another constituent element, it is understood that there is no third constituent element between the first constituent element and the second constituent element.
The terminology used in the various embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments of the invention. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
The image restoration processing method based on the optical synthetic aperture imaging comprises the following steps:
Step A: construct an image data set comprising data set inputs and data set labels. The target is photographed with a visible-spectrum camera through optical synthetic aperture (interference) imaging to obtain blurred images as the data set inputs, thereby acquiring image data of the target; the same target is photographed with the visible-spectrum camera through large-aperture imaging to obtain clear images as the data set labels;
Step B: establish the network model. Using a U-shaped convolutional neural network model, construct the deep convolutional neural network DenoiseNet based on a residual structure and dilated (hole) convolution; then construct the deep convolutional neural network DeblurNet by combining an attention mechanism, residual modules and a multi-scale module;
the DenoiseNet network is connected in series with the DeblurNet network: DenoiseNet suppresses the noise in the image data, and DeblurNet performs image restoration against the blur inherent to optical synthetic aperture imaging;
the DenoiseNet network and the DeblurNet network in series form the RestoreNet-Plus network;
Step C: network training. Using the image data set obtained in step A, train the RestoreNet-Plus network stage by stage with an Adam optimizer. When training the DenoiseNet network first, random noise is added to the data set input to form the training input of the DenoiseNet network, and the clean data set input serves as the training label of the DenoiseNet network. When training the DeblurNet network, the data set input is first passed through the DenoiseNet network to suppress noise; the output of the DenoiseNet network serves as the training input of the DeblurNet network, and the data set label serves as the training label of the DeblurNet network;
Step D: restore the image data of the target from step A with the RestoreNet-Plus network trained in step C.
Further, the data set input comprises a training set part and a test set part, and the data set labels serve as the verification set part;
an image data set is constructed for the target images to be restored, and the image data set is divided into a training set part, a verification set part and a test set part;
the training set part and the verification set part are used in the training process of step C; the test set part serves as the data set input of step D and is fed directly into the RestoreNet-Plus network trained in step C for inference, and the RestoreNet-Plus network directly outputs the restored clear image.
Further, in step C the image data set obtained in step A is used to train the RestoreNet-Plus network stage by stage with an Adam optimizer. The learning rate is set to 10⁻⁴, the exponential decay rates used for the first-order and second-order moment estimates are set to 0.9 and 0.999 respectively, and the weight decay follows an exponential schedule with a decay rate of 0.9.
Further, the U-shaped convolutional neural network model in step B is a neural network model whose multiple convolutional layers are arranged in a U shape.
Further, in step B the deep convolutional neural network DenoiseNet is constructed based on a residual structure and dilated convolution;
the input data undergo several rounds of modular processing, each module consisting of a spatial convolution, batch normalization and a Leaky ReLU activation in sequence; the result of the repeated modules is combined with the result of the first module and output.
Further, the DenoiseNet network uses four rounds of modular processing: the data input to the DenoiseNet network first passes through one module and then splits into two paths, one path applying three further modules in succession to the result of the first module, and the other path being combined with the result of the third of these modules and output.
Further, the multi-scale module in step B applies three parallel processes to the data input to the module: a 3 × 3 spatial convolution with batch normalization and Leaky ReLU, a 5 × 5 spatial convolution with batch normalization and Leaky ReLU, and a 7 × 7 spatial convolution with batch normalization and Leaky ReLU;
among the three, the result of the 3 × 3 convolution path is added to the result of the 5 × 5 convolution path, and the result of the 5 × 5 convolution path is added to the result of the 7 × 7 convolution path.
Further, the final result of the multi-scale module is output to a residual attention module, which applies multi-layer feature processing to the input data based on a mask branch and a trunk branch, performs downsampling together with backward skip connections, and outputs the data.
The invention builds the RestoreNet-Plus network with a stage-by-stage scheme; the noise problem and the blur problem in the image restoration process are handled as follows:
for noise suppression, a U-shaped convolutional neural network model is adopted, and the deep convolutional neural network DenoiseNet is built by combining a residual structure with dilated convolution;
for deblurring, the deep convolutional neural network DeblurNet is built by combining an attention mechanism, residual modules and a multi-scale module;
example 1:
the software and hardware equipment adopted for training is as follows: 1 block of GeForce 2080Ti graphics card, Ubuntu 18.04.3 operating system, Python 3.7 programming language, PyTorch 1.2.0 deep learning framework, PyCharm compiling environment.
The target is imaged through optical synthetic aperture (interference) imaging to obtain blurred images as the data set input, and imaged through large-aperture imaging to obtain clear images as the data set label.
The data set is divided into a training set part, a validation set part and a test set part at a ratio of 8 : 1 : 1. The network is trained and tested according to the flowcharts of fig. 1 and fig. 2.
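The 8 : 1 : 1 division can be done with a simple index split. Shuffling before splitting is an assumption; the embodiment states only the ratio:

```python
import random

def split_dataset(items, ratios=(8, 1, 1), seed=0):
    """Split a sequence of sample indices 8:1:1 into training, validation
    and test parts. The shuffle (and its seed) is an illustrative choice."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(1000))
print(len(train), len(val), len(test))  # 800 100 100
```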
As shown in fig. 3 and fig. 4, the RestoreNet-Plus network based on the U-shaped convolutional neural network structure is established, the picture size is set to 256 × 256, and Adam is adopted as the optimization algorithm; the network is trained on the training set for 200 epochs, the hyper-parameters are tuned with the verification set, and the trained weight parameters are saved;
fig. 5 is a schematic diagram showing comparison between results of restoration of restoret-Plus constructed by using the scheme of the present invention and clear images, blurred images, and images restored by a wiener filter and a richard-richardson algorithm, which are taken under a weak noise environment, and six different types of typical objects provided by the embodiment of the present invention. Wherein, PSNR refers to peak signal-to-noise ratio, SSIM refers to structural similarity, MSSSIM refers to multi-scale structural similarity, and the three are image evaluation indexes. The image restoration work is performed on the image shown in fig. 5(b) by using the trained model, the restoration speed is about 25 hz, no parameter adjustment is needed, and the image restoration result is shown in fig. 5 (e). Fig. 5(c) and (d) show the restoration results of the wiener filter and the luci-richardson algorithm as comparison, and the restoration process adopts an accurate point spread function and accurate noise information as prior information, and iterates more than 50 times.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (8)
1. The image restoration processing method based on the optical synthetic aperture imaging is characterized by comprising the following steps of:
step A: constructing an image data set, wherein the image data set comprises data set input and a data set label, performing optical synthetic aperture imaging interference on a target through a visible light spectrum camera to obtain a fuzzy image as the data set input to obtain image data of the target, and performing large-aperture imaging on the target through the visible light spectrum camera to obtain a clear image as the data set label;
and B: establishing a network model, namely constructing a deep convolutional neural network denoise Net based on a residual error structure and cavity convolution by adopting a U-shaped convolutional neural network model; then, combining an attention mechanism, a residual error module and a multi-scale module to construct a deep convolutional neural network DeblurNet;
the DenoiseNet network is connected in series with the DeblurNet network; the DenoiseNet network suppresses noise in the image data, and the DeblurNet network performs image restoration for the blurring inherent in optical synthetic aperture imaging;
the DenoiseNet network and the DeblurNet network connected in series form the RestoreNet-Plus network;
step C: network training, namely training the RestoreNet-Plus network step by step with an Adam optimizer, using the image data set obtained in step A; in the process of training the DenoiseNet network first, random noise is added to the data set input to serve as the training input of the DenoiseNet network, and the data set input itself serves as the training label of the DenoiseNet network; in the process of training the DeblurNet network, the data set input is first processed by the DenoiseNet network to suppress noise, the output of the DenoiseNet network serves as the training input of the DeblurNet network, and the data set label serves as the training label of the DeblurNet network;
step D: and C, restoring the image data of the target in the step A by adopting the RestoreNet-Plus network trained in the step C.
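The step-by-step training of step C can be sketched as follows. This is a minimal illustration under stated assumptions: the tiny placeholder networks, the noise level `sigma`, the MSE loss, and the helper name `train_two_stage` are all hypothetical and stand in for the patent's actual architectures and loss.

```python
# Hypothetical sketch of the two-stage ("step-by-step") training of step C.
# The single-conv DenoiseNet/DeblurNet bodies are placeholders only.
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):            # placeholder for the denoising stage
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, x):
        return self.body(x)

class DeblurNet(nn.Module):             # placeholder for the deblurring stage
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, x):
        return self.body(x)

def train_two_stage(blurred, labels, sigma=0.01, steps=2):
    """Stage 1: DenoiseNet learns (blurred + noise) -> blurred.
       Stage 2: DeblurNet learns DenoiseNet(blurred) -> clear label."""
    denoise, deblur = DenoiseNet(), DeblurNet()
    loss_fn = nn.MSELoss()              # assumed loss; the patent does not specify it

    opt1 = torch.optim.Adam(denoise.parameters(), lr=1e-4)
    for _ in range(steps):              # stage 1: train DenoiseNet first
        noisy = blurred + sigma * torch.randn_like(blurred)
        opt1.zero_grad()
        loss = loss_fn(denoise(noisy), blurred)   # data set input is the label here
        loss.backward()
        opt1.step()

    opt2 = torch.optim.Adam(deblur.parameters(), lr=1e-4)
    for _ in range(steps):              # stage 2: train DeblurNet on denoised inputs
        with torch.no_grad():
            suppressed = denoise(blurred)         # DenoiseNet output is the input
        opt2.zero_grad()
        loss = loss_fn(deblur(suppressed), labels)  # clear image is the label
        loss.backward()
        opt2.step()
    return denoise, deblur
```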
2. The image restoration processing method based on optical synthetic aperture imaging according to claim 1, wherein the data set input comprises a training set part and a test set part, and the data set label serves as a verification set part;
an image data set is constructed for the target images to be restored, the image data set being divided into a training set part, a verification set part and a test set part;
the training set part and the verification set part are used in the training process of step C; the test set part serves as the data set input of step D and is fed directly into the RestoreNet-Plus network trained in step C, and the RestoreNet-Plus network directly outputs the restored clear images.
3. The image restoration processing method based on optical synthetic aperture imaging according to claim 1, wherein in step C, using the image data set obtained in step A, an Adam optimizer is used to train the RestoreNet-Plus network step by step; the learning rate is set to 10⁻⁴, the exponential decay factors for computing the first-order moment estimate and the second-order moment estimate are set to 0.9 and 0.999 respectively, the weight decay is set to decay exponentially, and the decay rate is set to 0.9.
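For illustration, the optimizer settings of claim 3 might be rendered in PyTorch as below. Reading "the weight decay is set to exponential decay" as an exponentially decaying learning-rate schedule is an assumption, and the stand-in model is hypothetical.

```python
# Hypothetical PyTorch rendering of the Adam settings of claim 3.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)   # stand-in for RestoreNet-Plus
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,                # learning rate 10^-4
    betas=(0.9, 0.999),     # exponential decay factors for 1st/2nd moment estimates
)
# Assumed interpretation: an exponentially decaying learning rate, decay rate 0.9.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
```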
4. The image restoration processing method based on optical synthetic aperture imaging according to claim 1, wherein the U-shaped convolutional neural network model in step B is a multi-layer convolutional neural network model with a U-shaped structure.
5. The image restoration processing method based on optical synthetic aperture imaging according to claim 1, wherein in step B, the deep convolutional neural network DenoiseNet is constructed based on a residual structure and dilated (hole) convolution;
the input data undergo multiple rounds of modular processing, each round consisting of a spatial convolution, batch normalization and a Leaky ReLU function in sequence, and the result of the repeated identical modular processing is merged with the result of the first modular processing and output.
6. The image restoration processing method based on optical synthetic aperture imaging according to claim 5, wherein the DenoiseNet network uses four rounds of modular processing: after the first modular processing of the data input into the DenoiseNet network, the result is passed along two paths, one of which applies three further consecutive rounds of modular processing to the first-round result, while the other is merged with the result of the third consecutive round and output.
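A minimal sketch of the four-round modular processing of claims 5-6, assuming the "merge" of the skip path is an element-wise residual addition and that the inner rounds use the dilated (hole) convolutions of claim 5; the channel count and dilation rates are illustrative.

```python
# Sketch of the DenoiseNet block of claims 5-6 (assumptions noted above).
import torch
import torch.nn as nn

def module(ch, dilation=1):
    # one "modular processing" round: spatial conv -> batch norm -> Leaky ReLU
    return nn.Sequential(
        nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class DenoiseBlock(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.m1 = module(ch)
        self.m2 = module(ch, dilation=2)   # dilated (hole) convolution
        self.m3 = module(ch, dilation=2)
        self.m4 = module(ch)

    def forward(self, x):
        f1 = self.m1(x)                    # first modular processing
        f = self.m4(self.m3(self.m2(f1)))  # three further consecutive rounds
        return f + f1                      # merge with the first-round result
```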
7. The image restoration processing method based on optical synthetic aperture imaging according to claim 1, wherein the multi-scale module in step B processes the data input into it along three parallel paths: a 3 × 3 spatial convolution followed by batch normalization and a Leaky ReLU function; a 5 × 5 spatial convolution followed by batch normalization and a Leaky ReLU function; and a 7 × 7 spatial convolution followed by batch normalization and a Leaky ReLU function;
among the three paths, the output of the 3 × 3 spatial convolution path is added to the output of the 5 × 5 spatial convolution path, and the output of the 5 × 5 spatial convolution path is added to the output of the 7 × 7 spatial convolution path.
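The parallel paths and cross-scale additions of claim 7 might look like the following sketch. Whether the module finally concatenates the three paths or forwards only the fused 7 × 7 path is not specified in the claim, so concatenation is assumed here; the channel count is illustrative.

```python
# Sketch of the multi-scale module of claim 7 (final concatenation is assumed).
import torch
import torch.nn as nn

def branch(ch, k):
    # one parallel path: k x k spatial conv -> batch norm -> Leaky ReLU
    return nn.Sequential(
        nn.Conv2d(ch, ch, k, padding=k // 2),
        nn.BatchNorm2d(ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class MultiScaleModule(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.b3 = branch(ch, 3)
        self.b5 = branch(ch, 5)
        self.b7 = branch(ch, 7)

    def forward(self, x):
        p3 = self.b3(x)
        p5 = self.b5(x) + p3          # 3x3 path output added to 5x5 path output
        p7 = self.b7(x) + p5          # 5x5 path output added to 7x7 path output
        return torch.cat([p3, p5, p7], dim=1)
```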
8. The method according to claim 7, wherein the final processing result of the multi-scale module is output to a residual attention module, and the residual attention module performs multiple rounds of multi-layer feature processing on the input data based on a mask branch and a trunk branch, performing down-sampling and backward skip connection before outputting the data.
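A residual attention module consistent with claim 8's mask/trunk description could be sketched as below, assuming the soft-mask design of a standard residual attention network. Reading the "backward skip connection" as the mask branch's up-sampling path back to full resolution is an assumption, and all layer sizes are illustrative.

```python
# Hypothetical residual attention module in the spirit of claim 8.
import torch
import torch.nn as nn

class ResidualAttention(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.trunk = nn.Sequential(            # trunk branch: plain feature processing
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.mask = nn.Sequential(             # mask branch: down-sample, then restore
            nn.MaxPool2d(2),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Sigmoid(),                      # soft attention weights in (0, 1)
        )

    def forward(self, x):
        t = self.trunk(x)
        m = self.mask(x)
        return t * (1 + m)                     # residual attention gating
```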
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110308146.2A CN112884677B (en) | 2021-03-23 | 2021-03-23 | Image restoration processing method based on optical synthetic aperture imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112884677A CN112884677A (en) | 2021-06-01 |
CN112884677B true CN112884677B (en) | 2022-05-24 |
Family
ID=76041799
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110308146.2A Active CN112884677B (en) | 2021-03-23 | 2021-03-23 | Image restoration processing method based on optical synthetic aperture imaging |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112884677B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11183606A (en) * | 1997-12-19 | 1999-07-09 | Mitsubishi Electric Corp | Radar signal processing device |
CN111383192A (en) * | 2020-02-18 | 2020-07-07 | 清华大学 | SAR-fused visible light remote sensing image defogging method |
CN112330549A (en) * | 2020-10-16 | 2021-02-05 | 西安工业大学 | Blind deconvolution network-based blurred image blind restoration method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10963737B2 (en) * | 2017-08-01 | 2021-03-30 | Retina-Al Health, Inc. | Systems and methods using weighted-ensemble supervised-learning for automatic detection of ophthalmic disease from images |
Non-Patent Citations (2)
Title |
---|
"Image restoration for synthetic aperture systems with a non-blind deconvolution algorithm via a deep convolutional neural network";Mei Hui等;《Optics Express》;20200323;全文 * |
"一种基于深度学习的光学合成孔径成像***图像复原方法";唐雎等;《光学学报》;20200831;全文 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||