CN115293983A - Self-adaptive image super-resolution restoration method fusing multi-level complementary features
- Publication number: CN115293983A
- Application number: CN202210937030.XA
- Authority: CN (China)
- Prior art keywords: resolution, network, image super-resolution, image, dynamic parameter
- Prior art date: 2022-08-05
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/77—Retouching; Inpainting; Scratch removal (G06T5/00—Image enhancement or restoration)
- G06N3/08—Learning methods (G06N3/02—Neural networks)
- G06T3/4053—Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution (G06T3/40—Scaling of whole images or parts thereof)
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/20081—Training; Learning (G06T2207/20—Special algorithmic details)
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20221—Image fusion; Image merging (G06T2207/20212—Image combination)
Abstract
The invention relates to the field of image super-resolution restoration and discloses a self-adaptive image super-resolution restoration method fusing multi-level complementary features, which comprises the following steps: modeling the image super-resolution mapping with a dynamic parameter neural network; extracting the multi-level color, gradient, texture and semantic features of the image with a feature extraction network; selecting and fusing complementary features using ranking distance and uncertainty measures; adaptively generating the dynamic parameters of the image super-resolution network from the fused multi-level complementary features; and adaptively adjusting the image super-resolution mapping model with the dynamic parameters through feature map modulation. By introducing multi-level complementary features into image super-resolution restoration and adaptively adjusting the mapping model, the method restores image texture information more accurately and vividly.
Description
Technical Field
The invention belongs to the field of image super-resolution, and particularly relates to a self-adaptive image super-resolution restoration method fusing multi-level complementary features.
Background
Image super-resolution restoration is a technique for obtaining a high-resolution image of a scene from a low-resolution image of the same scene by increasing the pixel density. Existing image super-resolution restoration techniques can be broadly classified into interpolation-based methods and sample-based methods. Because a large amount of high-frequency information is lost in a low-resolution image, sample-based methods, which can learn the high-frequency information needed for the high-resolution image from external samples, obtain better restoration results and are the mainstream in current image super-resolution research. Among them, deep-learning-based methods have an overwhelming performance advantage owing to their ability to model complex mappings, and have been a research hotspot in recent years.
Most deep-learning-based image super-resolution restoration methods use a fixed-parameter neural network to model the mapping between high-resolution and low-resolution images, ignoring the differences in this mapping across images, so the super-resolution result cannot effectively reconstruct the high-frequency information of the image. To address this problem, researchers have proposed adaptive image super-resolution restoration methods based on deep learning. These methods build a dynamic parameter neural network that adaptively adjusts the super-resolution mapping model according to the characteristics of the low-resolution image itself, realizing adaptive super-resolution restoration and further improving restoration performance. However, when current methods adaptively adjust the mapping model, they use only a single image feature; such a feature describes the high-to-low-resolution mapping of different image types with limitations and uncertainty, so restoration performance is difficult to improve further.
Because features at different levels are complementary when describing the image super-resolution mapping, fusing the multi-level complementary features of an image can more accurately distinguish and describe the mapping differences between images, and a more precise model adaptation mechanism can be constructed to obtain a more accurate and vivid restored image.
Disclosure of Invention
The invention aims to provide a self-adaptive image super-resolution restoration method fusing multi-level complementary features, which improves the accuracy and vividness of image super-resolution restoration results and solves two problems of existing methods: the poor representation power of static models, and the inability of dynamic model methods based on a single feature to accurately distinguish the super-resolution mapping differences between images.
The technical scheme of the invention is as follows:
the invention firstly discloses a self-adaptive image super-resolution restoration method fusing multi-level complementary features, which comprises the following steps:
s1, preparing a training data set and constructing a self-adaptive image super-resolution network, wherein the self-adaptive image super-resolution network comprises an image super-resolution backbone network and a dynamic parameter generation network;
s2, simultaneously inputting the low-resolution images into a super-resolution backbone network and a dynamic parameter generation network;
s3, generating dynamic parameters by extracting and fusing multi-level complementary features of the images through a dynamic parameter generation network;
s4, the image super-resolution backbone network adaptively adjusts the image super-resolution mapping model according to the dynamic parameters and outputs the adaptive image super-resolution restoration result;
and S5, optimizing the network on the training data set with the loss function until the network converges.
Further, in the step S1:
preparing a plurality of groups of paired high-resolution and low-resolution images, and performing data augmentation to form the training data set;
the image super-resolution backbone network is a dynamic parameter neural network comprising a feature extraction module, a feature mapping module and an image up-sampling module; the feature mapping module contains N dynamic parameter layers, while the feature extraction module and the image up-sampling module contain none;
after model training, the static parameter layers of the image super-resolution backbone network are fixed, while the dynamic parameter layer parameters γ_g and β_g, g ∈ {1, 2, …, N}, are adaptively generated from the input low-resolution image I_LR by the dynamic parameter generation network;
The dynamic parameter generation network is a static neural network and comprises three modules of multi-level feature extraction, complementary feature selection and fusion and dynamic parameter generation.
Further, in step S2:
the input of the image super-resolution backbone network G_θ is the low-resolution image I_LR, and its output is the image super-resolution restoration result I_S; both the width and height of I_S are S times those of I_LR;
the input of the dynamic parameter generation network is the low-resolution image I_LR, and its output is the N sets of dynamic parameter layer parameters {γ_g, β_g}.
Further, in step S3:
the multi-level feature extraction network in the dynamic parameter generation network uses convolutional neural networks to extract the color, gradient, texture and semantic features of the image;
the difference between features is measured by the ranking distance between the response rankings produced by different convolution kernels in the image super-resolution backbone network on the same sample set, and complementary features are selected accordingly;
the uncertainty of the complementary features {F_1, F_2, …, F_K} is measured by an evidence function, and the complementary features are weighted and fused to obtain the fused feature F_c; from F_c, N sets of dynamic parameter layer parameters {γ_g, β_g} are obtained through convolutional transformation.
Further, in step S4:
according to the N sets of dynamic parameter layer parameters, the mapping model of the image super-resolution backbone network is adaptively adjusted through feature map modulation: x̃_g = γ_g ⊙ x_g + β_g, where ⊙ denotes pixel-wise multiplication, x_g is the input feature map of the g-th dynamic parameter layer, and x̃_g is its output feature map;
the output I_S of the image super-resolution backbone network is the self-adaptive image super-resolution restoration result.
Further, in step S5:
the loss function is calculated from the high-resolution ground truth in the training data and the image super-resolution restoration result, and the self-adaptive image super-resolution network is iteratively optimized until the network converges.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method generates dynamic parameters from the multi-level complementary features of the image; these features complement one another in describing the image super-resolution mapping and distinguish the mapping differences between image types more effectively than any single feature.
(2) The method fuses the multi-level features according to their complementarity, proposing a complementary feature selection and fusion scheme based on ranking distance and uncertainty measures, so that the multi-level features are used more effectively during fusion, which benefits the subsequent adaptive generation of the image super-resolution mapping model.
(3) The dynamic parameter layers act through feature map modulation, so prior features can be flexibly introduced into a static image super-resolution model and the backbone network can be adjusted adaptively. The method can thus flexibly reuse the proven results of existing static-model image super-resolution methods and further improve restoration performance on that basis.
Drawings
To fully reflect the technical features of the embodiments of the invention, the drawings used in the embodiments are briefly described below.
Fig. 1 is a general flowchart of an adaptive image super-resolution restoration method fusing multi-level complementary features according to the present invention.
Fig. 2 is a dynamic model structure diagram of the adaptive image super-resolution restoration method with multi-level complementary features fused according to the present invention.
FIG. 3 is a comparison of the results of the adaptive image super-resolution restoration method fusing multi-level complementary features provided by the invention with other image super-resolution restoration methods.
Detailed Description
The technical solutions of the invention are described in detail below with reference to the accompanying drawings of the embodiments, so that those skilled in the art can more easily understand its technical features. It should be noted that the specific embodiments listed herein only illustrate the invention and do not limit its scope.
The invention provides a self-adaptive image super-resolution restoration method fusing multi-level complementary features, and FIG. 1 is a general flow chart of the self-adaptive image super-resolution restoration method fusing multi-level complementary features, which mainly comprises the following steps:
s1, preparing a training data set and constructing a self-adaptive image super-resolution network, wherein the self-adaptive image super-resolution network comprises an image super-resolution backbone network and a dynamic parameter generation network;
s2, simultaneously inputting the low-resolution images into a super-resolution backbone network and a dynamic parameter generation network;
s3, generating dynamic parameters by extracting and fusing multi-level complementary features of the images through a dynamic parameter generation network;
s4, the image super-resolution backbone network adaptively adjusts the image super-resolution mapping model according to the dynamic parameters and outputs the adaptive image super-resolution restoration result;
and S5, optimizing the self-adaptive image super-resolution network on the training data set by using the loss function until the network converges.
Preferably, in S1:
prepare a plurality of groups of paired high-resolution and low-resolution images, and perform data augmentation to obtain the training data set.
The high-resolution images in the data set are clear images captured by high-definition imaging equipment, and each corresponding low-resolution image is obtained by down-sampling the high-resolution image.
The data augmentation enlarges the sample set by scaling, mirroring, random rotation and similar operations.
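As an illustrative sketch of the data preparation described above, the following NumPy fragment pairs a down-sampling step with mirror and rotation augmentation. The box-filter kernel and all function names are assumptions for illustration; the patent does not specify the down-sampling kernel or the augmentation implementation.

```python
import numpy as np

def box_downsample(hr, s):
    """Produce the low-resolution counterpart of an (H, W, C) image by
    averaging non-overlapping s x s blocks (an assumed kernel; the text
    only states that LR images are obtained by down-sampling)."""
    h, w, c = hr.shape
    hr = hr[:h - h % s, :w - w % s]          # crop to a multiple of s
    return hr.reshape(h // s, s, w // s, s, c).mean(axis=(1, 3))

def augment_pair(hr, lr, k):
    """Augment an aligned (HR, LR) pair: original, horizontal mirror,
    and a k * 90-degree rotation applied identically to both images,
    mirroring the scaling/mirror/rotation augmentation described above."""
    return [
        (hr, lr),
        (hr[:, ::-1], lr[:, ::-1]),          # horizontal mirror
        (np.rot90(hr, k), np.rot90(lr, k)),  # 90-degree rotation
    ]
```

Keeping the HR and LR images spatially aligned under every transform is the essential constraint here, since the network learns a pixel-wise mapping between the pair.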
Further, fig. 2 shows a structure diagram of the adaptive image super-resolution network, which includes:
the image super-resolution backbone network comprises a feature extraction module, a feature mapping module and an image up-sampling module; the feature mapping module contains N dynamic parameter layers, while the feature extraction module and the image up-sampling module have only static parameter layers;
after model training, the static parameter layers of the image super-resolution backbone network are fixed, while the dynamic parameter layer parameters γ_g and β_g, g ∈ {1, 2, …, N}, are adaptively generated from the input low-resolution image I_LR by the dynamic parameter generation network;
the dynamic parameter generation network is a static neural network comprising three modules: multi-level feature extraction, complementary feature selection and fusion, and dynamic parameter generation.
Preferably, in S2:
the input of the image super-resolution backbone network is the low-resolution image I_LR, and its output is the image super-resolution restoration result I_S; both the width and height of I_S are S times those of I_LR;
the input of the dynamic parameter generation network is the low-resolution image I_LR, and its output is the N sets of dynamic parameter layer parameters {γ_g, β_g};
Preferably, in S3:
the multi-level feature extraction network in the dynamic parameter generation network uses convolutional neural networks to extract the color, gradient, texture and semantic features of the image;
the difference between features is measured by the ranking distance between the response rankings produced by different convolution kernels in the image super-resolution backbone network on the same sample set, and complementary features are selected accordingly;
for a given image sample set {I_1, I_2, …, I_S} and Q convolution kernels used to extract image features, the response value e_ij of each kernel on each sample is computed, where i ∈ [1, S], j ∈ [1, Q]. For convolution kernel C_j, sorting the samples by the magnitude of e_ij (i ∈ [1, S]) yields the ranking V_j, whose i-th entry is the index of the sample producing the i-th largest response to C_j.
The difference between convolution kernels C_p and C_q can then be measured by the ranking distance d(V_p, V_q) between the rankings V_p and V_q:
the larger the difference between the rankings V_p and V_q, the larger the ranking distance and the stronger the complementarity. Finally, the features extracted by K groups of convolution kernels, every pair of whose rankings is separated by a distance greater than a threshold T, are selected, completing the complementary feature selection.
Measuring complementary features by evidence function { F 1 ,F2,…,F K Uncertainty of the feature, fusion of complementary features to obtain a fusion feature F c ;
The structural evidence function m represents the pair of features F i The support degree of (c):
wherein, the first and the second end of the pipe are connected with each other,andare respectively F i For low resolution samples L j And the response values of the high resolution samples Hj.
Fusing the complementary features to obtain a fused feature F c :
F c =concat(m(F i )F i ) (4)
Wherein concat (. Cndot.) is a characteristic connection.
From the fused feature F_c, the N sets of dynamic parameter layer parameters {γ_g, β_g} are obtained through convolutional transformation.
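A minimal sketch of the weighted fusion in formula (4), assuming the evidence values m(F_i) have already been computed (the evidence function itself is given as an equation in the original specification and is not reproduced here); normalizing the supports into weights is also an assumption for illustration.

```python
import numpy as np

def fuse_complementary(features, support):
    """Fuse K complementary feature vectors into F_c = concat(m(F_i) * F_i):
    each feature is scaled by its (normalized) evidence value and the
    weighted features are concatenated into one fused vector."""
    m = np.asarray(support, dtype=float)
    m = m / m.sum()                      # assumed normalization of evidence
    return np.concatenate([w * f for w, f in zip(m, features)])
```

Concatenation (rather than summation) preserves each feature level as a distinct slice of F_c, which the subsequent convolutional transformation can weight independently when producing {γ_g, β_g}.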
Preferably, in S4:
according to the N sets of dynamic parameter layer parameters, the mapping model of the image super-resolution backbone network is adaptively adjusted through feature map modulation:
x̃_g = γ_g ⊙ x_g + β_g
where ⊙ denotes pixel-wise multiplication, x_g is the input feature map of the g-th dynamic parameter layer, and x̃_g is its output feature map.
The output I_S of the image super-resolution backbone network is the self-adaptive image super-resolution restoration result.
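The feature-mapping stage described above can be sketched as a forward pass that alternates static layers with the N dynamic parameter layers. The static layers are stubbed out with plain callables here, and the affine form gamma * x + beta follows the modulation given in S4; everything else is an illustrative assumption.

```python
import numpy as np

def backbone_forward(x, static_layers, dyn_params):
    """Sketch of the feature-mapping stage: after each static layer,
    the g-th dynamic parameter layer applies the pixel-wise affine
    modulation x_out = gamma_g * x + beta_g (broadcast over the map)."""
    for layer, (gamma, beta) in zip(static_layers, dyn_params):
        x = layer(x)                 # static (fixed-parameter) layer
        x = gamma * x + beta         # adaptive adjustment from the LR image
    return x
```

For example, with one static layer that doubles its input and dynamic parameters gamma = 3, beta = 1, an all-ones feature map is mapped to all sevens; swapping in different {γ_g, β_g} changes the mapping without retraining the static weights.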
Preferably, in S5:
the loss function includes a pixel loss and a feature loss.
Both losses carry weights: the weight of the pixel loss is set to 1 and the weight of the feature loss to 0.1.
The pixel loss Loss_P characterizes the pixel accuracy of the image super-resolution restoration result and is computed from the per-pixel difference between I_S and I_HR, where I_S is the image super-resolution restoration result, I_HR is the high-resolution image in the training set, and H and W denote the height and width of the image.
The feature loss Loss_F characterizes the feature accuracy of the image super-resolution restoration result and is computed from the difference between the extracted features of I_S and I_HR, where α denotes the feature extraction and k indexes the feature channels.
The self-adaptive image super-resolution network is iteratively optimized according to the loss function until the network converges.
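A sketch of the weighted objective in S5, using the weights 1 and 0.1 given above. The mean-absolute-error (L1) form of both terms is an assumption for illustration, since the exact formulas appear as equations in the original specification; the feature lists passed in stand in for the channels α_k(I_S) and α_k(I_HR).

```python
import numpy as np

def pixel_loss(sr, hr):
    """Mean per-pixel error over the H x W image (L1 form assumed)."""
    return float(np.abs(sr - hr).mean())

def feature_loss(sr_feats, hr_feats):
    """Mean discrepancy between corresponding feature channels k."""
    return float(np.mean([np.abs(a - b).mean()
                          for a, b in zip(sr_feats, hr_feats)]))

def total_loss(sr, hr, sr_feats, hr_feats, w_pix=1.0, w_feat=0.1):
    """Weighted sum Loss = 1.0 * Loss_P + 0.1 * Loss_F, with the
    weights stated in the description."""
    return w_pix * pixel_loss(sr, hr) + w_feat * feature_loss(sr_feats, hr_feats)
```

The small feature-loss weight keeps pixel fidelity dominant while still penalizing feature-level (texture) mismatch.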
Further, FIG. 3 shows a comparative experiment between the method of the invention and prior methods, where LR is the low-resolution image and HR is the high-resolution ground truth; compared with the restoration results of the Bicubic, SRCNN, SFT-GAN and EDSR methods, the method of the invention produces more realistic and vivid texture.
The above description is only a preferred embodiment of the invention and is not intended to limit it in any way; any simple modification or equivalent variation of the above embodiment made according to the technical essence of the invention falls within the scope of the invention.
Claims (6)
1. A self-adaptive image super-resolution restoration method fusing multi-level complementary features, characterized by comprising the following steps:
s1, preparing a training data set and constructing a self-adaptive image super-resolution network, wherein the self-adaptive image super-resolution network comprises an image super-resolution backbone network and a dynamic parameter generation network;
s2, simultaneously inputting the low-resolution images into a super-resolution backbone network and a dynamic parameter generation network;
s3, generating dynamic parameters by extracting and fusing multi-level complementary features of the images through a dynamic parameter generation network;
s4, the image super-resolution backbone network adaptively adjusts an image super-resolution mapping model according to the dynamic parameters and outputs an adaptive image super-resolution restoration result;
and S5, optimizing the self-adaptive image super-resolution network on the training data set by using the loss function until the network converges.
2. The method for self-adaptive image super-resolution restoration fusing multi-level complementary features according to claim 1, wherein in S1,
preparing a plurality of groups of paired high-resolution and low-resolution images, and performing data augmentation to form a training data set;
the image super-resolution backbone network is a dynamic parameter neural network comprising a feature extraction module, a feature mapping module and an image up-sampling module; the feature mapping module contains N dynamic parameter layers, while the feature extraction module and the image up-sampling module contain none;
after model training, the static parameter layers of the image super-resolution backbone network are fixed, while the dynamic parameter layer parameters γ_g and β_g, g ∈ {1, 2, …, N}, are adaptively generated from the input low-resolution image I_LR by the dynamic parameter generation network;
the dynamic parameter generation network is a static neural network comprising three modules: multi-level feature extraction, complementary feature selection and fusion, and dynamic parameter generation.
3. The adaptive image super-resolution restoration method fused with multi-level complementary features according to claim 1, wherein in S2,
the input of the image super-resolution backbone network is a low-resolution image I_LR, and its output is the image super-resolution restoration result I_S; both the width and height of I_S are S times those of I_LR;
the input of the dynamic parameter generation network is the low-resolution image I_LR, and its output is N sets of dynamic parameter layer parameters {γ_g, β_g}.
4. The method for self-adaptive image super-resolution restoration fusing multi-level complementary features according to claim 1, wherein in S3,
the multi-level feature extraction network in the dynamic parameter generation network uses convolutional neural networks to extract the color, gradient, texture and semantic features of the image;
the differences between features are measured by the ranking distance between the response rankings produced by different convolution kernels in the image super-resolution backbone network on the same sample set, and K complementary features are selected accordingly;
the uncertainty of the complementary features {F_1, F_2, …, F_K} is measured by an evidence function, and the complementary features are weighted and fused to obtain a fused feature F_c;
from the fused feature F_c, N sets of dynamic parameter layer parameters {γ_g, β_g} are obtained through convolutional transformation.
5. The adaptive image super-resolution restoration method fused with multi-level complementary features according to claim 1, wherein in S4,
according to the N sets of dynamic parameter layer parameters, the mapping model of the image super-resolution backbone network is adaptively adjusted through feature map modulation;
the output I_S of the image super-resolution backbone network is the adaptive image super-resolution restoration result.
6. The method for self-adaptive image super-resolution restoration fusing multi-level complementary features according to claim 1, wherein in S5,
calculating a loss function from the high-resolution ground truth in the training data and the image super-resolution restoration result, and iteratively optimizing the self-adaptive image super-resolution network until the network converges.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210937030.XA CN115293983A (en) | 2022-08-05 | 2022-08-05 | Self-adaptive image super-resolution restoration method fusing multi-level complementary features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210937030.XA CN115293983A (en) | 2022-08-05 | 2022-08-05 | Self-adaptive image super-resolution restoration method fusing multi-level complementary features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115293983A true CN115293983A (en) | 2022-11-04 |
Family
ID=83828952
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210937030.XA Pending CN115293983A (en) | 2022-08-05 | 2022-08-05 | Self-adaptive image super-resolution restoration method fusing multi-level complementary features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115293983A (en) |
- 2022-08-05: Application CN202210937030.XA filed; published as CN115293983A (status: pending)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117974634A (en) * | 2024-03-28 | 2024-05-03 | 南京邮电大学 | Evidence deep learning-based reliable detection method for anchor-frame-free surface defects |
CN117974634B (en) * | 2024-03-28 | 2024-06-04 | 南京邮电大学 | Evidence deep learning-based reliable detection method for anchor-frame-free surface defects |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111754403B (en) | Image super-resolution reconstruction method based on residual learning | |
CN113160234B (en) | Unsupervised remote sensing image semantic segmentation method based on super-resolution and domain self-adaptation | |
CN109727195B (en) | Image super-resolution reconstruction method | |
CN111524135A (en) | Image enhancement-based method and system for detecting defects of small hardware fittings of power transmission line | |
CN110675321A (en) | Super-resolution image reconstruction method based on progressive depth residual error network | |
CN112347970B (en) | Remote sensing image ground object identification method based on graph convolution neural network | |
CN110349087B (en) | RGB-D image high-quality grid generation method based on adaptive convolution | |
CN112801904B (en) | Hybrid degraded image enhancement method based on convolutional neural network | |
CN114331842B (en) | DEM super-resolution reconstruction method combining topographic features | |
CN112884668A (en) | Lightweight low-light image enhancement method based on multiple scales | |
CN111402138A (en) | Image super-resolution reconstruction method of supervised convolutional neural network based on multi-scale feature extraction fusion | |
CN113177592B (en) | Image segmentation method and device, computer equipment and storage medium | |
CN111861884A (en) | Satellite cloud image super-resolution reconstruction method based on deep learning | |
CN113538234A (en) | Remote sensing image super-resolution reconstruction method based on lightweight generation model | |
CN111798469A (en) | Digital image small data set semantic segmentation method based on deep convolutional neural network | |
CN116486074A (en) | Medical image segmentation method based on local and global context information coding | |
CN115293983A (en) | Self-adaptive image super-resolution restoration method fusing multi-level complementary features | |
WO2022206149A1 (en) | Three-dimensional spectrum situation completion method and apparatus based on generative adversarial network | |
CN114663880A (en) | Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism | |
CN117115359B (en) | Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion | |
CN115511705A (en) | Image super-resolution reconstruction method based on deformable residual convolution neural network | |
CN111696167A (en) | Single image super-resolution reconstruction method guided by self-example learning | |
CN114677281B (en) | FIB-SEM super-resolution method based on generation of countermeasure network | |
CN114862679A (en) | Single-image super-resolution reconstruction method based on residual error generation countermeasure network | |
CN115482434A (en) | Small sample high-quality generation method based on multi-scale generation countermeasure network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||