CN108921783B - Satellite image super-resolution reconstruction method based on mixed loss function constraint


Info

Publication number
CN108921783B
CN201810556589.1A, CN108921783B
Authority
CN
China
Prior art keywords
image
network
resolution
loss function
constraint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810556589.1A
Other languages
Chinese (zh)
Other versions
CN108921783A (en)
Inventor
王中元
江奎
易鹏
韩镇
邵振峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201810556589.1A
Publication of CN108921783A
Application granted
Publication of CN108921783B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a satellite image super-resolution reconstruction method based on a mixed loss function constraint, which comprises two processes: non-local network construction and staged training under a mixed L1/L2 norm constraint. In the non-local network construction, for each non-target region the correlation with the feature vector of the current target region is calculated, the description information related to the target is extracted through convolution, and the current input is enhanced with it, improving the expression capability of the network. In the staged L1/L2 training, the different sparsity properties of the L1 and L2 norms over different levels of feature information are exploited by using the two norms as the target loss function in stages, which improves both the convergence speed and the performance of the network. The invention greatly enhances the feature expression capability of the deep learning network and can reconstruct high-resolution satellite images with fine textures.

Description

Satellite image super-resolution reconstruction method based on mixed loss function constraint
Technical Field
The invention belongs to the technical field of digital image processing, relates to a satellite image super-resolution reconstruction method, and particularly relates to a satellite image super-resolution reconstruction method using a non-local network constrained by a mixed L1/L2 norm loss.
Background
The spatial resolution of satellite images is an important index of a satellite's imaging capability, and obtaining higher resolution with existing hardware has become a development goal for satellites in many countries. Applying super-resolution technology in the remote sensing field can improve the spatial resolution of satellites; alternatively, it allows the focal length of the optical instrument to be reduced while keeping the resolution unchanged, so that the satellite camera can be miniaturized and its volume and weight reduced.
The video satellite is a novel remote sensing satellite, can capture continuous dynamic videos instead of static images, and is very suitable for observing large dynamic targets such as ships, airplanes and the like. Video satellites achieve high temporal resolution at the expense of spatial resolution compared to traditional remote sensing satellites, so it is necessary to recover their ground resolution and fine target structure by means of image super-resolution.
Image super-resolution technology has evolved from early interpolation algorithms to current deep-learning-based methods, markedly improving reconstruction performance. Although interpolation-based methods are fast, the resulting images lack detail, so interpolation is unsuitable for applications such as video satellite imagery that require high-precision detail textures. The SA (Shift and Add) method proposed by Elad et al. obtains a high-resolution image through a non-iterative fusion of information, based on the assumptions of translation and space-invariant blurring, but the algorithm requires the low-resolution image sequence to satisfy a specific translation relation, otherwise 'black grid' artifacts easily appear. Recently, Patrick Hagerty et al. proposed a deep-learning-based approach that embeds the details of high-resolution images in a deep neural network and extracts these details when enhancing similar images. However, this method neither considers nor exploits the spatial self-similarity of ground objects over the large coverage of video satellite images, and the image's own information is underused, so the super-resolution result depends excessively on the network model parameters and generalizes poorly.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a satellite image super-resolution reconstruction method based on a mixed loss function constraint, in which non-local enhancement blocks are inserted into a residual network to make full use of the spatial self-similarity of satellite images, obtaining useful information beyond the receptive field of the target region to enhance the current region, while the L1 and L2 norms are applied as constraints in stages to obtain a fine reconstruction result.
The technical scheme adopted by the invention is as follows: a satellite image super-resolution reconstruction method based on a mixed loss function constraint, comprising the following steps:
Step 1: selecting a number of satellite images as training samples, cropping image blocks of N×N pixels from each frame as data samples, and down-sampling the data samples by a factor of n to obtain low-resolution image blocks of the corresponding size M×M as the input of a residual network, where N = n × M;
Step 2: extracting image features through convolution kernels;
Step 3: feeding the output information of each residual block into a non-local block, which performs feature enhancement on the input;
Step 4: feeding the obtained feature information into the reconstruction layer, using the L1 and L2 norms in stages as the loss function of the network to optimize the network parameters, and outputting a residual map;
Step 5: at the same time, enlarging the original input by interpolation to obtain an interpolated image of the same size as the high-resolution image;
Step 6: adding the interpolated image to the residual map learned by the residual network to obtain the final super-resolution output image.
By considering non-local input information and applying the L1 and L2 norms in a targeted, staged manner for feature learning, the invention greatly enhances the expression capability of the network and can produce sharp super-resolved images.
Drawings
Fig. 1 is a schematic diagram of a non-local network block of the present invention.
Detailed Description
In order to facilitate understanding and implementation of the present invention by those of ordinary skill in the art, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are merely illustrative and explanatory and do not limit the invention.
Referring to Fig. 1, the satellite image super-resolution reconstruction method based on a mixed loss function constraint provided by the present invention comprises the following steps:
Step 1: selecting a number of satellite images as training samples, cropping image blocks of N×N pixels (N = 128 in this embodiment) from each frame as data samples, and down-sampling the data samples by a factor of n to obtain low-resolution image blocks of the corresponding size M×M (M = 32 in this embodiment) as the input of the residual network, where N = n × M;
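As a minimal sketch of this sampling step (assuming bicubic downsampling and the scale factor n = 4 implied by N = 128 and M = 32; the function and parameter names are illustrative, not from the patent):

```python
import numpy as np
from PIL import Image

def make_training_pair(frame, patch=128, scale=4, left=0, top=0):
    """Crop one N x N high-resolution block and bicubic-downsample it by
    a factor of n = scale to the M x M low-resolution network input."""
    hr = frame.crop((left, top, left + patch, top + patch))          # N x N sample
    lr = hr.resize((patch // scale, patch // scale), Image.BICUBIC)  # M x M input
    return np.asarray(lr), np.asarray(hr)

# e.g.: lr_patch, hr_patch = make_training_pair(Image.open("frame.png"), left=256, top=256)
```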
step 2: extracting image features through a convolution kernel;
Step 3: feeding the output information of each residual block into a non-local block, which performs feature enhancement on the input by adding extra non-local information;
To achieve better results, the non-local blocks are inserted after the shallow residual blocks so as to enhance the receptive field of the preceding convolution layers.
In this embodiment, a non-local network is adopted to enhance the features. During network training, the association vector between the non-local region and the target region is determined through a softmax function using formula (1), and the contribution W_z·y_i of the non-local region y_i to the target region is calculated by weighting:
z_i = W_z y_i + x_i    (1)
where x_i denotes the residual connection, which is also the input of the non-local block; z_i denotes the information output enhanced by the non-local block; y_i corresponds to the regions of the image block other than the target region; and W_z represents the contribution of the different image-block regions to the current target region, with the weights obtained through adaptive learning of the network.
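One plausible PyTorch realization of such a non-local block is sketched below, using the familiar embedded-Gaussian affinity computed with a softmax; the 1×1 convolutions, layer sizes, and names are illustrative assumptions rather than the patented configuration. A block of this kind would be placed after each shallow residual block, as described above.

```python
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Enhance each position with a softmax-weighted sum over all other
    positions, apply W_z, and add the residual input x_i (formula (1))."""
    def __init__(self, channels):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # embeds the target region
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # embeds the non-local regions y_i
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.w_z = nn.Conv2d(inter, channels, kernel_size=1)    # W_z: learned contribution weights

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2)                        # (b, c', h*w)
        k = self.phi(x).flatten(2)                          # (b, c', h*w)
        v = self.g(x).flatten(2)                            # (b, c', h*w)
        attn = F.softmax(q.transpose(1, 2) @ k, dim=-1)     # (b, h*w, h*w) region affinities
        y = (v @ attn.transpose(1, 2)).view(b, -1, h, w)    # weighted non-local features
        return self.w_z(y) + x                              # z_i = W_z y_i + x_i
```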
Step 4: feeding the obtained feature information into the reconstruction layer, using the L1 and L2 norms in stages as the loss function of the network to optimize the network parameters, and outputting a residual map;
due to the sparse characteristic of the L1 norm, most of the eigenvector coefficients are 0, and only a small part of the eigenvectors are utilized. For this reason, L1 was used earlier as a loss function of the network to learn the characteristics of the low and medium frequency levels to obtain smoother results. The L2 norm is also sparse, but its sparse coefficient is close to 0 but not equal to 0, so that more non-zero feature vectors can be obtained, i.e. information of high frequency level can be learned. Therefore, at the late stage of network optimization, the form of L2 and L1 norm joint constraint is adopted in order to reconstruct better results.
In this embodiment, the mixed L1/L2 norm is applied as a constraint in stages, with the two loss functions finally used as a unified constraint; the representation error of the reconstructed image signal is controlled by formula (2):
LOSS = ||y - Wx||_1 + λ·||y - Wx||_2^2    (2)
where the first term ||y - Wx||_1 represents the L1 loss function and the second term λ·||y - Wx||_2^2 represents the L2 loss, λ being its weight; x corresponds to the initial input of the network, i.e. the low-resolution image; y is the high-resolution supervision image; W represents the filter parameters of the super-resolution network and is also the reconstruction coefficient of the high-resolution image; LOSS denotes the overall loss function of the network, comprising the L1 loss and the L2 loss;
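A sketch of this staged constraint (the switch point and the weight λ are illustrative assumptions; the patent does not fix their values):

```python
import torch.nn.functional as F

def mixed_loss(pred, target, epoch, switch_epoch=50, lam=0.5):
    """Stage 1: L1 only, for smooth low/mid-frequency learning.
    Stage 2: joint L1 + lambda * L2, to recover high-frequency detail."""
    l1 = F.l1_loss(pred, target)
    if epoch < switch_epoch:
        return l1
    return l1 + lam * F.mse_loss(pred, target)  # mse_loss is the squared-L2 term
```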
obtaining an output layer (r) related to the amplification factor using a reconstruction by a sub-pixel convolution operation2C, wherein r is an amplification coefficient, and C is the number of color channels of the output image), and reconstructing to obtain a residual error map; the reconstruction is performed by adopting the sub-pixel convolution operation, which is different from the traditional deconvolution operation, so that the structural information of high-level features can be retained to the maximum extent, and finer detail textures can be obtained.
Step 5: at the same time, enlarging the original input by bicubic interpolation to obtain an interpolated image of the same size as the high-resolution image;
Step 6: adding the interpolated image to the residual map learned by the residual network to obtain the final super-resolution output image.
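Steps 5 and 6 can be sketched together as follows (assuming the residual network outputs the residual map at the target resolution; the names are illustrative):

```python
import torch.nn.functional as F

def super_resolve(lr_image, residual_net, scale=4):
    """Bicubic-upsample the low-resolution input to the target size,
    then add the residual map predicted by the residual network."""
    upsampled = F.interpolate(lr_image, scale_factor=scale,
                              mode='bicubic', align_corners=False)
    return upsampled + residual_net(lr_image)
```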
The invention extracts feature information related to the current target region from non-local regions and uses it to enhance the input information, greatly improving the expression capability of the shallow network, and obtains a better-performing super-resolution result while maintaining computational efficiency.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (3)

1. A satellite image super-resolution reconstruction method based on a mixed loss function constraint, characterized by comprising the following steps:
Step 1: selecting a number of satellite images as training samples, cropping image blocks of N×N pixels from each frame as data samples, and down-sampling the data samples by a factor of n to obtain low-resolution image blocks of the corresponding size M×M as the input of a residual network, where N = n × M;
step 2: extracting image features through a convolution kernel;
Step 3: feeding the output information of each residual block into a non-local block, which performs feature enhancement on the input;
the method comprises the steps of adopting a non-local network to enhance features, utilizing equation (1) in the network training process, determining an association vector between a non-local area and a target area through a softmax function, and calculating the non-local area y through a weighting modeiA contribution to the target area;
zi=Wzyi+xi (1)
wherein x isiRepresenting a residual join, also an input of non-local blocks, ziRepresenting information output enhanced by non-local blocks, yiCorresponding to areas of the image block, W, differing from the target areazRepresenting the contribution degree of different image block regions to the current target region, and obtaining the weight in a network self-adaptive learning mode;
Step 4: feeding the feature information after the feature enhancement of step 3 into a reconstruction layer, using the L1 and L2 norms in stages as the loss function of the network to optimize the network parameters, and outputting a residual map;
the mixed paradigm of L1 and L2 is adopted for carrying out constraint in stages, and finally two loss functions are used for carrying out unified constraint, and the expression error of the reconstructed image signal is controlled by using the formula (2);
Figure FDA0003521555240000011
wherein the first part | y-Wx | ceiling1Representing the L1 loss function, second part
Figure FDA0003521555240000012
Represents the L2 loss, λ represents the weight; x corresponds to the initial input to the network, i.e. the low resolution image; y is a high-resolution monitoring image, W represents parameters of a filter in the super-resolution network and is a reconstruction coefficient of the high-resolution image, and LOSS represents an overall LOSS function of the network and comprises an L1 LOSS function and an L2 LOSS function;
Step 5: at the same time, enlarging the satellite image data selected in step 1 by interpolation to obtain an interpolated image of the same size as the high-resolution image;
Step 6: adding the interpolated image to the residual map learned by the residual network to obtain the final super-resolution output image.
2. The satellite image super-resolution reconstruction method based on a mixed loss function constraint according to claim 1, wherein: in step 4, reconstruction by a sub-pixel convolution operation is used to obtain an output layer related to the amplification factor, and the residual map is obtained by reconstruction.
3. The satellite image super-resolution reconstruction method based on a mixed loss function constraint according to claim 1, wherein: in step 5, the original input is enlarged by bicubic interpolation.
CN201810556589.1A 2018-06-01 2018-06-01 Satellite image super-resolution reconstruction method based on mixed loss function constraint Active CN108921783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810556589.1A CN108921783B (en) 2018-06-01 2018-06-01 Satellite image super-resolution reconstruction method based on mixed loss function constraint


Publications (2)

Publication Number Publication Date
CN108921783A CN108921783A (en) 2018-11-30
CN108921783B (en) 2022-04-15

Family

ID=64418700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810556589.1A Active CN108921783B (en) 2018-06-01 2018-06-01 Satellite image super-resolution reconstruction method based on mixed loss function constraint

Country Status (1)

Country Link
CN (1) CN108921783B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353944B (en) * 2018-12-20 2024-05-28 深圳市中兴微电子技术有限公司 Image reconstruction method, device and computer readable storage medium
CN109671022B (en) * 2019-01-22 2022-11-18 北京理工大学 Image texture super-resolution enhancement method based on depth feature translation network
CN110245310B (en) * 2019-03-06 2023-10-13 腾讯科技(深圳)有限公司 Object behavior analysis method, device and storage medium
CN110009568A * 2019-04-10 2019-07-12 大连民族大学 Generator construction method for super-resolution reconstruction of Manchu-language images
CN111010566A (en) * 2019-12-04 2020-04-14 杭州皮克皮克科技有限公司 Non-local network-based video compression distortion restoration method and system
TWI733341B (en) * 2020-02-20 2021-07-11 瑞昱半導體股份有限公司 Method and system for image scaling and enhancement
CN113313636A (en) * 2020-02-27 2021-08-27 瑞昱半导体股份有限公司 Method and apparatus for image magnification and enhancement
CN118037549B (en) * 2024-04-11 2024-06-28 华南理工大学 Video enhancement method and system based on video content understanding

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833183A * 2017-11-29 2018-03-23 安徽工业大学 Method for simultaneous super-resolution and colorization of satellite images based on a multitask deep neural network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142137B (en) * 2011-03-10 2012-10-24 西安电子科技大学 High-resolution dictionary based sparse representation image super-resolution reconstruction method
CN105825477B * 2015-01-06 2019-04-16 南京理工大学 Remote sensing image super-resolution reconstruction method based on multi-dictionary learning fused with non-local information
CN105023240B * 2015-07-08 2019-07-09 北京大学深圳研究生院 Dictionary-based image super-resolution system and method reconstructed via iterative projection
CN106683067B (en) * 2017-01-20 2020-06-23 福建帝视信息科技有限公司 Deep learning super-resolution reconstruction method based on residual sub-images
CN106952228B (en) * 2017-03-10 2020-05-22 北京工业大学 Super-resolution reconstruction method of single image based on image non-local self-similarity
CN106960416B * 2017-03-20 2019-05-10 武汉大学 Content-complexity-adaptive super-resolution method for compressed video satellite images


Also Published As

Publication number Publication date
CN108921783A (en) 2018-11-30

Similar Documents

Publication Publication Date Title
CN108921783B (en) Satellite image super-resolution reconstruction method based on mixed loss function constraint
Chen et al. Real-world single image super-resolution: A brief review
CN108122197B (en) Image super-resolution reconstruction method based on deep learning
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN109146787B (en) Real-time reconstruction method of dual-camera spectral imaging system based on interpolation
CN110349087B (en) RGB-D image high-quality grid generation method based on adaptive convolution
Tu et al. SWCGAN: Generative adversarial network combining swin transformer and CNN for remote sensing image super-resolution
CN113034361B (en) Remote sensing image super-resolution reconstruction method based on improved ESRGAN
CN113222825B (en) Infrared image super-resolution reconstruction method based on visible light image training and application
CN111242173A (en) RGBD salient object detection method based on twin network
Yang et al. License plate image super-resolution based on convolutional neural network
Shen et al. Deeper super-resolution generative adversarial network with gradient penalty for sonar image enhancement
CN115578255A (en) Super-resolution reconstruction method based on inter-frame sub-pixel block matching
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
CN112396554B (en) Image super-resolution method based on generation of countermeasure network
Feng et al. Guided filter‐based multi‐scale super‐resolution reconstruction
CN113379606B (en) Face super-resolution method based on pre-training generation model
CN114359041A (en) Light field image space super-resolution reconstruction method
Zhang et al. Enhanced visual perception for underwater images based on multistage generative adversarial network
Hou et al. Joint learning of image deblurring and depth estimation through adversarial multi-task network
CN114022356A (en) River course flow water level remote sensing image super-resolution method and system based on wavelet domain
CN117173025A (en) Single-frame image super-resolution method and system based on cross-layer mixed attention transducer
CN112184552A (en) Sub-pixel convolution image super-resolution method based on high-frequency feature learning
CN113674154B (en) Single image super-resolution reconstruction method and system based on generation countermeasure network
CN116503251A (en) Super-resolution reconstruction method for generating countermeasure network remote sensing image by combining hybrid expert

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant