CN110490799A - Hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network - Google Patents
Hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network Download PDF Info
- Publication number
- CN110490799A CN110490799A CN201910676794.6A CN201910676794A CN110490799A CN 110490799 A CN110490799 A CN 110490799A CN 201910676794 A CN201910676794 A CN 201910676794A CN 110490799 A CN110490799 A CN 110490799A
- Authority
- CN
- China
- Prior art keywords
- target
- hyperspectral
- remote sensing image
- resolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
Abstract
The invention discloses a hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network. First, a self-fusion convolutional neural network containing three parallel convolution branches is designed. Eight hyperspectral remote sensing images are each subjected, in turn, to normalization, 2x bicubic down-sampling, and 2x bicubic up-sampling, and training data and training labels are generated from the resulting eight images. The self-fusion convolutional neural network is then trained on these data and labels until the training iterations finish, yielding the trained network. Finally, the low-resolution hyperspectral remote sensing image to be processed is fed into the trained network as test data to obtain the target high-resolution hyperspectral remote sensing image. The invention solves the problem of the low spatial resolution of hyperspectral remote sensing images in the prior art.
Description
Technical field
The invention belongs to the technical field of remote sensing images, and in particular relates to a hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network.
Background technique
As one of the core payloads in current deep-space exploration, a hyperspectral imager can simultaneously acquire the spectral information and the spatial information of ground objects. The spectral information can be used to invert the material composition of ground objects, while the spatial information reflects their shape, texture, layout, and similar properties. By combining spatial and spectral information, accurate detection, identification, and quantitative attribute analysis of ground objects can be achieved.
However, because a spectral imager contains many spectral channels, the number of photons in each channel is limited. As a result, existing hyperspectral remote sensing images have low spatial resolution and low target-to-background contrast; ground targets are therefore often hard to discriminate and identify, and high-precision classification and attribute analysis are difficult to achieve. Super-resolution, a classical image post-processing method, can improve the spatial resolution of an input image without changing the hardware imaging equipment. Super-resolution processing of hyperspectral remote sensing images is thus gradually receiving broad attention in the field. A convolutional neural network is a feed-forward neural network with a deep structure that contains convolution computations, and it is one of the representative algorithms of deep learning. It can combine low-level features into more abstract high-level features, thereby solving deep-level image feature characterization problems that are hard to handle with traditional algorithms and making full use of image features. Considering that adjacent bands inside a hyperspectral remote sensing image are closely correlated while also differing in some high-frequency details, self-fusion processing between the internal bands with a convolutional neural network can be used to perform super-resolution of hyperspectral remote sensing images, making full use of the information carried by the hyperspectral image to obtain a high-spatial-resolution description of the image.
Summary of the invention
The object of the present invention is to provide a hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network, which solves the problem of the low spatial resolution of hyperspectral remote sensing images in the prior art.
The technical scheme adopted by the invention is a hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network, specifically implemented according to the following steps:
Step 1: design a self-fusion convolutional neural network containing three parallel convolution branches;
Step 2: for each of 8 hyperspectral remote sensing images, successively perform normalization, 2x bicubic down-sampling, and 2x bicubic up-sampling, and then generate training data and training labels from the resulting 8 images;
Step 3: train the self-fusion convolutional neural network of Step 1 with the training data and training labels obtained in Step 2 until the training iterations finish, obtaining the trained network;
Step 4: feed the low-resolution hyperspectral remote sensing image to be processed into the trained network of Step 3 as test data to obtain the target high-resolution hyperspectral remote sensing image.
The features of the present invention are further characterized as follows.
Step 1 is specifically implemented according to the following steps:
Step 1.1: design three parallel convolution branches, each containing a neural network model with three convolutional layers; the input data of the three parallel branches are, respectively, the current band, the next band after the current band, and the difference between the two bands.
Step 1.2: the three convolutional layers inside the three parallel branches are identical. The first convolutional layer contains n1 convolution kernels of size s1*s1, the second layer contains n2 kernels of size s2*s2, and the third layer contains n3 kernels of size s3*s3. No boundary padding is applied to the feature maps after convolution, the stride is 1, and a rectified linear unit (ReLU) follows each of the first and second convolutional layers to guarantee the non-negativity of the mapping relationship characterized by the network.
Step 1.3: design an eltwise layer that accumulates the feature maps finally output by the three parallel convolution branches, obtaining a new feature map.
Step 1.4: since the feature map accumulated in Step 1.3 is obtained by adding the feature maps of the current band and of the band adjacent to the current band, design a power layer that divides every element of the accumulated feature map by 2, obtaining a new feature map.
Step 1.5: design a loss layer that computes the error between the feature map obtained in Step 1.4 and the label, and back-propagate the error to update the parameters of the self-fusion convolutional neural network.
In Step 1.2, n1=64 and s1*s1=9x9.
In Step 1.2, n2=32 and s2*s2=1x1.
In Step 1.2, n3=1 and s3*s3=5x5.
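The forward pass of Steps 1.1-1.5 can be sketched in plain NumPy. This is a minimal illustration under stated assumptions: random, untrained weights; independent weights per branch (the patent only states that the branch structures are identical); valid 9x9, 1x1, and 5x5 convolutions with n1=64, n2=32, n3=1 kernels and ReLU after the first two layers; the eltwise layer is the sum of the three branch outputs and the power layer halves it.

```python
import numpy as np

def conv_valid(x, w):
    """Valid 2-D convolution. x: (C, H, W), w: (N, C, k, k) -> (N, H-k+1, W-k+1)."""
    n, c, k, _ = w.shape
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((n, h, wd))
    for i in range(h):
        for j in range(wd):
            out[:, i, j] = np.tensordot(w, x[:, i:i + k, j:j + k], axes=3)
    return out

def branch(x, w1, w2, w3):
    """One parallel branch: three conv layers, ReLU after the first two (Step 1.2)."""
    x = np.maximum(conv_valid(x, w1), 0.0)
    x = np.maximum(conv_valid(x, w2), 0.0)
    return conv_valid(x, w3)

def self_fusion_forward(cur, nxt, weights):
    """Eltwise-sum the three branch outputs, then halve them (the power layer)."""
    diff = cur - nxt                              # third input: band difference
    outs = [branch(x[None], *w) for x, w in zip((cur, nxt, diff), weights)]
    return sum(outs) / 2.0

rng = np.random.default_rng(0)
def make_weights():
    return (rng.normal(0, 0.01, (64, 1, 9, 9)),   # n1=64 kernels of 9x9
            rng.normal(0, 0.01, (32, 64, 1, 1)),  # n2=32 kernels of 1x1
            rng.normal(0, 0.01, (1, 32, 5, 5)))   # n3=1 kernel of 5x5

weights = [make_weights() for _ in range(3)]      # one set per branch (assumption)
cur = rng.random((63, 63))                        # m1 = 63 input patch, current band
nxt = rng.random((63, 63))                        # next band
y = self_fusion_forward(cur, nxt, weights)
print(y.shape)                                    # (1, 51, 51): 63-9-1-5+3 = 51
```

The unpadded convolutions shrink a 63x63 input to 51x51, which matches the label size derived in Step 2.5.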
Step 2 is specifically implemented according to the following steps:
Step 2.1: successively normalize the 8 hyperspectral remote sensing images of size 1304*1392*519 so that all pixel values are distributed in [0,1].
Step 2.2: perform 2x bicubic down-sampling on the 8 images normalized in Step 2.1, correspondingly obtaining 8 low-resolution hyperspectral remote sensing images.
Step 2.3: successively perform bicubic up-sampling on the 8 low-resolution hyperspectral remote sensing images, correspondingly obtaining 8 hyperspectral remote sensing images.
Step 2.4: generate training data from the 8 hyperspectral remote sensing images of Step 2.3. In row-first, then column, then band order, successively extract 50000 small image blocks for training from the images, each of size m1*m1*2, where the 2 in the third dimension denotes the current band and the next band; the stride for generating the small image blocks is set to b1, and the band number i at which the 50000th small training data block is located is recorded.
Step 2.5: extract small image blocks from the 8 original high-resolution hyperspectral remote sensing images normalized in Step 2.1, corresponding respectively to the labels of the 50000 training data blocks of Step 2.4; the specific extraction also follows row, then column, then band order. Since the feature maps after convolution in Step 1 have no boundary padding, the feature map finally output after the three convolutional layers has size (m1-s1-s2-s3+3)*(m1-s1-s2-s3+3); therefore each label is the central (m1-s1-s2-s3+3)*(m1-s1-s2-s3+3) block of the corresponding region, and the third dimension of the label is 1, corresponding to the current band in the training data.
Step 2.6: in row, then column, then band order, taking band i+1 as the starting band, generate 10000 test data blocks and their corresponding labels with reference to Steps 2.4 and 2.5.
In Step 2.4, m1=63 and b1=34.
In Step 2.5, m1-s1-s2-s3+3=51.
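The block extraction of Steps 2.4-2.5 can be sketched as follows. This is a toy illustration under stated assumptions: tiny cube sizes instead of 1304*1392*519, a plain raster scan, and a hypothetical helper name. Each input block is m1*m1*2 (current band plus next band) cut with stride b1, and each label is the centered 51x51 block of the current band in the high-resolution cube, since 63-9-1-5+3 = 51 leaves a 6-pixel margin on each side.

```python
import numpy as np

def make_training_pairs(lr_up, hr, m1=63, stride=34, out=51):
    """lr_up, hr: (H, W, B) cubes in [0, 1] (bicubic-upsampled LR and original HR).
    Returns training blocks (m1 x m1 x 2: current band + next band) and the
    centered out x out high-resolution label of the current band."""
    margin = (m1 - out) // 2                      # (63 - 51) // 2 = 6
    H, W, B = lr_up.shape
    data, labels = [], []
    for b in range(B - 1):                        # pair every band with its successor
        for i in range(0, H - m1 + 1, stride):
            for j in range(0, W - m1 + 1, stride):
                data.append(lr_up[i:i + m1, j:j + m1, b:b + 2])
                labels.append(hr[i + margin:i + m1 - margin,
                                 j + margin:j + m1 - margin, b])
    return np.stack(data), np.stack(labels)

rng = np.random.default_rng(1)
hr = rng.random((130, 130, 3))                    # toy stand-in for a full cube
lr_up = hr + 0.0                                  # stand-in for down/up-sampled cube
X, Y = make_training_pairs(lr_up, hr)
print(X.shape, Y.shape)                           # (8, 63, 63, 2) (8, 51, 51)
```

With 130x130 toy images and stride 34 there are 2x2 spatial positions per band pair and two band pairs, hence 8 blocks; the real setting stops once 50000 blocks have been collected.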
Step 3 is specifically implemented according to the following steps:
Step 3.1: set the number of training iterations to 15000000; the learning rate is fixed at 0.001 throughout training, no weight decay is applied, and the batch size during training is set to 128.
Step 3.2: start training until the iterations finish, obtaining the final network model.
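The solver settings of Step 3.1 amount to plain mini-batch training with a fixed learning rate and no weight decay. A minimal loop skeleton under stated assumptions: the `step_fn` interface is hypothetical (it stands for one forward pass, loss-layer back-propagation, and parameter update), and the sketch uses a small iteration count instead of the 15000000 of Step 3.1.

```python
import numpy as np

def train(step_fn, data, labels, iters=1000, lr=0.001, batch=128, seed=0):
    """step_fn(x_batch, y_batch, lr) -> scalar loss; assumed to run the network's
    forward pass, back-propagate the error, and apply one update
    (fixed lr, no weight decay, batch size 128, per Step 3.1)."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(iters):
        idx = rng.integers(0, len(data), size=batch)   # random mini-batch
        losses.append(step_fn(data[idx], labels[idx], lr))
    return losses

# toy demonstration with a dummy step function that just reports the batch size
data = np.zeros((500, 63, 63, 2))
labels = np.zeros((500, 51, 51))
losses = train(lambda x, y, lr: float(x.shape[0]), data, labels, iters=10)
print(len(losses), losses[0])                          # 10 128.0
```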
Step 4 is specified as follows:
Each band of the hyperspectral remote sensing image that requires super-resolution processing is input, together with its next band, into the final network model trained in Step 3 to obtain the high-resolution characterization corresponding to the current band, thereby obtaining the final super-resolved high-resolution hyperspectral remote sensing image.
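Step 4 can be sketched as a loop over band pairs. This is an illustration under stated assumptions: `predict` stands in for the trained network applied to one (current band, next band) pair, and the last band, which has no successor, is here paired with itself; the patent does not specify that boundary case.

```python
import numpy as np

def super_resolve(predict, cube):
    """cube: (H, W, B) bicubic-upsampled low-resolution image in [0, 1].
    predict(cur, nxt) -> high-resolution characterization of the current band
    (same spatial size assumed here for simplicity)."""
    H, W, B = cube.shape
    out = np.empty_like(cube)
    for k in range(B):
        nxt = cube[:, :, min(k + 1, B - 1)]  # assumption: last band reuses itself
        out[:, :, k] = predict(cube[:, :, k], nxt)
    return out

# toy check with an identity "network"
cube = np.random.default_rng(2).random((8, 8, 4))
res = super_resolve(lambda cur, nxt: cur, cube)
print(res.shape, bool(np.allclose(res, cube)))    # (8, 8, 4) True
```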
The beneficial effect of the invention is a hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network. By taking the current band together with its adjacent band as training data, and the high-resolution band corresponding to the current band as the label, the convolutional neural network acquires the complementary information between the current band and its adjacent band and realizes internal fusion. The invention therefore learns not merely the mapping relationship between a large amount of training data and labels: it exploits the spatial information of the input band, the spatial information carried by its adjacent band, and the complementary information between the two, thereby obtaining better super-resolution performance.
Brief description of the drawings
Fig. 1 is a flow chart of the hyperspectral remote sensing image super-resolution method based on an internal-fusion convolutional neural network of the present invention;
Fig. 2 shows the subjective visual effect of the experimental data used in the present invention;
Fig. 3 shows the loss curve of the training network as the number of iterations increases;
Fig. 4 shows the trend of the model performance of the invention versus the number of iterations when 2x super-resolution is performed on one group of test data;
Fig. 5 shows, for the same group of test data under 2x super-resolution, the PSNR comparison, as the number of iterations increases, between the present invention and an ordinary convolutional neural network without internal self-fusion;
Fig. 6(a) shows the 18 feature maps extracted by the first convolutional layer of the network designed by the present invention at 15000000 iterations, for the 100th band of the input test data;
Fig. 6(b) shows the 4 feature maps extracted by the second convolutional layer under the same conditions;
Fig. 6(c) shows the 1 feature map extracted by the third convolutional layer under the same conditions.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The hyperspectral remote sensing image super-resolution method of the present invention based on a self-fusion convolutional neural network, whose flow chart is shown in Fig. 1, is specifically implemented according to the following steps:
Step 1: design a self-fusion convolutional neural network containing three parallel convolution branches, specifically according to the following steps:
Step 1.1: design three parallel convolution branches, each containing a neural network model with three convolutional layers; the input data of the three parallel branches are, respectively, the current band, the next band after the current band, and the difference between the two bands.
Step 1.2: the three convolutional layers inside the three parallel branches are identical. The first convolutional layer contains n1 convolution kernels of size s1*s1, the second layer contains n2 kernels of size s2*s2, and the third layer contains n3 kernels of size s3*s3. No boundary padding is applied to the feature maps after convolution, the stride is 1, and a rectified linear unit (ReLU) follows each of the first and second convolutional layers to guarantee the non-negativity of the mapping relationship characterized by the network.
Step 1.3: design an eltwise layer that accumulates the feature maps finally output by the three parallel convolution branches, obtaining a new feature map.
Step 1.4: since the feature map accumulated in Step 1.3 is obtained by adding the feature maps of the current band and of the band adjacent to the current band, design a power layer that divides every element of the accumulated feature map by 2, obtaining a new feature map.
Step 1.5: design a loss layer that computes the error between the feature map obtained in Step 1.4 and the label, and back-propagate the error to update the parameters of the self-fusion convolutional neural network.
In Step 1.2, n1=64 and s1*s1=9x9.
In Step 1.2, n2=32 and s2*s2=1x1.
In Step 1.2, n3=1 and s3*s3=5x5.
Step 2: for each of the 8 hyperspectral remote sensing images, successively perform normalization, 2x bicubic down-sampling, and 2x bicubic up-sampling, and then generate training data and training labels from the resulting 8 images, specifically according to the following steps:
Step 2.1: successively normalize the 8 hyperspectral remote sensing images of size 1304*1392*519 so that all pixel values are distributed in [0,1].
Step 2.2: perform 2x bicubic down-sampling on the 8 images normalized in Step 2.1, correspondingly obtaining 8 low-resolution hyperspectral remote sensing images.
Step 2.3: successively perform bicubic up-sampling on the 8 low-resolution hyperspectral remote sensing images, correspondingly obtaining 8 hyperspectral remote sensing images.
Step 2.4: generate training data from the 8 hyperspectral remote sensing images of Step 2.3. In row-first, then column, then band order, successively extract 50000 small image blocks for training from the images, each of size m1*m1*2, where the 2 in the third dimension denotes the current band and the next band; the stride for generating the small image blocks is set to b1, and the band number i at which the 50000th small training data block is located is recorded.
Step 2.5: extract small image blocks from the 8 original high-resolution hyperspectral remote sensing images normalized in Step 2.1, corresponding respectively to the labels of the 50000 training data blocks of Step 2.4; the specific extraction also follows row, then column, then band order. Each label should be the region of the original high-resolution image that spatially corresponds to the training data. Since the feature maps after convolution in Step 1 have no boundary padding, the convolution process erodes the boundary, and the feature map finally output after the three convolutional layers has size (m1-s1-s2-s3+3)*(m1-s1-s2-s3+3); therefore each label is the central (m1-s1-s2-s3+3)*(m1-s1-s2-s3+3) block of the corresponding region, and the third dimension of the label is 1, corresponding to the current band in the training data.
Step 2.6: in row, then column, then band order, taking band i+1 as the starting band, generate 10000 test data blocks and their corresponding labels with reference to Steps 2.4 and 2.5.
In Step 2.4, m1=63 and b1=34.
In Step 2.5, m1-s1-s2-s3+3=51.
Step 3: train the self-fusion convolutional neural network of Step 1 with the training data and training labels obtained in Step 2 until the training iterations finish, obtaining the trained network, specifically according to the following steps:
Step 3.1: set the number of training iterations to 15000000; the learning rate is fixed at 0.001 throughout training, no weight decay is applied, and the batch size during training is set to 128.
Step 3.2: start training until the iterations finish, obtaining the final network model.
Step 4: feed the low-resolution hyperspectral remote sensing image to be processed into the trained network of Step 3 as test data to obtain the target high-resolution hyperspectral remote sensing image, specifically as follows:
Each band of the hyperspectral remote sensing image that requires super-resolution processing is input, together with its next band, into the final network model trained in Step 3 to obtain the high-resolution characterization corresponding to the current band, thereby obtaining the final super-resolved high-resolution hyperspectral remote sensing image.
To verify the effectiveness of the present invention for super-resolution processing of hyperspectral remote sensing images, comparative experiments were carried out by simulation. The experimental platform is MATLAB (R2015b) under Windows on an Intel Core i5 2.8 GHz processor with 16.0 GB of memory; the network was trained on an NVIDIA GTX 1080Ti GPU. The comparison algorithms are the classical bicubic interpolation method and an ordinary network that does not include the spectrum-difference module, where the number of layers and the sizes and numbers of the convolution kernels of the ordinary network are set identically to the present invention.
The selected experimental data set is the ICVL (Interdisciplinary Computational Vision Lab) data set created by the Interdisciplinary Computational Vision Laboratory of Ben-Gurion University, Israel. The present invention chose from it 8 hyperspectral remote sensing images of size 1304x1392x519 for performance verification; the 519 bands together cover the 400-1000 nm range of the spectral dimension. Since these 8 images have sufficient spatial size and band count for generating training data, they are suitable both for training the model and for verifying the performance of the algorithm.
The experimental work verifies the feasibility of the algorithm by comparing an ordinary convolutional neural network model without internal self-fusion against the internal self-fusion convolutional neural network model proposed by the present invention.
The objective evaluation indices used are: correlation coefficient (CC), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and the relative dimensionless global error of synthesis (ERGAS). Larger CC, PSNR, and SSIM values indicate a higher similarity between the reconstructed image and the original image and hence a better algorithm; the optimal value of CC and SSIM is 1, and the optimal value of PSNR is infinity. ERGAS reflects the overall spectral distortion of all bands of the super-resolved image with respect to the original reference image; a smaller ERGAS indicates that less spectral distortion is introduced and that the algorithm performs better.
Let I and I' denote the reference image and the hyperspectral image obtained by super-resolution reconstruction, respectively, both of size $\mathbb{R}^{sw \times sh \times n}$, where s is the super-resolution factor, n is the total number of bands in the image, and sw and sh are the width and height of the image after super-resolution processing. The RMSE of the k-th band $I'_k$ is computed as
$$\mathrm{RMSE}_k=\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_k(i,j)-I'_k(i,j)\right)^2},$$
where i and j index the horizontal and vertical spatial coordinates, and M and N are the numbers of image rows and columns; the RMSE values of all bands are finally averaged to obtain the RMSE of the whole image. During initialization all element values of the hyperspectral images were normalized to the interval [0,1], so the PSNR can be expressed as
$$\mathrm{PSNR}=10\log_{10}\frac{1}{\mathrm{MSE}},\qquad \mathrm{MSE}=\mathrm{RMSE}^2.$$
SSIM denotes structural similarity, and its calculation formula can be expressed as
$$\mathrm{SSIM}=\frac{(2\mu_I\mu_{I'}+c_1)(2\sigma_{II'}+c_2)}{(\mu_I^2+\mu_{I'}^2+c_1)(\sigma_I^2+\sigma_{I'}^2+c_2)},$$
where $\mu_I$ and $\mu_{I'}$ are the means of I and I', $\sigma_I$ and $\sigma_{I'}$ are their standard deviations, $\sigma_{II'}$ is the covariance of I and I', and $c_1$ and $c_2$ are constants that maintain numerical stability. The correlation coefficient CC measures the correlation between single bands and is computed per band as
$$\mathrm{CC}_k=\frac{\sum_{i,j}\left(I_k(i,j)-\mu_{I_k}\right)\left(I'_k(i,j)-\mu_{I'_k}\right)}{\sqrt{\sum_{i,j}\left(I_k(i,j)-\mu_{I_k}\right)^2\sum_{i,j}\left(I'_k(i,j)-\mu_{I'_k}\right)^2}},$$
with the per-band values averaged to obtain the CC of the image. The calculation formula of ERGAS is
$$\mathrm{ERGAS}=\frac{100}{s}\sqrt{\frac{1}{n}\sum_{k=1}^{n}\left(\frac{\mathrm{RMSE}_k}{\mu_k}\right)^2},$$
where $\mu_k$ is the mean of the k-th band of the reference image; the smaller this value, the closer the super-resolved image is to the original reference image.
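The five indices above can be implemented directly from their definitions. This is a NumPy sketch under stated assumptions: data are normalized to [0,1] so the PSNR peak is 1; SSIM is computed over a single global window rather than locally; the c1 and c2 values are illustrative.

```python
import numpy as np

def rmse(I, J):
    """Mean over bands of the per-band root-mean-square error."""
    return float(np.mean(np.sqrt(np.mean((I - J) ** 2, axis=(0, 1)))))

def psnr(I, J):
    """Peak signal-to-noise ratio; data assumed normalized to [0, 1]."""
    return float(10 * np.log10(1.0 / np.mean((I - J) ** 2)))

def ssim_global(I, J, c1=1e-4, c2=9e-4):
    """Single-window (global) SSIM sketch; c1, c2 are illustrative constants."""
    mi, mj = I.mean(), J.mean()
    vi, vj = I.var(), J.var()
    cov = ((I - mi) * (J - mj)).mean()
    return float((2 * mi * mj + c1) * (2 * cov + c2)
                 / ((mi ** 2 + mj ** 2 + c1) * (vi + vj + c2)))

def cc(I, J):
    """Mean over bands of the per-band correlation coefficient."""
    return float(np.mean([np.corrcoef(I[:, :, k].ravel(),
                                      J[:, :, k].ravel())[0, 1]
                          for k in range(I.shape[2])]))

def ergas(I, ref, s=2):
    """Relative dimensionless global error of synthesis for s-times SR."""
    band_rmse = np.sqrt(np.mean((I - ref) ** 2, axis=(0, 1)))
    band_mean = np.mean(ref, axis=(0, 1))
    return float(100.0 / s * np.sqrt(np.mean((band_rmse / band_mean) ** 2)))

rng = np.random.default_rng(3)
ref = rng.random((32, 32, 5))
rec = ref + 0.1                        # constant offset: RMSE 0.1, PSNR 20 dB
print(round(rmse(rec, ref), 3))        # 0.1
print(round(psnr(rec, ref), 3))        # 20.0
print(round(cc(rec, ref), 3))          # 1.0 (an offset leaves correlation intact)
print(round(ssim_global(ref, ref), 3)) # 1.0
print(round(ergas(ref, ref), 3))       # 0.0
```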
(1) Hyperspectral remote sensing image super-resolution experiment:
Table 1 gives the objective evaluation indices of the high-resolution hyperspectral images obtained after 2x super-resolution of bands 93-518 of the 6th image in Fig. 2, using, respectively, the classical bicubic interpolation method, the ordinary neural network without spectrum difference, and the internal self-fusion network based on the spectrum-difference module proposed by the invention. The 6th image in Fig. 2 was used mainly because its principal subject, a bridge, contains many curves and is therefore better suited for discriminating the relative performance of super-resolution algorithms.
The experimental results in Table 1 show that, compared with the classical bicubic interpolation method, the present invention significantly improves the spatial information of hyperspectral remote sensing images; compared with the ordinary network, the present invention introduces the spectrum-difference module for internal self-fusion while keeping the original network parameter settings, thereby improving the super-resolution performance.
Table 1. Objective indices of the different methods after 2x super-resolution
The hyperspectral remote sensing image super-resolution method of the present invention based on a self-fusion convolutional neural network can effectively improve the spatial resolution of hyperspectral remote sensing images. At the same time, the spectrum-difference module employed in the present invention can be directly applied to other models to improve their super-resolution performance.
(2) Convergence experiment during training:
Fig. 3 illustrates the loss curve of the training network versus the number of iterations while the model of the invention is being trained. It can be seen that as the iterations accumulate, the network error gradually decreases and tends to converge. Meanwhile, the images obtained by 2x bicubic down-sampling of the 100th to 105th bands of the hyperspectral image "gavyam" were used as test data, and the curves in Fig. 4 show the performance of the different models on this test set. The curves in Fig. 4 show that the performance of the proposed model improves steadily as the number of iterations increases, which also demonstrates that the training process of the model of the invention converges.
(3) performance comparison between the present invention and common convolutional neural networks:
Fig. 5 compares, for the same group of test data and likewise 2x super-resolution, the present invention against an ordinary convolutional neural network without the internal self-fusion module, in terms of PSNR as the number of iterations increases. It can be seen that both the ordinary convolutional neural network and the proposed convolutional neural network containing the internal self-fusion module improve as the number of iterations grows. In addition, the performance of the proposed model begins to surpass the ordinary convolutional neural network model after about 9,000,000 iterations; this is because the proposed model has more parameters than the ordinary model and therefore needs more iterations to approach the optimal parameter values, while the upward trend of the proposed model's performance remains stronger than that of the ordinary convolutional neural network throughout. Fig. 6(a)~Fig. 6(c) illustrate, for this group of test data, feature maps of the network designed by the present invention at 15,000,000 iterations for the 100th band of the input test data: Fig. 6(a) shows 18 feature maps extracted by the first convolutional layer; Fig. 6(b) shows 4 feature maps extracted by the second convolutional layer; Fig. 6(c) shows the 1 feature map extracted by the third convolutional layer. From these feature maps of different convolutional layers of the trained network, it can be seen that as the depth increases the features extracted by the network become more and more specific and comprehensive, yielding a good super-resolution result.
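The index reported in Fig. 5 is PSNR. For reference, a minimal PSNR computation for images normalized to [0, 1] (the generic formula, not code from the patent):

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a reconstruction off by 0.1 everywhere has MSE 0.01 -> PSNR 20 dB.
ref = np.zeros((4, 4))
est = ref + 0.1
print(round(psnr(ref, est), 2))  # 20.0
```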
Claims (10)
1. A hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network, characterized in that it is specifically implemented according to the following steps:
Step 1: design a self-fusion convolutional neural network comprising three parallel convolution processes;
Step 2: successively apply to each of 8 hyperspectral remote sensing images normalization, 2x bicubic down-sampling, and 2x bicubic up-sampling, then generate training data and training labels from the 8 resulting images;
Step 3: train the self-fusion convolutional neural network of step 1 with the training data and training labels obtained in step 2 until the training iterations finish, obtaining the trained network;
Step 4: input the low-resolution hyperspectral remote sensing image to be processed, as test data, into the network trained in step 3, obtaining the target high-resolution hyperspectral remote sensing image.
2. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 1, characterized in that step 1 is specifically implemented according to the following steps:
Step 1.1: design three parallel convolution processes, each convolution process containing a neural network model with three convolutional layers; the input data of the three parallel convolutions are, respectively, the current band, the next band after the current band, and the difference between the two bands;
Step 1.2: the three convolutional layers inside the three parallel convolution processes are identical, wherein the first convolutional layer contains n1 convolution kernels of size s1*s1, the second convolutional layer contains n2 kernels of size s2*s2, and the third convolutional layer contains n3 kernels of size s3*s3; no boundary-padding operation is applied to the feature maps after convolution, the stride is 1, and a rectified linear unit (ReLU) is added after each of the first and second convolutional layers to guarantee the non-negativity of the mapping characterized by the network;
Step 1.3: accumulate the single feature maps finally output by the three parallel convolution processes through a designed eltwise layer, obtaining a new feature map;
Step 1.4: since the feature map accumulated in step 1.3 is obtained by adding the feature maps of the current band and of the bands adjacent to the current band, design a power layer that divides every element of the accumulated feature map by 2, obtaining a new feature map;
Step 1.5: compute the error between the feature map obtained in step 1.4 and the label through a designed loss layer, back-propagate the error, and update the parameters of the self-fusion convolutional neural network.
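The forward pass of this claim can be sketched in numpy as follows. This is illustrative only, not the patented implementation: whether the three parallel processes share weights is not specified in the claim, so the sketch reuses one weight set for all three branches, and all sizes are reduced for brevity.

```python
import numpy as np

def conv_valid(img, kernels):
    """'Valid' 2-D convolution (no boundary padding, stride 1) of a stack of
    feature maps with a bank of kernels; a naive loop, for illustration."""
    cin, h, w = img.shape
    n, _, k, _ = kernels.shape            # kernels: (n, cin, k, k)
    out = np.zeros((n, h - k + 1, w - k + 1))
    for o in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(img[:, i:i + k, j:j + k] * kernels[o])
    return out

def branch(x, weights):
    """One parallel convolution process: three conv layers, with a ReLU
    after the first and second layers (step 1.2)."""
    for idx, w in enumerate(weights):
        x = conv_valid(x, w)
        if idx < 2:
            x = np.maximum(x, 0.0)        # rectified linear unit
    return x

def self_fusion_forward(band, next_band, weights):
    """Self-fusion forward pass: the three branch inputs are the current band,
    the next band, and their difference (step 1.1); the branch outputs are
    summed (eltwise layer, step 1.3) and divided by 2 (power layer, step 1.4)."""
    inputs = [band, next_band, band - next_band]
    outs = [branch(x[None, ...], weights) for x in inputs]
    return (outs[0] + outs[1] + outs[2]) / 2.0
```

With kernels of sizes 3, 1 and 3, a 9x9 input yields a (9-3-1-3+3) = 5x5 output map, matching the shrinkage formula used in the later claims.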
3. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 2, characterized in that, in step 1.2, n1=64 and s1*s1=9*9.
4. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 2, characterized in that, in step 1.2, n2=32 and s2*s2=1*1.
5. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 2, characterized in that, in step 1.2, n3=1 and s3*s3=5*5.
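The kernel sizes of claims 3-5, together with m1 = 63 from claim 7, fix the network's output patch size: each k*k 'valid' convolution shrinks the feature map by k - 1, which can be checked directly (an illustrative check, not part of the claims):

```python
# Output size of three stacked 'valid' convolutions:
# m1 -> m1 - (s1 - 1) - (s2 - 1) - (s3 - 1) = m1 - s1 - s2 - s3 + 3.
m1, s1, s2, s3 = 63, 9, 1, 5      # values taken from claims 3-5 and 7
out = m1 - s1 - s2 - s3 + 3
print(out)  # 51, matching claim 8
```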
6. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 2, characterized in that step 2 is specifically implemented according to the following steps:
Step 2.1: normalize each of the 8 hyperspectral remote sensing images of size 1304*1392*519 so that all pixel values lie in [0, 1];
Step 2.2: apply 2x bicubic down-sampling to the 8 images normalized in step 2.1, correspondingly obtaining 8 low-resolution hyperspectral remote sensing images;
Step 2.3: apply bicubic up-sampling to the 8 low-resolution hyperspectral remote sensing images, correspondingly obtaining 8 hyperspectral remote sensing images;
Step 2.4: generate the training data from the 8 hyperspectral remote sensing images of step 2.3: in row, column, then band order, successively crop 50000 small image blocks for training from each image, each training block being of size m1*m1*2, where the 2 in the third dimension denotes the current band and the next band; the stride for generating the small blocks is set to b1; record the band index i at which the 50000th small training block is taken;
Step 2.5: crop small blocks from the 8 original high-resolution hyperspectral remote sensing images normalized in step 2.1 as the labels corresponding one-to-one to the 50000 training blocks of step 2.4, again cropping in row, column, then band order; since the feature maps after convolution in step 1 receive no boundary padding, after the three convolutional layers the final output feature map is of size (m1-s1-s2-s3+3)*(m1-s1-s2-s3+3); therefore each label is the centermost (m1-s1-s2-s3+3)*(m1-s1-s2-s3+3) block of the corresponding region, and the third dimension of the label is 1, corresponding to the current band in the training data;
Step 2.6: in row, column, then band order, taking band i+1 as the starting band, generate 10000 test data blocks and the corresponding labels with reference to steps 2.4 and 2.5.
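A minimal sketch of the block and label extraction of steps 2.4-2.5 (illustrative only: function names are hypothetical, sizes are reduced, and the bicubic resampling of steps 2.2-2.3 is assumed to have already been applied to `cube`):

```python
import numpy as np

def extract_pairs(cube, m1, stride, limit):
    """Slide an m1*m1 window over each band in row, column, then band order
    (step 2.4), pairing every band with its next band; stops after `limit`
    blocks. The patent fixes m1 = 63, stride b1 = 34, limit = 50000."""
    h, w, bands = cube.shape
    blocks = []
    for b in range(bands - 1):                      # current + next band
        for i in range(0, h - m1 + 1, stride):
            for j in range(0, w - m1 + 1, stride):
                blocks.append(cube[i:i + m1, j:j + m1, b:b + 2])
                if len(blocks) == limit:
                    return np.stack(blocks), b      # band index i of step 2.4
    return np.stack(blocks), bands - 2

def center_label(block, out_size):
    """Label = the centermost out_size*out_size patch of the current band
    (step 2.5), where out_size = m1 - s1 - s2 - s3 + 3."""
    m1 = block.shape[0]
    off = (m1 - out_size) // 2
    return block[off:off + out_size, off:off + out_size, 0]
```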
7. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 6, characterized in that, in step 2.4, m1=63 and b1=34.
8. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 6, characterized in that, in step 2.5, m1-s1-s2-s3+3=51.
9. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 3, characterized in that step 3 is specifically implemented according to the following steps:
Step 3.1: set the number of training iterations to 15000000; the learning rate is fixed at 0.001 throughout training, no weight decay is used, and the batch size during training is set to 128;
Step 3.2: start training until the iterations finish, obtaining the final network model.
10. The hyperspectral remote sensing image super-resolution method based on a self-fusion convolutional neural network according to claim 9, characterized in that step 4 is specifically as follows:
each band of the hyperspectral remote sensing image requiring super-resolution processing, together with its next band, is taken as input data and fed into the final network model trained in step 3, obtaining the high-resolution characterization corresponding to the current band, and thus the final super-resolved high-resolution hyperspectral remote sensing image.
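The band-wise inference of this claim can be sketched as follows (a hypothetical stand-in `model` replaces the trained network; how the last band, which has no successor, is handled is not specified in the claim, so this sketch simply omits it):

```python
import numpy as np

def super_resolve_cube(cube, model):
    """Band-wise inference as in claim 10: each band and its next band are fed
    together to the trained network, yielding the high-resolution estimate of
    the current band. `model` is any callable (band, next_band) -> band."""
    h, w, bands = cube.shape
    out = np.empty((h, w, bands - 1))   # the last band has no successor here
    for b in range(bands - 1):
        out[:, :, b] = model(cube[:, :, b], cube[:, :, b + 1])
    return out

# Stand-in "network" for demonstration: the identity on the current band.
identity = lambda cur, nxt: cur
```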
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910676794.6A CN110490799B (en) | 2019-07-25 | 2019-07-25 | Hyperspectral remote sensing image super-resolution method based on self-fusion convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110490799A true CN110490799A (en) | 2019-11-22 |
CN110490799B CN110490799B (en) | 2021-09-24 |
Family
ID=68548368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910676794.6A Active CN110490799B (en) | 2019-07-25 | 2019-07-25 | Hyperspectral remote sensing image super-resolution method based on self-fusion convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110490799B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103530860A (en) * | 2013-09-26 | 2014-01-22 | 天津大学 | Adaptive autoregressive model-based hyper-spectral imagery super-resolution method |
CN107240066A (en) * | 2017-04-28 | 2017-10-10 | 天津大学 | Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks |
CN108537731A (en) * | 2017-12-29 | 2018-09-14 | 西安电子科技大学 | Image super-resolution rebuilding method based on compression multi-scale feature fusion network |
CN109509160A (en) * | 2018-11-28 | 2019-03-22 | 长沙理工大学 | Hierarchical remote sensing image fusion method utilizing layer-by-layer iteration super-resolution |
CN109785236A (en) * | 2019-01-21 | 2019-05-21 | 中国科学院宁波材料技术与工程研究所 | A kind of image super-resolution method based on super-pixel and convolutional neural networks |
CN109801218A (en) * | 2019-01-08 | 2019-05-24 | 南京理工大学 | Multi-spectral remote sensing image Pan-sharpening method based on multi-layer-coupled convolutional neural networks |
Non-Patent Citations (4)
Title |
---|
JING HU et al.: "Hyperspectral Image Super-Resolution by Deep Spatial-Spectral Exploitation", Remote Sensing * |
WEISHENG DONG et al.: "Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation", IEEE Transactions on Image Processing * |
XIAN-HUA HAN et al.: "SSF-CNN: Spatial and Spectral Fusion with CNN for Hyperspectral Image Super-Resolution", 2018 25th IEEE International Conference on Image Processing (ICIP) * |
GAO CHUNBO: "Image Super-Resolution Reconstruction with Generative Adversarial Networks", China Excellent Master's and Doctoral Theses Full-Text Database (Master), Information Science and Technology * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111192193A (en) * | 2019-11-26 | 2020-05-22 | 西安电子科技大学 | Hyperspectral single-image super-resolution method based on 1-dimensional-2-dimensional convolution neural network |
CN111192193B (en) * | 2019-11-26 | 2022-02-01 | 西安电子科技大学 | Hyperspectral single-image super-resolution method based on 1-dimensional-2-dimensional convolution neural network |
WO2022089064A1 (en) * | 2020-10-31 | 2022-05-05 | 华为技术有限公司 | Image recognition method and electronic device |
CN112464733A (en) * | 2020-11-04 | 2021-03-09 | 北京理工大学重庆创新中心 | High-resolution optical remote sensing image ground feature classification method based on bidirectional feature fusion |
CN112801929A (en) * | 2021-04-09 | 2021-05-14 | 宝略科技(浙江)有限公司 | Local background semantic information enhancement method for building change detection |
CN113628111A (en) * | 2021-07-28 | 2021-11-09 | 西安理工大学 | Hyperspectral image super-resolution method based on gradient information constraint |
CN113628111B (en) * | 2021-07-28 | 2024-04-12 | 西安理工大学 | Hyperspectral image super-resolution method based on gradient information constraint |
CN114820741A (en) * | 2022-04-29 | 2022-07-29 | 辽宁工程技术大学 | Hyperspectral image full-waveband hyper-resolution reconstruction method |
Also Published As
Publication number | Publication date |
---|---|
CN110490799B (en) | 2021-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110490799A (en) | Hyperspectral remote sensing image super-resolution method based on self-fusion convolutional neural network | |
CN110717354B (en) | Super-pixel classification method based on semi-supervised K-SVD and multi-scale sparse representation | |
CN111784602B (en) | Method for generating countermeasure network for image restoration | |
CN108734661B (en) | High-resolution image prediction method for constructing loss function based on image texture information | |
CN105761234A (en) | Structure sparse representation-based remote sensing image fusion method | |
CN109711413A (en) | Image semantic segmentation method based on deep learning | |
Talavera-Martinez et al. | Hair segmentation and removal in dermoscopic images using deep learning | |
CN103020939B (en) | Method for removing large-area thick clouds for optical remote sensing images through multi-temporal data | |
CN108491849A (en) | Hyperspectral image classification method based on three-dimensional dense connection convolutional neural networks | |
CN104318243B (en) | Hyperspectral data dimensionality reduction method based on sparse representation and spatial-spectral Laplacian graph | |
CN103366347B (en) | Image super-resolution reconstruction method based on sparse representation | |
CN102542296B (en) | Method for extracting image characteristics by multivariate gray model-based bi-dimensional empirical mode decomposition | |
CN105243670A (en) | Sparse and low-rank joint expression video foreground object accurate extraction method | |
CN110533077A (en) | Form adaptive convolution deep neural network method for classification hyperspectral imagery | |
Wang et al. | MCT-Net: Multi-hierarchical cross transformer for hyperspectral and multispectral image fusion | |
He et al. | DsTer: A dense spectral transformer for remote sensing spectral super-resolution | |
CN116468645B (en) | Antagonistic hyperspectral multispectral remote sensing fusion method | |
Chen et al. | SDFNet: Automatic segmentation of kidney ultrasound images using multi-scale low-level structural feature | |
CN113902622B (en) | Spectrum super-resolution method based on depth priori joint attention | |
CN112818920B (en) | Double-temporal hyperspectral image space spectrum joint change detection method | |
Chen et al. | Semisupervised spectral degradation constrained network for spectral super-resolution | |
CN112464891A (en) | Hyperspectral image classification method | |
Khader et al. | NMF-DuNet: Nonnegative matrix factorization inspired deep unrolling networks for hyperspectral and multispectral image fusion | |
CN110956601A (en) | Infrared image fusion method and device based on multi-sensor mode coefficients and computer readable storage medium | |
Li et al. | Progressive spatial information-guided deep aggregation convolutional network for hyperspectral spectral super-resolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | Inventor after: Hu Jing; Zhao Minghua. Inventor before: Zhao Minghua ||
GR01 | Patent grant | ||