CN107194872B - Remote sensing image super-resolution reconstruction method based on content-aware deep learning network - Google Patents


Info

Publication number
CN107194872B
CN107194872B (application CN201710301990.6A; published as CN107194872A)
Authority
CN
China
Prior art keywords
image
complexity
perception
deep learning
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710301990.6A
Other languages
Chinese (zh)
Other versions
CN107194872A (en)
Inventor
王中元
韩镇
杜博
邵振峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201710301990.6A priority Critical patent/CN107194872B/en
Publication of CN107194872A publication Critical patent/CN107194872A/en
Application granted granted Critical
Publication of CN107194872B publication Critical patent/CN107194872B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network. The invention proposes a comprehensive measure of image content complexity together with its calculation method; on this basis, sample images are classified by content complexity, and deep GAN network models of high, medium and low complexity are built and trained separately. Then, according to the content complexity of the input image to be super-resolved, the corresponding network is selected for reconstruction. To improve the learning performance of the GAN network, the invention also gives an optimized definition of the loss function. The invention overcomes the contradiction between over-fitting and under-fitting that is widespread in machine-learning-based super-resolution reconstruction, and effectively improves the super-resolution reconstruction accuracy of remote sensing images.

Description

Remote sensing image super-resolution reconstruction method based on content-aware deep learning network
Technical field
The invention belongs to the technical field of image processing and relates to an image super-resolution reconstruction method, in particular to a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.
Background art
Remote sensing images of high spatial resolution can describe ground objects more finely and provide rich detail information; therefore, images of high spatial resolution are often desired. With the rapid development of space exploration theory and technology, remote sensing images of meter-level and even sub-meter-level spatial resolution (such as IKONOS and QuickBird) are gradually moving into application, yet their temporal resolution is generally low. In contrast, some sensors with lower spatial resolution (such as MODIS) have very high temporal resolution and can acquire large-scale remote sensing images in a short time. If images of high spatial resolution could be reconstructed from these lower-spatial-resolution images, remote sensing images with both high spatial resolution and high temporal resolution would be obtained. Therefore, reconstructing high-resolution images from low-resolution remote sensing images is of great importance.
In recent years, deep learning has been widely applied to various problems in computer vision and image processing. In 2014, C. Dong et al. of the Chinese University of Hong Kong were the first to introduce deep CNN learning into image super-resolution reconstruction, achieving better results than the previously mainstream sparse-representation methods. In 2015, J. Kim et al. of Seoul National University further proposed an improved method based on recursive neural networks, whose performance was a further improvement. In 2016, Y. Romano et al. of Google developed a fast and accurate learning method; soon afterwards, C. Ledig et al. of Twitter applied the GAN (generative adversarial network) to image super-resolution, achieving the best reconstruction results so far. Moreover, the foundation of the GAN is the deep belief network, which no longer strictly depends on supervised learning and can be trained even without one-to-one pairs of high- and low-resolution image samples.
Once the deep learning model and network architecture are determined, the performance of a deep-learning-based super-resolution method is largely determined by how well the network model is trained. Training a deep network is not a matter of "the more thorough, the better"; rather, sufficient and appropriate sample learning should be carried out (just as more layers in a deep network are not always better). Complex images require more training samples so that more image features can be learned, but such a network tends to over-fit on images with simple content, blurring the super-resolution result. Conversely, reducing the training strength avoids over-fitting on simple images but causes under-fitting on complex images, reducing the naturalness and fidelity of the reconstructed image. How to train a network that achieves high-quality reconstruction for both complex and simple content is a problem that deep-learning-based super-resolution methods cannot avoid in practical applications.
Summary of the invention
In order to solve the above technical problem, the invention proposes a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network.
The technical scheme adopted by the invention is a remote sensing image super-resolution reconstruction method based on a content-aware deep learning network, characterized by comprising the following steps:
Step 1: collect high- and low-resolution remote sensing image samples and divide them into blocks;
Step 2: calculate the complexity of each image block, divide the blocks into high-, medium- and low-complexity classes, and form the training sample sets of high, medium and low complexity respectively;
Step 3: train three GAN networks, for high, medium and low complexity respectively, with the collected sample sets;
Step 4: calculate the complexity of the input image and select the corresponding GAN network for reconstruction according to that complexity.
Compared with existing image super-resolution methods, the present invention has the following advantages:
(1) By the simple idea of image classification, the invention overcomes the contradiction between over-fitting and under-fitting that is widespread in machine-learning-based super-resolution reconstruction, effectively improving the super-resolution reconstruction accuracy of remote sensing images;
(2) The deep learning network model on which the method is based is the GAN network, which during training does not depend on strictly aligned one-to-one pairs of high- and low-resolution sample blocks, thus improving its universality of application; it is particularly suitable for the multi-source, asynchronous imaging circumstances of high- and low-resolution images in the remote sensing field.
Detailed description of the invention
Fig. 1 is the flow chart of the embodiment of the present invention.
Detailed description of the embodiments
To make the present invention easy for those of ordinary skill in the art to understand and implement, it is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the implementation examples described here are merely intended to illustrate and explain the present invention, not to limit it.
Referring to Fig. 1, the remote sensing image super-resolution reconstruction method based on a content-aware deep learning network provided by the invention comprises the following steps:
Step 1: collect high- and low-resolution remote sensing image samples; high-resolution images are evenly cut into 128×128 image blocks, and low-resolution images are evenly cut into 64×64 image blocks;
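The even blocking of step 1 can be sketched as follows; this is a minimal illustration in which the function name and the choice to discard any ragged border are assumptions, not specified by the patent:

```python
import numpy as np

def cut_into_blocks(img: np.ndarray, size: int) -> list:
    """Evenly cut an image into non-overlapping size x size blocks.
    Any ragged border that does not fill a whole block is discarded."""
    rows, cols = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, rows - size + 1, size)
            for c in range(0, cols - size + 1, size)]

# high-resolution samples -> 128x128 blocks, low-resolution -> 64x64 blocks
hr_blocks = cut_into_blocks(np.zeros((256, 256), dtype=np.uint8), 128)
lr_blocks = cut_into_blocks(np.zeros((128, 128), dtype=np.uint8), 64)
```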
Step 2: calculate the complexity of each image block, divide the blocks into high-, medium- and low-complexity classes, and form the training sample sets of high, medium and low complexity respectively.
The principle and method of computing image complexity are as follows:
The complexity of image content comprises texture complexity and structural complexity. Information entropy and gray consistency characterize texture complexity well, while the edge ratio of targets in the image is suited to describing structural complexity. The content-complexity measure C of an image is a weighted combination of the information entropy H, gray consistency U and edge ratio R:
C = w_h×H + w_u×U + w_e×R;
where w_h, w_u and w_e are the respective weights, determined by experiment.
The calculation methods of information entropy, gray consistency and edge ratio are given below.
(1) Information entropy
Information entropy reflects the number of gray levels in an image and how often pixels of each gray level occur; the higher the entropy, the more complex the image texture. The image information entropy H is calculated as:
H = −Σ_{i=1}^{K} (n_i/n)·log₂(n_i/n);
where n is the total number of pixels, n_i is the number of pixels at gray level i, and K is the number of gray levels.
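A minimal numpy sketch of the entropy computation; the base-2 logarithm is an assumption, as the patent does not fix the log base:

```python
import numpy as np

def image_entropy(block: np.ndarray, levels: int = 256) -> float:
    """H = -sum_i p_i * log2(p_i), with p_i = n_i / n: Shannon entropy of
    the gray-level histogram; higher values indicate more complex texture."""
    counts = np.bincount(block.ravel(), minlength=levels).astype(np.float64)
    p = counts / counts.sum()
    p = p[p > 0]                      # empty gray levels contribute nothing
    return float(-(p * np.log2(p)).sum())
```

A flat block has entropy 0, while a block using all 256 gray levels equally approaches 8 bits.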
(2) Gray consistency
Gray consistency reflects the uniformity of an image: a smaller value corresponds to a simple image, a larger value to a complex one. The gray consistency formula is:
U = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (f(i,j) − f̄(i,j))²;
where M and N are respectively the numbers of rows and columns of the image, f(i,j) is the gray value at pixel (i,j), and f̄(i,j) is the gray mean of the 3×3 neighborhood centered on (i,j).
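A sketch of the gray-consistency measure; edge padding at the image border is an assumption here, since the patent does not specify border handling:

```python
import numpy as np

def gray_consistency(block: np.ndarray) -> float:
    """U = mean over pixels of (f(i,j) - mean of 3x3 neighborhood)^2.
    Smaller values mean a more uniform, simpler image."""
    f = block.astype(np.float64)
    M, N = f.shape
    fp = np.pad(f, 1, mode='edge')          # so every pixel has a 3x3 patch
    neigh_mean = sum(fp[di:di + M, dj:dj + N]
                     for di in range(3) for dj in range(3)) / 9.0
    return float(((f - neigh_mean) ** 2).sum() / (M * N))

flat_u = gray_consistency(np.full((5, 5), 7, dtype=np.uint8))   # perfectly uniform
```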
(3) Edge ratio
The number of targets in an image directly reflects its complexity: the more targets, the more complex the image tends to be, and vice versa. Since counting targets involves complicated image segmentation and is not easy to compute, while the amount of object edge indirectly reflects the number and complexity of objects in the image, edges can be used to describe image complexity. The proportion of object edges in an image is described by the edge ratio, calculated as:
R = E/(M×N);
where M and N are respectively the numbers of rows and columns of the image, and E is the number of edge pixels in the image. Object edges appear where the gray level changes significantly and can be obtained by difference algorithms; edge pixels are generally detected by edge detection operators (such as the Canny or Sobel operator).
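A sketch of the edge ratio using a simple central-difference gradient with a fixed threshold; the threshold value and the use of plain differences instead of a full Canny/Sobel pipeline are illustrative assumptions:

```python
import numpy as np

def edge_ratio(block: np.ndarray, thresh: float = 50.0) -> float:
    """R = E / (M*N): fraction of pixels whose gradient magnitude exceeds
    a threshold, i.e. pixels lying on object edges."""
    f = block.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]      # horizontal central difference
    gy[1:-1, :] = f[2:, :] - f[:-2, :]      # vertical central difference
    E = int((np.hypot(gx, gy) > thresh).sum())
    M, N = f.shape
    return E / (M * N)

flat = np.zeros((8, 8), dtype=np.uint8)     # no edges at all
step = flat.copy()
step[:, 4:] = 255                           # one vertical step edge
```

The three measures are then combined into the complexity score C = w_h×H + w_u×U + w_e×R described above.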
The training sample set of high complexity contains no fewer than 500,000 image blocks, that of medium complexity no fewer than 300,000, and that of low complexity no fewer than 200,000.
Step 3: train three GAN networks, for high, medium and low complexity respectively, with the collected sample sets.
The loss function of GAN network training is defined as follows:
The loss function of GAN network training comprises a content loss, a generation-adversarial loss and a total-variation loss. The content loss characterizes the distortion of image content; the generation-adversarial loss describes how distinguishable the statistical properties of the generated result are from those of natural images; the total-variation loss characterizes the continuity of image content. The overall loss function is a weighted combination of the three:
l = w_v×l_VGG + w_g×l_Gen + w_t×l_TV;
where w_v, w_g and w_t are the respective weights, determined by experiment.
The calculation method of each loss function is given below.
(1) Content loss
The traditional content loss function is expressed as the MSE (pixel mean-square error), which only considers the pixel-by-pixel loss of image content; MSE-based network training washes out the high-frequency components of image structure and makes the image overly blurry. To overcome this defect, a feature loss of the image is introduced here. Since manually defining and extracting valuable image features is itself complicated work, and deep learning has the ability to extract features automatically, this method borrows the hidden-layer features obtained by VGG network training for the measurement. Let φ_{i,j} denote the feature map obtained by the j-th convolutional layer before the i-th pooling layer in the VGG network; the feature loss is defined as the Euclidean distance between the VGG features of the reconstructed image G(I^LR) and the reference image I^HR, namely:
l_VGG = (1/(W_{i,j}·H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} (φ_{i,j}(I^HR)_{x,y} − φ_{i,j}(G(I^LR))_{x,y})²;
where W_{i,j} and H_{i,j} denote the dimensions of the VGG feature map.
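Given pre-computed VGG feature maps for the reference and reconstructed images (the VGG forward pass itself is not shown, and the H x W x channels layout is an assumption), the feature loss reduces to a normalized squared Euclidean distance; a numpy sketch:

```python
import numpy as np

def content_loss(feat_hr: np.ndarray, feat_sr: np.ndarray) -> float:
    """Squared Euclidean distance between the VGG feature maps of the
    reference and reconstructed images, normalized by the feature-map
    size W_ij * H_ij (maps assumed shaped H x W x channels)."""
    h, w = feat_hr.shape[0], feat_hr.shape[1]
    return float(((feat_hr - feat_sr) ** 2).sum() / (w * h))

feat_hr = np.ones((2, 2, 3))                # toy stand-ins for real VGG maps
feat_sr = np.zeros((2, 2, 3))
loss = content_loss(feat_hr, feat_sr)
```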
(2) Generation-adversarial loss
The generation-adversarial loss attends to the generative function of the GAN network and encourages the network to generate solutions consistent with the natural-image manifold, so that the discriminator cannot distinguish generated results from natural images. The generation-adversarial loss measures the discrimination probability of all training samples by the discriminator, with the formula:
l_Gen = Σ_{n=1}^{N} −log D(G(I^LR));
where D(G(I^LR)) denotes the probability that the discriminator D judges the reconstruction result G(I^LR) to be a natural image, and N denotes the total number of training samples.
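With the discriminator's output probabilities in hand (the discriminator network itself is not shown), the generation-adversarial loss is a sum of negative log-probabilities; a numpy sketch:

```python
import numpy as np

def adversarial_loss(d_probs) -> float:
    """l_Gen = sum_n -log D(G(I_LR)): low when the discriminator is fooled
    into assigning high 'natural image' probability to every reconstruction."""
    p = np.asarray(d_probs, dtype=np.float64)
    return float(-np.log(p).sum())

loss_fooled = adversarial_loss([1.0, 1.0])  # discriminator fully fooled
```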
(3) Total-variation loss
The total-variation loss is added to strengthen the local coherence of image content in the learning result, with the formula:
l_TV = (1/(W·H)) Σ_{x=1}^{W} Σ_{y=1}^{H} ‖∇G(I^LR)_{x,y}‖;
where W and H denote the width and height of the reconstructed image.
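A numpy sketch of the total-variation term, using the common anisotropic (absolute-difference) form; whether the patent intends the isotropic or anisotropic variant is not specified, so this choice is an assumption:

```python
import numpy as np

def tv_loss(img: np.ndarray) -> float:
    """l_TV: sum of absolute differences between vertically and horizontally
    adjacent pixels, normalized by the image area W*H; small values mean
    locally coherent (smooth) content."""
    f = img.astype(np.float64)
    H, W = f.shape[:2]
    dv = np.abs(f[1:, :] - f[:-1, :]).sum()   # vertical neighbors
    dh = np.abs(f[:, 1:] - f[:, :-1]).sum()   # horizontal neighbors
    return float((dv + dh) / (W * H))

flat_tv = tv_loss(np.full((4, 4), 9.0))       # constant image: zero variation
ramp_tv = tv_loss(np.arange(4.0).reshape(1, 4))
```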
Step 4: calculate the complexity of the input image and select the corresponding GAN network for reconstruction according to that complexity.
Specifically, this consists of the following sub-steps:
Step 4.1: evenly divide the input image into 16 equal sub-images, calculate the complexity of each sub-image, and judge whether it belongs to the high-, medium- or low-complexity type;
Step 4.2: select the corresponding GAN network according to the complexity type and carry out super-resolution reconstruction.
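Steps 4.1 and 4.2 can be sketched as follows. The complexity function, the class thresholds and the network lookup are all illustrative assumptions; the patent determines weights and class boundaries by experiment:

```python
import numpy as np

def classify(c: float, lo: float = 20.0, hi: float = 50.0) -> str:
    """Map a complexity score to a class; thresholds are placeholders."""
    return 'low' if c < lo else 'medium' if c < hi else 'high'

def route(img: np.ndarray, complexity_fn, networks: dict) -> list:
    """Split the input into a 4x4 grid of 16 sub-images, classify each by
    content complexity, and apply the matching GAN for reconstruction."""
    rows, cols = img.shape[:2]
    h, w = rows // 4, cols // 4
    out = []
    for i in range(4):
        for j in range(4):
            sub = img[i * h:(i + 1) * h, j * w:(j + 1) * w]
            label = classify(complexity_fn(sub))
            out.append((label, networks[label](sub)))
    return out

# toy demo: each 'network' is just the identity function here
nets = {k: (lambda b: b) for k in ('low', 'medium', 'high')}
result = route(np.zeros((8, 8), dtype=np.uint8), lambda b: 0.0, nets)
```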
The present invention classifies sample images by image content complexity, builds and trains deep network models of differing complexity, and then, according to the content complexity of the input image to be super-resolved, selects the corresponding network for reconstruction. Remote sensing images record large-scale scenes; because they are little affected by the fine details of individual ground targets, spatially homogeneous areas of consistent content complexity are numerous and large, such as cities, dry land, paddy fields, lakes, mountainous regions and other large-scale ground features, so remote sensing images are especially suited to pre-classified training and reconstruction.
The GAN deep learning network model is used here not only because the GAN network gives the best super-resolution performance at present, but also because the high- and low-spatial-resolution remote sensing images used as training samples come from different sources and are multi-temporal images shot asynchronously; one-to-one alignment in the pixel sense is impossible, which greatly limits the training of CNN networks, whereas the GAN network does not require strictly supervised learning, so this problem does not arise.
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should be understood that the above description of preferred embodiments is rather detailed and therefore should not be considered a limitation on the scope of patent protection of the invention. Those skilled in the art may, under the inspiration of the present invention and without departing from the scope protected by the claims, make replacements or variations, all of which fall within the protection scope of the invention; the claimed scope of the invention is determined by the appended claims.

Claims (10)

1. A remote sensing image super-resolution reconstruction method based on a content-aware deep learning network, characterized by comprising the following steps:
Step 1: collecting high- and low-resolution remote sensing image samples and dividing them into blocks;
Step 2: calculating the complexity of each image block, dividing the blocks into high-, medium- and low-complexity classes, and respectively forming the training sample sets of high, medium and low complexity;
wherein the complexity of said image block is calculated as:
C = w_h×H + w_u×U + w_e×R;
wherein C denotes the complexity of the image block, H denotes the image information entropy, U denotes the image gray consistency, R denotes the image edge ratio, and w_h, w_u and w_e are the respective weights, determined by experiment;
Step 3: respectively training three GAN networks of high, medium and low complexity with the collected sample sets;
wherein the loss function of GAN network training is defined as:
l = w_v×l_VGG + w_g×l_Gen + w_t×l_TV;
wherein l denotes the loss function of network training, l_VGG denotes the content loss function, l_Gen denotes the generation-adversarial loss function, and l_TV denotes the total-variation loss function; w_v, w_g and w_t are the respective weights, determined by experiment;
Step 4: calculating the complexity of the input image and selecting the corresponding GAN network for reconstruction according to the complexity;
the specific implementation of step 4 comprising the following sub-steps:
Step 4.1: evenly dividing the input image, calculating the complexity of each sub-image, and judging whether it belongs to the high-, medium- or low-complexity type;
Step 4.2: selecting the corresponding GAN network according to the complexity type and carrying out super-resolution reconstruction.
2. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that: in step 1, high-resolution images are evenly cut into 128×128 image blocks and low-resolution images are evenly cut into 64×64 image blocks.
3. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that the image information entropy H is calculated as:
H = −Σ_{i=1}^{K} (n_i/n)·log₂(n_i/n);
wherein n is the total number of pixels, n_i is the number of pixels at gray level i, and K is the number of gray levels.
4. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that the image gray consistency U is calculated as:
U = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (f(i,j) − f̄(i,j))²;
wherein M and N are respectively the numbers of rows and columns of the image, f(i,j) is the gray value at pixel (i,j), and f̄(i,j) is the gray mean of the 3×3 neighborhood centered on (i,j).
5. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that the image edge ratio R is calculated as:
R = E/(M×N);
wherein M and N are respectively the numbers of rows and columns of the image, and E is the number of edge pixels in the image, obtained by a difference algorithm.
6. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to any one of claims 1-5, characterized in that: in the training sample sets of high, medium and low complexity of step 2, the high-complexity training sample set contains no fewer than 500,000 image blocks, the medium-complexity training sample set no fewer than 300,000, and the low-complexity training sample set no fewer than 200,000.
7. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that the content loss function l_VGG is:
l_VGG = (1/(W_{i,j}·H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} (φ_{i,j}(I^HR)_{x,y} − φ_{i,j}(G(I^LR))_{x,y})²;
wherein φ_{i,j} denotes the feature map obtained by the j-th convolutional layer before the i-th pooling layer in the VGG network, W_{i,j} and H_{i,j} denote the dimensions of the VGG feature map, I^HR denotes the reference image, and G(I^LR) denotes the reconstructed image.
8. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that the generation-adversarial loss function l_Gen is:
l_Gen = Σ_{n=1}^{N} −log D(G(I^LR));
wherein G(I^LR) denotes the reconstructed image, D(G(I^LR)) denotes the probability that the discriminator D judges the reconstruction result G(I^LR) to be a natural image, and N denotes the total number of training samples.
9. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that the total-variation loss function l_TV is:
l_TV = (1/(W·H)) Σ_{x=1}^{W} Σ_{y=1}^{H} ‖∇G(I^LR)_{x,y}‖;
wherein G(I^LR) denotes the reconstructed image, and W and H denote the width and height of the reconstructed image.
10. The remote sensing image super-resolution reconstruction method based on a content-aware deep learning network according to claim 1, characterized in that: in step 4.1, the input image is evenly divided into 16 equal sub-images.
CN201710301990.6A 2017-05-02 2017-05-02 Remote sensing image super-resolution reconstruction method based on content-aware deep learning network Active CN107194872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710301990.6A CN107194872B (en) 2017-05-02 2017-05-02 Remote sensing image super-resolution reconstruction method based on content-aware deep learning network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710301990.6A CN107194872B (en) 2017-05-02 2017-05-02 Remote sensing image super-resolution reconstruction method based on content-aware deep learning network

Publications (2)

Publication Number Publication Date
CN107194872A CN107194872A (en) 2017-09-22
CN107194872B true CN107194872B (en) 2019-08-20

Family

ID=59872637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710301990.6A Active CN107194872B (en) 2017-05-02 2017-05-02 Remote sensing image super-resolution reconstruction method based on content-aware deep learning network

Country Status (1)

Country Link
CN (1) CN107194872B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767384B (en) * 2017-11-03 2021-12-03 电子科技大学 Image semantic segmentation method based on countermeasure training
WO2019162241A1 (en) * 2018-02-21 2019-08-29 Robert Bosch Gmbh Real-time object detection using depth sensors
CN108346133B (en) * 2018-03-15 2021-06-04 武汉大学 Deep learning network training method for super-resolution reconstruction of video satellite
CN108665509A (en) * 2018-05-10 2018-10-16 广东工业大学 A kind of ultra-resolution ratio reconstructing method, device, equipment and readable storage medium storing program for executing
CN108711141B (en) * 2018-05-17 2022-02-15 重庆大学 Motion blurred image blind restoration method using improved generation type countermeasure network
CN108876870B (en) * 2018-05-30 2022-12-13 福州大学 Domain mapping GANs image coloring method considering texture complexity
CN108830209B (en) * 2018-06-08 2021-12-17 西安电子科技大学 Remote sensing image road extraction method based on generation countermeasure network
CN108961217B (en) * 2018-06-08 2022-09-16 南京大学 Surface defect detection method based on regular training
CN108921791A (en) * 2018-07-03 2018-11-30 苏州中科启慧软件技术有限公司 Lightweight image super-resolution improved method based on adaptive important inquiry learning
CN110738597A (en) * 2018-07-19 2020-01-31 北京连心医疗科技有限公司 Size self-adaptive preprocessing method of multi-resolution medical image in neural network
CN109117944B (en) * 2018-08-03 2021-01-15 北京悦图数据科技发展有限公司 Super-resolution reconstruction method and system for ship target remote sensing image
CN109949219B (en) * 2019-01-12 2021-03-26 深圳先进技术研究院 Reconstruction method, device and equipment of super-resolution image
CN109903223B (en) * 2019-01-14 2023-08-25 北京工商大学 Image super-resolution method based on dense connection network and generation type countermeasure network
CN109785270A (en) * 2019-01-18 2019-05-21 四川长虹电器股份有限公司 A kind of image super-resolution method based on GAN
CN109951654B (en) 2019-03-06 2022-02-15 腾讯科技(深圳)有限公司 Video synthesis method, model training method and related device
CN110033033B (en) * 2019-04-01 2023-04-18 南京谱数光电科技有限公司 Generator model training method based on CGANs
CN110163852B (en) * 2019-05-13 2021-10-15 北京科技大学 Conveying belt real-time deviation detection method based on lightweight convolutional neural network
US11263726B2 (en) 2019-05-16 2022-03-01 Here Global B.V. Method, apparatus, and system for task driven approaches to super resolution
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN110807740B (en) * 2019-09-17 2023-04-18 北京大学 Image enhancement method and system for monitoring scene vehicle window image
CN110689086B (en) * 2019-10-08 2020-09-25 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN111144466B (en) * 2019-12-17 2022-05-13 武汉大学 Image sample self-adaptive depth measurement learning method
CN111260705B (en) * 2020-01-13 2022-03-15 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network
CN111275713B (en) * 2020-02-03 2022-04-12 武汉大学 Cross-domain semantic segmentation method based on countermeasure self-integration network
CN111915545B (en) * 2020-08-06 2022-07-05 中北大学 Self-supervision learning fusion method of multiband images
CN113139576B (en) * 2021-03-22 2024-03-12 广东省科学院智能制造研究所 Deep learning image classification method and system combining image complexity
CN113421189A (en) * 2021-06-21 2021-09-21 Oppo广东移动通信有限公司 Image super-resolution processing method and device and electronic equipment
CN113538246B (en) * 2021-08-10 2023-04-07 西安电子科技大学 Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN116402691B (en) * 2023-06-05 2023-08-04 四川轻化工大学 Image super-resolution method and system based on rapid image feature stitching

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825477A (en) * 2015-01-06 2016-08-03 南京理工大学 Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion
CN105931179A (en) * 2016-04-08 2016-09-07 武汉大学 Joint sparse representation and deep learning-based image super resolution method and system
CN106203269A (en) * 2016-06-29 2016-12-07 武汉大学 A kind of based on can the human face super-resolution processing method of deformation localized mass and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9589323B1 (en) * 2015-08-14 2017-03-07 Sharp Laboratories Of America, Inc. Super resolution image enhancement technique


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于深度学***,等;《铁道警察学院学报》;20161231;第26卷(第121期);第5-10页

Also Published As

Publication number Publication date
CN107194872A (en) 2017-09-22

Similar Documents

Publication Publication Date Title
CN107194872B (en) Remote sensing image super-resolution reconstruction method based on content-aware deep learning network
Hu et al. Revisiting single image depth estimation: Toward higher resolution maps with accurate object boundaries
CN111524135B (en) Method and system for detecting defects of tiny hardware fittings of power transmission line based on image enhancement
CN108961229A (en) Cardiovascular OCT image based on deep learning easily loses plaque detection method and system
WO2018023734A1 (en) Significance testing method for 3d image
CN109191476A (en) The automatic segmentation of Biomedical Image based on U-net network structure
CN106780546B (en) The personal identification method of motion blur encoded point based on convolutional neural networks
CN109766835A (en) The SAR target identification method of confrontation network is generated based on multi-parameters optimization
CN106991686B (en) A kind of level set contour tracing method based on super-pixel optical flow field
CN107564022A (en) Saliency detection method based on Bayesian Fusion
CN104282008B (en) The method and apparatus that Texture Segmentation is carried out to image
CN112926652B (en) Fish fine granularity image recognition method based on deep learning
CN110084782A (en) Full reference image quality appraisement method based on saliency detection
Guo et al. Liver steatosis segmentation with deep learning methods
CN101976444A (en) Pixel type based objective assessment method of image quality by utilizing structural similarity
CN104966348B (en) A kind of bill images key element integrality detection method and system
CN107123130A (en) Kernel correlation filtering target tracking method based on superpixel and hybrid hash
CN108805825A (en) A kind of reorientation image quality evaluating method
CN104102928A (en) Remote sensing image classification method based on texton
CN104021567B (en) Based on the fuzzy altering detecting method of image Gauss of first numeral law
CN103984963A (en) Method for classifying high-resolution remote sensing image scenes
CN110211193A (en) Three dimensional CT interlayer image interpolation reparation and super-resolution processing method and device
Luo et al. Bi-GANs-ST for perceptual image super-resolution
CN116630971A (en) Wheat scab spore segmentation method based on CRF_Resunate++ network
Zhang et al. Real-Time object detection for 360-degree panoramic image using CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant