CN106250856A - High-resolution image scene classification method based on unsupervised feature learning - Google Patents

High-resolution image scene classification method based on unsupervised feature learning

Info

Publication number
CN106250856A
Authority
CN
China
Prior art keywords
image
feature
image block
significance
rho
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610629096.7A
Other languages
Chinese (zh)
Inventor
张帆
杜博
张良培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201610629096.7A priority Critical patent/CN106250856A/en
Publication of CN106250856A publication Critical patent/CN106250856A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/176: Urban or other man-made structures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/35: Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38: Outdoor scenes
    • G06V20/39: Urban scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of remote sensing image processing, and in particular relates to a high-resolution remote sensing image scene classification method based on unsupervised feature learning. The method first uses saliency detection to extract the saliency information of an image and, according to the saliency at different image locations, randomly samples image blocks of a given size guided by that saliency. A feature-sparsity objective function and a minimum-reconstruction-residual objective function are then used to learn a feature representation of the image blocks. Finally, the learned feature operators are convolved with the remote sensing image to be identified to extract image features, and the extracted features are fed into a support vector machine for classification. By combining saliency detection with sparse autoencoding for feature learning, the learned features are more robust for scene recognition.

Description

High-resolution image scene classification method based on unsupervised feature learning
Technical field
The invention belongs to the field of remote sensing image processing, and in particular relates to a high-resolution remote sensing image scene classification method based on unsupervised feature learning.
Background technology
High-resolution remote sensing imagery (generally referring to remote sensing images with a spatial resolution of 1 meter or better) provides abundant spatial information thanks to its very high spatial resolution, offering fine detail for ground-object recognition, and has been widely applied in many fields. However, because the spatial resolution is high, the scene to be identified usually contains pixels produced by a mixture of ground objects of different categories; these objects often have distinct structural information but, owing to the relatively low spectral resolution, are difficult to distinguish. With the maturing of high-resolution imaging technology and falling costs, high-resolution images are used more and more widely, yet scene recognition in such images still faces several limitations:
1) Because of the high resolution, a scene is often composed of a variety of complex ground objects, which leads to the phenomenon of different objects sharing the same spectrum; accurate interpretation therefore becomes difficult, since it cannot be determined which material a given pixel actually belongs to.
2) Scene recognition requires exploiting the spatial, structural, textural, and semantic information of the image and interpreting the image at the semantic level, but adaptive feature learning remains a difficulty.
3) Some algorithms only make use of shallow information in high-resolution images, such as hand-crafted structural and textural features, and often ignore the structural characteristics of the data itself. Such methods cannot efficiently extract a feature representation of the image.
Accordingly, a method is needed that can exploit the high-level features of the data itself while also providing a good semantic representation, in order to perform adaptive feature learning.
Summary of the invention
The present invention provides an unsupervised feature learning method to solve the problems of existing methods, offering a method that can exploit the high-level features of the data itself while also providing a good semantic representation.
The technical scheme provided by the present invention is a high-resolution image scene classification method based on unsupervised feature learning, comprising the following steps:
Step 1, saliency detection: the local and global similarity of image blocks at different positions in the image is used to compute the saliency of the image, implemented as follows.
For different image blocks, an image block similarity function (1) is first defined,
$$ d(x_i, x_j) = \frac{d_{color}(x_i, x_j)}{1 + c \cdot d_{position}(x_i, x_j)} \qquad (1) $$
where d_color is the Euclidean distance between the image blocks in the CIELab color space, d_position is the Euclidean distance between the image blocks in image coordinate space, x_i and x_j are image blocks at arbitrary positions, and c is a constant. The K image blocks x_k most similar to a given block x_i are selected, where k = 1, 2, ..., K, and the saliency of the position of that image block is finally computed with formula (2),
$$ S_i = 1 - \exp\left\{ -\frac{1}{K} \sum_{k=1}^{K} d(x_i, x_k) \right\} \qquad (2) $$
where S_i denotes the saliency of image block x_i;
Step 2, saliency-specific image block sampling: according to the saliency information of the image blocks obtained by the saliency detection described in step 1, image blocks of a preset fixed size are randomly selected from the image;
Step 3, an image block feature representation is learned using a feature-sparsity objective function and a minimum-reconstruction-residual objective function, where the sparsity and the minimum reconstruction residual of the target features are optimized simultaneously according to objective optimization formula (5),
$$ J(X, Z) = \frac{1}{2} \sum_{i=1}^{m} \| x_i - z_i \|^2 + \frac{\lambda}{2} \| W \|^2 \qquad (3) $$
$$ KL(\rho \| \hat{\rho}) = \rho \log\frac{\rho}{\hat{\rho}} + (1 - \rho) \log\frac{1 - \rho}{1 - \hat{\rho}} \qquad (4) $$
$$ Y = J(X, Z) + \beta \sum_{j=1}^{K} KL(\rho \| \hat{\rho}_j) \qquad (5) $$
where J(X, Z) measures the residual between the original image blocks and the decoded (reconstructed) image blocks plus the weight magnitude; X denotes the original image blocks, Z the reconstructed image blocks, i the image block index, m the number of image blocks, and z_i the reconstruction of image block x_i; W denotes the weights of the objective function, i.e. the feature extraction operators to be learned, and λ is the weight of the regularization term; KL(ρ||ρ̂) is the neuron sparsity term, ρ is the target feature sparsity, ρ̂ is the current feature sparsity, and β is the sparsity weight; Y is the global error function;
Step 4, data augmentation and random dropout are used to further strengthen the image block feature representation learned in step 3;
Step 5, the feature operators are solved and updated: according to objective optimization formula (5) described in step 3, stochastic gradient descent is used to solve for and update the feature operators;
Step 6, it is judged whether the iteration has reached the maximum number of training rounds; if so, go to step 7, otherwise go to step 5;
Step 7, the feature operators are used to perform a convolutional neural network operation on the image to extract image features;
Step 8, a support vector machine is used to perform scene recognition.
Furthermore, the data augmentation described in step 4 is realized by applying random rotations and translations to the images, and the random dropout is realized by randomly masking a portion of the neurons during network training.
Compared with the prior art, the beneficial effects of the invention are as follows:
The present invention uses sparse autoencoding to learn features adaptively, and introduces data augmentation and random dropout to update the optimal solution of the feature operators, avoiding the problem of data overfitting. In the design of the objective function, a feature-sparsity objective and a minimum-reconstruction-residual objective are combined for feature learning, so that features can be learned at a reasonable time cost and the learned features are more robust for scene recognition.
Description of the drawings
Fig. 1 is a schematic diagram of the unsupervised feature learning in an embodiment of the present invention.
Fig. 2 is a schematic diagram of image feature extraction by the convolutional neural network in an embodiment of the present invention.
Detailed description of the invention
The present invention is further described below with reference to the drawings and specific embodiments.
The present invention provides a method for high-resolution remote sensing image scene classification that treats feature learning in scene classification as an unsupervised learning problem: saliency detection is used to extract the saliency information of the image, sparse autoencoding is used to learn a feature representation from the image, and combining a feature-sparsity objective function with a minimum-reconstruction-residual objective function makes the feature learning result robust.
A saliency detection algorithm is introduced; saliency detection is a feature learning algorithm inspired by the human visual mechanism. Image blocks with salient features generally contain the most representative features of the image, which helps to ensure that an optimal feature representation is learned. The embodiment uses image blocks from spatial positions containing salient information for feature coding. The reason for learning the coding in this way is that image blocks containing saliency information usually carry the most representative spatial features of the image together with sufficient semantic information. From the statistics of the image it is known that texture and structural information in an image are stationary and different local regions are largely similar; using saliency information to find representative image blocks therefore makes it possible to learn a globally optimal feature representation with higher probability.
Sparse autoencoding is introduced to learn the feature coding. The feature representations of different images usually contain different texture, structural, and semantic information. The sparse autoencoding method adaptively learns the intrinsic structure and feature representation of the image, so that the learned features have the best scene separability. At the same time, a sparsity constraint is used to optimize the learned features and make them more robust. To avoid the problem of data overfitting during feature learning, the embodiment introduces data augmentation and random dropout to update the position of the optimal solution. Adaptively dropping individual dimensions of the feature representation makes the learning algorithm more robust and at the same time improves the effectiveness of the features.
Fig. 1 is a schematic diagram of the unsupervised feature learning, which is carried out in two stages. The first stage is saliency-based image block sampling: saliency detection is first used to extract the saliency information of the image and, according to the saliency at different positions, image blocks (e.g., of size 10 × 10) are randomly sampled from the image guided by that saliency. The second stage uses sparse autoencoding for feature learning: based on the selected image blocks, the sparse autoencoder adaptively learns feature extraction operators. The sparse autoencoder is solved with stochastic gradient descent, a traditional optimization algorithm for multilayer neural networks. Finally, the learned feature operators are convolved with the image data of the remote sensing image to be identified to extract image features, and the extracted features are fed into a support vector machine for classification, for example into residential area, runway, grassland, airport, and industrial land.
The procedure provided by the embodiment specifically includes the following steps:
Step 1, saliency detection: the local and global similarity of image blocks at different positions in the image is used to compute the saliency of the image. For different image blocks (of size 10 × 10 in the embodiment), an image block similarity function (1) is first defined,
$$ d(x_i, x_j) = \frac{d_{color}(x_i, x_j)}{1 + c \cdot d_{position}(x_i, x_j)} \qquad (1) $$
where d_color is the Euclidean distance between the image blocks in the CIELab color space, d_position is the Euclidean distance between the image blocks in image coordinate space, x_i and x_j are image blocks at arbitrary positions, and c is a constant; the present embodiment takes c = 3. Under this function, the larger the value of d, the less similar the two image blocks are. The K image blocks x_k most similar to a given block x_i are selected (k = 1, 2, ..., K; in a concrete implementation K can be preset by those skilled in the art, with a suggested value of 9 to 15), and formula (2) is finally used to compute the saliency of this image block position,
$$ S_i = 1 - \exp\left\{ -\frac{1}{K} \sum_{k=1}^{K} d(x_i, x_k) \right\} \qquad (2) $$
where S_i denotes the saliency of image block x_i.
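The saliency measure of formulas (1) and (2) can be prototyped in a few lines. The following is a minimal numpy sketch, assuming the image has already been converted to the CIELab color space and cut into flattened 10 × 10 blocks with known centre positions; the function name compute_saliency and the all-pairs distance computation are illustrative choices, not part of the patent.

    import numpy as np

    def compute_saliency(blocks_lab, positions, c=3.0, K=10):
        """Patch saliency following formulas (1) and (2).

        blocks_lab : (N, D) array, each row a flattened CIELab image block.
        positions  : (N, 2) array of block-centre coordinates in the image.
        Returns an (N,) array of saliency values S_i in [0, 1); assumes N > K.
        """
        # Pairwise Euclidean distances in color space and in image space.
        d_color = np.linalg.norm(blocks_lab[:, None, :] - blocks_lab[None, :, :], axis=-1)
        d_pos = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

        # Dissimilarity d(x_i, x_j) = d_color / (1 + c * d_position)   -- formula (1)
        d = d_color / (1.0 + c * d_pos)

        # For each block, keep the K most similar blocks (smallest d),
        # excluding the block itself on the diagonal.
        np.fill_diagonal(d, np.inf)
        nearest = np.sort(d, axis=1)[:, :K]

        # S_i = 1 - exp{-(1/K) * sum_k d(x_i, x_k)}                    -- formula (2)
        return 1.0 - np.exp(-nearest.mean(axis=1))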
Step 2, saliency-specific image block sampling: according to the saliency information of the image, image blocks of a preset fixed size (e.g., 10 × 10) are randomly selected from the image. The present embodiment treats blocks with saliency above the saliency threshold 0.75 as blocks containing salient information, and blocks below the non-saliency threshold 0.25 as non-salient blocks; 200,000 image blocks are randomly selected from the remote sensing data to be identified, of which 75% are salient blocks and 25% are non-salient blocks. In a concrete implementation, the saliency threshold, the non-saliency threshold, and the number of image blocks can be preset by those skilled in the art.
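A small sketch of how such a mixed sampling step might look, continuing the compute_saliency helper above. The 0.75/0.25 thresholds and the 75%/25% split follow the embodiment; the patch-extraction details (blocks centred inside the image, sampling with replacement) are assumptions for illustration.

    import numpy as np

    def sample_patches(image, saliency, positions, n_total=200_000,
                       salient_ratio=0.75, t_hi=0.75, t_lo=0.25, size=10, seed=0):
        """Randomly draw image blocks, 75% from salient positions and 25% from non-salient ones.

        Assumes every position lies at least size // 2 pixels inside the image border
        and that both candidate sets are non-empty.
        """
        rng = np.random.default_rng(seed)
        hi_idx = np.flatnonzero(saliency > t_hi)   # salient candidates
        lo_idx = np.flatnonzero(saliency < t_lo)   # non-salient candidates

        n_hi = int(n_total * salient_ratio)
        chosen = np.concatenate([
            rng.choice(hi_idx, n_hi, replace=True),
            rng.choice(lo_idx, n_total - n_hi, replace=True),
        ])

        half = size // 2
        patches = [image[r - half:r + half, c - half:c + half]
                   for r, c in positions[chosen]]
        return np.stack(patches)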
Step 3, an image block feature representation is learned using the feature-sparsity objective function and the minimum-reconstruction-residual objective function.
A sparse autoencoder is used to adaptively learn feature operators from the image blocks. A sparse autoencoder is a neural network containing one hidden layer that encodes and decodes the input image to obtain an optimal coding of the image while minimizing the decoding error. The sparsity and the minimum reconstruction residual of the target features are optimized simultaneously according to objective optimization formula (5):
$$ J(X, Z) = \frac{1}{2} \sum_{i=1}^{m} \| x_i - z_i \|^2 + \frac{\lambda}{2} \| W \|^2 \qquad (3) $$
$$ KL(\rho \| \hat{\rho}) = \rho \log\frac{\rho}{\hat{\rho}} + (1 - \rho) \log\frac{1 - \rho}{1 - \hat{\rho}} \qquad (4) $$
$$ Y = J(X, Z) + \beta \sum_{j=1}^{K} KL(\rho \| \hat{\rho}_j) \qquad (5) $$
where J(X, Z) measures the residual between the original image blocks and the decoded (reconstructed) image blocks plus the weight magnitude; X denotes the original image blocks, Z the reconstructed image blocks, i the image block index, m the number of image blocks (200,000 in the embodiment), and z_i the reconstruction of image block x_i; W denotes the weights of the objective function, namely the feature extraction operators to be learned, and λ is the weight of the regularization term, used to constrain the magnitude of W, typically 0.005. KL(ρ||ρ̂) is the neuron sparsity term, where KL denotes the Kullback-Leibler divergence [1]; ρ is the target feature sparsity, used to keep the features sparse, which can be preset by those skilled in the art in a concrete implementation and typically lies between 0 and 1; ρ̂ is the current feature sparsity; β is the sparsity weight, typically between 0 and 1. Y is the global error function.
In the embodiment, the feature-sparsity objective function and the minimum-reconstruction-residual objective function are combined to select optimal features. The feature-sparsity objective requires the features to be diverse and can effectively learn a sparse representation of the features; the minimum-reconstruction-residual objective computes the reconstruction error of the features and learns an optimal combined representation. Using the feature-sparsity objective and the minimum-reconstruction-residual objective simultaneously during feature learning therefore learns an optimal combined representation model while guaranteeing the sparsity of the features.
[1] S. Kullback and R. A. Leibler, "On information and sufficiency," Ann. Math. Stat., vol. 22, pp. 79–86, 1951.
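The objective of formulas (3) to (5) can be written down directly. Below is a minimal numpy sketch of the global error Y for a single-hidden-layer sparse autoencoder; the sigmoid hidden layer, the untied decoder weights, and the function name sparse_ae_loss are implementation assumptions, since the patent only specifies the objective itself.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def sparse_ae_loss(X, W, b_enc, W_dec, b_dec, lam=0.005, beta=0.1, rho=0.05):
        """Global error Y of formula (5): reconstruction + weight decay + KL sparsity.

        X : (m, D) matrix of flattened image blocks, one block per row.
        W : (D, K) encoder weights -- the feature operators to be learned.
        """
        H = sigmoid(X @ W + b_enc)          # hidden activations
        Z = H @ W_dec + b_dec               # reconstructed blocks z_i

        # J(X, Z) = 1/2 * sum_i ||x_i - z_i||^2 + lambda/2 * ||W||^2    -- formula (3)
        J = 0.5 * np.sum((X - Z) ** 2) + 0.5 * lam * np.sum(W ** 2)

        # KL(rho || rho_hat_j) per hidden unit, rho_hat = mean activation -- formula (4)
        rho_hat = H.mean(axis=0)
        kl = rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))

        # Y = J + beta * sum_j KL_j                                       -- formula (5)
        return J + beta * np.sum(kl)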
Step 4, data augmentation and random dropout are used to further strengthen the learned image block feature representation. Neural network training requires a large amount of data, which is currently difficult to obtain. The embodiment achieves data augmentation by applying random rotations and translations to the images, which also simulates the variation between different data. At the same time, to further improve the data features learned in step 3, a random dropout mechanism is added: during network training a portion of the neurons is randomly masked, which adds randomness and makes the features more robust [2].
[2] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," arXiv preprint arXiv:1207.0580, 2012.
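A minimal sketch of the two mechanisms, assuming the augmentation is applied to the sampled 10 × 10 blocks; the 90-degree rotation steps, the shift range, and the 0.5 drop probability are illustrative choices, since the patent only specifies random rotation/translation and random masking of neurons.

    import numpy as np

    def augment(blocks, seed=0, max_shift=2):
        """Data augmentation: random rotation and small random translation of each block."""
        rng = np.random.default_rng(seed)
        out = []
        for b in blocks:
            b = np.rot90(b, k=rng.integers(4))                 # random rotation
            dy, dx = rng.integers(-max_shift, max_shift + 1, 2)
            b = np.roll(np.roll(b, dy, axis=0), dx, axis=1)    # random translation
            out.append(b)
        return np.stack(out)

    def dropout_mask(H, p_drop=0.5, seed=0):
        """Random dropout: mask a portion of the hidden activations during training."""
        rng = np.random.default_rng(seed)
        return H * (rng.random(H.shape) >= p_drop)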
Step 5, the feature operators are solved and updated. According to objective optimization formula (5) in step 3, stochastic gradient descent [3] is used to solve for and update the feature operators.
[3] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533–536, 1986.
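Steps 5 and 6 amount to a mini-batch gradient descent loop on the global error Y. The sketch below reuses sparse_ae_loss from step 3 and, purely for brevity, estimates the gradient of the encoder weights numerically; a practical implementation would compute all gradients (including the decoder weights and biases) analytically by backpropagation. The hidden-layer size, learning rate, and batch size are illustrative choices.

    import numpy as np

    def numeric_grad(f, W, eps=1e-5):
        """Finite-difference gradient; stands in for backpropagation in this sketch."""
        g = np.zeros_like(W)
        it = np.nditer(W, flags=['multi_index'])
        while not it.finished:
            idx = it.multi_index
            old = W[idx]
            W[idx] = old + eps; fp = f(W)
            W[idx] = old - eps; fm = f(W)
            W[idx] = old
            g[idx] = (fp - fm) / (2 * eps)
            it.iternext()
        return g

    def sgd_train(X, n_hidden=64, epochs=10, batch=256, lr=0.01, seed=0):
        """Mini-batch stochastic gradient descent on the global error Y (steps 5 and 6)."""
        rng = np.random.default_rng(seed)
        D = X.shape[1]
        W = 0.01 * rng.standard_normal((D, n_hidden))      # feature operators to learn
        W_dec = 0.01 * rng.standard_normal((n_hidden, D))
        b_enc, b_dec = np.zeros(n_hidden), np.zeros(D)
        for _ in range(epochs):                            # step 6: stop at the maximum number of rounds
            for start in range(0, len(X), batch):
                Xb = X[start:start + batch]
                loss = lambda Wc: sparse_ae_loss(Xb, Wc, b_enc, W_dec, b_dec)
                W = W - lr * numeric_grad(loss, W)         # step 5: update the feature operators
        return W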
Step 6, it is judged whether the iteration has reached the maximum number of training rounds; if so, go to step 7, otherwise go to step 5.
Step 7, the feature operators are used to perform a convolutional neural network operation on the image to extract image features.
Fig. 2 is a schematic diagram of image feature extraction by the convolutional neural network. The input is a high-resolution image with k = 3 bands. The learned feature operators are first convolved with the image, where n is the size of the input image (n × n), w is the size of a feature operator (w × w), and s is the stride of the convolution, i.e. the number of pixels between successive convolution positions; this finally yields a feature map of size (n − w)/s + 1. Here k also denotes the number of feature operators: after convolution with the k feature operators, k feature maps (i.e. k bands) are obtained, one feature map per feature operator. The feature maps are then pooled to reduce their dimensionality and obtain more effective features. Pooling is a way of computing feature statistics: a feature map is divided into a grid and the mean or maximum of each grid cell is taken. As shown in Fig. 2, each feature map is divided into a 4 × 4 grid and the maximum of each cell is taken to obtain the pooled feature map. Finally the feature maps are vectorized, each feature map contributing a 16-dimensional feature; after concatenation this gives a final classification feature of k × 16 dimensions (denoted k*16 in the figure), which is fed into the support vector machine for classification.
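A minimal numpy sketch of this convolution, 4 × 4 max-pooling, and vectorization step for a single band; handling the k input bands exactly as in Fig. 2, the stride, and the pooling grid size are assumptions, and the function name extract_features is illustrative.

    import numpy as np

    def extract_features(image, operators, stride=1, grid=4):
        """Convolve a single-band image with the learned operators, max-pool on a 4x4 grid, vectorize.

        image     : (n, n) array.
        operators : (k, w, w) learned feature operators.
        Returns a (k * grid * grid,) classification feature vector (k * 16 for grid = 4).
        """
        k, w, _ = operators.shape
        n = image.shape[0]
        out = (n - w) // stride + 1                      # feature-map size (n - w)/s + 1
        feats = []
        for op in operators:
            fmap = np.empty((out, out))
            for r in range(out):
                for c in range(out):
                    patch = image[r * stride:r * stride + w, c * stride:c * stride + w]
                    fmap[r, c] = np.sum(patch * op)      # convolution response at this position
            cell = out // grid                           # 4x4 grid max pooling
            pooled = fmap[:cell * grid, :cell * grid].reshape(grid, cell, grid, cell).max(axis=(1, 3))
            feats.append(pooled.ravel())                 # 16-dimensional vector per feature map
        return np.concatenate(feats)                     # the k x 16 classification feature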
Step 8, a support vector machine [4] is used to perform scene recognition.
[4] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
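For completeness, a minimal scikit-learn sketch of the final classification step; the RBF kernel and default hyperparameters are assumptions, since the patent only specifies that a support vector machine is used.

    from sklearn.svm import SVC

    def train_scene_classifier(features, labels):
        """Train an SVM scene classifier on the extracted k*16-dimensional features.

        features : (n_scenes, k * 16) array built with extract_features.
        labels   : scene classes such as residential area, runway, grassland, airport.
        """
        clf = SVC(kernel='rbf')
        clf.fit(features, labels)
        return clf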
The above are the implementation steps of the saliency-based unsupervised feature learning method involved in the embodiment of the present invention. Through the saliency detection algorithm, sparse autoencoder learning, data augmentation, and the introduction of random dropout, an unsupervised feature learning operator for the data can be extracted at a reasonable time cost.
The following points should also be noted in a concrete implementation:
Regarding the setting of the maximum number of iterations in the optimization stage: since the initial values are set randomly, more iterations may be needed to find the optimal solution of the feature operators, and random dropout adds further randomness that slows convergence; those skilled in the art can therefore preset the maximum number of iterations based on experience.
In a concrete implementation, the method provided by the present invention can be run as an automatic process based on software technology.
It should be emphasized that the embodiments described herein are illustrative rather than restrictive. The present invention therefore includes, but is not limited to, the embodiments described in this detailed description; other embodiments derived by those skilled in the art from the technical scheme of the present invention also fall within the protection scope of the present invention.

Claims (2)

1. A high-resolution image scene classification method based on unsupervised feature learning, characterized by comprising the following steps:
Step 1, saliency detection: the local and global similarity of image blocks at different positions in the image is used to compute the saliency of the image, implemented as follows.
For different image blocks, an image block similarity function (1) is first defined,
$$ d(x_i, x_j) = \frac{d_{color}(x_i, x_j)}{1 + c \cdot d_{position}(x_i, x_j)} \qquad (1) $$
where d_color is the Euclidean distance between the image blocks in the CIELab color space, d_position is the Euclidean distance between the image blocks in image coordinate space, x_i and x_j are image blocks at arbitrary positions, and c is a constant. The K image blocks x_k most similar to a given block x_i are selected, where k = 1, 2, ..., K, and the saliency of the position of that image block is finally computed with formula (2),
$$ S_i = 1 - \exp\left\{ -\frac{1}{K} \sum_{k=1}^{K} d(x_i, x_k) \right\} \qquad (2) $$
where S_i denotes the saliency of image block x_i;
Step 2, saliency-specific image block sampling: according to the saliency information of the image blocks obtained by the saliency detection described in step 1, image blocks of a preset fixed size are randomly selected from the image;
Step 3, an image block feature representation is learned using a feature-sparsity objective function and a minimum-reconstruction-residual objective function, where the sparsity and the minimum reconstruction residual of the target features are optimized simultaneously according to objective optimization formula (5),
$$ J(X, Z) = \frac{1}{2} \sum_{i=1}^{m} \| x_i - z_i \|^2 + \frac{\lambda}{2} \| W \|^2 \qquad (3) $$
$$ KL(\rho \| \hat{\rho}) = \rho \log\frac{\rho}{\hat{\rho}} + (1 - \rho) \log\frac{1 - \rho}{1 - \hat{\rho}} \qquad (4) $$
$$ Y = J(X, Z) + \beta \sum_{j=1}^{K} KL(\rho \| \hat{\rho}_j) \qquad (5) $$
where J(X, Z) measures the residual between the original image blocks and the decoded (reconstructed) image blocks plus the weight magnitude; X denotes the original image blocks, Z the reconstructed image blocks, i the image block index, m the number of image blocks, and z_i the reconstruction of image block x_i; W denotes the weights of the objective function, i.e. the feature extraction operators to be learned, and λ is the weight of the regularization term; KL(ρ||ρ̂) is the neuron sparsity term, ρ is the target feature sparsity, ρ̂ is the current feature sparsity, and β is the sparsity weight; Y is the global error function;
Step 4, data augmentation and random dropout are used to further strengthen the image block feature representation learned in step 3;
Step 5, the feature operators are solved and updated: according to objective optimization formula (5) described in step 3, stochastic gradient descent is used to solve for and update the feature operators;
Step 6, it is judged whether the iteration has reached the maximum number of training rounds; if so, go to step 7, otherwise go to step 5;
Step 7, the feature operators are used to perform a convolutional neural network operation on the image to extract image features;
Step 8, a support vector machine is used to perform scene recognition.
2. The high-resolution image scene classification method based on unsupervised feature learning according to claim 1, characterized in that: the data augmentation described in step 4 is realized by applying random rotations and translations to the images, and the random dropout is realized by randomly masking a portion of the neurons during network training.
CN201610629096.7A 2016-08-03 2016-08-03 High-resolution image scene classification method based on unsupervised feature learning Pending CN106250856A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610629096.7A CN106250856A (en) 2016-08-03 2016-08-03 High-resolution image scene classification method based on unsupervised feature learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610629096.7A CN106250856A (en) 2016-08-03 2016-08-03 High-resolution image scene classification method based on unsupervised feature learning

Publications (1)

Publication Number Publication Date
CN106250856A true CN106250856A (en) 2016-12-21

Family

ID=57607228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610629096.7A Pending CN106250856A (en) 2016-08-03 2016-08-03 High-resolution image scene classification method based on unsupervised feature learning

Country Status (1)

Country Link
CN (1) CN106250856A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358262A (en) * 2017-07-13 2017-11-17 京东方科技集团股份有限公司 The sorting technique and sorter of a kind of high-definition picture
CN108491856A (en) * 2018-02-08 2018-09-04 西安电子科技大学 A kind of image scene classification method based on Analysis On Multi-scale Features convolutional neural networks
CN108898145A (en) * 2018-06-15 2018-11-27 西南交通大学 A kind of image well-marked target detection method of combination deep learning
CN109784283A (en) * 2019-01-21 2019-05-21 陕西师范大学 Based on the Remote Sensing Target extracting method under scene Recognition task
CN110781926A (en) * 2019-09-29 2020-02-11 武汉大学 Support vector machine multi-spectral-band image analysis method based on robust auxiliary information reconstruction
CN111028255A (en) * 2018-10-10 2020-04-17 千寻位置网络有限公司 Farmland area pre-screening method and device based on prior information and deep learning
CN111914850A (en) * 2019-05-07 2020-11-10 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium
CN112598043A (en) * 2020-12-17 2021-04-02 杭州电子科技大学 Cooperative significance detection method based on weak supervised learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942564A (en) * 2014-04-08 2014-07-23 武汉大学 High-resolution remote sensing image scene classifying method based on unsupervised feature learning
CN104778704A (en) * 2015-04-20 2015-07-15 北京航空航天大学 Detection method for image area of interest based on random glancing image sparse signal reconstruction
CN105574540A (en) * 2015-12-10 2016-05-11 中国科学院合肥物质科学研究院 Method for learning and automatically classifying pest image features based on unsupervised learning technology

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942564A (en) * 2014-04-08 2014-07-23 武汉大学 High-resolution remote sensing image scene classifying method based on unsupervised feature learning
CN104778704A (en) * 2015-04-20 2015-07-15 北京航空航天大学 Detection method for image area of interest based on random glancing image sparse signal reconstruction
CN105574540A (en) * 2015-12-10 2016-05-11 中国科学院合肥物质科学研究院 Method for learning and automatically classifying pest image features based on unsupervised learning technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN ZHANG等: "Saliency-Guided Unsupervised Feature Learning for Scene Classification", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358262A (en) * 2017-07-13 2017-11-17 京东方科技集团股份有限公司 The sorting technique and sorter of a kind of high-definition picture
CN107358262B (en) * 2017-07-13 2020-01-14 京东方科技集团股份有限公司 High-resolution image classification method and classification device
CN108491856A (en) * 2018-02-08 2018-09-04 西安电子科技大学 A kind of image scene classification method based on Analysis On Multi-scale Features convolutional neural networks
CN108898145A (en) * 2018-06-15 2018-11-27 西南交通大学 A kind of image well-marked target detection method of combination deep learning
CN111028255A (en) * 2018-10-10 2020-04-17 千寻位置网络有限公司 Farmland area pre-screening method and device based on prior information and deep learning
CN111028255B (en) * 2018-10-10 2023-07-21 千寻位置网络有限公司 Farmland area pre-screening method and device based on priori information and deep learning
CN109784283A (en) * 2019-01-21 2019-05-21 陕西师范大学 Based on the Remote Sensing Target extracting method under scene Recognition task
CN109784283B (en) * 2019-01-21 2021-02-09 陕西师范大学 Remote sensing image target extraction method based on scene recognition task
CN111914850A (en) * 2019-05-07 2020-11-10 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium
CN111914850B (en) * 2019-05-07 2023-09-19 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium
CN110781926A (en) * 2019-09-29 2020-02-11 武汉大学 Support vector machine multi-spectral-band image analysis method based on robust auxiliary information reconstruction
CN110781926B (en) * 2019-09-29 2023-09-19 武汉大学 Multi-spectral band image analysis method of support vector machine based on robust auxiliary information reconstruction
CN112598043A (en) * 2020-12-17 2021-04-02 杭州电子科技大学 Cooperative significance detection method based on weak supervised learning
CN112598043B (en) * 2020-12-17 2023-08-18 杭州电子科技大学 Collaborative saliency detection method based on weak supervised learning

Similar Documents

Publication Publication Date Title
CN106250856A (en) A kind of high-definition picture scene classification method based on non-supervisory feature learning
CN106250931A (en) A kind of high-definition picture scene classification method based on random convolutional neural networks
CN110458844B (en) Semantic segmentation method for low-illumination scene
CN105184312B (en) A kind of character detecting method and device based on deep learning
CN105095862B (en) A kind of human motion recognition method based on depth convolution condition random field
CN103871029B (en) A kind of image enhaucament and dividing method
CN105069825B (en) Image super-resolution rebuilding method based on depth confidence network
CN107766894A (en) Remote sensing images spatial term method based on notice mechanism and deep learning
CN107229904A (en) A kind of object detection and recognition method based on deep learning
CN108388896A (en) A kind of licence plate recognition method based on dynamic time sequence convolutional neural networks
CN107563433B (en) Infrared small target detection method based on convolutional neural network
CN104182772A (en) Gesture recognition method based on deep learning
CN107909015A (en) Hyperspectral image classification method based on convolutional neural networks and empty spectrum information fusion
CN108090447A (en) Hyperspectral image classification method and device under double branch's deep structures
CN107180248A (en) Strengthen the hyperspectral image classification method of network based on associated losses
CN103942557B (en) A kind of underground coal mine image pre-processing method
CN111476249B (en) Construction method of multi-scale large-receptive-field convolutional neural network
CN112597815A (en) Synthetic aperture radar image ship detection method based on Group-G0 model
CN107239759A (en) A kind of Hi-spatial resolution remote sensing image transfer learning method based on depth characteristic
CN109087375A (en) Image cavity fill method based on deep learning
CN109753996B (en) Hyperspectral image classification method based on three-dimensional lightweight depth network
CN112884033B (en) Household garbage classification detection method based on convolutional neural network
CN109300128A (en) The transfer learning image processing method of structure is implied based on convolutional Neural net
CN114677673A (en) Potato disease identification method based on improved YOLO V5 network model
Salem et al. Semantic image inpainting using self-learning encoder-decoder and adversarial loss

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161221

RJ01 Rejection of invention patent application after publication