CN106709515A - Downward-looking scene matching area selection criteria intervention method - Google Patents

Downward-looking scene matching area selection criteria intervention method

Info

Publication number
CN106709515A
CN106709515A (application CN201611167481.0A)
Authority
CN
China
Prior art keywords
image
cloud
gray
classification
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611167481.0A
Other languages
Chinese (zh)
Inventor
李娜
张立平
张令川
魏宁
万增录
赵晓鹰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huahang Radio Measurement Research Institute
Original Assignee
Beijing Huahang Radio Measurement Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huahang Radio Measurement Research Institute filed Critical Beijing Huahang Radio Measurement Research Institute
Priority to CN201611167481.0A priority Critical patent/CN106709515A/en
Publication of CN106709515A publication Critical patent/CN106709515A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition
    • G06V30/19: Recognition using electronic means
    • G06V30/192: Recognition using electronic means, using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194: References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a downward-looking scene matching area selection criteria intervention method. The method comprises the following steps: 1) selecting a training set; 2) extracting features from the selected training set to obtain feature quantities; 3) inputting the feature quantities, together with their corresponding classification labels, into an SVM classifier for training, thereby establishing a classification decision model; 4) acquiring an image to be tested and partitioning it into sub-block images; 5) extracting features from each sub-block image; 6) inputting the features extracted from each sub-block into the classification decision model for classification: if a sub-block is judged to be covered by cloud, it is labeled 1 and the corresponding sub-block region of the original image is blacked out; otherwise the sub-block image is retained; and 7) outputting the classified image. The method is easy to operate, practical, and efficient.

Description

A criteria intervention method for downward-looking scene matching area selection
Technical field
The invention belongs to the field of downward-looking scene matching methods, and relates to a criteria intervention method for downward-looking scene matching area selection.
Background technology
Downward-looking visible-image matching localization works by matching images captured in real time by an aircraft's downward-looking imager against a satellite visible-light matching-area image pre-stored on the aircraft, determining the position of the real-time image within the matching-area image and thereby enabling all-weather localization and navigation. Satellite visible imagery is widely used for its high imaging clarity, objective and rich information content, timeliness, and practicality. However, if the pre-stored satellite matching-area image contains factors unfavorable to matching, such as cloud cover, the availability of ground-object information in the image is reduced; some images with heavy cloud cover cannot be used at all. Matching a real-time image against such a matching-area image degrades downward-looking scene matching performance and may even cause mismatches.
Early cloud detection and identification relied mainly on manual judgment, which is time-consuming, laborious, and highly subjective. With the development of satellite remote sensing and digital image processing methods, many new and better methods for detecting and classifying cloud regions in satellite remote sensing data have been proposed. Existing methods fall broadly into two classes. The first starts from spectral characteristics. Spectral-threshold detection is simple and easy to apply, but its thresholds are somewhat subjective and demand substantial prior knowledge; it is also limited in time and space, because differences in spectral properties across regions and acquisition times mean that no single set of cloud-detection thresholds suits all cases, and setting multiple groups of thresholds for discrimination slows detection. The second class builds on features such as cloud texture: mathematical methods including pattern recognition, clustering, maximum-likelihood estimation, and neural networks have been applied to cloud detection on the basis of such feature analysis. The support vector machine (SVM) method, developed in the 1990s, can handle nonlinear data while effectively limiting overfitting, and is currently among the best theories for small-sample classification and regression problems.
Existing scene matching area selection methods cannot effectively reject matching-area images containing cloud; cloud-contaminated images must instead be rejected by manual interpretation. The invention provides an automatic method that replaces this manual intervention for missile-borne downward-looking scene matching orthophoto maps: the selected downward-looking scene matching areas no longer require manual interpretation, and matching areas containing cloud are rejected automatically.
The content of the invention
The problem solved by the invention is how to use an SVM-based classification technique to separate cloud regions from non-cloud regions according to their differences in grayscale, texture, edges, and other features, thereby achieving cloud detection in satellite visible imagery. A complete and concrete implementation method is provided here, automating the manual-intervention step of downward-looking scene matching: satellite visible orthophoto matching-area images containing cloud are rejected automatically, without manual interpretation.
The invention proposes an SVM-based cloud detection method that performs machine detection on orthophoto matching-area images containing cloud, thereby automating the manual intervention in downward-looking scene matching orthophoto map selection.
Technical scheme is as follows:
A criteria intervention method for downward-looking scene matching area selection comprises the following steps:
1) select a training set;
2) extract features from the selected training set to obtain feature quantities;
3) input the feature quantities, together with their corresponding classification labels, into an SVM classifier for training, thereby establishing a classification decision model;
4) acquire an image to be tested and partition it into sub-block images;
5) extract features from each sub-block image;
6) input the features extracted from each sub-block into the classification decision model for classification;
7) output the classified image.
Further, the selected training set includes two classes: a cloud-free set and a cloud set.
Further, the ratio of the number of elements in the cloud-free set to that in the cloud set is 2:1.
Further, step 2) specifically includes: labeling the cloud-free and cloud images in the training set, with cloud-free images labeled 0 and cloud images labeled 1; then computing the corresponding features for each image in the training set and storing them in a database.
Further, the feature quantities in step 2) include:
A. Gray-level histogram features: the mean and variance of the image's gray-level histogram characterize the grayscale distribution, and the cloud cover rate describes the brightness of the cloud layer;
B. Gray-level co-occurrence matrix features: (1) angular second moment (energy), (2) moment of inertia, (3) entropy, and (4) gray-level correlation.
Further, the mean and variance of the gray-level histogram and the cloud cover rate are given by
m = Σ_i x_i f(x_i)
σ² = Σ_i (x_i − m)² f(x_i)
λ = #{u(x, y) > ω} / #{u(x, y)}
where m is the histogram mean, σ² is the histogram variance, λ is the cloud cover rate, x_i is the gray value of level i, i indexes the image's gray levels, f(x_i) is the image's histogram function, ω is a given threshold, and #{R} denotes the number of elements satisfying R.
Further, the gray-level co-occurrence matrix is defined as follows:
Take any point (x, y) in the image and another point offset from it, (x + Δx, y + Δy), forming a point pair. Let the pair's gray values be (i, j): the gray level of point (x, y) is i and that of point (x + Δx, y + Δy) is j. Fixing Δx and Δy and moving the point (x, y) over the entire image yields the corresponding (i, j) values.
For a two-dimensional digital image u(x, y) of size M × N with L gray levels, the gray-level co-occurrence matrix can be written as
P(i, j, d, θ) = #{(x, y), (x + Δx, y + Δy) ∈ M × N | u(x, y) = i, u(x + Δx, y + Δy) = j}
where d is the distance between the two pixels and θ is the angle between their connecting line and the horizontal.
Further, step 6) specifically includes: judging whether each sub-block is covered by cloud; if so, labeling it 1 and blacking out the corresponding sub-block region in the original image; otherwise retaining the sub-block image.
The beneficial effects of the above technical scheme are as follows:
(1) The invention proposes an SVM-based classification method that separates cloud regions from non-cloud regions according to their differences in grayscale, texture, edges, and other features, achieving cloud detection in satellite visible imagery and thereby automating the manual intervention in downward-looking scene matching orthophoto matching-area selection. Engineering practice shows that SVM-based classification has clear advantages over neural-network-based classification and detection, and matching areas selected with this method are more likely to be cloud-free.
(2) The invention proposes a block-based detection method in which the image under test is partitioned with overlapping blocks, i.e., adjacent blocks share an overlapping region. A sub-block missed in one classification decision may then be classified correctly when an adjacent, overlapping sub-block is examined, improving the final detection accuracy.
Engineering practice shows that the proposed SVM-based cloud detection method applies to matching-area cloud detection in optical, radar, and other imagery. In addition, the cloud detection method provided by the invention is easy to operate, practical, and efficient, and is an important means of automating the manual intervention in downward-looking scene matching orthophoto map selection.
Description of the drawings
Fig. 1 is a flow chart of model construction for the SVM-classifier-based cloud detection method;
Fig. 2 is a flow chart of the automation framework for manual intervention on orthophoto matching-area images.
Specific embodiments
The technical scheme is further explained and illustrated below with reference to the drawings and specific embodiments.
In a specific embodiment, cloud detection is performed on visible-light orthophoto matching-area images with a planned resolution of 100 meters and a height of 300 meters.
A criteria intervention method for downward-looking scene matching area selection proceeds as follows:
1) Select a training set.
The selected training set includes two classes, a cloud-free set and a cloud set, used to train the SVM classifier model. The cloud-free set is chosen to include information from various cloud-free scenes such as sea areas, cities, and mountains, while the cloud set is chosen to cover, as far as possible, the structures of the various cloud layers likely to appear in the visible-light satellite images to be processed. As for the sizes of the two sets: on one hand the cloud-free set is itself more varied than the cloud set, and on the other hand we want the classifier to recognize the cloud-free class more strongly during training so as to reduce misjudgments; the elements of the two sets are therefore chosen in a 2:1 ratio.
Specifically, 100 image sub-blocks containing various cloud structures and 200 image sub-blocks of cloud-free content such as Gobi desert, land, and cities are prepared; each sub-block is 45×45 pixels.
2) Extract features from the selected training set to obtain feature quantities.
To facilitate classification training, the cloud-free and cloud images in the training set are labeled: cloud-free images are labeled 0 and cloud images are labeled 1. The corresponding features are then computed for each image in the training set and stored in a database.
Let the image be u(x, y), where (x, y) is a point's coordinates, M is the image width, and N is the image height. The feature quantities used in the invention are:
A. Gray-level histogram
The mean and variance of the image histogram characterize the grayscale distribution; in addition, the cloud cover rate describes the brightness of the cloud layer:
m = Σ_i x_i f(x_i)
σ² = Σ_i (x_i − m)² f(x_i)
λ = #{u(x, y) > ω} / #{u(x, y)}
Here m is the histogram mean, σ² is the histogram variance, λ is the cloud cover rate, x_i is the gray value of level i, i indexes the image's gray levels, f is the image's histogram function, ω is a given threshold, and #{R} denotes the number of elements satisfying R.
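As a minimal illustration of these three histogram features, the following NumPy sketch computes m, σ², and λ for one 45×45 block; the 8-bit gray-level range and the threshold ω = 200 are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def histogram_features(u, omega=200, levels=256):
    """Histogram mean m, variance sigma^2, and cloud cover rate lambda
    for one gray image block u (2-D uint8 array). omega is an assumed threshold."""
    hist = np.bincount(u.ravel(), minlength=levels).astype(float)
    f = hist / hist.sum()                 # normalised histogram f(x_i)
    x = np.arange(levels)                 # gray values x_i
    m = (x * f).sum()                     # m = sum_i x_i f(x_i)
    var = (((x - m) ** 2) * f).sum()      # sigma^2 = sum_i (x_i - m)^2 f(x_i)
    lam = (u > omega).mean()              # lambda = #{u > omega} / #{u}
    return m, var, lam

block = np.full((45, 45), 210, dtype=np.uint8)   # uniformly bright "cloud" block
m, var, lam = histogram_features(block)
```

On this uniform block the mean equals the constant gray value, the variance is zero, and every pixel exceeds the threshold, so the cloud cover rate is 1.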
B. Gray-level co-occurrence matrix
Take any point (x, y) in the image and another point offset from it, (x + Δx, y + Δy), forming a point pair. Let the pair's gray values be (i, j): the gray level of point (x, y) is i and that of point (x + Δx, y + Δy) is j. Fixing Δx and Δy and moving the point (x, y) over the entire image yields the corresponding (i, j) values.
For a two-dimensional digital image u(x, y) of size M × N with L gray levels, the gray-level co-occurrence matrix can be written as
P(i, j, d, θ) = #{(x, y), (x + Δx, y + Δy) ∈ M × N | u(x, y) = i, u(x + Δx, y + Δy) = j}
where d is the distance between the two pixels, generally not taken too large, and θ is the angle between their connecting line and the horizontal, typically taken as 0, π/4, π/2, or 3π/4.
The co-occurrence matrix reflects the overall gray-level distribution of the image. For given d and θ, normalizing the co-occurrence matrix yields the normalized matrix P(i, j), from which several feature quantities describing the texture can be extracted:
(1) Angular second moment (energy)
N1 = Σ_i Σ_j P(i, j)²
Energy reflects how evenly the gray levels are distributed and how fine the texture is: coarse textures give larger values, fine textures smaller ones. Cloud layers usually have a relatively uniform gray-level distribution and fine texture, so their energy is smaller. Here i and j range over the gray levels 0 to L, with L generally 255.
(2) Moment of inertia
N2 = Σ_i Σ_j (i − j)² P(i, j)
The moment of inertia can be regarded as a measure of image sharpness. For land, the varied scenery makes the texture deeply grooved and N2 large; for cloud, the texture is thin and uniform and N2 is relatively small. Here i and j range over the gray levels 0 to L, with L generally 255.
(3) Entropy
N3 = −Σ_i Σ_j P(i, j) log P(i, j)
Entropy reflects the complexity or non-uniformity of the texture in the image and measures the information content the image carries. Cloud texture is relatively simple and evenly distributed, so its entropy is small; land texture is complex, so its entropy is large. Here i and j range over the gray levels 0 to L, with L generally 255.
(4) Gray-level correlation
N4 = [Σ_i Σ_j i·j·P(i, j) − μ_x μ_y] / (σ_x σ_y)
The gray-level correlation describes the similarity between row or column elements of the matrix, i.e., it measures the similarity of the co-occurrence matrix elements along the row or column direction: for a horizontally oriented texture, for example, N4 in the direction θ = 0 exceeds N4 in the other directions. Here μ_x and μ_y are the means, and σ_x and σ_y the standard deviations, of the row and column marginal distributions of P; i and j range over the gray levels 0 to L, with L generally 255.
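The co-occurrence matrix and the four texture features above can be sketched as follows. This is a plain NumPy illustration for one non-negative offset (Δx, Δy); the tiny two-level 2×2 test image at the bottom is illustrative only, chosen so each feature has an easily checked value.

```python
import numpy as np

def glcm(u, dx=1, dy=0, levels=256):
    """Normalised gray-level co-occurrence matrix P(i, j) for offset (dx, dy)."""
    P = np.zeros((levels, levels))
    h, w = u.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[u[y, x], u[y + dy, x + dx]] += 1   # count the pair (i, j)
    return P / P.sum()

def texture_features(P):
    """N1 energy, N2 moment of inertia, N3 entropy, N4 correlation of P."""
    i, j = np.indices(P.shape)
    asm = (P ** 2).sum()                          # N1: angular second moment
    inertia = ((i - j) ** 2 * P).sum()            # N2: moment of inertia
    nz = P[P > 0]
    entropy = -(nz * np.log(nz)).sum()            # N3: entropy
    mu_i, mu_j = (i * P).sum(), (j * P).sum()     # marginal means
    s_i = np.sqrt((((i - mu_i) ** 2) * P).sum())  # marginal standard deviations
    s_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    corr = ((i * j * P).sum() - mu_i * mu_j) / (s_i * s_j)  # N4: correlation
    return asm, inertia, entropy, corr

# Two-level checker pattern: the only pairs at offset (1, 0) are (0,1) and (1,0).
P = glcm(np.array([[0, 1], [1, 0]]), dx=1, dy=0, levels=2)
asm, inertia, entropy, corr = texture_features(P)
```

For this pattern P has 0.5 at (0,1) and (1,0), giving energy 0.5, inertia 1, entropy ln 2, and correlation −1, matching the intuition that a strictly alternating texture is maximally anti-correlated.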
3) Input the feature quantities, together with their corresponding classification labels, into the SVM classifier for training, thereby establishing the classification decision model.
From the training set obtained in step 2), the 100 image sub-blocks containing various cloud structures and the 200 sub-blocks of cloud-free content such as Gobi desert, land, and cities, the five feature values and the image labels are used to build the support vector machine classification model. The feature parameter vector y1 (a new 5-dimensional vector) and the image label y_i (0 for cloud-free images, 1 for cloud images) serve as input.
The main steps for building the classification decision model are:
A. Construct the Gaussian radial basis function (RBF) kernel
K(X, X_i) = exp(−‖X − X_i‖² / (2σ²))
where X_i is a support vector, X is the input vector, X = y1, and σ is the width parameter of the function.
B. Solve the optimization problem to obtain the support vectors and the corresponding Lagrange multipliers.
C. Using the method of Lagrange multipliers and the kernel expansion, input the training feature vectors and obtain the optimal separating-surface decision function for the 5-dimensional feature parameters: g(x) = 1 if Σ_i y_i α_i K(X_i, x) + b > 0, and g(x) = 0 otherwise.
Here n = 5 is the feature dimension; the nonzero α_i determine the classification structure; each coefficient α_i is a solution of the quadratic programming problem, and each training sample contributes a weight y_i α_i. Choosing any support vector X_j and setting X = X_j, the bias b of the optimal separating surface is obtained from the same formula. The value of g(x), 1 or 0, is the logical decision value: when g(x) = 1, the image is considered to contain cloud and be unusable; otherwise, the image is considered cloud-free and usable.
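The decision rule of step C can be sketched numerically as follows. The support vectors, labels y_i, multipliers α_i, and bias b below are illustrative placeholders standing in for trained values; a real model would obtain them from the quadratic programming step, and the 5-dimensional vectors stand in for (m, σ², λ) plus two texture features.

```python
import numpy as np

def rbf_kernel(Xi, x, sigma=1.0):
    """Gaussian RBF kernel K(Xi, x) = exp(-||Xi - x||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((Xi - x) ** 2, axis=-1) / (2 * sigma ** 2))

def g(x, support, y, alpha, b, sigma=1.0):
    """SVM decision: 1 (cloud, block unusable) if the margin is positive, else 0."""
    margin = np.sum(y * alpha * rbf_kernel(support, x, sigma)) + b
    return 1 if margin > 0 else 0

# Illustrative (untrained) model: one support vector per class.
support = np.array([[1.0, 0, 1, 0, 0],     # prototype of the cloud class
                    [-1.0, 0, -1, 0, 0]])  # prototype of the cloud-free class
y = np.array([1.0, -1.0])                  # +1 = cloud, -1 = cloud-free
alpha = np.array([1.0, 1.0])
b = 0.0
label_cloud = g(np.array([0.9, 0, 0.9, 0, 0]), support, y, alpha, b)
label_clear = g(np.array([-0.9, 0, -0.9, 0, 0]), support, y, alpha, b)
```

A feature vector near the cloud prototype yields g(x) = 1 (rejected), one near the cloud-free prototype yields g(x) = 0 (retained).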
4) Acquire the image to be tested and partition it into sub-block images.
Partitioning increases the method's parallelism and improves detection speed, but the block size directly affects the computational load and the detection accuracy: numerical experiments show that smaller blocks give better detection results but also take longer. The matching-area image under test is partitioned into 45×45 blocks with an interval step of 15.
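The overlapping partition described above (45×45 blocks at a stride of 15, so adjacent blocks share a 30-pixel overlap) can be sketched as follows; the 90×90 sample image is illustrative only.

```python
import numpy as np

def overlapping_blocks(img, size=45, step=15):
    """Return (row, col, block) tuples for size x size blocks taken every
    `step` pixels, so adjacent blocks overlap by size - step pixels."""
    h, w = img.shape
    out = []
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            out.append((r, c, img[r:r + size, c:c + size]))
    return out

blocks = overlapping_blocks(np.zeros((90, 90), dtype=np.uint8))
```

A 90×90 image yields block origins at rows and columns 0, 15, 30, 45, i.e., 16 overlapping blocks; a non-overlapping partition would give only four.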
5) Extract features from each sub-block of the image under test.
6) Input the features extracted from each sub-block into the classification decision model for classification.
Judge whether each sub-block is covered by cloud; if so, label it 1 and black out the corresponding sub-block region in the original image; otherwise retain the sub-block image.
7) Output the classified image.
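Steps 6) and 7) amount to the following masking pass over the classifier's per-block output; the (row, col, label) list format and the uniform 128-gray sample image are illustrative assumptions, while the 45-pixel block size follows the embodiment.

```python
import numpy as np

def mask_cloud_blocks(img, labels, size=45):
    """Blacken every sub-block labeled 1 (cloud); retain the rest of the image.
    `labels` holds (row, col, label) entries from the per-block classifier."""
    out = img.copy()
    for r, c, lab in labels:
        if lab == 1:
            out[r:r + size, c:c + size] = 0   # cloud block -> black in the output
    return out

img = np.full((90, 90), 128, dtype=np.uint8)
result = mask_cloud_blocks(img, [(0, 0, 1), (45, 45, 0)])
```

Here the top-left block is classified as cloud and blacked out, while the rest of the image, including the block labeled 0, is retained unchanged.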
The above specific embodiments serve only to explain and illustrate the technical scheme and do not limit the scope of protection of the claims. Any new technical scheme obtained by simple variation or substitution on the basis of this technical scheme by a person skilled in the art falls within the protection scope of the invention.

Claims (8)

1. A criteria intervention method for downward-looking scene matching area selection, characterized by comprising the following steps:
1) selecting a training set;
2) extracting features from the selected training set to obtain feature quantities;
3) inputting the feature quantities, together with their corresponding classification labels, into an SVM classifier for training, thereby establishing a classification decision model;
4) acquiring an image to be tested and partitioning it into sub-block images;
5) extracting features from each sub-block image;
6) inputting the features extracted from each sub-block into the classification decision model for classification;
7) outputting the classified image.
2. the method for claim 1, it is characterised in that the training set of selection includes two classes:Cloudless set and have and converge Close.
3. method as claimed in claim 2, it is characterised in that the quantity ratio that the element in closing is converged in cloudless set and having is 2: 1。
4. the method for claim 1, it is characterised in that the step 2) specifically include:In to training set without cloud atlas With there is cloud atlas to be marked accordingly respectively, be designated as 0 by cloudless image is corresponding, there is cloud atlas picture to be designated as 1;Then in training set Each image calculate corresponding feature respectively, and be stored in database.
5. the method for claim 1, it is characterised in that the step 2) in characteristic quantity include:
A. grey level histogram:
Have chosen the average and variance of the grey level histogram of image to portray gray feature, cloud layer is described using cloud cover rate Brightness;
B. gray level co-occurrence matrixes:
(1) angular second moment (energy), (2) the moment of inertia, (3) entropy, (4) gray scale correlated characteristic amount.
6. The method of claim 5, characterized in that the mean and variance of the gray-level histogram and the cloud cover rate are respectively
m = Σ_i x_i f(x_i)
σ² = Σ_i (x_i − m)² f(x_i)
λ = #{u(x, y) > ω} / #{u(x, y)}
where m is the histogram mean, σ² is the histogram variance, λ is the cloud cover rate, x_i is the gray value of level i, i indexes the image's gray levels, f(x_i) is the image's histogram function, ω is a given threshold, and #{R} denotes the number of elements satisfying R.
7. The method of any one of claims 1-6, characterized in that the gray-level co-occurrence matrix is specifically computed as follows:
take any point (x, y) in the image and another point offset from it, (x + Δx, y + Δy), forming a point pair; let the pair's gray values be (i, j), i.e., the gray level of point (x, y) is i and that of point (x + Δx, y + Δy) is j; fixing Δx and Δy and moving the point (x, y) over the entire image yields the corresponding (i, j) values;
for a two-dimensional digital image u(x, y) of size M × N with L gray levels, the gray-level co-occurrence matrix can be written as
P(i, j, d, θ) = #{(x, y), (x + Δx, y + Δy) ∈ M × N | u(x, y) = i, u(x + Δx, y + Δy) = j}
where d is the distance between the two pixels and θ is the angle between their connecting line and the horizontal.
8. The method of any one of claims 1-7, characterized in that step 6) specifically includes: judging whether each sub-block is covered by cloud; if so, labeling it 1 and blacking out the corresponding sub-block region in the original image; otherwise retaining the sub-block image.
CN201611167481.0A 2016-12-16 2016-12-16 Downward-looking scene matching area selection criteria intervention method Pending CN106709515A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611167481.0A CN106709515A (en) 2016-12-16 2016-12-16 Downward-looking scene matching area selection criteria intervention method


Publications (1)

Publication Number Publication Date
CN106709515A true CN106709515A (en) 2017-05-24

Family

ID=58937952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611167481.0A Pending CN106709515A (en) 2016-12-16 2016-12-16 Downward-looking scene matching area selection criteria intervention method

Country Status (1)

Country Link
CN (1) CN106709515A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463199A (en) * 2014-11-28 2015-03-25 福州大学 Rock fragment size classification method based on multiple features and segmentation recorrection
WO2016116724A1 (en) * 2015-01-20 2016-07-28 Bae Systems Plc Detecting and ranging cloud features
CN105608473A (en) * 2015-12-31 2016-05-25 中国资源卫星应用中心 High-precision land cover classification method based on high-resolution satellite image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周丽娟 (Zhou Lijuan): "Research on Cloud Detection Algorithms for Visible-Light Satellite Images", China Master's Theses Full-text Database, Information Science and Technology *
许国根 (Xu Guogen): "MATLAB Implementation of Pattern Recognition and Intelligent Computing", 31 July 2012 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392237A (en) * 2017-07-10 2017-11-24 天津师范大学 A kind of cross-domain ground cloud atlas sorting technique based on migration visual information
CN107392237B (en) * 2017-07-10 2020-07-17 天津师范大学 Cross-domain foundation cloud picture classification method based on migration visual information
WO2020015086A1 (en) * 2018-07-18 2020-01-23 中国矿业大学 Porous medium permeability prediction method based on intelligent machine image learning
AU2018424207B2 (en) * 2018-07-18 2020-10-29 China University Of Mining And Technology Porous medium permeability prediction method based on machine image intelligent learning
CN109583484A (en) * 2018-11-14 2019-04-05 西北工业大学 A kind of three classes sea area landmark point automatically selecting method
CN109583484B (en) * 2018-11-14 2022-04-05 西北工业大学 Automatic selection method for three-type sea area landmark points
CN111325075A (en) * 2018-12-17 2020-06-23 北京华航无线电测量研究所 Video sequence target detection method
CN111325075B (en) * 2018-12-17 2023-11-07 北京华航无线电测量研究所 Video sequence target detection method
CN109784189A (en) * 2018-12-19 2019-05-21 中国人民解放军战略支援部队航天工程大学 Video satellite remote sensing images scape based on deep learning matches method and device thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170524