CN106651884A - Sketch structure-based mean field variational Bayes synthetic aperture radar (SAR) image segmentation method - Google Patents

Sketch structure-based mean field variational Bayes synthetic aperture radar (SAR) image segmentation method

Info

Publication number
CN106651884A
CN106651884A (application CN201611262018.4A)
Authority
CN
China
Prior art keywords
sketch
line
pixel
sar image
mean field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611262018.4A
Other languages
Chinese (zh)
Other versions
CN106651884B (en)
Inventor
刘芳
李婷婷
崔妲珅
焦李成
郝红侠
尚荣华
马文萍
马晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201611262018.4A
Publication of CN106651884A
Application granted
Publication of CN106651884B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The present invention discloses a sketch structure-based mean field variational Bayes synthetic aperture radar (SAR) image segmentation method, which mainly addresses the inaccurate SAR image segmentation of the prior art. The method comprises a first step of sketching the SAR image to obtain its sketch map; a second step of dividing the SAR image into pixel subspaces according to its region map; a third step of segmenting the mixed aggregated-structural land-cover pixel subspace with a mean field variational Bayes inference network model; a fourth step of segmenting isolated targets based on the gathering features of sketch lines; a fifth step of segmenting line targets based on visual semantic rules; a sixth step of segmenting the homogeneous-region pixel subspace with a multinomial logistic regression prior model; and a seventh step of combining the segmentation results to obtain the segmentation result of the whole SAR image. The method achieves good SAR image segmentation and can be used for semantic segmentation of SAR images.

Description

Sketch structure-based mean field variational Bayes SAR image segmentation method
Technical field
The invention belongs to the technical field of image processing, and more particularly to a sketch structure-based mean field variational Bayes synthetic aperture radar (SAR) image segmentation method in the technical field of image segmentation. The invention can be applied to SAR image segmentation and can accurately segment the different regions of a SAR image.
Background technology
Synthetic aperture radar (SAR) is a major advance in remote sensing for obtaining high-resolution images of the earth's surface. Compared with other imaging technologies, SAR has the important advantage that it is unaffected by atmospheric conditions such as cloud, rainfall or fog, and by illumination, so it can acquire high-resolution remote sensing data at any time of day and in any weather. SAR is therefore of great importance in military, agricultural, geographic and many other fields. Image segmentation refers to partitioning an image into several mutually disjoint regions according to color, gray level and texture features. Interpreting SAR images by computer remains a great challenge, and SAR image segmentation is a necessary step whose quality strongly affects subsequent detection and recognition.
Because of the unique imaging mechanism of SAR, SAR images contain strong speckle noise, so many traditional segmentation methods designed for optical images cannot be applied to SAR images directly. Conventional SAR segmentation methods usually rely on hand-crafted features, yet the quality of the extracted features is decisive for the segmentation result. Bayesian machine learning is a key technique for unsupervised feature learning and can be used for the SAR image segmentation task. However, general Bayesian machine learning methods usually only iterate a generic network inference process for a fixed number of iterations and lack inference tailored to SAR images, so they cannot segment SAR images effectively.
The paper "An effective MSTAR SAR image segmentation method" published by Wuhan University (Geomatics and Information Science of Wuhan University, October 2015, pp. 1377-1380) proposes an MSTAR SAR image segmentation method. The method first over-segments the input image to obtain over-segmented image regions. It then extracts region-level and pixel-level features from the over-segmented image to obtain feature vectors representing the image, builds a model for MSTAR SAR images with spatial latent Dirichlet allocation (sLDA) and a Markov random field (MRF), and obtains an energy functional. Finally, the energy functional is optimized with the Graph-Cut and Branch-and-Bound algorithms to obtain the final segmentation result. The shortcoming of this method is that the feature vectors of the SAR image are built from pixel-level features, without learning the structural features that arise from the dependency between pixels in SAR images, so the segmentation result is not accurate enough.
The patent application "SAR image segmentation method based on deep auto-encoding and region map" filed by Xidian University (application number 201410751944.2, publication number CN104392456A) discloses a SAR image segmentation method based on deep auto-encoders and a region map. The method obtains a region map from the sketch map of the SAR image and maps the region map onto the original image to obtain aggregated, homogeneous and structural regions; it trains separate deep auto-encoders for the aggregated and homogeneous regions to obtain the features of each sub-region; it builds dictionaries for the aggregated and homogeneous regions, projects the features of every sub-region onto the corresponding dictionary, and clusters the sub-region features of the two region classes; structural regions are segmented by merging super-pixels under the guidance of sketch line segments; and the region segmentation results are combined to complete the SAR image segmentation. The shortcoming of this method is that the weights of the deep auto-encoder network used to extract image features are initialized randomly, without exploiting the characteristic distribution of SAR images, and no sketch structure constraint of the SAR image is imposed during training, so the essential features of the image cannot be extracted effectively and the accuracy of SAR image segmentation is reduced.
The patent application "SAR image segmentation method based on deconvolution network and mapping inference network" filed by Xidian University (application number CN201510679181.X, publication number CN105389798A) discloses a SAR image segmentation method based on a deconvolution network and a mapping inference network. The method obtains a region map from the sketch map of the SAR image and maps it onto the original image to obtain aggregated, homogeneous and structural regions. Each mutually disconnected region within the aggregated and homogeneous regions is trained without supervision to obtain a filter set characterizing the structural features of that region. The structural features of the disconnected regions within the two region classes are then compared by inference to obtain the segmentation results of the aggregated and homogeneous regions. Structural regions are segmented by merging super-pixels under the guidance of sketch line segments, and the region segmentation results are combined to complete the SAR image segmentation. The shortcoming of this method is that, when comparing the structural features of the disconnected regions of the aggregated area by inference, it uses a self-organizing feature map (SOM) network, which requires the number of clusters to be set manually and clusters slowly, lowering the clustering accuracy and hence the accuracy of SAR image segmentation.
The paper "SAR image segmentation based on hierarchical visual semantics and adaptive neighborhood multinomial latent model" by Liu Fang, Duan Yiping, Li Lingling, Jiao Licheng et al. (IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(7): 4287-4301) proposes a SAR image segmentation method based on hierarchical visual semantics and an adaptive neighborhood multinomial latent model. The method extracts the sketch map of the SAR image with the sketch model, obtains the region map with the sketch line regionalization method, maps the region map onto the SAR image, and finally divides the SAR image into aggregated, homogeneous and structural regions. Based on this division, different segmentation methods are applied to regions of different characteristics. For aggregated regions, gray-level co-occurrence matrix features are extracted, the representation of each aggregated region is obtained with locality-constrained linear coding, and segmentation is performed by hierarchical clustering. For structural regions, visual semantic rules for locating boundaries and line targets are designed by analyzing edge and line models; since boundaries and line targets carry strong directional information, a multinomial latent model based on geometric windows is designed for their segmentation. For homogeneous regions, in order to find an appropriate neighborhood to represent the center pixel, a multinomial latent model based on an adaptive window is designed for segmentation. The segmentation results of the three regions are integrated to obtain the final result. The shortcomings of this method are that the boundary localization of aggregated regions is not accurate enough, the number of classes of homogeneous regions is not determined reasonably so that the region consistency of the segmentation result is poor, and isolated point targets are not handled in the segmentation of structural regions.
Content of the invention
The purpose of the present invention is to overcome the above shortcomings of the prior art by proposing a sketch structure-based mean field variational Bayes SAR image segmentation method, so as to improve the accuracy of SAR image segmentation.
To achieve the above purpose, the method comprises the following steps:
(1) Sketching the SAR image:
(1a) inputting a synthetic aperture radar SAR image;
(1b) establishing the sketch model of the SAR image;
(1c) extracting the sketch map of the SAR image from the sketch model;
(2) Dividing pixel subspaces:
(2a) regionalizing the sketch map of the SAR image with the sketch line regionalization method to obtain a region map of the SAR image containing aggregated regions, regions without sketch lines and structural regions;
(2b) mapping the region map containing aggregated regions, regions without sketch lines and structural regions onto the input SAR image to obtain the mixed aggregated-structural land-cover pixel subspace, the homogeneous-region pixel subspace and the structural pixel subspace of the SAR image;
(3) Building the mean field variational Bayes inference network model:
(3a) setting the input layer, the hidden layer and the reconstruction layer of the mean field variational Bayes inference network model to 441 neurons each, and setting the connections between the input layer and the hidden layer and between the hidden layer and the reconstruction layer to full connections;
(3b) calculating the variational lower bound of the mean field variational Bayes inference network model according to the following formula:
L(Q) = Σ_Z log P(V|W,H,c) + Σ_Z log P(W) + Σ_Z log P(H|b) − Σ_Z Q(W) − Σ_Z Q(H)
where L(Q) denotes the variational lower bound of the mean field variational Bayes inference network model, log denotes the base-10 logarithm, P(V|W,H,c) denotes the conditional probability of V given W, H and c, V denotes the input layer of the model, W denotes the connection weights of the model, H denotes the hidden layer of the model, c denotes the bias of the hidden layer, b denotes the bias of the input layer, P(W) denotes the prior probability of W, P(H|b) denotes the conditional probability of H given b, Q(W) denotes the variational distribution of W, and Q(H) denotes the variational distribution of H;
(3c) calculating the structural reconstruction error according to the following formula:
where G denotes the structural reconstruction error, M denotes the total number of input image blocks, v̂_i denotes the reconstructed image block of the i-th input image block, s_i denotes the i-th sketch block, SM(·) denotes the sketch-block extraction operation, and C(·) denotes the sketch-line length operation;
(4) Performing feature learning on the mixed aggregated-structural land-cover pixel subspace:
(4a) dividing the mixed aggregated-structural land-cover pixel subspace of the SAR image into regions according to spatial connectivity, and performing (4b) if several mutually disconnected regions are obtained;
(4b) for each mutually disconnected region, sampling with a 21 × 21 window at every other pixel to obtain multiple image block samples;
(4c) for each image block sample, taking the sketch block sample in the sketch map that corresponds to it one-to-one;
(4d) for each mutually disconnected region, generating a corresponding group of random numbers obeying the G0 distribution of inhomogeneous land cover;
(4e) for each mutually disconnected region, initializing the weights and biases of the mean field variational Bayes inference network with the corresponding group of random numbers to obtain the initialized mean field variational Bayes inference network;
(4f) for the initialized mean field variational Bayes inference network of each mutually disconnected region, taking the image block samples as the input layer and training the network with the sketch structure-constrained mean field variational Bayes inference method to obtain the trained mean field variational Bayes inference network;
(4g) for each mutually disconnected region, taking the weights of its trained mean field variational Bayes inference network as the feature set of that region;
(5) Segmenting the mixed aggregated-structural land-cover pixel subspace of the SAR image:
(5a) concatenating the feature sets of all mutually disconnected regions and taking the concatenated feature set as a codebook;
(5b) for all features of each mutually disconnected region, computing the inner product with every feature in the codebook to obtain the projection vectors of all features of each region on the codebook;
(5c) performing max pooling on the projection vectors of each mutually disconnected region to obtain the structural feature vector of each region;
(5d) clustering the structural feature vectors of all mutually disconnected regions with the affinity propagation (AP) clustering algorithm to obtain the segmentation result of the mixed aggregated-structural land-cover pixel subspace;
(6) Segmenting the structural pixel subspace:
(6a) segmenting line targets with visual semantic rules;
(6b) segmenting isolated point targets based on the gathering features of sketch lines;
(6c) merging the line target and point target segmentation results to obtain the segmentation result of the structural pixel subspace;
(7) Segmenting the homogeneous-region pixel subspace:
segmenting the homogeneous-region pixel subspace with the homogeneous-region segmentation method based on the multinomial logistic regression prior model to obtain the segmentation result of the homogeneous-region pixel subspace;
(8) Combining the segmentation results:
merging the segmentation results of the mixed aggregated-structural land-cover pixel subspace, the homogeneous-region pixel subspace and the structural pixel subspace to obtain the final segmentation result of the SAR image.
Compared with the prior art, the present invention has the following advantages:
First, the present invention uses a mean field variational Bayes inference network whose visible layer and hidden layer have the same number of units and are fully connected from the visible layer to the hidden layer, trains it without supervision on each region of the mixed aggregated-structural land-cover pixel subspace, and takes the weights of the network as the learned image features. This overcomes the shortcoming of the prior art that builds the feature vectors of the SAR image from pixel-level features without learning the structural features arising from the dependency between pixels, so the present invention can extract the structural features of the SAR image automatically and obtain better region consistency.
Second, for each mutually disconnected region of the mixed aggregated-structural land-cover pixel subspace, the present invention generates a group of random numbers obeying the SAR image distribution and uses them to initialize the weights and biases of the mean field variational Bayes inference network. This overcomes the shortcoming of the prior-art deep auto-encoder networks for automatic feature extraction, which are initialized with a generic random distribution and cannot capture the essential characteristics of SAR images, so the present invention can learn essential features that characterize the land cover of the SAR image and improve the accuracy of SAR image segmentation.
Third, the present invention uses the sketch structure-constrained mean field variational Bayes inference method, overcoming the shortcoming of the prior-art deep auto-encoder networks for automatic feature extraction, which do not use the sketch structure of the SAR image as a constraint, so the present invention can capture the important structural features that characterize the land cover of the SAR image and further improve the accuracy of SAR image segmentation.
Fourth, the present invention takes the features of each mutually disconnected region of the mixed aggregated-structural land-cover pixel subspace as dictionary atoms to build a codebook, so the resulting feature vectors are sparser and feature comparison is more time-efficient. This overcomes the shortcoming of the prior art based on the deconvolution network and the mapping inference network, which requires the number of clusters to be set manually and clusters slowly, so the present invention can obtain the segmentation result of the SAR image more accurately and improve the time efficiency of SAR image segmentation.
Description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 shows the simulation diagrams of the present invention;
Fig. 3 shows the simulation result diagrams of the present invention.
Specific embodiments
The present invention is further described below in conjunction with the accompanying drawings.
Referring to Fig. 1, the specific steps of the present invention are as follows.
Step 1: Sketch the SAR image.
Input a synthetic aperture radar SAR image.
Establish the sketch model of the SAR image, as follows.
1st step: choose an arbitrary number in the range [100, 150] as the total number of templates.
2nd step: construct an edge/line template composed of pixels with different directions and scales, construct an anisotropic Gaussian function from the direction and scale of the template, compute the weighting coefficient of each pixel in the template with this Gaussian function, and collect the weighting coefficients of all pixels in the template, where the number of scales is 3 to 5 and the number of directions is 18.
3rd step: calculate the mean of the pixels of the SAR image corresponding to the template region according to the following formula:
μ = Σ_{g∈Ω} w_g A_g / Σ_{g∈Ω} w_g
where μ denotes the mean of all pixels of the SAR image corresponding to the coordinates of the template region, Σ denotes summation, g denotes the coordinate of any pixel in region Ω of the template, ∈ denotes set membership, w_g denotes the weighting coefficient of the pixel at coordinate g in region Ω of the template, with w_g ∈ [0, 1], and A_g denotes the value of the SAR image pixel corresponding to the pixel at coordinate g in region Ω of the template.
4th step: calculate the variance of the pixels of the SAR image corresponding to the template region according to the following formula:
ν = Σ_{g∈Ω} w_g (A_g − μ)² / Σ_{g∈Ω} w_g
where ν denotes the variance of all pixels of the SAR image corresponding to the coordinates of the template region.
5th step: calculate the response of each pixel of the SAR image to the ratio operator according to the following formula:
R = 1 − min{μ_a/μ_b, μ_b/μ_a}
where R denotes the response of each pixel of the SAR image to the ratio operator, min{·} denotes the minimum, a and b denote the two different regions of the template, μ_a denotes the mean of all pixels in template region a, and μ_b denotes the mean of all pixels in template region b.
6th step: calculate the response of each pixel of the SAR image to the correlation operator according to the following formula:
C = 1 / sqrt(1 + 2·(ν_a² + ν_b²)/(μ_a + μ_b)²)
where C denotes the response of each pixel of the SAR image to the correlation operator, sqrt denotes the square root, a and b denote the two different regions of the template, ν_a denotes the variance of all pixels in template region a, ν_b denotes the variance of all pixels in template region b, μ_a denotes the mean of all pixels in template region a, and μ_b denotes the mean of all pixels in template region b.
7th step: calculate the response of each pixel of the SAR image to each template according to the following formula:
where F denotes the response of each pixel of the SAR image to each template, sqrt denotes the square root, and R and C denote the responses of the pixel to the ratio operator and to the correlation operator, respectively.
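To make the 3rd to 6th steps concrete, the following minimal numpy sketch (an illustrative reconstruction, not the patented implementation; the split of the template into regions a and b, the isotropic weight mask and all numeric values are assumptions) computes the weighted mean, the weighted variance, the ratio response R and the correlation response C for one template position.

```python
import numpy as np

def weighted_mean_var(values, weights):
    """Weighted mean and variance of the SAR pixels under one template region."""
    mu = np.sum(weights * values) / np.sum(weights)
    var = np.sum(weights * (values - mu) ** 2) / np.sum(weights)
    return mu, var

def template_response(patch_a, patch_b, w_a, w_b):
    """Ratio operator R and correlation operator C for the two template regions a and b."""
    mu_a, var_a = weighted_mean_var(patch_a, w_a)
    mu_b, var_b = weighted_mean_var(patch_b, w_b)
    R = 1.0 - min(mu_a / mu_b, mu_b / mu_a)
    C = 1.0 / np.sqrt(1.0 + 2.0 * (var_a ** 2 + var_b ** 2) / (mu_a + mu_b) ** 2)
    return R, C

# Toy example: two halves of a 21x21 patch with an assumed isotropic Gaussian weight
# (the patent uses anisotropic Gaussian weights per template direction and scale).
rng = np.random.default_rng(0)
patch = rng.gamma(shape=4.0, scale=30.0, size=(21, 21))   # speckle-like intensities
yy, xx = np.mgrid[-10:11, -10:11]
w = np.exp(-(xx ** 2 + yy ** 2) / (2 * 5.0 ** 2))
R, C = template_response(patch[:, :10], patch[:, 11:], w[:, :10], w[:, 11:])
print(R, C)  # the final response F of the 7th step combines R and C
```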
8th step: judge whether the number of constructed templates has reached the chosen total number of templates; if not, return to the 2nd step; otherwise, perform the 9th step.
9th step: from all templates, select the template with the maximum response as the template of the pixel of the SAR image, take the maximum response of that template as the strength of the pixel and the direction of that template as the direction of the pixel, obtaining the edge response map and the gradient map of the SAR image.
10th step: calculate the strength values of the strength map of the SAR image according to the following formula to obtain the strength map:
where I denotes the strength value of the strength map of the SAR image, r denotes the value in the edge response map of the SAR image, and t denotes the value in the gradient map of the SAR image.
11th step: detect the strength map with the non-maximum suppression method to obtain the suggested sketch.
12th step: choose the pixel with the maximum strength in the suggested sketch, connect the pixels linked to this maximum-strength pixel to form suggested line segments, and obtain the suggested sketch map.
13th step: calculate the code length gain of each sketch line in the suggested sketch map according to the following formula:
where CLG denotes the code length gain of a sketch line in the suggested sketch map, Σ denotes summation, J denotes the number of pixels in the neighborhood of the current sketch line, A_j denotes the observed value of the j-th pixel in the neighborhood of the current sketch line, A_{j,0} denotes the estimated value of the j-th pixel in the neighborhood when the current sketch line cannot represent structural information, ln(·) denotes the natural logarithm, and A_{j,1} denotes the estimated value of the j-th pixel in the neighborhood when the current sketch line can represent structural information.
14th step: choose an arbitrary number in the range [5, 50] as the threshold T.
15th step: select all suggested sketch lines with CLG > T and combine them into the sketch map of the SAR image.
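The exact CLG expression is not reproduced in the text; the sketch below assumes a log-likelihood-ratio form built from the variables defined in the 13th step (Gaussian noise model as an assumption) and illustrates the threshold selection of the 14th and 15th steps.

```python
import numpy as np

def code_length_gain(observed, est_no_line, est_with_line, sigma=1.0):
    """Assumed CLG form: how much better the 'with line' estimates A_{j,1} explain the
    neighborhood pixels A_j than the 'no line' estimates A_{j,0}, under a Gaussian
    noise model (the patent does not spell out the functional form here)."""
    ll_with = -((observed - est_with_line) ** 2) / (2 * sigma ** 2)
    ll_without = -((observed - est_no_line) ** 2) / (2 * sigma ** 2)
    return float(np.sum(ll_with - ll_without))

def select_sketch_lines(candidate_lines, threshold_T):
    """Keep the suggested sketch lines whose CLG exceeds the threshold T chosen in [5, 50]."""
    return [line for line, clg in candidate_lines if clg > threshold_T]

# Toy usage with made-up neighborhood values and candidate CLG scores.
obs = np.array([10.0, 12.0, 30.0])
print(code_length_gain(obs, est_no_line=np.full(3, 11.0), est_with_line=np.array([10.0, 12.0, 29.0])))
candidates = [("l1", 12.3), ("l2", 3.8), ("l3", 47.0)]
print(select_sketch_lines(candidates, threshold_T=10))  # ['l1', 'l3']
```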
Extract the sketch map of the SAR image from the sketch model.
The SAR image sketch model used by the present invention is the model proposed by Jie Wu et al. in the article "Local maximal homogenous region search for SAR speckle reduction with sketch-based geometrical kernel function", published in IEEE Transactions on Geoscience and Remote Sensing in 2014.
Step 2: Divide the pixel subspaces.
Regionalize the sketch map of the SAR image with the sketch line regionalization method to obtain the region map of the SAR image containing aggregated regions, regions without sketch lines and structural regions, as follows.
According to the gathering degree of the sketch line segments in the sketch map of the SAR image, divide the sketch lines into aggregating sketch lines, which represent aggregated land cover, and sketch lines representing boundaries, line targets and isolated targets.
According to the histogram statistics of the gathering degree of the sketch line segments, choose the sketch line segments whose gathering degree equals the optimal gathering degree as the seed line segment set {E_k, k = 1, 2, ..., m}, where E_k denotes any sketch line segment in the seed set, k denotes its index, m denotes the total number of seed line segments, and {·} denotes a set.
Taking line segments not yet added to the seed line segment set as base points, recursively solve the line segment aggregate from these base points.
Construct a circular primitive whose radius is the upper bound of the optimal gathering degree interval, dilate the line segments in the line segment aggregate with this circular primitive, and erode the dilated line segment aggregate from the outside inward to obtain the aggregated regions of the sketch map in units of sketch points.
For the sketch lines representing boundaries, line targets and isolated targets, construct a 5 × 5 geometric window centered on each sketch point of each sketch line to obtain the structural regions.
Take the parts of the sketch map other than the aggregated regions and the structural regions as the regions without sketch lines.
Merge the aggregated regions, the structural regions and the regions without sketch lines of the sketch map to obtain the region map of the SAR image containing aggregated regions, structural regions and regions without sketch lines.
Map the region map containing aggregated regions, regions without sketch lines and structural regions onto the input SAR image to obtain the mixed aggregated-structural land-cover pixel subspace, the homogeneous-region pixel subspace and the structural pixel subspace of the SAR image.
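As a minimal illustration of the mapping from the region map to the pixel subspaces, the following numpy sketch turns a per-pixel region label map into the three masks used as pixel subspaces; the label values and array shapes are assumptions for the example, not values fixed by the patent.

```python
import numpy as np

# Assumed label convention for the region map: 0 = no sketch line (homogeneous),
# 1 = aggregated region, 2 = structural region.
NO_SKETCH, AGGREGATED, STRUCTURAL = 0, 1, 2

def split_pixel_subspaces(sar_image, region_map):
    """Map the region map onto the SAR image and return the three pixel subspaces
    as (mask, pixel values) pairs."""
    assert sar_image.shape == region_map.shape
    subspaces = {}
    for name, label in [("mixed_aggregated_structural", AGGREGATED),
                        ("homogeneous", NO_SKETCH),
                        ("structural", STRUCTURAL)]:
        mask = (region_map == label)
        subspaces[name] = (mask, sar_image[mask])
    return subspaces

rng = np.random.default_rng(1)
image = rng.gamma(4.0, 30.0, size=(64, 64))
labels = rng.integers(0, 3, size=(64, 64))
spaces = split_pixel_subspaces(image, labels)
print({name: pixels.size for name, (mask, pixels) in spaces.items()})
```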
Step 3: Build the mean field variational Bayes inference network model.
Set the input layer, the hidden layer and the reconstruction layer of the mean field variational Bayes inference network model to 441 neurons each, and set the connections between the input layer and the hidden layer and between the hidden layer and the reconstruction layer to full connections.
Calculate the variational lower bound of the mean field variational Bayes inference network model according to the following formula:
L(Q) = Σ_Z log P(V|W,H,c) + Σ_Z log P(W) + Σ_Z log P(H|b) − Σ_Z Q(W) − Σ_Z Q(H)
where L(Q) denotes the variational lower bound, log denotes the base-10 logarithm, P(V|W,H,c) denotes the conditional probability of V given W, H and c, V denotes the input layer of the model, W denotes the connection weights of the model, H denotes the hidden layer of the model, c denotes the bias of the hidden layer, b denotes the bias of the input layer, P(W) denotes the prior probability of W, P(H|b) denotes the conditional probability of H given b, Q(W) denotes the variational distribution of W, and Q(H) denotes the variational distribution of H.
Calculate the structural reconstruction error according to the following formula:
where G denotes the structural reconstruction error, M denotes the total number of input image blocks, v̂_i denotes the reconstructed image block of the i-th input image block, s_i denotes the i-th sketch block, SM(·) denotes the sketch-block extraction operation, and C(·) denotes the sketch-line length operation.
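The exact expression for G is not reproduced in the text; the sketch below assumes the natural reading of the variable list, namely the mean absolute difference between the sketch-line length of each reconstructed block and the line length of its original sketch block, with a simple gradient-based edge count standing in for the sketch-block operation SM(·) and the line-length operation C(·).

```python
import numpy as np

def sketch_line_length_of_image(block, grad_threshold=20.0):
    """Stand-in for C(SM(.)) on a reconstructed image block: count strong-gradient
    pixels as a proxy for sketch-line length (the patent's SM(.) is the full
    sketch model of Step 1; this is only an assumption for illustration)."""
    gy, gx = np.gradient(block.astype(float))
    return float(np.count_nonzero(np.hypot(gx, gy) > grad_threshold))

def structural_reconstruction_error(reconstructed_blocks, sketch_blocks):
    """Assumed form of G: mean absolute difference between the sketch-line length of
    each reconstructed block and the number of sketch pixels in the binary sketch block s_i."""
    diffs = [abs(sketch_line_length_of_image(v_hat) - float(np.count_nonzero(s)))
             for v_hat, s in zip(reconstructed_blocks, sketch_blocks)]
    return float(np.mean(diffs))

rng = np.random.default_rng(2)
recon = [rng.gamma(4.0, 30.0, (21, 21)) for _ in range(8)]                     # reconstructed blocks
sketches = [(rng.random((21, 21)) > 0.9).astype(np.uint8) for _ in range(8)]   # binary sketch blocks
print(structural_reconstruction_error(recon, sketches))
```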
Step 4: Perform feature learning on the mixed aggregated-structural land-cover pixel subspace.
Divide the mixed aggregated-structural land-cover pixel subspace of the SAR image into regions according to spatial connectivity; if there is only one region, go to Step 6.
For each mutually disconnected region, sample with a 21 × 21 window at every other pixel to obtain multiple image block samples.
For each image block sample, take the sketch block sample in the sketch map that corresponds to it one-to-one.
For each mutually disconnected region, generate a corresponding group of random numbers obeying the G0 distribution of inhomogeneous land cover.
For each mutually disconnected region, initialize the weights and biases of the mean field variational Bayes inference network with the corresponding group of random numbers, obtaining the initialized mean field variational Bayes inference network, as follows.
Estimate the parameters of the probability density of the G0 distribution of inhomogeneous land cover with the Mellin transform, an integral transform whose kernel is a power function, obtaining the values of the three parameters α, γ and n.
Calculate the probability density of the G0 distribution of inhomogeneous land cover of the SAR image according to the following formula:
where P(I(x, y)) denotes the probability density of the inhomogeneous land-cover distribution of the SAR image, I(x, y) denotes the intensity of the pixel at coordinate (x, y), n denotes the equivalent number of looks of the SAR image, α denotes the shape parameter of the SAR image, γ denotes the scale parameter of the SAR image, and Γ(·) denotes the Gamma function, whose value is obtained by the following formula:
where u denotes the argument of the Gamma function, ∫ denotes integration, and t denotes the integration variable.
From the random matrix A obeying the G0 distribution of inhomogeneous land cover, take the first 441 rows as the initial values of the weights of the mean field variational Bayes inference network.
From the random matrix A obeying the G0 distribution of inhomogeneous land cover, take two arbitrary columns as the initial values of the visible-layer bias and of the hidden-layer bias of the mean field variational Bayes inference network, respectively, completing the initialization of the mean field variational Bayes inference network.
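A minimal sketch of this initialization is given below. It assumes the usual multiplicative construction of G0-distributed samples (a unit-mean Gamma speckle component times an inverse-Gamma backscatter component with α < 0, γ > 0 and n looks); the parameter values and matrix shape are illustrative, not those estimated by the Mellin transform step.

```python
import numpy as np

def sample_g0(alpha, gamma, n, size, rng):
    """Draw G0-distributed intensities via the standard product construction (assumption):
    speckle X ~ Gamma(n, 1/n) times backscatter Y ~ InvGamma(-alpha, gamma)."""
    speckle = rng.gamma(shape=n, scale=1.0 / n, size=size)
    backscatter = gamma / rng.gamma(shape=-alpha, scale=1.0, size=size)
    return speckle * backscatter

def init_network_from_g0(alpha, gamma, n, n_visible=441, n_hidden=441, seed=0):
    """Initialize weights and biases of the inference network from a G0 random matrix A:
    the first n_hidden columns of A become the 441 x 441 weight matrix and two further
    columns become the visible-layer and hidden-layer biases (an equivalent reading of
    the row/column selection described in the text)."""
    rng = np.random.default_rng(seed)
    A = sample_g0(alpha, gamma, n, size=(n_visible, n_hidden + 2), rng=rng)
    W = A[:, :n_hidden]
    b_visible = A[:, n_hidden]
    c_hidden = A[:, n_hidden + 1]
    return W, b_visible, c_hidden

W, b, c = init_network_from_g0(alpha=-3.0, gamma=2.0, n=4.0)
print(W.shape, b.shape, c.shape)
```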
For the initialized mean field variational Bayes inference network of each mutually disconnected region, take the image block samples as the input layer and perform structure-constrained training with the sketch structure-constrained mean field variational Bayes inference method to obtain the trained mean field variational Bayes inference network, as follows.
1st step: update the weights of the mean field variational Bayes inference network according to the following formula:
where Q(W) denotes the variational distribution of W, W denotes the weights of the mean field variational Bayes inference network, N(·) denotes the normal probability density function, D denotes the covariance parameter of the normal probability density function, K denotes the number of input-layer neurons of the network, v_n denotes the n-th input sample, c_j denotes the bias of the j-th neuron of the hidden layer, γ denotes the data augmentation parameter of the network, h_n denotes the hidden layer of the n-th input sample, H denotes the hidden layers of all samples, T denotes transposition, δ denotes an auxiliary weight quantity, ⊙ denotes the element-wise product, besselk(·) denotes the modified Bessel function of the second kind, ξ_k denotes the k-th column of ξ, and φ_k denotes the k-th element of φ; the values of γ, δ, ξ_k and φ_k are obtained from their respective formulas.
2nd step: calculate the k-th column of the weights of the mean field variational Bayes inference network according to the following formula:
where w_k denotes the k-th column of the weights of the mean field variational Bayes inference network.
3rd step: update the bias of the input layer of the mean field variational Bayes inference network according to the following formula.
4th step: update the bias of the hidden layer of the mean field variational Bayes inference network according to the following formula.
5th step: from the updated biases and weights, obtain reconstructed image blocks equal in number to the sample image blocks.
6th step: compute the sketch map of each reconstructed image block as its reconstructed sketch block.
7th step: compute the structural error G with the structural reconstruction error formula of step (3c).
8th step: judge whether G is greater than the threshold 0.2; if so, return to the 1st step; otherwise, perform the 9th step.
9th step: the structure-constrained training of the mean field variational Bayes inference network is complete.
For each mutually disconnected region, take the weights of its trained mean field variational Bayes inference network as the feature set of that region.
Step 5: Segment the mixed aggregated-structural land-cover pixel subspace of the SAR image.
Concatenate the feature sets of all mutually disconnected regions and take the concatenated feature set as the codebook.
For all features of each mutually disconnected region, compute the inner product with every feature in the codebook to obtain the projection vectors of all features of each region on the codebook.
Perform max pooling on the projection vectors of each mutually disconnected region to obtain the structural feature vector of each region.
Cluster the structural feature vectors of all mutually disconnected regions with the affinity propagation (AP) clustering algorithm to obtain the segmentation result of the mixed aggregated-structural land-cover pixel subspace.
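The following sketch illustrates this codebook / projection / max-pooling / affinity-propagation pipeline on random feature sets; the feature dimensions and the scikit-learn AffinityPropagation clusterer are illustrative choices, not prescribed by the patent.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def region_descriptor(region_features, codebook):
    """Project every feature of a region onto the codebook (inner products) and
    max-pool over the region's features to get one structural feature vector."""
    projections = region_features @ codebook.T          # (n_features, codebook_size)
    return projections.max(axis=0)                      # max pooling -> (codebook_size,)

rng = np.random.default_rng(3)
# One feature set (trained network weights, here random stand-ins) per disconnected region.
region_feature_sets = [rng.normal(size=(rng.integers(5, 15), 64)) for _ in range(6)]

codebook = np.vstack(region_feature_sets)               # concatenate all feature sets
descriptors = np.vstack([region_descriptor(f, codebook) for f in region_feature_sets])

labels = AffinityPropagation(random_state=0, max_iter=1000).fit_predict(descriptors)
print(labels)  # one cluster label per disconnected region = subspace segmentation
```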
Step 6: Segment the structural pixel subspace.
Segment line targets using visual semantic rules, as follows.
Let the distance between the i-th sketch line l_i and the j-th sketch line l_j be D_ij, let the direction of l_i be O_i and the direction of l_j be O_j, with i, j ∈ [1, 2, ..., S], where S is the total number of sketch lines.
A line target wider than 3 pixels is represented by two sketch lines l_i and l_j whose distance D_ij is less than T1 and whose direction difference (O_i − O_j) is less than 10 degrees, where T1 = 5.
Let the mean gray level of each column within the geometric window w_s of the s-th sketch line l_s be A_i, let the gray difference of adjacent columns be AD_i = |A_i − A_{i+1}|, and let z_s = [z_s1, z_s2, ..., z_s9] be the label vector of the gray differences AD_i of adjacent columns.
A line target narrower than 3 pixels is represented by a single sketch line l_s: within the geometric window w_s of l_s, compute the gray difference AD_i of adjacent columns; if AD_i > T2 then z_si = 1, otherwise z_si = 0; exactly two elements of z_s have the value 1 and the rest are 0, where T2 = 34.
Let L1 and L2 be the sets of sketch lines that represent line targets: if D_ij < T1 and |O_i − O_j| < 10, then l_i, l_j ∈ L1; if sum(z_s) = 2, then l_s ∈ L2, where sum(·) denotes summation over the elements of a vector.
In the structural pixel subspace, according to the set L1 of line-target sketch lines, take the region between l_i and l_j as a line target.
In the structural pixel subspace, according to the set L2 of line-target sketch lines, take the region covering l_s as a line target.
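A minimal sketch of these two rules is shown below. Sketch lines are reduced to a direction and a pairwise distance for the first rule, and the column-difference rule is shown on an assumed 9-column window (the patent's label vector z_s has 9 entries; the exact window size used here is an illustrative assumption).

```python
import numpy as np

T1, T2 = 5.0, 34.0  # distance and gray-difference thresholds from the rules above

def is_wide_line_target(dist_ij, dir_i_deg, dir_j_deg):
    """Rule for line targets wider than 3 pixels: two nearly parallel, close sketch lines."""
    return dist_ij < T1 and abs(dir_i_deg - dir_j_deg) < 10.0

def is_thin_line_target(window_cols):
    """Rule for line targets narrower than 3 pixels: within the geometric window of a
    single sketch line, exactly two adjacent-column gray differences exceed T2."""
    col_means = window_cols.mean(axis=0)          # A_i, one mean per column
    ad = np.abs(np.diff(col_means))               # AD_i = |A_i - A_{i+1}|
    z = (ad > T2).astype(int)
    return int(z.sum()) == 2

# Toy usage: a bright 1-pixel-wide vertical line in the middle of a dark window.
win = np.full((9, 9), 40.0)
win[:, 4] = 200.0
print(is_wide_line_target(3.2, 87.0, 90.0), is_thin_line_target(win))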
Segment isolated point targets based on the gathering features of sketch lines, as follows.
1st step: in the structural regions of the region map, mark all sketch lines that do not represent line targets as sketch lines of the candidate sketch line set.
2nd step: randomly select a sketch line from the candidate sketch line set and construct a 5 × 5 geometric window centered on one endpoint of the selected sketch line.
3rd step: judge whether endpoints of other sketch lines exist within the geometric window; if so, perform the 4th step; otherwise, perform the 6th step.
4th step: judge whether there is only one such endpoint; if so, connect the sketch line of that endpoint with the current sketch line; otherwise, perform the 5th step.
5th step: connect the selected sketch line with the sketch line of each endpoint, and from all these connecting lines choose the two sketch lines with the largest angle as the completed connection.
6th step: judge whether endpoints of other sketch lines exist within the geometric window of the other endpoint of the sketch line; if so, perform the 4th step; otherwise, perform the 7th step.
7th step: from the sketch lines for which the connection operation has been completed, choose the sketch lines containing two or more sketch line segments, and count the number n of sketch line segments in each selected sketch line, where n ≥ 2.
8th step: judge whether the number n of sketch line segments equals 2; if so, perform the 9th step; otherwise, perform the 10th step.
9th step: take the sketch lines whose vertex angle lies in the range [10°, 140°] as sketch lines with gathering features.
10th step: select the sketch lines whose n − 1 vertex angles all lie in the range [10°, 140°].
11th step: for the selected sketch lines, distinguish the following two cases:
Case 1: judge whether the two sketch line segments adjacent to the i-th sketch line segment, i.e. the (i−1)-th and (i+1)-th segments, lie on the same side of the straight line containing the i-th sketch line segment, 2 ≤ i ≤ n−1; if every sketch line segment of the sketch line lies on the same side as its adjacent segments, mark the sketch line as a sketch line with gathering features.
Case 2: judge whether the two sketch line segments adjacent to the i-th sketch line segment, i.e. the (i−1)-th and (i+1)-th segments, lie on the same side of the straight line containing the i-th sketch line segment, 2 ≤ i ≤ n−1; if n−1 sketch line segments of the sketch line lie on the same side as their adjacent segments and one sketch line segment does not, also mark the sketch line as a sketch line with gathering features.
12th step: choose any sketch line from the sketch lines with gathering features, determine the distance between its two endpoints from their coordinates, and if this endpoint distance lies in the range [0, 20], take the selected sketch line as a sketch line representing an isolated point target.
13th step: judge whether any sketch lines with gathering features remain unprocessed; if so, perform the 12th step; otherwise, perform the 14th step.
14th step: with the super-pixel segmentation method, perform super-pixel segmentation on the pixels of the SAR image around the sketch lines representing point targets, and take the super-pixels whose gray values lie in [0, 45] or [180, 255] after segmentation as point-target super-pixels.
15th step: merge the point-target super-pixels and take the boundary of the merged point-target super-pixels as the boundary of the point target, obtaining the segmentation result of the point targets.
Merge the line target and point target segmentation results to obtain the segmentation result of the structural pixel subspace.
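As a small illustration of the 14th and 15th steps (selecting very dark or very bright super-pixels around point-target sketch lines), the sketch below assumes a pre-computed super-pixel label map and simply keeps the super-pixels whose mean gray value falls in [0, 45] or [180, 255]; the super-pixel algorithm itself and the block label map are stand-ins.

```python
import numpy as np

def point_target_superpixels(gray_image, superpixel_labels,
                             dark=(0, 45), bright=(180, 255)):
    """Return a boolean mask of the super-pixels whose mean gray value is either
    very dark or very bright, taken here as point-target super-pixels."""
    mask = np.zeros_like(gray_image, dtype=bool)
    for label in np.unique(superpixel_labels):
        region = (superpixel_labels == label)
        mean_gray = gray_image[region].mean()
        if dark[0] <= mean_gray <= dark[1] or bright[0] <= mean_gray <= bright[1]:
            mask |= region
    return mask

# Toy usage: 8x8 blocks standing in for super-pixels, one dark and one bright block.
labels = (np.arange(32)[:, None] // 8) * 4 + (np.arange(32)[None, :] // 8)
gray = np.full((32, 32), 100.0)
gray[:8, :8] = 10.0      # a dark block
gray[-8:, -8:] = 220.0   # a bright block
print(point_target_superpixels(gray, labels).sum())  # 128 pixels from the two blocks
```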
Step 7: Segment the homogeneous-region pixel subspace.
1st step: arbitrarily choose a pixel from the homogeneous-region pixel subspace, build a 3 × 3 square window centered on the selected pixel, and calculate the standard deviation σ1 of the window.
2nd step: increase the side length of the square window by 2 to obtain a new square window, and calculate the standard deviation σ2 of the new square window.
3rd step: set the standard deviation threshold T3 = 3; if |σ1 − σ2| < T3, take the square window with standard deviation σ2 as the final square window and perform the 4th step; otherwise, perform the 2nd step.
4th step: calculate the prior probability of the center pixel in the square window according to the following formula:
where p1' denotes the prior probability of the center pixel in the square window, exp(·) denotes the exponential function, η' denotes the probability model parameter with value 1, x_k' denotes the number of pixels belonging to class k' in the square window, k' ∈ [1, ..., K'], K' denotes the number of segmentation classes with value 5, and x_i' denotes the number of pixels belonging to class i' in the square window obtained in the 3rd step.
5th step: multiply the probability density of the pixel gray level by the probability density of the texture to obtain the likelihood probability p2', where the probability density of the gray level is obtained from the Nakagami distribution and the probability density of the texture is obtained from the t-distribution.
6th step: multiply the prior probability p1' by the likelihood probability p2' to obtain the posterior probability p12'.
7th step: judge whether unprocessed pixels remain in the homogeneous-region pixel subspace; if so, perform the 1st step; otherwise, perform the 8th step.
8th step: obtain the segmentation result of the homogeneous-region pixel subspace according to the maximum a posteriori criterion.
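A minimal sketch of the adaptive window of the 1st to 3rd steps is given below: the window around a pixel grows by 2 pixels per side until the standard deviation stabilizes (|σ1 − σ2| < T3 = 3). Border handling and the subsequent prior/likelihood computation are omitted; this only illustrates the window-growing rule, with the maximum window size as an assumed safeguard.

```python
import numpy as np

def adaptive_window(image, row, col, t3=3.0, max_size=31):
    """Grow a square window centered at (row, col), increasing the side length by 2
    each step, until the change in standard deviation is below t3; return the half-width."""
    def window_std(half):
        r0, r1 = max(0, row - half), min(image.shape[0], row + half + 1)
        c0, c1 = max(0, col - half), min(image.shape[1], col + half + 1)
        return float(image[r0:r1, c0:c1].std())

    half = 1                       # 3 x 3 starting window
    sigma1 = window_std(half)
    while 2 * half + 1 < max_size:
        half += 1                  # side length grows by 2
        sigma2 = window_std(half)
        if abs(sigma1 - sigma2) < t3:
            return half            # final window has side 2 * half + 1
        sigma1 = sigma2
    return half

rng = np.random.default_rng(5)
img = rng.normal(100.0, 5.0, size=(64, 64))
print(adaptive_window(img, 32, 32))
```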
Step 8: Combine the segmentation results.
Merge the segmentation results of the mixed aggregated-structural land-cover pixel subspace, the homogeneous-region pixel subspace and the structural pixel subspace to obtain the final segmentation result of the SAR image.
The effect of the present invention is further described below with reference to the simulation figures.
1. Simulation conditions:
The hardware used for the simulation is a graphics workstation of the Key Laboratory of Intelligent Perception and Image Understanding. The SAR image used in the simulation is a Piperiver image with Ku-band, 1-meter resolution.
2. Simulation content:
The simulation experiment of the present invention segments the Piperiver SAR image shown in Fig. 2(a), which is taken from a Ku-band SAR image with 1-meter resolution.
Using the SAR image sketching step of the present invention, the Piperiver image of Fig. 2(a) is sketched to obtain the sketch map shown in Fig. 2(b).
Using the pixel subspace division step of the present invention, the sketch map of Fig. 2(b) is regionalized to obtain the region map shown in Fig. 2(c). The white areas in Fig. 2(c) represent aggregated regions, and the others are regions without sketch lines and structural regions. The region map of Fig. 2(c) is mapped onto the Piperiver image of Fig. 2(a) to obtain the mixed aggregated-structural land-cover pixel subspace map of Piperiver shown in Fig. 2(d).
Using the step of the present invention for segmenting the mixed aggregated-structural land-cover pixel subspace, the Piperiver mixed aggregated-structural land-cover pixel subspace map of Fig. 2(d) is segmented to obtain the segmentation result shown in Fig. 3(a), where gray areas represent unprocessed land-cover space, regions of the same color represent the same land cover, and regions of different colors represent different land cover. The structural regions of the region map of Fig. 2(c) are mapped onto the sketch map of Fig. 2(b) to obtain the sketch lines corresponding to the structural regions, shown in Fig. 2(e). Among the sketch lines corresponding to the structural regions, the black lines in Fig. 2(f) are the sketch lines representing line targets, and the black lines in Fig. 2(g) are the sketch lines representing point targets.
Using the step of the present invention for segmenting point targets based on the gathering features of sketch lines, the segmentation result of the isolated targets shown in Fig. 3(b) is obtained.
Using the step of the present invention for combining segmentation results, the segmentation result of the mixed aggregated-structural land-cover pixel subspace of Fig. 3(a), the segmentation result of the homogeneous-region pixel subspace and the segmentation result of the structural pixel subspace are merged to obtain Fig. 3(c), the final segmentation result of the Piperiver image of Fig. 2(a).
3. Analysis of the simulation results:
Fig. 3(c) is the final segmentation result of the method of the present invention on the Piperiver image, and Fig. 3(d) is the final segmentation result of the SAR image segmentation method based on hierarchical visual semantics and the adaptive neighborhood multinomial latent model on the same image. Comparing the segmentation results, it can be concluded that the method of the present invention locates the boundaries of the mixed aggregated-structural land-cover pixel subspace more accurately, determines the number of classes of the homogeneous-region pixel subspace more reasonably with clearly better region consistency, and handles the point targets in the structural pixel subspace well. The method of the present invention therefore segments SAR images effectively and improves the accuracy of SAR image segmentation.

Claims (9)

1. A sketch structure-based mean field variational Bayes SAR image segmentation method, comprising the steps of:
(1) sketching the SAR image:
(1a) inputting a synthetic aperture radar SAR image;
(1b) establishing the sketch model of the SAR image;
(1c) extracting the sketch map of the SAR image from the sketch model;
(2) dividing pixel subspaces:
(2a) regionalizing the sketch map of the SAR image with the sketch line regionalization method to obtain a region map of the SAR image containing aggregated regions, regions without sketch lines and structural regions;
(2b) mapping the region map containing aggregated regions, regions without sketch lines and structural regions onto the input SAR image to obtain the mixed aggregated-structural land-cover pixel subspace, the homogeneous-region pixel subspace and the structural pixel subspace of the SAR image;
(3) building the mean field variational Bayes inference network model:
(3a) setting the input layer, the hidden layer and the reconstruction layer of the mean field variational Bayes inference network model to 441 neurons each, and setting the connections between the input layer and the hidden layer and between the hidden layer and the reconstruction layer to full connections;
(3b) calculating the variational lower bound of the mean field variational Bayes inference network model according to the following formula:
L(Q) = Σ_Z log P(V|W,H,c) + Σ_Z log P(W) + Σ_Z log P(H|b) − Σ_Z Q(W) − Σ_Z Q(H)
where L(Q) denotes the variational lower bound of the mean field variational Bayes inference network model, log denotes the base-10 logarithm, P(V|W,H,c) denotes the conditional probability of V given W, H and c, V denotes the input layer of the model, W denotes the connection weights of the model, H denotes the hidden layer of the model, c denotes the bias of the hidden layer, b denotes the bias of the input layer, P(W) denotes the prior probability of W, P(H|b) denotes the conditional probability of H given b, Q(W) denotes the variational distribution of W, and Q(H) denotes the variational distribution of H;
(3c) calculating the structural reconstruction error according to the following formula:
where G denotes the structural reconstruction error, M denotes the total number of input image blocks, v̂_i denotes the reconstructed image block of the i-th input image block, s_i denotes the i-th sketch block, SM(·) denotes the sketch-block extraction operation, and C(·) denotes the sketch-line length operation;
(4) performing feature learning on the mixed aggregated-structural land-cover pixel subspace:
(4a) dividing the mixed aggregated-structural land-cover pixel subspace of the SAR image into regions according to spatial connectivity, and performing (4b) if several mutually disconnected regions are obtained;
(4b) for each mutually disconnected region, sampling with a 21 × 21 window at every other pixel to obtain multiple image block samples;
(4c) for each image block sample, taking the sketch block sample in the sketch map that corresponds to it one-to-one;
(4d) for each mutually disconnected region, generating a corresponding group of random numbers obeying the G0 distribution of inhomogeneous land cover;
(4e) for each mutually disconnected region, initializing the weights and biases of the mean field variational Bayes inference network with the corresponding group of random numbers to obtain the initialized mean field variational Bayes inference network;
(4f) for the initialized mean field variational Bayes inference network of each mutually disconnected region, taking the image block samples as the input layer and training the network with the sketch structure-constrained mean field variational Bayes inference method to obtain the trained mean field variational Bayes inference network;
(4g) for each mutually disconnected region, taking the weights of its trained mean field variational Bayes inference network as the feature set of that region;
(5) segmenting the mixed aggregated-structural land-cover pixel subspace of the SAR image:
(5a) concatenating the feature sets of all mutually disconnected regions and taking the concatenated feature set as a codebook;
(5b) for all features of each mutually disconnected region, computing the inner product with every feature in the codebook to obtain the projection vectors of all features of each region on the codebook;
(5c) performing max pooling on the projection vectors of each mutually disconnected region to obtain the structural feature vector of each region;
(5d) clustering the structural feature vectors of all mutually disconnected regions with the affinity propagation (AP) clustering algorithm to obtain the segmentation result of the mixed aggregated-structural land-cover pixel subspace;
(6) segmenting the structural pixel subspace:
(6a) segmenting line targets with visual semantic rules;
(6b) segmenting isolated point targets based on the gathering features of sketch lines;
(6c) merging the line target and point target segmentation results to obtain the segmentation result of the structural pixel subspace;
(7) segmenting the homogeneous-region pixel subspace:
segmenting the homogeneous-region pixel subspace with the homogeneous-region segmentation method based on the multinomial logistic regression prior model to obtain the segmentation result of the homogeneous-region pixel subspace;
(8) combining the segmentation results:
merging the segmentation results of the mixed aggregated-structural land-cover pixel subspace, the homogeneous-region pixel subspace and the structural pixel subspace to obtain the final segmentation result of the SAR image.
2. The sketch-structure-based mean field variational Bayes SAR image segmentation method according to claim 1, characterized in that the concrete steps of building the sketch model of the synthetic aperture radar SAR image described in step (1b) are as follows:
Step 1: arbitrarily choose a number in the range [100, 150] as the total number of templates;
Step 2: construct a template of edges and lines composed of pixels with different directions and scales; construct an anisotropic Gaussian function from the direction and scale information of the template; with this Gaussian function, compute the weighting coefficient of each pixel in the template and count the weighting coefficients of all pixels in the template, where the number of scales takes a value of 3 to 5 and the number of directions takes a value of 18;
Step 3: compute the mean of the pixels in the synthetic aperture radar SAR image region whose coordinates correspond to the template according to the following formula:
\mu = \frac{\sum_{g \in \Omega} w_g A_g}{\sum_{g \in \Omega} w_g}
Wherein, μ denotes the mean of all pixels in the synthetic aperture radar SAR image region whose coordinates correspond to the template, Σ denotes the summation operation, g denotes the coordinate of any pixel in region Ω of the template, ∈ denotes set membership, w_g denotes the weighting coefficient of the pixel at coordinate g in region Ω of the template, with w_g ∈ [0, 1], and A_g denotes the value of the pixel in the synthetic aperture radar SAR image corresponding to the pixel at coordinate g in region Ω of the template;
Step 4: compute the variance of the pixels in the synthetic aperture radar SAR image region whose coordinates correspond to the template according to the following formula:
v = \frac{\sum_{g \in \Omega} w_g (A_g - \mu)^2}{\sum_{g \in \Omega} w_g}
Wherein, v denotes the variance of all pixels in the synthetic aperture radar SAR image region whose coordinates correspond to the template;
Step 5: compute the response of each pixel in the synthetic aperture radar SAR image to the ratio operator according to the following formula:
R = 1 - \min\left\{\frac{\mu_a}{\mu_b}, \frac{\mu_b}{\mu_a}\right\}
Wherein, R denotes the response of each pixel in the synthetic aperture radar SAR image to the ratio operator, min{·} denotes the minimum operation, a and b denote two different regions of the template, μ_a denotes the mean of all pixels in template region a, and μ_b denotes the mean of all pixels in template region b;
Step 6: compute the response of each pixel in the synthetic aperture radar SAR image to the correlation operator according to the following formula:
C = \sqrt{\frac{1}{1 + 2 \cdot \frac{v_a^2 + v_b^2}{(\mu_a + \mu_b)^2}}}
Wherein, C denotes the response of each pixel in the synthetic aperture radar SAR image to the correlation operator, √ denotes the square root operation, a and b denote two different regions of the template, v_a denotes the variance of all pixels in template region a, v_b denotes the variance of all pixels in template region b, μ_a denotes the mean of all pixels in template region a, and μ_b denotes the mean of all pixels in template region b;
Step 7: compute the response of each pixel in the synthetic aperture radar SAR image to each template according to the following formula:
F = \sqrt{\frac{R^2 + C^2}{2}}
Wherein, F denotes the response of each pixel in the synthetic aperture radar SAR image to each template, √ denotes the square root operation, and R and C denote the responses of the pixel in the synthetic aperture radar SAR image to the ratio operator and to the correlation operator, respectively;
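The weighted statistics and operator responses of Steps 3-7 can be summarized in a short numpy sketch; the placement of the square root in the correlation response follows the reconstruction above, and the handling of template regions a and b as flat arrays is a simplification for illustration.

```python
import numpy as np

def template_response(A_a, w_a, A_b, w_b):
    """A_a, A_b: SAR image pixel values under template regions a and b;
    w_a, w_b: anisotropic Gaussian weighting coefficients of those regions."""
    def weighted_mean(A, w):                      # Step 3
        return np.sum(w * A) / np.sum(w)

    def weighted_var(A, w, mu):                   # Step 4
        return np.sum(w * (A - mu) ** 2) / np.sum(w)

    mu_a, mu_b = weighted_mean(A_a, w_a), weighted_mean(A_b, w_b)
    v_a, v_b = weighted_var(A_a, w_a, mu_a), weighted_var(A_b, w_b, mu_b)

    R = 1.0 - min(mu_a / mu_b, mu_b / mu_a)                                   # Step 5
    C = np.sqrt(1.0 / (1.0 + 2.0 * (v_a**2 + v_b**2) / (mu_a + mu_b) ** 2))   # Step 6
    F = np.sqrt((R**2 + C**2) / 2.0)                                          # Step 7
    return F
```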
Step 8: judge whether the number of constructed templates equals the chosen total number of templates; if not, perform Step 2; otherwise, perform Step 9;
Step 9: select from all templates the template with the maximum response as the template of the pixel in the synthetic aperture radar SAR image, take the maximum response of that template as the intensity of the pixel in the synthetic aperture radar SAR image and the direction of that template as the direction of the pixel in the synthetic aperture radar SAR image, obtaining the sideline response map and the gradient map of the synthetic aperture radar SAR image;
Step 10: compute the intensity values of the intensity map of the synthetic aperture radar SAR image according to the following formula, obtaining the intensity map:
I = \frac{rt}{1 - r - t + 2rt}
Wherein, I denotes the intensity value of the intensity map of the synthetic aperture radar SAR image, r denotes the value in the sideline response map of the synthetic aperture radar SAR image, and t denotes the value in the gradient map of the synthetic aperture radar SAR image;
Step 11: detect the intensity map with the non-maximum suppression method, obtaining the suggested sketch;
Step 12: select the pixel with the maximum intensity in the suggested sketch and connect the pixels connected to that maximum-intensity pixel to form suggested line segments, obtaining the suggested sketch map;
Step 13: compute the coding length gain of each sketch line in the suggested sketch map according to the following formula:
CLG = \sum_{j}^{J}\left[\frac{A_j^2}{A_{j,0}^2} + \ln(A_{j,0}^2) - \frac{A_j^2}{A_{j,1}^2} - \ln(A_{j,1}^2)\right]
Wherein, CLG denotes the coding length gain of a sketch line in the suggested sketch map, Σ denotes the summation operation, J denotes the number of pixels in the neighborhood of the current sketch line, A_j denotes the observed value of the j-th pixel in the neighborhood of the current sketch line, A_{j,0} denotes the estimated value of the j-th pixel in the sketch line neighborhood in the case that the current sketch line cannot represent structural information, ln(·) denotes the logarithm with base e, and A_{j,1} denotes the estimated value of the j-th pixel in the sketch line neighborhood in the case that the current sketch line can represent structural information;
Step 14: arbitrarily choose a number in the range [5, 50] as the threshold T;
Step 15: select the suggested sketch lines with CLG > T from all suggested sketch lines and combine them into the sketch map of the synthetic aperture radar SAR image.
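Steps 13-15 reduce to evaluating the coding length gain over each suggested line's neighborhood and keeping the lines with CLG > T; a minimal sketch, assuming the observed values and the two sets of estimates are already available as arrays (the data layout is illustrative).

```python
import numpy as np

def coding_length_gain(A, A0, A1):
    """A:  observed values of the J pixels in a sketch line's neighborhood;
    A0: estimates when the line cannot represent structural information;
    A1: estimates when the line can represent structural information."""
    return float(np.sum(A**2 / A0**2 + np.log(A0**2) - A**2 / A1**2 - np.log(A1**2)))

def select_sketch_lines(suggested_lines, T=20):
    """Keep the suggested sketch lines whose CLG exceeds the threshold T chosen in [5, 50]."""
    return [ln for ln in suggested_lines
            if coding_length_gain(ln["A"], ln["A0"], ln["A1"]) > T]
```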
3. The sketch-structure-based mean field variational Bayes SAR image segmentation method according to claim 1, characterized in that the concrete steps of the sketch line regionalization method described in step (2a) are as follows:
Step 1: according to the aggregation degree of the sketch line segments in the sketch map of the synthetic aperture radar SAR image, divide the sketch lines into aggregated sketch lines representing aggregated ground objects, and boundary sketch lines, line-target sketch lines and isolated-target sketch lines representing boundaries, line targets and isolated targets;
Step 2: according to the histogram statistics of the aggregation degree of the sketch line segments, choose the sketch line segments whose aggregation degree equals the optimal aggregation degree as the seed line segment set {E_k, k = 1, 2, ..., m}, where E_k denotes any sketch line segment in the seed line segment set, k denotes the label of any sketch line segment in the seed line segment set, m denotes the total number of seed line segments, and {·} denotes a set;
Step 3: take a line segment not yet selected for addition to the seed line segment set as a base point, and recursively solve the line segment aggregate from this base point;
Step 4: construct a circular primitive whose radius is the upper bound of the optimal aggregation degree interval, dilate the line segments in the line segment aggregate with this circular primitive, and erode the dilated line segment aggregate from outside to inside, obtaining the aggregation regions of the sketch map in units of sketch points;
Step 5: for the sketch lines representing boundaries, line targets and isolated targets, construct a geometric window of size 5 × 5 centered on each sketch point of each sketch line, obtaining the structural regions;
Step 6: take the part of the sketch map remaining after removal of the aggregation regions and the structural regions as the unsketchable region;
Step 7: merge the aggregation regions, the structural regions and the unsketchable region of the sketch map, obtaining the region map of the synthetic aperture radar SAR image comprising the aggregation regions, the structural regions and the regions without sketch lines.
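Step 4's dilate-then-erode construction of the aggregation regions and Step 5's 5 × 5 windows around structural sketch points can be sketched with standard binary morphology; the circular structuring element radius and the window size follow the text, while the array layout and function name are illustrative.

```python
import numpy as np
from scipy import ndimage

def build_region_map(aggregated_line_mask, structural_point_mask, radius):
    """aggregated_line_mask: binary image of the seed-grown aggregated sketch line segments;
    structural_point_mask: binary image of boundary / line-target / isolated-target sketch points;
    radius: upper bound of the optimal aggregation-degree interval."""
    yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx**2 + yy**2) <= radius**2              # circular primitive

    # Step 4: dilate the line segments with the disk, then erode from outside in
    aggregation = ndimage.binary_erosion(
        ndimage.binary_dilation(aggregated_line_mask, structure=disk), structure=disk)

    # Step 5: 5x5 geometric window around every structural sketch point
    structural = ndimage.binary_dilation(structural_point_mask,
                                         structure=np.ones((5, 5), bool))

    # Steps 6-7: everything else is the unsketchable region; merge into one label map
    region_map = np.zeros(aggregated_line_mask.shape, dtype=np.uint8)  # 0 = unsketchable
    region_map[structural] = 2                                         # 2 = structural region
    region_map[aggregation] = 1                                        # 1 = aggregation region
    return region_map
```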
4. The sketch-structure-based mean field variational Bayes SAR image segmentation method according to claim 1, characterized in that the concrete steps, described in step (4e), of initializing the weights and biases of the mean field variational Bayesian inference network of each mutually disconnected region with the corresponding random numbers, obtaining the initialized mean field variational Bayesian inference network, are as follows:
Step 1: with the Mellin transform method, an integral transform whose kernel is a power function, estimate the parameters of the probability density formula of the G0 distribution of inhomogeneous ground objects, obtaining the values of the three parameters α, γ and n;
Step 2: compute the probability density of the G0 distribution of inhomogeneous ground objects of the synthetic aperture radar SAR image according to the following formula:
P(I(x,y)) = \frac{n^n \, \Gamma(n-\alpha) \, I(x,y)^{\,n-1}}{\gamma^{\alpha} \, \Gamma(n) \, \Gamma(-\alpha) \, \left(\gamma + n\,I(x,y)\right)^{\,n-\alpha}}
Wherein, P(I(x,y)) denotes the probability density of the inhomogeneous ground-object distribution of the synthetic aperture radar SAR image, I(x,y) denotes the intensity value of the pixel at coordinate (x,y), n denotes the equivalent number of looks of the synthetic aperture radar SAR image, α denotes the shape parameter of the synthetic aperture radar SAR image, γ denotes the scale parameter of the synthetic aperture radar SAR image, and Γ(·) denotes the gamma function, whose value is obtained by the following formula:
\Gamma(u) = \int_{0}^{+\infty} t^{\,u-1} e^{-t} \, dt
Wherein, u denotes the independent variable, ∫ denotes the integration operation, and t denotes the integration variable;
Step 3: take the first 441 rows of the random matrix A obeying the G0 distribution of inhomogeneous ground objects as the initial values of the weights of the mean field variational Bayesian inference network;
Step 4: arbitrarily choose two columns of the random matrix A obeying the G0 distribution of inhomogeneous ground objects as the initial values of the visible-layer bias and the hidden-layer bias of the mean field variational Bayesian inference network, respectively, completing the initialization of the mean field variational Bayesian inference network.
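The G0 probability density above and the row/column-wise initialization of Steps 3-4 can be sketched as follows. The 441 weight rows match the 21 × 21 image blocks of step (4b); the shape of the random matrix A and the way the two bias columns are picked are assumptions for illustration.

```python
import numpy as np
from scipy.special import gamma as Gamma

def g0_pdf(I, alpha, gam, n):
    """Probability density of the G0 distribution for intensity I (shape parameter alpha < 0)."""
    num = n**n * Gamma(n - alpha) * I**(n - 1)
    den = gam**alpha * Gamma(n) * Gamma(-alpha) * (gam + n * I)**(n - alpha)
    return num / den

def init_network(A):
    """A: random matrix whose entries obey the G0 distribution (sampling not shown).
    The first 441 rows initialize the weights (matching the 21 x 21 input blocks);
    two arbitrarily chosen columns initialize the visible-layer and hidden-layer biases."""
    rng = np.random.default_rng(0)
    weights = A[:441, :]
    b_col, c_col = rng.choice(A.shape[1], size=2, replace=False)
    return weights, A[:, b_col], A[:, c_col]
```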
5. The sketch-structure-based mean field variational Bayes SAR image segmentation method according to claim 1, characterized in that the concrete steps of performing structure-constrained training on the mean field variational Bayesian inference network described in step (4f) are as follows:
Step 1: update the weights of the mean field variational Bayesian inference network according to the following formula:
Wherein, Q(W) denotes the variational distribution of W, W denotes the weights of the mean field variational Bayesian inference network, N(·) denotes the normal probability density function, D denotes the covariance parameter of the normal probability density function, K denotes the number of input-layer nodes of the mean field variational Bayesian inference network, v_n denotes the n-th input sample of the mean field variational Bayesian inference network, c_j denotes the bias value of the j-th neuron in the hidden layer of the mean field variational Bayesian inference network, γ denotes the data augmentation parameter of the mean field variational Bayesian inference network, whose value is obtained by the corresponding formula, h_n denotes the hidden layer of the n-th input sample of the mean field variational Bayesian inference network, H denotes the hidden layer of all samples of the mean field variational Bayesian inference network, T denotes the transposition operation, δ denotes the weight parameter of the mean field variational Bayesian inference network, whose value is obtained by the corresponding formula, the corresponding symbol denotes the dot product operation, besselk(·) denotes the modified Bessel function of the second kind, ξ_k denotes the k-th column of ξ, whose value is obtained by the corresponding formula, and φ_k denotes the k-th element of φ, whose value is obtained by the corresponding formula;
Step 2: compute the k-th column of the weights of the mean field variational Bayesian inference network according to the following formula:
Wherein, w_k denotes the k-th column of the weights of the mean field variational Bayesian inference network;
Step 3: update the bias of the input layer of the mean field variational Bayesian inference network according to the following formula:
Step 4: update the bias of the hidden layer of the mean field variational Bayesian inference network according to the following formula:
Step 5: according to the updated biases and weights, obtain reconstructed image blocks equal in number to the sample image blocks;
Step 6: compute the sketch map of each reconstructed image block as the reconstructed sketch block;
Step 7: compute the structural error G with the structural reconstruction error formula of step (3c) in claim 1;
Step 8: judge whether the average G is greater than the threshold 0.2; if so, perform Step 1; otherwise, perform Step 9;
Step 9: complete the structure-constrained training of the mean field variational Bayesian inference network.
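The weight and bias updates of Steps 1-4 are given by the claim's own formulas (not reproduced in this text), so only the outer structure-constrained training loop of Steps 5-9 is sketched here; the update methods and the structural-error function are placeholders standing in for those formulas and for step (3c) of claim 1.

```python
def structure_constrained_training(net, image_blocks, sketch_blocks,
                                   structural_error, threshold=0.2):
    """net: object whose update_weights / update_visible_bias / update_hidden_bias /
    reconstruct methods stand in for the update formulas of Steps 1-4 (placeholders);
    structural_error: callable implementing the error G of claim 1, step (3c)."""
    while True:
        net.update_weights(image_blocks)            # Steps 1-2
        net.update_visible_bias(image_blocks)       # Step 3
        net.update_hidden_bias(image_blocks)        # Step 4
        recon = net.reconstruct(image_blocks)       # Step 5: reconstructed image blocks
        G = structural_error(recon, sketch_blocks)  # Steps 6-7: sketch and compare
        if G <= threshold:                          # Step 8: stop once average G <= 0.2
            return net                              # Step 9: training complete
```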
6. The sketch-structure-based mean field variational Bayes SAR image segmentation method according to claim 1, characterized in that the visual semantic rules described in step (6a) are as follows:
Let the distance between the i-th sketch line l_i and the j-th sketch line l_j be D_ij, the direction of l_i be O_i and the direction of l_j be O_j, with i, j ∈ [1, 2, ..., S], where S is the total number of sketch lines;
A line target whose width is greater than 3 pixels is represented by two sketch lines l_i and l_j, where the distance D_ij between l_i and l_j is less than T_1 and the direction difference (O_i − O_j) is less than 10 degrees, with T_1 = 5;
Let the mean gray value of each column within the geometric window w_s of the s-th sketch line l_s be A_i, let the gray difference of adjacent columns be AD_i = |A_i − A_{i+1}|, and let z_s = [z_{s1}, z_{s2}, ..., z_{s9}] be the label vector of the adjacent-column gray differences AD_i;
A line target whose width is less than 3 pixels is represented by a single sketch line l_s; within the geometric window w_s of l_s, compute the gray difference AD_i of adjacent columns; if AD_i > T_2, then z_{si} = 1, otherwise z_{si} = 0; exactly two elements of z_s take the value 1 and the rest are 0, with T_2 = 34;
Let L_1 and L_2 be the sets of sketch lines representing line targets; if D_ij < T_1 and |O_i − O_j| < 10, then l_i, l_j ∈ L_1; if sum(z_s) = 2, then l_s ∈ L_2, where sum(·) denotes summation over the elements of a vector.
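A small sketch of the two rules: a pair of close, nearly parallel sketch lines marks a line target wider than 3 pixels (set L_1), and a single sketch line with exactly two large adjacent-column gray differences in its geometric window marks a narrower one (set L_2). The data structures are illustrative, not from the patent.

```python
import numpy as np

T1, T2 = 5, 34   # distance and gray-difference thresholds of the rules

def wide_line_pairs(distances, directions):
    """distances: S x S matrix of sketch-line distances D_ij;
    directions: length-S array of sketch-line directions O_i in degrees.
    Returns the index pairs (i, j) whose lines belong to L_1."""
    S = len(directions)
    return [(i, j) for i in range(S) for j in range(i + 1, S)
            if distances[i, j] < T1 and abs(directions[i] - directions[j]) < 10]

def is_narrow_line(column_means):
    """column_means: per-column mean gray values A_i inside the line's geometric window.
    The line belongs to L_2 when exactly two adjacent-column differences exceed T_2."""
    z = (np.abs(np.diff(np.asarray(column_means, dtype=float))) > T2).astype(int)
    return int(z.sum()) == 2
```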
7. The sketch-structure-based mean field variational Bayes SAR image segmentation method according to claim 1, characterized in that the concrete steps of segmenting line targets described in step (6a) are as follows:
Step 1: in the structural pixel subspace, according to the set L_1 of line-target sketch lines, take the region between l_i and l_j as a line target;
Step 2: in the structural pixel subspace, according to the set L_2 of line-target sketch lines, take the region covering l_s as a line target.
8. The sketch-structure-based mean field variational Bayes SAR image segmentation method according to claim 1, characterized in that the concrete steps of segmenting isolated targets described in step (6b) are as follows:
Step 1: in the structural regions of the region map, mark all sketch lines that do not represent line targets as sketch lines of the candidate sketch line set;
Step 2: randomly select a sketch line from the candidate sketch line set and construct a geometric window of size 5 × 5 centered on one end point of the selected sketch line;
Step 3: judge whether end points of other sketch lines exist within the geometric window; if so, perform Step 4; otherwise, perform Step 6;
Step 4: judge whether there is only one such end point; if so, connect the sketch line at that end point with the current sketch line; otherwise, perform Step 5;
Step 5: connect the selected sketch line with the sketch line at each end point, and choose from all connecting lines the two sketch lines with the largest included angle as the connected sketch line;
Step 6: judge whether end points of other sketch lines exist within the geometric window of the other end point of the sketch line; if so, perform Step 4; otherwise, perform Step 7;
Step 7: from the sketch lines on which the connection operation has been completed, choose the sketch lines containing two or more sketch line segments, and count the number n of sketch line segments contained in each selected sketch line, where n ≥ 2;
Step 8: judge whether the number n of sketch line segments equals 2; if so, perform Step 9; otherwise, perform Step 10;
Step 9: take the sketch lines whose vertex angle lies in the range [10°, 140°] as sketch lines with the aggregation feature;
Step 10: select the sketch lines whose n−1 vertex angles all lie in the range [10°, 140°];
Step 11: for the selected sketch lines, define the following two cases:
Case 1: judge whether the two sketch line segments adjacent to the i-th sketch line segment, namely the (i−1)-th and the (i+1)-th segments, lie on the same side of the straight line on which the i-th sketch line segment lies, for 2 ≤ i ≤ n−1; if all sketch line segments of the sketch line lie on the same side as their adjacent segments, mark the sketch line as a sketch line with the aggregation feature;
Case 2: judge whether the two sketch line segments adjacent to the i-th sketch line segment, namely the (i−1)-th and the (i+1)-th segments, lie on the same side of the straight line on which the i-th sketch line segment lies, for 2 ≤ i ≤ n−1; if n−1 sketch line segments of the sketch line lie on the same side as their adjacent segments and one sketch line segment does not lie on the same side as its adjacent segments, also mark the sketch line as a sketch line with the aggregation feature;
Step 12: arbitrarily choose a sketch line from the sketch lines with the aggregation feature and determine the distance between its two end points from their coordinates; if this end-point distance lies in the range [0, 20], take the selected sketch line as a sketch line representing an isolated target;
Step 13: judge whether unprocessed sketch lines with the aggregation feature remain; if so, perform Step 12; otherwise, perform Step 14;
Step 14: with the superpixel segmentation method, perform superpixel segmentation on the pixels around the sketch lines representing isolated targets in the synthetic aperture radar SAR image, and take the superpixels whose gray values after segmentation lie in [0, 45] or [180, 255] as isolated-target superpixels;
Step 15: merge the isolated-target superpixels and take the boundary of the merged isolated-target superpixels as the isolated-target boundary, obtaining the segmentation result of the isolated targets.
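The final checks of Steps 12 and 14 amount to an end-point distance test on the aggregation-feature sketch lines and a gray-value filter on the surrounding superpixels; a minimal sketch, with illustrative inputs (each superpixel summarized here by a single gray value).

```python
import numpy as np

def is_isolated_target_line(p0, p1, max_dist=20):
    """Step 12: an aggregation-feature sketch line whose two end points p0 and p1
    are at most 20 pixels apart represents an isolated target."""
    return float(np.hypot(p0[0] - p1[0], p0[1] - p1[1])) <= max_dist

def isolated_target_superpixels(superpixel_grays):
    """Step 14: keep the superpixels whose gray value lies in [0, 45] or [180, 255]."""
    g = np.asarray(superpixel_grays, dtype=float)
    return np.where((g <= 45) | ((g >= 180) & (g <= 255)))[0]
```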
9. The sketch-structure-based mean field variational Bayes SAR image segmentation method according to claim 1, characterized in that the concrete steps of the homogeneous region segmentation method based on the multinomial logistic regression prior model described in step (7) are as follows:
Step 1: arbitrarily choose a pixel from the homogeneous region pixel subspace, build a 3 × 3 square window centered on the chosen pixel, and compute the standard deviation σ_1 of the window;
Step 2: increase the side length of the square window by 2, obtaining a new square window, and compute the standard deviation σ_2 of the new square window;
Step 3: let the standard deviation threshold be T_3 = 3; if |σ_1 − σ_2| < T_3, take the square window with standard deviation σ_2 as the final square window and perform Step 4; otherwise, perform Step 2;
Step 4: compute the prior probability of the center pixel of the square window according to the following formula:
p_1' = \frac{\exp\left(\eta'(1 + x_{k'}')\right)}{\sum_{i=1}^{K'} \exp\left(\eta'(1 + x_{i'}')\right)}
Wherein, p_1' denotes the prior probability of the center pixel of the square window, exp(·) denotes the exponential function, η' denotes the probability model parameter, taking the value 1, x_{k'}' denotes the number of pixels belonging to class k' in the square window, k' ∈ [1, ..., K'], K' denotes the number of segmentation classes, taking the value 5, and x_{i'}' denotes the number of pixels belonging to class i' in the square window obtained in Step 3;
Step 5: multiply the probability density of the pixel gray values by the probability density of the texture, obtaining the likelihood probability p_2', where the probability density of the gray values is obtained from the Nakagami fading distribution and the probability density of the texture is obtained from the t-distribution;
Step 6: multiply the prior probability p_1' by the likelihood probability p_2', obtaining the posterior probability p_12';
Step 7: judge whether unprocessed pixels remain in the homogeneous region pixel subspace; if so, perform Step 1; otherwise, perform Step 8;
Step 8: according to the maximum a posteriori criterion, obtain the segmentation result of the homogeneous region pixel subspace.
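Steps 1-4 grow an adaptive window until its standard deviation stabilizes and then turn the per-class pixel counts inside the final window into a softmax prior; a minimal numpy sketch, with η' = 1 and K' = 5 as in the claim, and with image-boundary handling and class-label bookkeeping left out for brevity.

```python
import numpy as np

def adaptive_window(img, y, x, T3=3.0):
    """Steps 1-3: grow a square window centered at (y, x), increasing the side
    length by 2 each time, until the change in standard deviation drops below T3."""
    half = 1                                          # 3 x 3 start
    prev = img[y - half:y + half + 1, x - half:x + half + 1].std()
    while True:
        half += 1
        win = img[y - half:y + half + 1, x - half:x + half + 1]
        if abs(prev - win.std()) < T3:
            return win
        prev = win.std()

def mlr_prior(window_labels, K=5, eta=1.0):
    """Step 4: softmax prior over the per-class pixel counts x'_k inside the window."""
    counts = np.array([(np.asarray(window_labels) == k).sum() for k in range(K)], float)
    scores = eta * (1.0 + counts)
    scores -= scores.max()                            # numerical stability; softmax unchanged
    e = np.exp(scores)
    return e / e.sum()
```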
CN201611262018.4A 2016-12-30 2016-12-30 Mean field variation Bayes's SAR image segmentation method based on sketch structure Active CN106651884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611262018.4A CN106651884B (en) 2016-12-30 2016-12-30 Mean field variation Bayes's SAR image segmentation method based on sketch structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611262018.4A CN106651884B (en) 2016-12-30 2016-12-30 Mean field variation Bayes's SAR image segmentation method based on sketch structure

Publications (2)

Publication Number Publication Date
CN106651884A true CN106651884A (en) 2017-05-10
CN106651884B CN106651884B (en) 2019-10-08

Family

ID=58838714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611262018.4A Active CN106651884B (en) 2016-12-30 2016-12-30 Mean field variation Bayes's SAR image segmentation method based on sketch structure

Country Status (1)

Country Link
CN (1) CN106651884B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060050960A1 (en) * 2004-09-07 2006-03-09 Zhuowen Tu System and method for anatomical structure parsing and detection
US8587471B2 (en) * 2009-04-03 2013-11-19 Tele-Rilevamento Europa—T.R.E. s.r.l. Process for identifying statistically homogeneous pixels in SAR images acquired on the same area
CN103903257A (en) * 2014-02-27 2014-07-02 西安电子科技大学 Image segmentation method based on geometric block spacing symbiotic characteristics and semantic information
CN104036491A (en) * 2014-05-14 2014-09-10 西安电子科技大学 SAR image segmentation method based on area division and self-adaptive polynomial implicit model
EP2953095A2 (en) * 2014-06-02 2015-12-09 Canon Kabushiki Kaisha Image processing apparatus, image processing system, image processing method, and storage medium
CN104732552A (en) * 2015-04-09 2015-06-24 西安电子科技大学 SAR image segmentation method based on nonstationary condition field

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
N. NASIOS ETAL.: "Variational segmentation of color images", 《IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING 2005》 *
Song Xiaofeng et al.: "SAR Image Segmentation Based on Region MRF and Bayesian Belief Propagation", Acta Electronica Sinica *
Chen Yingfeng: "SAR Image Segmentation Method Based on Gray-Level Co-occurrence Matrix of Geometric Regions and Region Map", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403434A (en) * 2017-07-28 2017-11-28 西安电子科技大学 SAR image semantic segmentation method based on two-phase analyzing method
CN107403434B (en) * 2017-07-28 2019-08-06 西安电子科技大学 SAR image semantic segmentation method based on two-phase analyzing method
CN107492129A (en) * 2017-08-17 2017-12-19 西安电子科技大学 Non-convex compressed sensing optimal reconfiguration method with structuring cluster is represented based on sketch
CN107492129B (en) * 2017-08-17 2021-01-19 西安电子科技大学 Non-convex compressive sensing optimization reconstruction method based on sketch representation and structured clustering
CN110008785A (en) * 2018-01-04 2019-07-12 清华大学 A kind of target identification method and device
CN110008785B (en) * 2018-01-04 2022-09-02 清华大学 Target identification method and device
CN108492009A (en) * 2018-03-06 2018-09-04 宁波中青华云新媒体科技有限公司 Influence power evaluation system construction method and system, influence power evaluation method
CN108932526A (en) * 2018-06-08 2018-12-04 西安电子科技大学 SAR image sample block selection method based on sketch structure feature cluster
CN108932526B (en) * 2018-06-08 2020-04-14 西安电子科技大学 SAR image sample block selection method based on sketch structural feature clustering
CN108898101A (en) * 2018-06-29 2018-11-27 西安电子科技大学 Based on sketch map and prior-constrained High Resolution SAR image path network detecting method
CN108898101B (en) * 2018-06-29 2021-09-28 西安电子科技大学 High-resolution SAR image road network detection method based on sketch and prior constraint

Also Published As

Publication number Publication date
CN106651884B (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN106651884A (en) Sketch structure-based mean field variational Bayes synthetic aperture radar (SAR) image segmentation method
CN106611423B (en) SAR image segmentation method based on ridge ripple filter and deconvolution structural model
CN106611420B (en) The SAR image segmentation method constrained based on deconvolution network and sketch map direction
CN106683102B (en) SAR image segmentation method based on ridge ripple filter and convolutional coding structure learning model
CN108537102B (en) High-resolution SAR image classification method based on sparse features and conditional random field
CN106611422B (en) Stochastic gradient Bayes&#39;s SAR image segmentation method based on sketch structure
CN103049763B (en) Context-constraint-based target identification method
CN104077599B (en) Polarization SAR image classification method based on deep neural network
CN105809198B (en) SAR image target recognition method based on depth confidence network
CN106846322B (en) The SAR image segmentation method learnt based on curve wave filter and convolutional coding structure
CN104915676B (en) SAR image sorting technique based on further feature study and watershed
CN107403434B (en) SAR image semantic segmentation method based on two-phase analyzing method
CN105389550B (en) It is a kind of based on sparse guide and the remote sensing target detection method that significantly drives
CN103413151B (en) Hyperspectral image classification method based on figure canonical low-rank representation Dimensionality Reduction
CN106920243A (en) The ceramic material part method for sequence image segmentation of improved full convolutional neural networks
CN106611421A (en) SAR image segmentation method based on feature learning and sketch line constraint
CN106778821A (en) Classification of Polarimetric SAR Image method based on SLIC and improved CNN
CN104392228A (en) Unmanned aerial vehicle image target class detection method based on conditional random field model
CN105930815A (en) Underwater organism detection method and system
CN106203444B (en) Classification of Polarimetric SAR Image method based on band wave and convolutional neural networks
CN106909902A (en) A kind of remote sensing target detection method based on the notable model of improved stratification
CN102999762B (en) Decompose and the Classification of Polarimetric SAR Image method of spectral clustering based on Freeman
CN107292336A (en) A kind of Classification of Polarimetric SAR Image method based on DCGAN
CN106600595A (en) Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm
CN107341813A (en) SAR image segmentation method based on structure learning and sketch characteristic inference network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant