CN107464247B - G⁰-distribution-based stochastic gradient variational Bayesian SAR image segmentation method - Google Patents


Info

Publication number
CN107464247B
CN107464247B (application CN201710702367.1A)
Authority
CN
China
Prior art keywords
sketch
sar image
pixel
stochastic gradient
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710702367.1A
Other languages
Chinese (zh)
Other versions
CN107464247A (en)
Inventor
刘芳 (Liu Fang)
孙宗豪 (Sun Zonghao)
焦李成 (Jiao Licheng)
李婷婷 (Li Tingting)
郝红侠 (Hao Hongxia)
古晶 (Gu Jing)
马文萍 (Ma Wenping)
陈璞花 (Chen Puhua)
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710702367.1A
Publication of CN107464247A
Application granted granted Critical
Publication of CN107464247B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/29 - Graphical models, e.g. Bayesian networks
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10032 - Satellite or aerial image; Remote sensing
    • G06T 2207/10044 - Radar image


Abstract

The invention discloses a G⁰-distribution-based stochastic gradient variational Bayesian SAR image segmentation method. A sketch of the SAR image is extracted according to an initial sketch model; the SAR image is then divided into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace according to the region map. For each extremely inhomogeneous region of the mixed pixel subspace, the G⁰ distribution parameters are estimated, and a G⁰-distribution-based stochastic gradient variational Bayesian model learns the structural features of the mixed pixels, thereby realizing unsupervised segmentation of the mixed pixel subspace. The homogeneous pixel subspace and the structural pixel subspace are segmented with their corresponding methods, and the segmentation results of the three subspaces are fused to finally obtain the SAR image segmentation result. Because the invention assumes the prior distribution and the approximate posterior distribution of the hidden variables in the model to be G⁰ distributions, which fit the extremely inhomogeneous regions, and derives the corresponding analytic expressions for learning, the clustering accuracy of the extremely inhomogeneous regions in the mixed pixel subspace is improved.

Description

G⁰-distribution-based stochastic gradient variational Bayesian SAR image segmentation method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a G⁰-distribution-based stochastic gradient variational Bayesian SAR image segmentation method, which can be applied to accurately segmenting the different regions of synthetic aperture radar (SAR) images and to detecting and identifying targets in SAR images.
Background
Synthetic aperture radar (SAR) is an important development in the field of remote sensing technology, used to acquire high-resolution images of the earth's surface. Compared with other imaging technologies, SAR has the very important advantage of being unaffected by atmospheric conditions such as cloud cover, rainfall or heavy fog, and by illumination intensity, so it can acquire high-resolution remote sensing data day and night in all weather. SAR technology is of great guiding significance in many fields such as the military, agriculture and geography.
Image segmentation refers to the process of dividing an image into several mutually disjoint regions according to characteristics such as color, gray level and texture. Common image segmentation methods include edge-detection-based methods, threshold-based methods, region-growing and watershed methods, clustering-based methods, and the like. Due to the unique imaging mechanism of SAR, SAR images contain heavy speckle noise, so many traditional methods for optical images cannot be used directly for SAR image segmentation. Traditional SAR image segmentation methods include clustering-based methods such as K-means and FCM, as well as supervised and semi-supervised methods. Features are often extracted by manual experience, yet the quality of the extracted features has an important influence on the segmentation result of the SAR image. Supervised and semi-supervised methods require label data, but label data for SAR images are scarce and costly to acquire. Bayesian networks have unique advantages in expressing and inferring uncertain knowledge; a variational Bayesian inference network can not only be trained without supervision, requiring no label data, but can also effectively learn the structural features implied by each pixel subspace, which is of great significance for the effective segmentation of SAR images.
A paper published by Wuhan University proposed an effective MSTAR SAR image segmentation method (Geomatics and Information Science of Wuhan University, 2015, No. 10, pp. 1377-1380). The method first performs an over-segmentation operation on the image to be processed to obtain over-segmented image regions. It then extracts region-level and pixel-level features of the over-segmented image to obtain feature vectors representing the image, and builds its model for the MSTAR SAR image with a spatial latent Dirichlet allocation model (sLDA) and a Markov random field (MRF) to obtain an energy functional. Finally, the energy functional is optimized with the Graph-Cut and Branch-and-Bound algorithms to obtain the final segmentation result. The disadvantage of this method is that, when obtaining the feature vectors of the SAR image, it uses pixel-level features and does not automatically learn the particular structural features that arise from the correlation between pixels, so the structural features that truly represent the terrain characteristics of the SAR image are not fully exploited and the segmentation result is not accurate enough.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide, in view of the above deficiencies of the prior art, a G⁰-distribution-based stochastic gradient variational Bayesian SAR image segmentation method that improves the accuracy of SAR image segmentation.
The invention adopts the following technical scheme:
the G⁰-distribution-based stochastic gradient variational Bayesian SAR image segmentation method comprises: extracting a sketch of the SAR image according to an initial sketch model; dividing the SAR image into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace according to the region map; for each extremely inhomogeneous region of the mixed pixel subspace, estimating the G⁰ distribution parameters and learning the structural features of the mixed pixels with a G⁰-distribution-based stochastic gradient variational Bayesian model, realizing unsupervised segmentation of the mixed pixel subspace; and segmenting the homogeneous pixel subspace and the structural pixel subspace with their corresponding methods and fusing the segmentation results of the three subspaces to finally obtain the SAR image segmentation result.
Preferably, the specific steps are as follows:
s1, inputting a synthetic aperture radar SAR image, and establishing a sketch model of the synthetic aperture radar SAR image;
s2, performing regional processing on the sketch of the synthetic aperture radar SAR image by adopting a sketch line regional method to obtain a regional diagram of the synthetic aperture radar SAR image comprising an aggregation region, a sketch-free region and a structural region;
s3, mapping a region map comprising a gathering region, a no-sketch region and a structural region into the SAR image to obtain a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace of the SAR image;
s4, obtaining the condition that each region meets G by using the intensity values of all pixel points in each region and adopting a Mellin transform parameter estimation method for extremely uneven regions in the mixed pixel subspace0Estimated values of three parameters α, γ, n required for the distribution;
s5, constructing a random gradient variational Bayesian network model for the mixed pixel subspace;
s6, performing feature learning on the mixed pixel subspace;
s7, segmenting SAR image mixed pixel subspace;
s8, extracting line targets by adopting a visual semantic rule, and then segmenting a structural pixel subspace by using a structural region segmentation method of a polynomial hidden model based on a geometric structure window to obtain a segmentation result of the structural pixel subspace;
s9, dividing the homogeneous pixel subspace by adopting a homogeneous region division method of a polynomial hidden model based on self-adaptive window selection to obtain a division result of the homogeneous pixel subspace;
and S10, combining the segmentation results of the mixed pixel subspace, the homogeneous pixel subspace and the structural pixel subspace to obtain the final segmentation result of the SAR image.
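Taken together, steps S1 to S10 form a three-branch pipeline. The following Python sketch shows only the control flow; every helper function is a hypothetical stub standing in for the corresponding step, not an implementation from the patent:

```python
import numpy as np

# Hypothetical stand-ins for steps S1-S9; names, signatures and bodies are
# illustrative stubs so that the S10 fusion logic below can run end to end.
def extract_sketch(img):                 # S1: initial sketch model
    return np.zeros_like(img, dtype=bool)

def regionalize(sketch):                 # S2: 0=aggregation, 1=no-sketch, 2=structural
    return np.random.default_rng(0).integers(0, 3, size=sketch.shape)

def segment_mixed(img, mask):            # S4-S7: G0 SGVB clustering (placeholder)
    return mask & (img > img.mean())

def segment_homogeneous(img, mask):      # S9 (placeholder)
    return mask

def segment_structural(img, mask):       # S8 (placeholder)
    return mask

def segment_sar_image(img):
    sketch = extract_sketch(img)
    regions = regionalize(sketch)
    # S3: map the region map into the image to get the three pixel subspaces
    mixed, homog, struct = regions == 0, regions == 1, regions == 2
    # S10: fuse the three subspace segmentations into one label map
    labels = np.zeros(img.shape, dtype=int)
    labels[segment_mixed(img, mixed)] = 1
    labels[segment_homogeneous(img, homog)] = 2
    labels[segment_structural(img, struct)] = 3
    return labels
```

With real implementations substituted for the stubs, `segment_sar_image` reproduces the S1-S10 flow; here it merely demonstrates how the three subspace results are fused without overlap.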
Preferably, step S1 specifically includes the following steps:
s101, randomly selecting one number within the range of [100,150] as the total number of the templates;
s102, constructing a template with edges and lines formed by pixel points and having different directions and scales, constructing an anisotropic Gaussian function by using the direction and scale information of the template, calculating a weighting coefficient of each pixel point in the template through the Gaussian function, and counting the weighting coefficients of all the pixel points in the template, wherein the number of the scales is 3-5, and the number of the directions is 18;
s103, calculating the average value of pixel points in the synthetic aperture radar SAR image corresponding to the template area coordinates:
μ = ( Σ_{g∈Ω} w_g · A_g ) / ( Σ_{g∈Ω} w_g )

wherein μ represents the mean of all pixels in the synthetic aperture radar SAR image corresponding to the coordinates of the template region, Σ represents the summation operation, Ω represents the template region, g represents the coordinate of any pixel in the region Ω, ∈ represents set membership, w_g represents the weighting coefficient of the pixel of the template region Ω at coordinate g, with w_g ∈ [0, 1], and A_g represents the value of the pixel in the SAR image corresponding to the pixel of the template region Ω at coordinate g;
s104, calculating a variance value of pixel points in the synthetic aperture radar SAR image corresponding to the template area coordinates:
v = ( Σ_{g∈Ω} w_g · (A_g − μ)² ) / ( Σ_{g∈Ω} w_g )

wherein v represents the variance of all pixels in the synthetic aperture radar SAR image corresponding to the coordinates of the template region;
s105, calculating a response value of each pixel point in the synthetic aperture radar SAR image to a ratio operator:
R = min{ μ_a/μ_b , μ_b/μ_a }

wherein R represents the response value of each pixel in the synthetic aperture radar SAR image to the ratio operator, min{·} represents the minimum operation, a and b represent two different regions in the template, μ_a represents the mean of all pixels in template region a, and μ_b represents the mean of all pixels in template region b;
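As an illustration of steps S103 and S105, the weighted region mean and the ratio-operator response can be computed directly in NumPy. The weights and pixel values below are synthetic; in the patent the weights come from the anisotropic Gaussian of step S102:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data for two template regions a and b
w_a = rng.uniform(0.1, 1.0, 50)          # weighting coefficients w_g in [0, 1]
w_b = rng.uniform(0.1, 1.0, 50)
A_a = rng.gamma(4.0, 25.0, 50)           # pixel values in region a
A_b = rng.gamma(4.0, 60.0, 50)           # pixel values in region b (brighter)

def weighted_mean(w, A):
    """mu = sum_g(w_g * A_g) / sum_g(w_g): the weighted region mean of S103."""
    return np.sum(w * A) / np.sum(w)

mu_a = weighted_mean(w_a, A_a)
mu_b = weighted_mean(w_b, A_b)

# Ratio-operator response (S105): close to 1 inside homogeneous areas,
# small when the two region means differ strongly (i.e. across an edge).
R = min(mu_a / mu_b, mu_b / mu_a)
```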
s106, calculating a response value of each pixel in the synthetic aperture radar SAR image to the correlation operator:
C = 1 / √( 1 + (v_a + v_b) / (μ_a − μ_b)² )

wherein C represents the response value of each pixel in the synthetic aperture radar SAR image to the correlation operator, √ represents the square-root operation, a and b represent two different regions in the template, v_a represents the variance of all pixels in template region a, v_b represents the variance of all pixels in template region b, μ_a represents the mean of all pixels in template region a, and μ_b represents the mean of all pixels in template region b;
s107, calculating the response value of each pixel point in the synthetic aperture radar SAR image to each template:
[formula rendered as an image in the original, fusing the two response values]

wherein F represents the response value of each pixel in the synthetic aperture radar SAR image to each template, and R and C respectively represent the pixel's response values to the ratio operator and to the correlation operator;
s108, judging whether the constructed template is equal to the total number of the selected templates, if so, executing the step S102, otherwise, executing the step S109;
s109, selecting a template with the maximum response value from the templates as a template of the SAR image, taking the maximum response value of the template as the intensity of a pixel point in the SAR image, taking the direction of the template as the direction of the pixel point in the SAR image, and obtaining a sideline response graph and a gradient graph of the SAR image;
s110, calculating the intensity value of the synthetic aperture radar SAR image intensity map to obtain an intensity map:
[formula rendered as an image in the original]

wherein I represents the intensity value in the synthetic aperture radar SAR image intensity map, r represents the value in the SAR image edge response map, and t represents the value in the SAR image gradient map;
s111, detecting the intensity map by adopting a non-maximum value inhibition method to obtain a suggested sketch;
s112, selecting the pixel point with the maximum strength in the suggested sketch, and connecting the pixel points which are communicated with the pixel point with the maximum strength in the suggested sketch to form a suggested line segment to obtain a suggested sketch;
s113, calculating the coding length gain of the sketch line in the suggested sketch:
CLG = Σ_{j=1}^{J} [ A_j/A_{j,0} + ln(A_{j,0}) − A_j/A_{j,1} − ln(A_{j,1}) ]

wherein CLG represents the coding length gain of a sketch line in the suggested sketch, Σ represents the summation operation, J represents the number of pixels in the current sketch line's neighborhood, A_j represents the observed value of the j-th pixel in the current sketch line's neighborhood, A_{j,0} represents the estimated value of the j-th pixel in the sketch line's neighborhood under the hypothesis that the current sketch line cannot represent structural information, ln(·) represents the logarithm with base e, and A_{j,1} represents the estimated value of the j-th pixel in the sketch line's neighborhood under the hypothesis that the current sketch line can represent structural information;
s114, randomly selecting one number within the range of [5,50] as a threshold value T;
and S115, selecting the suggested sketch lines with CLG > T among all suggested sketch lines and combining them into the sketch of the SAR image, thereby completing the extraction of the sketch of the SAR image with the sketch model.
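The coding-length-gain test of steps S113 to S115 can be sketched as follows. The formula used here is the standard sum of likelihood-ratio terms over the sketch line's neighbourhood, written from the variable definitions above; the neighbourhood values are synthetic:

```python
import numpy as np

def coding_length_gain(A, A0, A1):
    """CLG = sum_j [ A_j/A_j0 + ln(A_j0) - A_j/A_j1 - ln(A_j1) ]
    A:  observed pixel values in the sketch line's neighbourhood
    A0: estimates assuming the line carries no structural information
    A1: estimates assuming the line does carry structural information"""
    A, A0, A1 = map(np.asarray, (A, A0, A1))
    return float(np.sum(A / A0 + np.log(A0) - A / A1 - np.log(A1)))

# A line whose "structural" estimate fits the data exactly never loses
# against the structureless estimate (here: the neighbourhood mean).
A = np.array([30.0, 55.0, 80.0, 41.0])
clg = coding_length_gain(A, A0=np.full(4, A.mean()), A1=A)
T = 20.0          # threshold from S114 (any value in [5, 50])
keep = clg > T    # S115: keep only suggested lines with CLG > T
```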
Preferably, step S2 specifically includes:
s201, dividing sketch lines into gathering sketch lines representing gathering ground objects and boundary sketch lines, line target sketch lines and isolated target sketch lines representing boundaries, line targets and isolated targets according to the concentration degree of sketch line segments in a sketch map of the synthetic aperture radar SAR image;
s202, according to the histogram statistics of the concentration degree of the sketch line segments, selecting the sketch line segments with the concentration degree equal to the optimal concentration degree as a seed line segment set { EkK is 1,2,.. m }, wherein EkRepresenting any sketch line segment in the seed line segment set, k representing the label of any sketch line segment in the seed line segment set, m representing the total number of the seed line segments, and {. represents the set operation;
s203, using the unselected line segment added into the seed line segment set as a base point, and recursively solving the line segment set by using the base point;
s204, constructing a circular element with the radius as the upper bound of the optimal concentration degree interval, expanding the line segments in the line segment set by using the circular element, corroding the expanded line segment set from outside to inside, and obtaining a concentration area with a sketch point as a unit on a sketch map;
s205, constructing a geometric structure window with the size of 5 multiplied by 5 by taking each pixel point of each sketch line as the center to obtain a structural area for the sketch lines representing the boundary, the line target and the isolated target;
s206, taking the part of the sketch except the aggregation region and the structural region as a non-sketch region;
and S207, combining the gathering area, the non-sketch area and the structural area in the sketch to obtain an area map of the synthetic aperture radar SAR image comprising the gathering area, the non-sketch area and the structural area.
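Step S204's construction of the aggregation region (dilation by a circular element followed by erosion from outside to inside, i.e. a morphological closing) can be illustrated with a minimal NumPy implementation; the radius and the toy point set below are arbitrary:

```python
import numpy as np

def disk(radius):
    """Boolean disk structuring element (the 'circular element' of step S204)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def dilate(mask, selem):
    """Binary dilation by brute force: fine for a small illustrative mask."""
    r = selem.shape[0] // 2
    padded = np.pad(mask, r)
    out = np.zeros_like(mask)
    for dy, dx in zip(*np.nonzero(selem)):
        out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, selem):
    # Erosion as the complement of the dilation of the complement
    return ~dilate(~mask, selem)

# Closing a sparse row of sketch points into a solid aggregation region:
pts = np.zeros((15, 15), dtype=bool)
pts[7, 4], pts[7, 7], pts[7, 10] = True, True, True
closed = erode(dilate(pts, disk(3)), disk(3))
```

The closing fills the gaps between nearby sketch points while preserving the original points, which is how the sketch-point-level aggregation region is obtained.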
Preferably, step S5 is specifically as follows:
intermediate variables from the input layer to the hidden layer of the stochastic gradient variation Bayesian network model:
h_φ = f( W_1^(φ)·x + b_1^(φ) )

wherein h_φ represents the intermediate variable from the input layer to the hidden layer of the stochastic gradient variational Bayesian network model, f(·) is the activation function, W_1^(φ) represents the connection weight from the input layer to the intermediate variable h_φ, m represents the number of neurons in the hidden layer, m = 50, n represents the number of neurons in the input layer, n = 441, and b_1^(φ) represents the corresponding offset vector;
approximate posterior probability of stochastic gradient variational bayesian network model:
q_φ(z|x) ~ G⁰(α_φ, γ_φ)

−α_φ = W_2^(φ)·h_φ + b_2^(φ),  γ_φ = W_3^(φ)·h_φ + b_3^(φ)

wherein q_φ(z|x) represents the approximate posterior probability of the stochastic gradient variational Bayesian network model, and G⁰(α_φ, γ_φ) denotes the G⁰ distribution with uniformity α_φ and scale γ_φ. The probability density of the G⁰ distribution is

P(I(x,y)) = n^n · Γ(n−α) · I(x,y)^(n−1) / ( γ^α · Γ(n) · Γ(−α) · (γ + n·I(x,y))^(n−α) ),  −α, γ, n, I(x,y) > 0

wherein I(x,y) is the image pixel intensity value, n is the equivalent number of looks, γ is the scale parameter, α is the uniformity, and Γ(x) is the Gamma function, defined for positive real arguments as:

Γ(x) = ∫₀^∞ t^(x−1) · e^(−t) dt

When the equivalent number of looks n = 1, the distribution becomes the Beta-prime distribution:

P(I(x,y)) = Γ(1−α) · (γ + I(x,y))^(α−1) / ( γ^α · Γ(−α) ),  −α, γ, I(x,y) > 0
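As a numeric sanity check of the two densities above (written here in the standard G⁰ intensity form, using the same parameters α, γ and n as the text): the density integrates to one, and at n = 1 it coincides with the Beta-prime expression. The parameter values are arbitrary illustrative choices:

```python
import math
import numpy as np

def g0_pdf(I, alpha, gamma_, n):
    """G0 intensity density: valid for alpha < 0, gamma_ > 0, n > 0, I > 0."""
    const = (n ** n) * math.gamma(n - alpha) / (
        gamma_ ** alpha * math.gamma(n) * math.gamma(-alpha))
    return const * I ** (n - 1) / (gamma_ + n * I) ** (n - alpha)

def beta_prime_pdf(I, alpha, gamma_):
    """The n = 1 special case of the G0 density."""
    return math.gamma(1 - alpha) * (gamma_ + I) ** (alpha - 1) / (
        gamma_ ** alpha * math.gamma(-alpha))

# Riemann-sum check of unit mass for alpha=-3, gamma=2, n=4; the tail
# beyond I = 2000 is negligible for these parameters.
I = np.linspace(1e-6, 2000.0, 400000)
area = float(np.sum(g0_pdf(I, -3.0, 2.0, 4)) * (I[1] - I[0]))
```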
wherein W_2^(φ) represents the connection weight from the intermediate variable h_φ to −α_φ and b_2^(φ) the corresponding offset vector, and W_3^(φ) represents the connection weight from h_φ to γ_φ and b_3^(φ) the corresponding offset vector;
intermediate variables from hidden layer to reconstructed layer of the stochastic gradient variation Bayesian network model:
h_θ = f( W_1^(θ)·z + b_1^(θ) )

wherein h_θ represents the intermediate variable from the hidden layer to the reconstruction layer of the stochastic gradient variational Bayesian network model, W_1^(θ) represents the connection weight from the hidden layer to the intermediate variable h_θ, and b_1^(θ) represents the corresponding offset vector;
conditional probability of stochastic gradient variant bayesian network model:
p_θ(x|z) ~ G⁰(α_θ, γ_θ)

−α_θ = W_2^(θ)·h_θ + b_2^(θ),  γ_θ = W_3^(θ)·h_θ + b_3^(θ)

wherein p_θ(x|z) represents the conditional probability of the stochastic gradient variational Bayesian network model, G⁰(α_θ, γ_θ) denotes the G⁰ distribution with uniformity α_θ and scale γ_θ, W_2^(θ) represents the connection weight from the intermediate variable h_θ to −α_θ and b_2^(θ) the corresponding offset vector, and W_3^(θ) represents the connection weight from h_θ to γ_θ and b_3^(θ) the corresponding offset vector;
lower bound of variation of stochastic gradient variation bayesian network model:
L(θ, φ) = −D_KL( q_φ(z|x) || p_θ(z) ) + (1/L) · Σ_{l=1}^{L} log p_θ(x|z^l)

wherein L(θ, φ) represents the variational lower bound of the stochastic gradient variational Bayesian network model, φ = {W_1^(φ), W_2^(φ), W_3^(φ), b_1^(φ), b_2^(φ), b_3^(φ)} represents the variational parameters of the model, θ = {W_1^(θ), W_2^(θ), W_3^(θ), b_1^(θ), b_2^(θ), b_3^(θ)} represents the generation parameters of the model, D_KL(q_φ(z|x) || p_θ(z)) represents the relative entropy between q_φ(z|x) and p_θ(z), z represents the hidden-layer variable of the model, p_θ(z) represents the prior probability of the hidden-layer variable z, Σ(·) represents the summation operation, L represents the number of Gaussian samplings of the hidden-layer variable z, log(·) represents the logarithm operation, and z^l represents the result of the l-th Gaussian sampling of z, obtained by a deterministic reparameterization of the posterior parameters and the auxiliary variable ε^l (the original formula is rendered as an image; ⊙ there denotes the element-wise product), where ε^l ~ N(0, I) indicates that the Gaussian sampling auxiliary variable satisfies the standard normal distribution.
Preferably, the two cases of equivalent number of looks n = 1 and n ≠ 1 are considered:
when the equivalent view n is 1, the following is calculated:
−D_KL( q_φ(z|x) || p_θ(z) ) = ∫ [ P_β′(z; −α, γ) · log P_β′(z; c, γ_1) − P_β′(z; −α, γ) · log P_β′(z; −α, γ) ] dz

wherein the prior p_θ(z) satisfies the Beta-prime distribution with −α = c and γ = γ_1, c and γ_1 are positive numbers that are known quantities derived from the image, and the approximate posterior q_φ(z|x) satisfies the Beta-prime distribution, where z ∈ [a, b], 0 < a < b ≤ 1, representing the normalized image pixel intensity values;
when the equivalent view n ≠ 1, it is calculated as follows:
−D_KL( q_φ(z|x) || p_θ(z) ) = ∫ [ P_G⁰(z; −α, γ, n) · log P_G⁰(z; c, γ_1, n) − P_G⁰(z; −α, γ, n) · log P_G⁰(z; −α, γ, n) ] dz

wherein the prior p_θ(z) satisfies the G⁰ distribution with −α = c and γ = γ_1, c and γ_1 are positive numbers that are known quantities derived from the image; the approximate posterior q_φ(z|x) satisfies the G⁰ distribution, where z ∈ [a, b], 0 < a < b ≤ 1, representing the normalized image pixel intensity values.
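Numerically, both KL integrals can be evaluated by quadrature over [a, b]. The sketch below does this for the n = 1 (Beta-prime) case, renormalizing the truncated densities on the interval, which is one plausible reading of the text; all parameter values are illustrative:

```python
import math
import numpy as np

def beta_prime_pdf(z, alpha, gamma_):
    # n = 1 special case of the G0 density
    return math.gamma(1 - alpha) * (gamma_ + z) ** (alpha - 1) / (
        gamma_ ** alpha * math.gamma(-alpha))

a, b = 0.01, 1.0                        # z in [a, b]: normalized intensities
z = np.linspace(a, b, 20001)
dz = z[1] - z[0]

q = beta_prime_pdf(z, alpha=-2.5, gamma_=0.3)   # approximate posterior q_phi
p = beta_prime_pdf(z, alpha=-4.0, gamma_=0.5)   # prior p_theta (c, gamma_1)
q, p = q / (q.sum() * dz), p / (p.sum() * dz)   # renormalize on [a, b]

# -D_KL(q || p) = integral of (q log p - q log q), always <= 0
neg_kl = float(np.sum(q * np.log(p) - q * np.log(q)) * dz)
```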
Preferably, step S6 includes the steps of:
s601, performing area division on a mixed pixel subspace of the synthetic aperture radar SAR image according to spatial connectivity, and if only one non-connected area exists, executing the step S7;
s602, sampling each unconnected area at intervals according to a 21 x 21 window to obtain a plurality of image block samples corresponding to each area;
s603, for each non-communicated area, generating a group of non-uniform ground object distribution G corresponding to each area0A distributed random number;
s604, initializing a connection weight of the stochastic gradient variant Bayesian network by using a group of random numbers corresponding to each region for each region, and obtaining the initialized stochastic gradient variant Bayesian network;
s605, initializing each unconnected area, then using the image block sample as an input layer of the random gradient variation Bayesian network, and training the initialized random gradient variation Bayesian network by using a random gradient variation Bayesian inference method according to the following steps to obtain the trained random gradient variation Bayesian network;
s606, for each region which is not communicated with each other, the weight of the trained random gradient variational Bayesian network is taken
Figure BDA0001380679010000092
As a feature set for that region.
Preferably, step S603 specifically includes:
the first step is as follows: calculating uneven ground object distribution G of synthetic aperture radar SAR image0Probability density of distribution:
Figure BDA0001380679010000101
wherein, P (I (x, y)) represents the probability density of the uneven ground feature distribution of the synthetic aperture radar SAR image, I (x, y) represents the intensity value of the pixel point with coordinates (x, y), n represents the equivalent view of the synthetic aperture radar SAR image, α represents the shape parameter of the synthetic aperture radar SAR image, γ represents the scale parameter of the synthetic aperture radar SAR image, and Γ (·) represents a gamma function, the value of which is obtained by the following formula:
Figure BDA0001380679010000102
wherein u represents an independent variable, ^ represents an integral operation, t represents an integral variable, and a mixed pixel subspace region R is randomly selectediThe 50 image block samples form a matrix A of 441 × 50;
the second step is that: non-uniform terrain distribution G through SAR images using matrix A0The probability density function of the distribution generates a matrix B of 441 multiplied by 50, the data in the matrix B satisfies the uneven ground object distribution G of the SAR image of the synthetic aperture radar0And (4) distribution.
Preferably, step S604 specifically includes:
the first step is as follows: taking the matrix B as an input layer x to an intermediate variable h of the stochastic gradient variation Bayesian network modelφConnection weight of
Figure BDA0001380679010000107
Second step of: randomly selecting 50 rows from the matrix B to form a matrix C of 50 multiplied by 50, and taking the matrix C as an intermediate variable h of the stochastic gradient variational Bayesian network modelφTo-alphaφConnection weight of
Figure BDA0001380679010000103
Taking the matrix C as an intermediate variable h of the stochastic gradient variation Bayesian network modelφTo gammaφConnection weight
Figure BDA0001380679010000104
Taking the matrix C as an implicit layer z to an intermediate variable h of the stochastic gradient variation Bayesian network modelθConnection weight of
Figure BDA0001380679010000105
The third step: taking the transposition of the matrix B as an intermediate variable h of the stochastic gradient variational Bayesian network modelθTo-alphaθConnection weight of
Figure BDA0001380679010000106
Taking the transposition of the matrix B as an intermediate variable h of the stochastic gradient variational Bayesian network modelθTo gammaθConnection weight of
Figure BDA0001380679010000111
Preferably, step S605 specifically includes:
firstly, initializing the prior probability of the hidden layer of the stochastic gradient variational Bayesian network model to the G⁰ distribution probability, and initializing the approximate posterior probability of the model to the G⁰ distribution probability, to obtain the analytic expression of the variational lower bound of the model as follows:
(a) when the equivalent number of looks n = 1,

[analytic expression rendered as an image in the original]

(b) when the equivalent number of looks n ≠ 1,

[analytic expression rendered as an image in the original]

wherein the auxiliary terms are given by the recursions

F′_m = ( 1/(α−(m−1)) ) · ( (nb)^(m−1) · log(nb+γ_1) · (nb+γ_1)^(m−1−α) − (na)^(m−1) · log(na+γ_1) · (na+γ_1)^(m−1−α) − (m−1)·F′_{m−1} − G′_{m−1} )

G′_m = ( 1/(α−m) ) · ( (nb)^m · (nb+γ)^(α−m) − (na)^m · (na+γ)^(α−m) − m·G′_{m−1} );
And secondly, updating generation parameters of the stochastic gradient variation Bayesian network model:
θ^(t+1) = θ^t + η · ∂L(θ, φ)/∂θ

wherein θ^(t+1) represents the generation parameters of the stochastic gradient variational Bayesian network model after the (t+1)-th iteration, θ^t represents the generation parameters after the t-th iteration, η is the learning-rate step size, and ∂L(θ, φ)/∂θ represents the operation of taking the partial derivative of L(θ, φ) with respect to the parameters θ;
and thirdly, updating variation parameters of the stochastic gradient variation Bayesian network model:
φ^(t+1) = φ^t + η · ∂L(θ, φ)/∂φ

wherein φ^(t+1) represents the variational parameters of the stochastic gradient variational Bayesian network model after the (t+1)-th iteration, φ^t represents the variational parameters after the t-th iteration, and ∂L(θ, φ)/∂φ represents the operation of taking the partial derivative of L(θ, φ) with respect to the parameters φ;
step four, judging whether the variational lower bound has remained unchanged for 100 consecutive iterations; if so, executing step five; otherwise, executing the second step;
and fifthly, finishing the training of the stochastic gradient variational Bayesian network.
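The update loop of step S605 (gradient ascent on the lower bound with a patience-based stopping rule) has the following generic skeleton. The learning rate and the toy objective are illustrative, while the patience of 100 unchanged lower-bound evaluations follows the fourth step:

```python
import numpy as np

def train(grad, theta0, lr=0.1, patience=100, tol=1e-12, max_iter=10000):
    """Gradient-ascent skeleton of step S605: update the parameters until the
    lower bound stops changing for `patience` consecutive iterations."""
    theta = np.asarray(theta0, dtype=float)
    unchanged, prev = 0, None
    for _ in range(max_iter):
        theta = theta + lr * grad(theta)          # second/third steps: ascend
        bound = -float(np.sum(theta ** 2))        # toy "lower bound" L(theta)
        unchanged = unchanged + 1 if prev is not None and abs(bound - prev) < tol else 0
        prev = bound
        if unchanged >= patience:                 # fourth step: converged
            break
    return theta

# Maximizing L(theta) = -|theta|^2 (gradient -2*theta); the optimum is 0.
theta = train(lambda t: -2.0 * t, theta0=[3.0, -1.5])
```

In the patent the gradient is that of the analytic lower bound derived above, taken with respect to the generation parameters θ and the variational parameters φ in alternation.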
Compared with the prior art, the invention has at least the following beneficial effects:
the invention is based on G0The distributed random gradient variational Bayes SAR image segmentation method comprises the steps of extracting a sketch of an SAR image according to an initial sketch model; then dividing the SAR image into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace according to the regional image; estimating G for each region of extreme inhomogeneity in the mixed pixel subspace0Distribution parameter, using G-based0The distributed random gradient variational Bayesian model learns the structural characteristics of the mixed pixel, and unsupervised segmentation of the mixed pixel subspace is realized; the invention sets a random gradient variational Bayesian network, carries out unsupervised training through each non-communicated region of the mixed pixel subspace, takes the weight of the trained network as the structural characteristic of each non-communicated region, overcomes the defect that the prior art uses the pixel level characteristic of the SAR image to obtain the characteristic vector of the SAR image, but does not learn the special structural characteristic of the SAR image due to the correlation between the pixels, and can automatically extract the structural characteristic of the SAR image.
Further, in the sketch map of the SAR image, the black lines are sketch lines. It can be seen that aggregated regions contain many clustered sketch lines, while the sketch lines representing edges and line targets are not clustered. This means that the sketch lines of different areas have different properties; on this basis, the concentration degree is used to divide the sketch lines into aggregated sketch lines and non-aggregated sketch lines.
Further, aggregation regions are extracted from the aggregated sketch lines, structural regions are extracted from the non-aggregated sketch lines, and the remaining area is the region without sketch lines. The SAR image can thus be divided, based on the sketch map, into an aggregation region, a no-sketch region and a structural region; the resulting region map is a sparse representation of the SAR image that reduces it to spaces of relatively simple structure and is used to guide SAR image segmentation.
Furthermore, when the invention performs feature learning on each mutually disconnected region of the mixed pixel subspace, the pixel intensity values of each region are used to estimate the parameters of the G0 distribution, and random data generated from the probability density function of the G0 distribution are then used to initialize the network weights. This overcomes the defect of prior-art deep auto-encoding networks for automatic image feature extraction, which initialize the network from a random distribution and fail to capture the intrinsic characteristics of the SAR image; the present invention can therefore effectively learn the intrinsic characteristics of SAR image terrain and improve the accuracy of SAR image segmentation.
Further, the present invention uses the G0 distribution as the distribution of both the hidden-layer-variable prior and the conditional probability. The G0 distribution characterizes the statistical properties of SAR images better than the Gaussian assumption, so the method can learn the terrain characteristics of the SAR image and further improve the accuracy of SAR image segmentation.
In summary, the present invention assumes that both the hidden-variable prior distribution and the approximate posterior distribution in the model are the G0 distribution satisfied by extremely heterogeneous regions, and derives the corresponding analytic expressions for learning, thereby improving the clustering accuracy of extremely heterogeneous regions in the mixed pixel subspace.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a Bayesian inference model diagram of the present invention;
FIG. 3 is a simulation of the present invention;
FIG. 4 is a diagram illustrating simulation results of the present invention.
Detailed Description
The invention provides a G0-distribution-based stochastic gradient variational Bayesian SAR image segmentation method, which extracts a sketch map of the SAR image according to an initial sketch model; divides the SAR image into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace according to the region map; estimates the G0 distribution parameters for each extremely heterogeneous region in the mixed pixel subspace and uses the G0-distribution-based stochastic gradient variational Bayesian model to learn the structural features of the mixed pixels, thereby realizing unsupervised segmentation of the mixed pixel subspace; and segments the homogeneous pixel subspace and the structural pixel subspace correspondingly, fuses the segmentation results of the three subspaces, and finally obtains the SAR image segmentation result.
Referring to FIG. 1, the G0-distribution-based stochastic gradient variational Bayesian SAR image segmentation method of the present invention specifically comprises the following steps:
S1. Sketching the SAR image.
Input a synthetic aperture radar SAR image.
Establish the sketch model of the SAR image according to the following steps:
s101, randomly selecting one number within the range of [100,150] as the total number of the templates;
s102, constructing a template with edges and lines formed by pixel points and having different directions and scales, constructing an anisotropic Gaussian function by using the direction and scale information of the template, calculating a weighting coefficient of each pixel point in the template through the Gaussian function, and counting the weighting coefficients of all the pixel points in the template, wherein the number of the scales is 3-5, and the number of the directions is 18;
s103, calculating the average value of pixel points in the synthetic aperture radar SAR image corresponding to the template area coordinates according to the following formula:
μ = ( Σ_{g∈Ω} w_g · A_g ) / ( Σ_{g∈Ω} w_g )

wherein μ denotes the mean value of the pixel points in the synthetic aperture radar SAR image corresponding to the template region coordinates, Σ denotes the summation operation, g denotes the coordinate of any pixel point in the Ω-th region of the template, ∈ denotes membership, w_g denotes the weighting coefficient of the pixel point at coordinate g in the Ω-th region of the template, with w_g ∈ [0,1], and A_g denotes the value of the pixel point in the SAR image corresponding to the pixel at coordinate g in the Ω-th region of the template;
s104, calculating a variance value of pixel points in the synthetic aperture radar SAR image corresponding to the template area coordinates according to the following formula:
v = ( Σ_{g∈Ω} w_g · (A_g − μ)² ) / ( Σ_{g∈Ω} w_g )

wherein v denotes the variance value of the pixel points in the synthetic aperture radar SAR image corresponding to the template region coordinates;
s105, calculating the response value of each pixel point in the synthetic aperture radar SAR image to the ratio operator according to the following formula:
R = 1 − min{ μ_a / μ_b , μ_b / μ_a }

wherein R denotes the response value of each pixel point in the synthetic aperture radar SAR image to the ratio operator, min{·} denotes the minimum-value operation, a and b denote two different regions in the template, and μ_a and μ_b denote the mean values of all pixel points in template regions a and b, respectively;
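The ratio operator above can be sketched in a few lines. The closed form R = 1 − min(μ_a/μ_b, μ_b/μ_a) used here is an assumption (the classical ratio-of-averages edge detector in the SAR literature); the patent gives its exact expression only as an equation image, so this is an illustrative sketch rather than the patent's formula.

```python
def ratio_operator(mu_a, mu_b, eps=1e-12):
    """Ratio edge response between the means of two template regions.

    Assumed form: R = 1 - min(mu_a/mu_b, mu_b/mu_a).  R is near 0 inside
    homogeneous terrain and approaches 1 across a strong edge; `eps`
    guards against division by zero for all-dark regions.
    """
    r = min((mu_a + eps) / (mu_b + eps), (mu_b + eps) / (mu_a + eps))
    return 1.0 - r
```

For example, two regions with equal means give R = 0, while means of 10 and 100 give R = 0.9; the response is symmetric in the two regions, as a contrast measure should be.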
s106, calculating a response value of each pixel in the synthetic aperture radar SAR image to the correlation operator according to the following formula:
Figure BDA0001380679010000154
wherein C denotes the response value of each pixel in the synthetic aperture radar SAR image to the correlation operator, √(·) denotes the square-root operation, a and b denote two different regions in the template, v_a and v_b denote the variance values of all pixel points in template regions a and b, respectively, and μ_a and μ_b denote the mean values of all pixel points in template regions a and b, respectively;
s107, calculating the response value of each pixel point in the SAR image to each template according to the following formula:
Figure BDA0001380679010000161
wherein F denotes the response value of each pixel point in the synthetic aperture radar SAR image to each template, √(·) denotes the square-root operation, and R and C denote the response values of the SAR-image pixel points to the ratio operator and to the correlation operator, respectively;
S108. Judge whether the number of constructed templates equals the selected total number of templates; if not, return to S102; otherwise, execute S109;
s109, selecting a template with the maximum response value from the templates as a template of the SAR image, taking the maximum response value of the template as the intensity of a pixel point in the SAR image, taking the direction of the template as the direction of the pixel point in the SAR image, and obtaining a sideline response graph and a gradient graph of the SAR image;
s110, calculating the intensity value of the synthetic aperture radar SAR image intensity map according to the following formula to obtain the intensity map:
Figure BDA0001380679010000163
wherein I represents the intensity value of the synthetic aperture radar SAR image intensity map, r represents the value in the synthetic aperture radar SAR image edge response map, and t represents the value in the synthetic aperture radar SAR image gradient map;
s111, detecting the intensity map by adopting a non-maximum value inhibition method to obtain a suggested sketch;
s112, selecting the pixel point with the maximum strength in the suggested sketch, and connecting the pixel points which are communicated with the pixel point with the maximum strength in the suggested sketch to form a suggested line segment to obtain a suggested sketch;
s113, calculating the coding length gain of the sketch line in the suggested sketch according to the following formula:
Figure BDA0001380679010000171
wherein CLG denotes the coding length gain of a sketch line in the suggested sketch, Σ denotes the summation operation, J denotes the number of pixel points in the neighborhood of the current sketch line, A_j denotes the observed value of the j-th pixel point in that neighborhood, A_{j,0} denotes the estimated value of the j-th pixel point under the hypothesis that the current sketch line cannot represent structural information, ln(·) denotes the natural logarithm, and A_{j,1} denotes the estimated value of the j-th pixel point under the hypothesis that the current sketch line can represent structural information;
s114, randomly selecting one number within the range of [5,50] as a threshold value T;
and S115, selecting the recommended sketch lines with CLG > T in all the recommended sketch lines, and combining the recommended sketch lines into a sketch of the SAR image.
And extracting a sketch map of the SAR image from the sketch model.
The synthetic aperture radar SAR image sketch model used by the invention is the model proposed by Jie Wu et al. in the 2014 article "Local maximal homogeneous region search for SAR speckle reduction with sketch-based geometrical kernel function" in the journal IEEE Transactions on Geoscience and Remote Sensing.
And S2, obtaining an area map by sketch.
And carrying out regional processing on the sketch map of the synthetic aperture radar SAR image by adopting a sketch line regional method to obtain a regional map of the synthetic aperture radar SAR image comprising an aggregation region, a sketch-free region and a structural region.
S201, dividing sketch lines into gathering sketch lines representing gathering ground objects and boundary sketch lines, line target sketch lines and isolated target sketch lines representing boundaries, line targets and isolated targets according to the concentration degree of sketch line segments in a sketch map of the synthetic aperture radar SAR image.
S202. According to the histogram statistics of the concentration degree of the sketch line segments, select the sketch line segments whose concentration degree equals the optimal concentration degree as the seed line segment set {E_k, k = 1, 2, …, m}, wherein E_k denotes any sketch line segment in the seed line segment set, k denotes the index of that segment, m denotes the total number of seed line segments, and {·} denotes the set operation.
S203. Taking each line segment not yet added to the seed line segment set as a base point, recursively solve for the line segment set from that base point.
S204, constructing a circular element with the radius as the upper bound of the optimal concentration degree interval, expanding the line segments in the line segment set by using the circular element, corroding the expanded line segment set from outside to inside, and obtaining a concentration area with a sketch point as a unit on a sketch map.
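The dilate-then-erode step of S204 can be sketched as a morphological closing with a disk structuring element. `binary_closing` and the disk construction below are illustrative stand-ins for the patent's expand/corrode operations; the `radius` argument plays the role of the upper bound of the optimal concentration interval.

```python
import numpy as np
from scipy import ndimage

def aggregation_region(sketch_mask, radius):
    """Close the set of aggregated sketch points into a solid region.

    A morphological closing (dilation followed by erosion, as in S204)
    with a disk structuring element of the given radius is used here as
    a sketch of that dilate-then-erode step.
    """
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = x * x + y * y <= radius * radius  # circular structuring element
    return ndimage.binary_closing(sketch_mask, structure=disk)
```

Closing is extensive, so every original sketch point survives, and nearby points whose dilated disks overlap are merged into one connected aggregation region.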
S205, a geometric structure window of 5 × 5 in size is constructed with each pixel point of each sketch line as the center, for the sketch lines representing the boundary, line object, and isolated object, to obtain a structure region.
S206, the parts except the aggregation areas and the structural areas in the sketch are taken as non-sketch areas.
And S207, combining the gathering area, the non-sketch area and the structural area in the sketch to obtain an area map of the synthetic aperture radar SAR image comprising the gathering area, the non-sketch area and the structural area.
S3, dividing pixel subspace
And mapping a region map comprising the aggregation region, the no-sketch region and the structural region into the synthetic aperture radar SAR image to obtain a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace of the synthetic aperture radar SAR image.
S4. For each extremely heterogeneous region in the mixed pixel subspace, using the intensity values of all pixel points in the region and a Mellin-transform parameter estimation method, obtain the estimated values of the three parameters α, γ and n required for the region to satisfy the G0 distribution.
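A Mellin-transform (method-of-log-cumulants) estimate of the G0 parameters can be sketched as follows, under the simplifying assumption that the equivalent number of looks n is known; the patent estimates all three parameters, so this is a reduced illustrative version. It uses the standard log-cumulant relations for the G0 intensity model, k1 = ψ(n) − ln n + ln γ − ψ(−α) and k2 = ψ′(n) + ψ′(−α).

```python
import numpy as np
from scipy.special import polygamma, digamma
from scipy.optimize import brentq

def estimate_g0_molc(intensities, n):
    """Method-of-log-cumulants estimate of (alpha, gamma) for the G0
    model, assuming the number of looks n is known.  The first two
    sample log-cumulants are the mean and variance of log(intensity)."""
    log_i = np.log(intensities)
    k1 = log_i.mean()
    k2 = log_i.var()
    # Solve psi'(-alpha) = k2 - psi'(n); psi' is decreasing on (0, inf).
    target = k2 - polygamma(1, n)
    neg_alpha = brentq(lambda x: polygamma(1, x) - target, 1e-3, 1e3)
    gamma = np.exp(k1 - digamma(n) + np.log(n) + digamma(neg_alpha))
    return -neg_alpha, gamma
```

Feeding it samples drawn from a known G0 model (product of unit-mean Gamma speckle and inverse-Gamma texture) recovers α and γ to within sampling error.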
And S5, constructing a stochastic gradient variational Bayesian network model.
Calculating intermediate variables from the input layer to the hidden layer of the stochastic gradient variation Bayesian network model according to the following formula:
Figure BDA0001380679010000181
wherein h_φ denotes the intermediate variable from the input layer to the hidden layer of the stochastic gradient variational Bayesian network model, W_1 denotes the connection weight from the input layer to the intermediate variable h_φ, m denotes the number of hidden-layer neurons, m = 50, n denotes the number of input-layer neurons, n = 441, and b_1 denotes the offset vector corresponding to W_1.
Calculating the approximate posterior probability of the stochastic gradient variational Bayesian network model according to the following formula:
q_φ(z|x) ~ G0(α_φ, γ_φ)
Figure BDA0001380679010000191
wherein q_φ(z|x) denotes the approximate posterior probability of the stochastic gradient variational Bayesian network model, and G0(α_φ, γ_φ) denotes the G0 distribution with uniformity α_φ and scale γ_φ. The probability density of the G0 distribution is

P(I(x,y)) = n^n · Γ(n−α) · I(x,y)^{n−1} / ( γ^α · Γ(n) · Γ(−α) · (γ + n·I(x,y))^{n−α} ),  −α, γ, n, I(x,y) > 0

wherein I(x,y) is the image pixel intensity value, n is the equivalent number of looks, γ is the scale parameter, α is the uniformity, and Γ(x) is the Gamma function, defined on the real domain as

Γ(x) = ∫₀^∞ t^{x−1} · e^{−t} dt
When the equivalent number of looks n = 1, the distribution becomes the Beta-Prime distribution, whose expression is:

P(I(x,y)) = −α · γ^{−α} · (γ + I(x,y))^{α−1},  −α, γ, I(x,y) > 0
wherein W_2 denotes the connection weight from the intermediate variable h_φ (from the input layer to the hidden layer of the stochastic gradient variational Bayesian network model) to −α_φ, b_2 denotes the offset vector corresponding to W_2, W_3 denotes the connection weight from the intermediate variable h_φ to γ_φ, and b_3 denotes the offset vector corresponding to W_3.
Calculating intermediate variables from the hidden layer to the reconstruction layer of the stochastic gradient variation Bayesian network model according to the following formula:
Figure BDA00013806790100001911
wherein h_θ denotes the intermediate variable from the hidden layer to the reconstruction layer of the stochastic gradient variational Bayesian network model, W_4 denotes the connection weight from the hidden layer to the intermediate variable h_θ, and b_4 denotes the offset vector corresponding to W_4.
The conditional probability of the stochastic gradient variational bayesian network model is calculated according to the following formula:
Figure BDA00013806790100001915
Figure BDA00013806790100001916
wherein p_θ(x|z) denotes the conditional probability of the stochastic gradient variational Bayesian network model, G0(α_θ, γ_θ) denotes the G0 distribution with uniformity α_θ and scale γ_θ, W_5 denotes the connection weight from the intermediate variable h_θ (from the hidden layer to the reconstruction layer) to −α_θ, b_5 denotes the offset vector corresponding to W_5, W_6 denotes the connection weight from the intermediate variable h_θ to γ_θ, and b_6 denotes the offset vector corresponding to W_6.
Calculate the variational lower bound of the stochastic gradient variational Bayesian network model according to the following formula:

L(θ,φ) = −D_KL( q_φ(z|x) || p_θ(z) ) + (1/L) · Σ_{l=1}^{L} log p_θ(x | z^(l))    (1)

wherein L(θ,φ) denotes the variational lower bound of the stochastic gradient variational Bayesian network model, φ denotes its variational parameters, θ denotes its generative parameters, D_KL(q_φ(z|x) || p_θ(z)) denotes the relative entropy between q_φ(z|x) and p_θ(z), z denotes the hidden-layer variable of the stochastic gradient variational Bayesian network model, p_θ(z) denotes the prior probability of the hidden-layer variable z, Σ(·) denotes the summation operation, L denotes the number of Gaussian samplings of the hidden-layer variable z, log(·) denotes the logarithm operation, and z^(l) denotes the result of the l-th Gaussian sampling of z, obtained from a reparameterization formula in which ⊙ denotes the dot-product operation and ε_l denotes the Gaussian sampling auxiliary variable, with ε_l ~ N(0, I), i.e. ε_l satisfies the standard normal distribution.
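The lower-bound estimator and the reparameterized sampling z^(l) described above follow the standard stochastic gradient variational Bayes scheme, which can be sketched generically as below; `encode`, `decode_logp` and `kl` are hypothetical placeholders for the network mappings and the analytic KL term, not the patent's exact G0-based expressions.

```python
import numpy as np

rng = np.random.default_rng(1)

def elbo_estimate(x, encode, decode_logp, kl, L=5):
    """One-minibatch SGVB estimate of the variational lower bound
    L(theta, phi) = -KL(q(z|x) || p(z)) + (1/L) * sum_l log p(x | z_l),
    with z_l = mu + sigma * eps_l, eps_l ~ N(0, I) (reparameterization,
    so the sample stays differentiable in the variational parameters).
    """
    mu, sigma = encode(x)
    rec = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu.shape)
        z = mu + sigma * eps          # reparameterized sample
        rec += decode_logp(x, z)
    return -kl(mu, sigma) + rec / L
```

With a Gaussian prior and posterior the KL term has the familiar closed form; the text above replaces that term with the G0-based analytic expressions it derives.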
For the term −D_KL(q_φ(z|x) || p_θ(z)) in the lower bound (1), the present invention derives an analytic expression under the assumption that q_φ(z|x) and p_θ(z) are both G0 distributions, rather than using a Gaussian distribution, because the extremely heterogeneous regions of the SAR image satisfy the G0 distribution. In addition, the G0 distribution takes different forms depending on whether the equivalent number of looks n equals 1, so the invention considers the cases n = 1 and n ≠ 1 separately:
when the equivalent view n is 1, the calculation formula is as follows:
−D_KL(q_φ(z|x) || p_θ(z)) = ∫ [ P_β′(z; −α, γ) · log P_β′(z; c, γ₁) − P_β′(z; −α, γ) · log P_β′(z; −α, γ) ] dz    (2)
wherein the prior p_θ(z) satisfies the Beta-Prime distribution with −α = c and γ = γ₁, where c and γ₁ are positive numbers and known quantities derived from the image. The approximate posterior q_φ(z|x) satisfies a Beta-Prime distribution, where z ∈ [a, b] with 0 < a < b ≤ 1, the normalized image pixel intensity values, which are also known quantities. The derivation process is omitted here, and the final form is given directly:
Figure BDA0001380679010000211
when the equivalent view number n ≠ 1, the calculation formula is as follows:
Figure BDA0001380679010000212
wherein the prior p_θ(z) satisfies the G0 distribution with −α = c and γ = γ₁, where c and γ₁ are positive numbers and known quantities derived from the image. The approximate posterior q_φ(z|x) satisfies a G0 distribution, where z ∈ [a, b] with 0 < a < b ≤ 1, the normalized image pixel intensity values, which are also known quantities. The derivation process is omitted here, and the final form is given directly:
−D_KL(q_φ(z|x) || p_θ(z)) = log n + c·log γ₁ − log Γ(n−α) + log Γ(n) + log Γ(−α) − α·log γ
+ (n−1)·n^n·Γ(n−α)/(γ^α·Γ(n)·Γ(−α)) · ( 1/(n(α−(n−1))) · ( ln b · b^{n−1}/(γ₁+nb)^{n−1−α} − ln a · a^{n−1}/(γ₁+na)^{n−1−α} − ln b · b^{n−1}/(γ+nb)^{n−1−α} + ln a · a^{n−1}/(γ+na)^{n−1−α} ) )
− (n+c)·Γ(n−α)/(γ^α·Γ(n)·Γ(−α)) · ( 1/(α−(n−2)) · ( (nb)^{n−2}·log(nb+γ₁)·(nb+γ₁)^{n−2−α} − (na)^{n−2}·log(na+γ₁)·(na+γ₁)^{n−2−α} − (n−1)·F′_{n−2} − G′_{n−2} ) )
+ (n−α)·Γ(n−α)/(γ^α·Γ(n)·Γ(−α)) · ( 1/(α−(n−2)) · ( (nb)^{n−2}·log(nb+γ)·(nb+γ)^{n−2−α} − (na)^{n−2}·log(na+γ)·(na+γ)^{n−2−α} − (n−1)·F′_{n−2} − G′_{n−2} ) )

wherein F′_m = (1/(α−(m−1))) · ( (nb)^{m−1}·log(nb+γ₁)·(nb+γ₁)^{m−1−α} − (na)^{m−1}·log(na+γ₁)·(na+γ₁)^{m−1−α} − (m−1)·F′_{m−1} − G′_{m−1} ),
G′_m = 1/(α−m) · ( (nb)^m·(nb+γ)^{α−m} − (na)^m·(na+γ)^{α−m} − m·G′_{m−1} ), m ≥ 1.
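Because the analytic expression above is intricate, a Monte Carlo estimate of −D_KL(q‖p) makes a convenient numeric cross-check. The sketch below handles the n = 1 (Beta-Prime) case on the full support z > 0, without the truncation to [a, b] used in the text, so it is comparable in spirit rather than term for term.

```python
import numpy as np

def logp_beta_prime(z, alpha, gamma):
    """Log density of the n = 1 (Beta-Prime) case of the G0 model:
    p(z) = (-alpha) * gamma^(-alpha) * (gamma + z)^(alpha - 1), z > 0."""
    return np.log(-alpha) - alpha * np.log(gamma) + (alpha - 1.0) * np.log(gamma + z)

def neg_kl_mc(alpha_q, gamma_q, alpha_p, gamma_p, n_samples=100000, seed=0):
    """Monte Carlo estimate of -KL(q || p) = E_q[log p(z) - log q(z)].

    Samples from q via the product form of the 1-look G0 model:
    z = Exp(1) speckle times inverse-Gamma texture."""
    rng = np.random.default_rng(seed)
    z = rng.exponential(1.0, n_samples) * (gamma_q / rng.gamma(-alpha_q, 1.0, n_samples))
    return np.mean(logp_beta_prime(z, alpha_p, gamma_p)
                   - logp_beta_prime(z, alpha_q, gamma_q))
```

The estimate is exactly 0 when q = p and strictly negative otherwise, matching the sign of −D_KL; an analytic implementation of the formula above should agree with it up to sampling error.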
And S6, feature learning of the mixed pixel subspace.
S601. Divide the mixed pixel subspace of the synthetic aperture radar SAR image into regions according to spatial connectivity; if there is only one mutually disconnected region, execute step S7.
And S602, sampling each unconnected area at intervals according to a 21 x 21 window to obtain a plurality of image block samples corresponding to each area.
S603. For each mutually disconnected region, generate a group of random numbers obeying the heterogeneous terrain G0 distribution corresponding to that region:

First step: calculate the probability density of the heterogeneous terrain G0 distribution of the SAR image according to the following formula:

P(I(x,y)) = n^n · Γ(n−α) · I(x,y)^{n−1} / ( γ^α · Γ(n) · Γ(−α) · (γ + n·I(x,y))^{n−α} )
wherein, P (I (x, y)) represents the probability density of the uneven ground feature distribution of the synthetic aperture radar SAR image, I (x, y) represents the intensity value of the pixel point with coordinates (x, y), n represents the equivalent view of the synthetic aperture radar SAR image, α represents the shape parameter of the synthetic aperture radar SAR image, γ represents the scale parameter of the synthetic aperture radar SAR image, and Γ (·) represents a gamma function, the value of which is obtained by the following formula:
Γ(u) = ∫₀^∞ t^{u−1} · e^{−t} dt

wherein u denotes the independent variable, ∫ denotes the integration operation, and t denotes the integration variable.
Randomly select 50 image block samples of the mixed pixel subspace region R_i to form a 441 × 50 matrix A.
Second step: using the matrix A, generate a 441 × 50 matrix B from the probability density function of the heterogeneous terrain G0 distribution of the synthetic aperture radar SAR image; the data in the matrix B satisfy the heterogeneous terrain G0 distribution of the SAR image.
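Generating the G0-distributed matrix B can be sketched with the standard product form of the model, in which n-look unit-mean Gamma speckle multiplies an inverse-Gamma texture with shape −α and scale γ; the patent does not spell out its sampler, so this construction is an assumption.

```python
import numpy as np

def g0_random(shape, alpha, gamma, n, rng=None):
    """Draw G0-distributed samples via the product form of the model:
    intensity = (n-look Gamma speckle with unit mean) x (inverse-Gamma
    texture with shape -alpha and scale gamma)."""
    rng = rng or np.random.default_rng()
    speckle = rng.gamma(n, 1.0 / n, shape)           # unit-mean speckle
    texture = gamma / rng.gamma(-alpha, 1.0, shape)  # inverse-Gamma texture
    return speckle * texture

# Illustrative 441 x 50 initialization matrix (parameter values assumed)
B = g0_random((441, 50), alpha=-4.0, gamma=3.0, n=1.0,
              rng=np.random.default_rng(0))
```

For α = −4 and γ = 3 the model mean is γ/(−α−1) = 1, so the generated weights stay on a sensible scale for initialization.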
S604. For each region, initialize the connection weights of the stochastic gradient variational Bayesian network with the group of random numbers corresponding to that region, obtaining the initialized stochastic gradient variational Bayesian network:

First step: take the matrix B as the connection weight W_1 from the input layer x of the stochastic gradient variational Bayesian network model to the intermediate variable h_φ.

Second step: randomly select 50 rows from the matrix B to form a 50 × 50 matrix C; take the matrix C as the connection weight W_2 from the intermediate variable h_φ to −α_φ, as the connection weight W_3 from the intermediate variable h_φ to γ_φ, and as the connection weight W_4 from the hidden layer z to the intermediate variable h_θ.

Third step: take the transpose of the matrix B as the connection weight W_5 from the intermediate variable h_θ to −α_θ, and as the connection weight W_6 from the intermediate variable h_θ to γ_θ.
S605. After each mutually disconnected region has been initialized, use its image block samples as the input layer of the stochastic gradient variational Bayesian network, and train the initialized network with the stochastic gradient Bayesian inference method according to the following steps, obtaining the trained stochastic gradient variational Bayesian network:
Step 1: initialize the prior probability of the hidden layer of the stochastic gradient variational Bayesian network model to the G0 distribution probability, and initialize the approximate posterior probability of the model to the G0 distribution probability, obtaining the following analytic expression for the variational lower bound of the model:
(a) when the equivalent view n is 1,
Figure BDA0001380679010000236
(b) when the equivalent view number n ≠ 1,
Figure BDA0001380679010000237
Figure BDA0001380679010000241
wherein F′_m = (1/(α−(m−1))) · ( (nb)^{m−1}·log(nb+γ₁)·(nb+γ₁)^{m−1−α} − (na)^{m−1}·log(na+γ₁)·(na+γ₁)^{m−1−α} − (m−1)·F′_{m−1} − G′_{m−1} ),
G′_m = 1/(α−m) · ( (nb)^m·(nb+γ)^{α−m} − (na)^m·(na+γ)^{α−m} − m·G′_{m−1} ).
And 2, updating generation parameters of the stochastic gradient variation Bayesian network model according to the following formula:
Figure BDA0001380679010000242
wherein θ^(t+1) denotes the generative parameters of the stochastic gradient variational Bayesian network model after the (t+1)-th iteration, θ^t denotes the generative parameters after the t-th iteration, and ∂L(θ,φ)/∂θ denotes the partial-derivative operation of L(θ,φ) with respect to the parameter θ;
and 3, updating variation parameters of the stochastic gradient variation Bayesian network model according to the following formula:
Figure BDA0001380679010000244
wherein φ^(t+1) denotes the variational parameters of the stochastic gradient variational Bayesian network model after the (t+1)-th iteration, φ^t denotes the variational parameters after the t-th iteration, and ∂L(θ,φ)/∂φ denotes the partial-derivative operation of L(θ,φ) with respect to the parameter φ;
Step 4: judge whether the number of consecutive iterations for which the variational lower bound has remained unchanged reaches the threshold of 100; if so, execute Step 5; otherwise, execute Step 2;
and 5, finishing the training of the stochastic gradient variational Bayesian network.
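Steps 2-5 amount to gradient ascent on the lower bound with a stop once the bound has stayed unchanged for a fixed number of consecutive iterations. A generic sketch follows, with the gradient callables standing in for the network's actual backward pass and the learning rate `lr` an illustrative choice:

```python
def train_sgvb(theta, phi, grad_theta, grad_phi, lower_bound,
               lr=1e-3, patience=100, max_iter=100000, tol=1e-8):
    """Gradient ascent on the variational lower bound L(theta, phi),
    stopping once the bound has remained unchanged (within tol) for
    `patience` consecutive iterations, as in steps 2-5 above."""
    best, unchanged = lower_bound(theta, phi), 0
    for _ in range(max_iter):
        theta = theta + lr * grad_theta(theta, phi)   # step 2
        phi = phi + lr * grad_phi(theta, phi)         # step 3
        lb = lower_bound(theta, phi)
        unchanged = unchanged + 1 if abs(lb - best) < tol else 0
        best = lb
        if unchanged >= patience:                     # step 4
            break
    return theta, phi                                 # step 5
```

On a toy concave objective such as −(θ−2)² − (φ+1)², the loop climbs to the maximizer and then halts after the patience window expires.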
S606. For each mutually disconnected region, take the weights of the trained stochastic gradient variational Bayesian network as the feature set of that region.
And S7, segmenting the SAR image mixed pixel subspace.
And splicing the feature sets of all the unconnected areas, and taking the spliced feature sets as a codebook.
And respectively calculating the inner product of all the characteristics of each unconnected area and each characteristic in the codebook to obtain the projection vector of all the characteristics of each area on the codebook.
And carrying out maximum value convergence on all projection vectors of each unconnected area to obtain a structural feature vector corresponding to each area.
And clustering the structural feature vectors of all the unconnected areas by using a hierarchical clustering algorithm to obtain a segmentation result of the mixed pixel subspace.
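The steps of S7 above (codebook concatenation, inner-product projection, max pooling, hierarchical clustering) can be sketched end to end; the feature sizes and the 'ward' linkage criterion below are illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def segment_mixed_subspace(region_features, n_clusters=3):
    """Sketch of S7: splice the per-region feature sets into a codebook,
    project each region's features onto the codebook by inner product,
    max-pool the projections into one structural feature vector per
    region, then cluster the vectors hierarchically."""
    codebook = np.vstack(region_features)          # (K, d) spliced codebook
    vecs = []
    for feats in region_features:                  # feats: (m_i, d)
        proj = feats @ codebook.T                  # inner products with codebook
        vecs.append(proj.max(axis=0))              # max-value convergence
    vecs = np.vstack(vecs)
    labels = fcluster(linkage(vecs, method="ward"),
                      n_clusters, criterion="maxclust")
    return vecs, labels
```

Regions whose learned features resemble each other land in the same cluster, which is exactly the unsupervised grouping the mixed pixel subspace segmentation needs.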
And S8, dividing the structural pixel subspace.
And extracting line targets by adopting a visual semantic rule, and then segmenting the structural pixel subspace by using a structural region segmentation method of a polynomial hidden model based on a geometric structure window to obtain a segmentation result of the structural pixel subspace.
The structural region segmentation method used in the present invention is the model proposed by Fang Liu et al. in the 2016 article "SAR Image Segmentation Based on Hierarchical Visual Semantic and Adaptive Neighborhood Multinomial Latent Model" published in the journal IEEE Transactions on Geoscience and Remote Sensing.
And S9, dividing the homogeneous pixel subspace.
And (3) dividing the homogeneous pixel subspace by adopting a homogeneous region division method of a polynomial hidden model based on self-adaptive window selection to obtain a division result of the homogeneous pixel subspace.
The homogeneous region segmentation method based on the adaptive-window-selection multinomial latent model is the model proposed by Fang Liu et al. in the 2016 article "SAR Image Segmentation Based on Hierarchical Visual Semantic and Adaptive Neighborhood Multinomial Latent Model" published in the journal IEEE Transactions on Geoscience and Remote Sensing.
And S10, combining the segmentation results of the mixed pixel subspace, the homogeneous pixel subspace and the structural pixel subspace to obtain the final segmentation result of the SAR image.
The effect of the present invention will be further described with reference to the simulation diagram.
1. Simulation conditions are as follows:
The hardware condition of the simulation of the invention is a graphics workstation of the Intelligent Perception and Image Understanding Laboratory; the synthetic aperture radar SAR image used in the simulation is a Pyramid image with 1-meter resolution in the X band.
2. Simulation content:
the simulation experiment of the present invention is to segment the Pyramid map in the SAR image, as shown in fig. 3 (a). The figure is derived from a synthetic aperture radar SAR image with a resolution of 1 meter in the X-band.
The Pyramid image shown in fig. 3(a) is sketched by the SAR image sketching step of the present invention, obtaining the sketch map shown in fig. 3(b).
The sketch map shown in fig. 3(b) is regionalized by the pixel subspace division step of the present invention, resulting in the region map shown in fig. 3(c). The white area in fig. 3(c) denotes the aggregation region, and the rest are the no-sketch region and the structural region. From the region map shown in fig. 3(c), the Pyramid-image mixed pixel subspace map shown in fig. 3(d) can be obtained.
Using the G0-distribution-based stochastic gradient variational Bayesian network of the present invention, the Pyramid mixed pixel subspace map shown in fig. 3(d) is segmented to obtain the clustering result of the mixed pixel subspace shown in fig. 4(a), wherein the gray areas denote unprocessed terrain space, remaining areas of the same color denote the same terrain space, and areas of different colors denote different terrain spaces.
By using the merge segmentation result step of the present invention, the mixed pixel subspace segmentation result, the homogeneous pixel subspace segmentation result and the structural pixel subspace segmentation result shown in fig. 4(a) are merged to obtain fig. 4(c), and fig. 4(c) is a final segmentation result diagram of the Pyramid image shown in fig. 3 (a).
3. Simulation effect analysis:
fig. 4(a) is the mixed pixel subspace segmentation result of the Pyramid image obtained by the method of the present invention, and fig. 4(b) is the mixed pixel subspace segmentation result obtained by a stochastic gradient variational Bayesian network based on the Gaussian distribution. Fig. 4(c) is the final segmentation result of the Pyramid image by the method of the present invention, and fig. 4(d) is the final segmentation result based on the Gaussian-distribution assumption. The comparison supports the conclusion that the segmentation of the mixed pixel subspace by the present method is more reasonable, and that segmenting the SAR image with the proposed method improves the accuracy of SAR image segmentation.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (1)

1. A G0-distribution-based stochastic gradient variational Bayesian SAR image segmentation method, characterized in that a sketch map of the SAR image is extracted according to an initial sketch model; the SAR image is then divided into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace according to the region map; for each extremely inhomogeneous region in the mixed pixel subspace the G0 distribution parameters are estimated, and a G0-distribution-based stochastic gradient variational Bayesian model learns the structural features of the mixed pixels, realizing unsupervised segmentation of the mixed pixel subspace; the homogeneous pixel subspace and the structural pixel subspace are segmented correspondingly, and the segmentation results of the three subspaces are fused to obtain the final SAR image segmentation result, the method comprising the following specific steps:
s1, inputting a synthetic aperture radar SAR image, and establishing a sketch model of the synthetic aperture radar SAR image, which specifically comprises the following steps:
s101, randomly selecting a number in the range [100,150] as the total number of templates;
s102, constructing templates of edges and lines formed by pixel points with different directions and scales, constructing an anisotropic Gaussian function from the direction and scale information of each template, calculating the weighting coefficient of each pixel point in the template through this Gaussian function, and counting the weighting coefficients of all pixel points in the template, wherein the number of scales is 3-5 and the number of directions is 18;
s103, calculating the average value of pixel points in the synthetic aperture radar SAR image corresponding to the template area coordinates:
μ = ∑_(g∈Ω) w_g·A_g
wherein μ represents the mean of all pixel points in the synthetic aperture radar SAR image corresponding to the template region coordinates, ∑ represents the summation operation, g represents the coordinate of any pixel point in the template region Ω, ∈ represents set membership, w_g represents the weighting coefficient at coordinate g of the pixel point in the template region Ω, with w_g ∈ [0,1], and A_g represents the value at coordinate g of the pixel point in the synthetic aperture radar SAR image corresponding to the pixel point in the template region Ω;
s104, calculating a variance value of pixel points in the synthetic aperture radar SAR image corresponding to the template area coordinates:
v = ∑_(g∈Ω) w_g·(A_g − μ)²
wherein v represents the variance of all pixel points in the synthetic aperture radar SAR image corresponding to the template region coordinates;
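As an illustrative aside (not part of the claims), steps S103-S104 amount to weighted first- and second-order statistics over a template region. A minimal sketch in Python, assuming normalized weights and toy region data; the patent's anisotropic Gaussian weighting is not reproduced:

```python
import numpy as np

# Weighted template statistics (steps S103-S104), a sketch: the template
# geometry and the uniform toy weights below are assumptions.
def region_stats(values, weights):
    """Weighted mean and variance of SAR pixel values in one template region."""
    w = weights / weights.sum()           # normalize weights to sum to 1
    mu = np.sum(w * values)               # weighted mean of the region
    var = np.sum(w * (values - mu) ** 2)  # weighted variance about the mean
    return mu, var

rng = np.random.default_rng(0)
vals = rng.gamma(shape=4.0, scale=25.0, size=121)  # toy 11x11 region, flattened
wts = np.ones(121)                                  # uniform weights for the demo
mu, var = region_stats(vals, wts)
```

With uniform weights the result coincides with the ordinary mean and population variance, which serves as a sanity check.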
s105, calculating a response value of each pixel point in the synthetic aperture radar SAR image to a ratio operator:
R = 1 − min{μ_a/μ_b, μ_b/μ_a}
wherein R represents the response value of each pixel point in the synthetic aperture radar SAR image to the ratio operator, min{·} represents the minimum operation, a and b respectively represent two different regions in the template, μ_a represents the mean of all pixel points in template region a, and μ_b represents the mean of all pixel points in template region b;
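The ratio operator is the classical ratio-of-averages edge detector for SAR; a small sketch (not part of the claims), with toy region means as input:

```python
import numpy as np

# Ratio-of-averages edge response (step S105), a sketch: R = 0 for identical
# region means (no edge) and R -> 1 as the means diverge.
def ratio_response(mu_a, mu_b):
    return 1.0 - min(mu_a / mu_b, mu_b / mu_a)
```

Because the ratio (rather than the difference) of means is used, the response is invariant to multiplicative speckle, which is why this operator is preferred over gradient detectors on SAR data.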
s106, calculating a response value of each pixel in the synthetic aperture radar SAR image to the correlation operator:
C = [formula shown as an image in the source]
wherein C represents the response value of each pixel in the synthetic aperture radar SAR image to the correlation operator, √· represents the square-root operation, a and b represent two different regions in the template, v_a and v_b represent the variance values of all pixel points in template regions a and b, and μ_a and μ_b represent the mean values of all pixel points in template regions a and b;
s107, calculating the response value of each pixel point in the synthetic aperture radar SAR image to each template:
F = [formula shown as an image in the source]
wherein F represents the response value of each pixel point in the synthetic aperture radar SAR image to each template, √· represents the square-root operation, and R and C respectively represent the response values of the pixel points in the synthetic aperture radar SAR image to the ratio operator and to the correlation operator;
s108, judging whether the number of constructed templates equals the selected total number of templates; if not, executing step S102, otherwise executing step S109;
s109, selecting the template with the maximum response value among the templates as the template of the SAR image, taking the maximum response value of the template as the intensity of the pixel point in the SAR image and the direction of the template as the direction of the pixel point in the SAR image, obtaining the edge response map and the gradient map of the SAR image;
s110, calculating intensity values from the edge response map and the gradient map of the synthetic aperture radar SAR image to obtain an intensity map:
I = [formula shown as an image in the source]
wherein I represents the intensity value of the synthetic aperture radar SAR image intensity map, r represents the value in the edge response map of the synthetic aperture radar SAR image, and t represents the value in the gradient map of the synthetic aperture radar SAR image;
s111, detecting the intensity map by non-maximum suppression to obtain a suggested sketch;
s112, selecting the pixel point with the maximum intensity in the suggested sketch, and connecting the pixel points in the suggested sketch that are connected to this maximum-intensity pixel point to form suggested line segments, obtaining a suggested sketch map;
s113, calculating the coding length gain of the sketch line in the suggested sketch:
CLG = [formula shown as an image in the source]
wherein CLG represents the coding length gain of a sketch line in the suggested sketch map, ∑ represents the summation operation, J represents the number of pixel points in the neighborhood of the current sketch line, A_j represents the observed value of the j-th pixel point in the neighborhood of the current sketch line, A_(j,0) represents the estimated value of the j-th pixel point in the sketch line neighborhood under the hypothesis that the current sketch line cannot represent structural information, ln(·) represents the logarithm operation with base e, and A_(j,1) represents the estimated value of the j-th pixel point in the sketch line neighborhood under the hypothesis that the current sketch line can represent structural information;
s114, randomly selecting a number in the range [5,50] as the threshold T;
s115, selecting, among all suggested sketch lines, those with CLG > T and combining them into the sketch map of the SAR image, completing the extraction of the sketch map of the SAR image by the sketch model;
s2, performing regionalization processing on the sketch map of the synthetic aperture radar SAR image by a sketch line regionalization method to obtain a region map of the synthetic aperture radar SAR image comprising an aggregation region, a non-sketch region and a structural region, specifically comprising:
s201, dividing the sketch lines, according to the density of the sketch line segments in the sketch map of the synthetic aperture radar SAR image, into aggregated sketch lines representing aggregated terrain objects, and boundary sketch lines, line-target sketch lines and isolated-target sketch lines representing boundaries, line targets and isolated targets, respectively;
s202, according to a histogram statistic of the density of the sketch line segments, selecting the sketch line segments whose density equals the optimal density as the seed line segment set {E_k, k = 1,2,…,m}, wherein E_k represents any sketch line segment in the seed line segment set, k represents the index of any sketch line segment in the seed line segment set, m represents the total number of seed line segments, and {·} represents the set operation;
s203, taking a line segment in the seed line segment set that has not yet been selected as a base point, and recursively solving for the line segment aggregate from this base point;
s204, constructing a circular structuring element whose radius is the upper bound of the optimal density interval, dilating the line segments in the line segment aggregate with this circular element, then eroding the dilated line segment aggregate from outside to inside, obtaining an aggregation region in units of sketch points on the sketch map;
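The dilation-then-erosion of step S204 is morphological closing with a circular structuring element. A sketch with scipy.ndimage (not part of the claims), where the radius and the toy segment mask are assumptions:

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Circular (disk-shaped) structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

# Toy sketch-point mask: two nearby vertical segments that closing should merge.
mask = np.zeros((40, 40), dtype=bool)
mask[10:30, 12] = True
mask[10:30, 18] = True

r = 4  # assumed upper bound of the optimal density interval
closed = ndimage.binary_erosion(
    ndimage.binary_dilation(mask, structure=disk(r)),
    structure=disk(r))
```

Closing fills the gap between nearby seed segments, which is how separate sketch lines merge into a single aggregation region.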
s205, for the sketch lines representing boundaries, line targets and isolated targets, constructing a 5 × 5 geometric structure window centered on each pixel point of each sketch line, obtaining the structural region;
s206, taking the part of the sketch except the aggregation region and the structural region as a non-sketch region;
s207, combining the aggregation region, the non-sketch region and the structural region on the sketch map to obtain the region map of the synthetic aperture radar SAR image comprising the aggregation region, the non-sketch region and the structural region;
s3, mapping the region map comprising the aggregation region, the non-sketch region and the structural region into the SAR image to obtain the mixed pixel subspace, the homogeneous pixel subspace and the structural pixel subspace of the SAR image;
s4, for each extremely inhomogeneous region in the mixed pixel subspace, using the intensity values of all pixel points in the region and a Mellin-transform parameter estimation method to obtain estimates of the three parameters α, γ, n required by the G0 distribution that the region satisfies;
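One standard way to realize the Mellin-transform estimation of step S4 is the method of log-cumulants (second-kind statistics). The sketch below (not part of the claims) assumes the equivalent number of looks n is known and is not the patent's exact estimator:

```python
import numpy as np
from scipy.special import polygamma, psi
from scipy.optimize import brentq

# Log-cumulant estimation sketch for the G0 intensity model, assuming the
# equivalent number of looks n is known. Relations used:
#   k1 = psi(n) - ln n + ln(gamma) - psi(-alpha)
#   k2 = psi'(n) + psi'(-alpha)
def estimate_g0(intensity, n_looks):
    logs = np.log(intensity)
    k1, k2 = logs.mean(), logs.var()
    target = k2 - polygamma(1, n_looks)            # = psi'(-alpha)
    m_alpha = brentq(lambda m: polygamma(1, m) - target, 1e-3, 1e3)
    gam = np.exp(k1 - psi(n_looks) + np.log(n_looks) + psi(m_alpha))
    return -m_alpha, gam

# Synthetic G0 sample: unit-mean Gamma speckle times inverse-Gamma texture.
rng = np.random.default_rng(1)
n, alpha, gam = 4, -3.0, 2.0
speckle = rng.gamma(n, 1.0 / n, 200000)
texture = gam / rng.gamma(-alpha, 1.0, 200000)
est_alpha, est_gamma = estimate_g0(speckle * texture, n)
```

On the synthetic sample the estimates recover α ≈ −3 and γ ≈ 2, illustrating why log-cumulants are attractive: they are linear in digamma/trigamma terms and avoid heavy-tailed moment estimators.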
s5, constructing a stochastic gradient variational Bayesian network model for the mixed pixel subspace, which comprises the following steps:
intermediate variable from the input layer to the hidden layer of the stochastic gradient variational Bayesian network model:
h_φ = [formula shown as an image in the source]
wherein h_φ represents the intermediate variable from the input layer to the hidden layer of the stochastic gradient variational Bayesian network model, the connection weight from the input layer to the intermediate variable h_φ and its corresponding bias vector are shown as images in the source, m represents the number of neurons in the hidden layer, m = 50, and n represents the number of neurons in the input layer, n = 441;
approximate posterior probability of the stochastic gradient variational Bayesian network model:
q_φ(z|x) = G_0(α_φ, γ_φ)
wherein q_φ(z|x) represents the approximate posterior probability of the stochastic gradient variational Bayesian network model, and G_0(α_φ, γ_φ) represents the G_0 distribution with uniformity α_φ and scale γ_φ; the probability density of the G_0 distribution is
P(I(x,y)) = n^n·Γ(n−α)·I(x,y)^(n−1) / (γ^α·Γ(n)·Γ(−α)·(γ + n·I(x,y))^(n−α))
wherein I(x,y) is the image pixel intensity value, n is the equivalent number of looks, γ is the scale parameter, α is the uniformity, and Γ(x) is the Gamma function, defined on the real domain as:
Γ(x) = ∫₀^(+∞) t^(x−1)·e^(−t) dt
when the equivalent number of looks n = 1, the distribution reduces to the Beta-Prime distribution, with expression:
P_β'(I(x,y); −α, γ) = Γ(1−α)/(γ^α·Γ(−α))·(γ + I(x,y))^(α−1) = −α·γ^(−α)·(γ + I(x,y))^(α−1)
wherein the connection weight from the intermediate variable h_φ of the stochastic gradient variational Bayesian network model to −α_φ, the connection weight from the intermediate variable h_φ to γ_φ, and their corresponding bias vectors are shown as images in the source;
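The G_0 density given above can be checked numerically (an illustrative sketch, not part of the claims); the code verifies that the density integrates to one and that at n = 1 it matches the Beta-Prime special case, with arbitrary parameter values:

```python
import numpy as np
from scipy.special import gammaln
from scipy.integrate import quad

# G0 intensity density, evaluated in log space for numerical stability.
# Parameter values alpha = -3, gamma = 2, n = 4 are arbitrary test choices.
def g0_pdf(I, alpha, gamma, n):
    logp = (n * np.log(n) + gammaln(n - alpha) + (n - 1) * np.log(I)
            - alpha * np.log(gamma) - gammaln(n) - gammaln(-alpha)
            - (n - alpha) * np.log(gamma + n * I))
    return np.exp(logp)

total = quad(lambda I: g0_pdf(I, -3.0, 2.0, 4), 0, np.inf)[0]
reduced = g0_pdf(1.0, -3.0, 2.0, 1)   # n = 1: Beta-Prime special case
```

At I = 1, α = −3, γ = 2, n = 1 the closed Beta-Prime form gives −α·γ^(−α)·(γ+I)^(α−1) = 3·8·3^(−4) = 24/81, which the general density reproduces.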
intermediate variable from the hidden layer to the reconstruction layer of the stochastic gradient variational Bayesian network model:
h_θ = [formula shown as an image in the source]
wherein h_θ represents the intermediate variable from the hidden layer to the reconstruction layer of the stochastic gradient variational Bayesian network model, and the connection weight from the hidden layer to the intermediate variable h_θ and its corresponding bias vector are shown as images in the source;
conditional probability of the stochastic gradient variational Bayesian network model:
p_θ(x|z) = G_0(α_θ, γ_θ)
wherein p_θ(x|z) represents the conditional probability of the stochastic gradient variational Bayesian network model, G_0(α_θ, γ_θ) represents the G_0 distribution with uniformity α_θ and scale γ_θ, and the connection weights from the intermediate variable h_θ of the hidden-to-reconstruction layers to −α_θ and to γ_θ, together with their corresponding bias vectors, are shown as images in the source;
variational lower bound of the stochastic gradient variational Bayesian network model:
L(θ,φ) = −D_KL(q_φ(z|x) || p_θ(z)) + (1/L)·∑_(l=1)^L log p_θ(x|z_l)
wherein L(θ,φ) represents the variational lower bound of the stochastic gradient variational Bayesian network model, φ represents the variational parameters of the stochastic gradient variational Bayesian network model, θ represents the generation parameters of the stochastic gradient variational Bayesian network model, D_KL(q_φ(z|x) || p_θ(z)) represents the relative entropy between q_φ(z|x) and p_θ(z), z represents the hidden layer variable of the stochastic gradient variational Bayesian network model, p_θ(z) represents the prior probability of the hidden layer variable z, ∑ represents the summation operation, L represents the number of Gaussian samples of the hidden layer variable z, log represents the logarithm operation, z_l represents the result of the l-th Gaussian sampling of z, whose value is obtained from a reparameterization formula shown as an image in the source, ⊙ represents the element-wise product, ε_l represents the auxiliary variable of Gaussian sampling, and ε_l ~ N(0, I), i.e. the auxiliary variable of Gaussian sampling obeys the standard normal distribution;
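The Monte-Carlo form of the lower bound with reparameterized sampling can be illustrated in the Gaussian case (a sketch, not part of the claims; the patent's posterior is G_0-distributed, and the Gaussian q, the toy likelihood and L = 1000 samples here are assumptions):

```python
import numpy as np

# Monte-Carlo estimate of the variational lower bound via the
# reparameterization trick, Gaussian case for illustration only.
rng = np.random.default_rng(0)

def lower_bound_estimate(x, mu, sigma, L=1000):
    eps = rng.standard_normal(L)      # auxiliary variable, eps ~ N(0, 1)
    z = mu + sigma * eps              # z_l = mu + sigma (.) eps_l
    # KL between q = N(mu, sigma^2) and the prior p = N(0, 1), closed form
    kl = 0.5 * (mu ** 2 + sigma ** 2 - 1.0 - 2.0 * np.log(sigma))
    # toy conditional likelihood p(x | z) = N(x; z, 1)
    log_lik = -0.5 * ((x - z) ** 2 + np.log(2.0 * np.pi))
    return -kl + log_lik.mean()

lb = lower_bound_estimate(0.5, 0.0, 1.0)
```

By Jensen's inequality the estimate stays below the true log marginal likelihood; the reparameterization moves the randomness into ε so that gradients with respect to μ and σ can flow through z.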
two cases, the equivalent number of looks n = 1 and n ≠ 1, are considered:
when the equivalent number of looks n = 1, the calculation is as follows:
-DKL(qφ(z|x)||pθ(z))=∫Pβ'(z;-α,γ)logPβ'(z;c,γ1)-Pβ'(z;-α,γ)logPβ'(z;-α,γ)dz
wherein the prior p_θ(z) obeys a Beta-Prime distribution with −α = c, γ = γ_1, where c and γ_1 are positive numbers, known quantities derived from the image, and the approximate posterior q_φ(z|x) obeys a Beta-Prime distribution, where z ∈ [a,b], 0 < a < b ≤ 1, representing the normalized image pixel intensity value;
When the equivalent number of looks n ≠ 1, the calculation is as follows:
−D_KL(q_φ(z|x) || p_θ(z)) = [formula shown as an image in the source]
wherein the prior p_θ(z) obeys a G_0 distribution with −α = c, γ = γ_1, where c and γ_1 are positive numbers, known quantities derived from the image; the approximate posterior q_φ(z|x) obeys a G_0 distribution, where z ∈ [a,b], 0 < a < b ≤ 1, representing the normalized image pixel intensity value;
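The KL integral of the n = 1 case can be evaluated by numerical quadrature (a sketch, not part of the claims); this version truncates the Beta-Prime densities to [a, b] and renormalizes them, which is one assumption about how the bounded support is handled:

```python
import numpy as np
from scipy.integrate import quad

# Reduced G0 density at n = 1 (Beta-Prime form), alpha < 0, gamma > 0.
def beta_prime(z, alpha, gamma):
    return -alpha * gamma ** (-alpha) * (gamma + z) ** (alpha - 1.0)

# Negative KL term: integral of q log p - q log q over [a, b], with both
# densities renormalized on the truncated support (an assumption).
def neg_kl(alpha_q, gamma_q, alpha_p, gamma_p, a=0.05, b=1.0):
    zq = quad(lambda z: beta_prime(z, alpha_q, gamma_q), a, b)[0]
    zp = quad(lambda z: beta_prime(z, alpha_p, gamma_p), a, b)[0]
    q = lambda z: beta_prime(z, alpha_q, gamma_q) / zq
    p = lambda z: beta_prime(z, alpha_p, gamma_p) / zp
    return quad(lambda z: q(z) * (np.log(p(z)) - np.log(q(z))), a, b)[0]
```

When prior and posterior coincide the term vanishes, and it is strictly negative otherwise, matching the sign convention of the lower bound.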
s6, learning the characteristics of the mixed pixel subspace, comprising the following steps:
s601, dividing the mixed pixel subspace of the synthetic aperture radar SAR image into mutually non-connected regions according to spatial connectivity; if only one non-connected region exists, executing step S7;
s602, sampling each non-connected region at intervals with a 21 × 21 window to obtain a number of image block samples for each region;
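Step S602's interval sampling can be sketched as non-overlapping 21 × 21 window extraction (not part of the claims; the stride is an assumption, the source only says "at intervals"). Each patch flattens to a 441-dimensional column, matching the 441 × 50 matrices used later:

```python
import numpy as np

# Interval sampling of 21x21 patches; stride = window size is assumed.
def sample_patches(region, size=21, stride=21):
    h, w = region.shape
    patches = [region[i:i + size, j:j + size].ravel()
               for i in range(0, h - size + 1, stride)
               for j in range(0, w - size + 1, stride)]
    return np.stack(patches, axis=1)  # one 441-dim column per patch

img = np.random.default_rng(0).gamma(4.0, 25.0, size=(105, 105))  # toy region
A = sample_patches(img)
```

A 105 × 105 toy region yields 25 columns of dimension 441; in the method, 50 such columns per region form matrix A.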
s603, for each non-connected region, generating a group of random numbers obeying the inhomogeneous-terrain G_0 distribution corresponding to that region, specifically comprising:
the first step: calculating the probability density of the inhomogeneous-terrain G_0 distribution of the synthetic aperture radar SAR image:
P(I(x,y)) = n^n·Γ(n−α)·I(x,y)^(n−1) / (γ^α·Γ(n)·Γ(−α)·(γ + n·I(x,y))^(n−α))
wherein P(I(x,y)) represents the probability density of the inhomogeneous terrain distribution of the SAR image, I(x,y) represents the intensity value of the pixel point with coordinates (x,y), n represents the equivalent number of looks of the SAR image, α represents the shape parameter of the SAR image, γ represents the scale parameter of the SAR image, and Γ(·) represents the Gamma function, whose value is given by:
Γ(u) = ∫₀^(+∞) t^(u−1)·e^(−t) dt
wherein u represents the argument, ∫ represents the integration operation, and t represents the integration variable; 50 image block samples are randomly selected from the mixed pixel subspace region R_i to form a 441 × 50 matrix A;
the second step: using matrix A, a 441 × 50 matrix B is generated through the probability density function of the inhomogeneous-terrain G_0 distribution of the SAR image; the data in matrix B obey the inhomogeneous-terrain G_0 distribution of the synthetic aperture radar SAR image;
s604, for each region, initializing the connection weights of the stochastic gradient variational Bayesian network with the group of random numbers corresponding to that region, obtaining the initialized stochastic gradient variational Bayesian network, specifically comprising:
the first step: taking matrix B as the connection weight from the input layer x of the stochastic gradient variational Bayesian network model to the intermediate variable h_φ;
the second step: randomly selecting 50 rows from matrix B to form a 50 × 50 matrix C, and taking matrix C as the connection weight from the intermediate variable h_φ of the stochastic gradient variational Bayesian network model to −α_φ, as the connection weight from the intermediate variable h_φ to γ_φ, and as the connection weight from the hidden layer z of the stochastic gradient variational Bayesian network model to the intermediate variable h_θ;
the third step: taking the transpose of matrix B as the connection weight from the intermediate variable h_θ of the stochastic gradient variational Bayesian network model to −α_θ, and as the connection weight from the intermediate variable h_θ to γ_θ;
S605, after initializing each non-connected region, using the image block samples as the input layer of the stochastic gradient variational Bayesian network, and training the initialized stochastic gradient variational Bayesian network by the stochastic gradient Bayesian inference method according to the following steps to obtain the trained stochastic gradient variational Bayesian network; step S605 specifically comprises:
the first step: initializing the prior probability of the hidden layer of the stochastic gradient variational Bayesian network model to the G_0 distribution probability and the approximate posterior probability of the stochastic gradient variational Bayesian network model to the G_0 distribution probability, obtaining the analytic expression of the variational lower bound of the stochastic gradient variational Bayesian network model as follows:
(a) when the equivalent number of looks n = 1:
L(θ,φ) = [analytic expression shown as an image in the source]
(b) when the equivalent number of looks n ≠ 1:
L(θ,φ) = [analytic expression shown as an image in the source]
wherein F'm=(1/(α-(m-1))((nb)m-1log(nb+γ1)(nb+γ1)m-1-α-(na)m-1log(na+γ1)(na+γ1)m-1-α-(m-1)F'm-1-G'm-1))
wherein the companion recursion for G'_m is shown as an image in the source;
And secondly, updating generation parameters of the stochastic gradient variation Bayesian network model:
Figure FDA0002203207240000093
wherein, thetat+1Represents the generation parameter theta of the stochastic gradient variation Bayesian network model after the t +1 iterationtRepresenting the generation parameters of the stochastic gradient variation Bayesian network model after the t-th iteration,
Figure FDA0002203207240000101
an operation of obtaining a partial derivative of a parameter theta of L (theta, phi);
the third step: updating the variational parameters of the stochastic gradient variational Bayesian network model:
φ^(t+1) = [update formula shown as an image in the source]
wherein φ^(t+1) represents the variational parameters of the stochastic gradient variational Bayesian network model after the (t+1)-th iteration, φ^t represents the variational parameters of the stochastic gradient variational Bayesian network model after the t-th iteration, and the gradient symbol shown as an image in the source represents the operation of taking the partial derivative of L(θ,φ) with respect to the parameter φ;
the fourth step: judging whether the number of iterations for which the variational lower bound remains unchanged reaches the threshold 100; if so, executing the fifth step; otherwise, executing the second step;
the fifth step: finishing the training of the stochastic gradient variational Bayesian network;
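The alternating updates of the second and third steps are gradient ascent on the lower bound. A toy illustration with a quadratic surrogate objective (not part of the claims; the learning rate and the objective are assumptions, since the update formulas appear only as images in the source):

```python
import numpy as np

# Alternating gradient-ascent updates of the generation parameter theta and
# the variational parameter phi on a toy surrogate lower bound, maximized
# at (theta, phi) = (1, -2). Learning rate 0.1 is an assumed value.
def lower_bound(theta, phi):
    return -(theta - 1.0) ** 2 - (phi + 2.0) ** 2

theta, phi, lr = 0.0, 0.0, 0.1
for _ in range(200):
    grad_theta = -2.0 * (theta - 1.0)   # dL/dtheta
    theta += lr * grad_theta            # second step: update theta
    grad_phi = -2.0 * (phi + 2.0)       # dL/dphi
    phi += lr * grad_phi                # third step: update phi
```

Both parameters converge to the maximizer of the surrogate, mirroring how the method iterates until the variational lower bound stops changing.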
s606, for each mutually non-connected region, taking the weights of the trained stochastic gradient variational Bayesian network, shown as an image in the source, as the feature set of that region;
s7, segmenting SAR image mixed pixel subspace;
s8, extracting line targets by visual semantic rules, and then segmenting the structural pixel subspace by a structural region segmentation method based on a multinomial latent model over geometric structure windows, obtaining the segmentation result of the structural pixel subspace;
s9, segmenting the homogeneous pixel subspace by a homogeneous region segmentation method based on a multinomial latent model with adaptive window selection, obtaining the segmentation result of the homogeneous pixel subspace;
and S10, combining the segmentation results of the mixed pixel subspace, the homogeneous pixel subspace and the structural pixel subspace to obtain the final segmentation result of the SAR image.
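Step S10's merging can be sketched as writing each subspace's labels into one map through its region mask, with offsets keeping the label sets disjoint (not part of the claims; the masks and label maps below are toy assumptions):

```python
import numpy as np

# Merging the three subspace segmentation results into a single label map
# via the region masks from step S3; toy data for illustration.
def merge_segmentations(masks, labels):
    out = np.zeros_like(labels[0])
    offset = 0
    for m, lab in zip(masks, labels):
        out[m] = lab[m] + offset          # keep subspace labels disjoint
        offset += lab.max() + 1
    return out

lab = np.array([[1, 2], [3, 4]])
m1 = np.array([[True, False], [False, False]])   # mixed subspace mask
m2 = np.array([[False, True], [True, False]])    # homogeneous subspace mask
m3 = np.array([[False, False], [False, True]])   # structural subspace mask
out = merge_segmentations([m1, m2, m3], [lab, lab, lab])
```

Because the three masks partition the image, every pixel receives exactly one label, and the offsets prevent label collisions between subspaces.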
CN201710702367.1A 2017-08-16 2017-08-16 Based on G0Distributed random gradient variational Bayesian SAR image segmentation method Active CN107464247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710702367.1A CN107464247B (en) 2017-08-16 2017-08-16 Based on G0Distributed random gradient variational Bayesian SAR image segmentation method


Publications (2)

Publication Number Publication Date
CN107464247A CN107464247A (en) 2017-12-12
CN107464247B true CN107464247B (en) 2021-09-21

Family

ID=60549887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710702367.1A Active CN107464247B (en) 2017-08-16 2017-08-16 Based on G0Distributed random gradient variational Bayesian SAR image segmentation method

Country Status (1)

Country Link
CN (1) CN107464247B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664706B (en) * 2018-04-16 2020-11-03 浙江大学 Semi-supervised Bayesian Gaussian mixture model-based online estimation method for oxygen content of one-stage furnace in ammonia synthesis process
CN108986108B (en) * 2018-06-26 2022-04-19 西安电子科技大学 SAR image sample block selection method based on sketch line segment aggregation characteristics
CN109344837B (en) * 2018-10-22 2022-03-04 西安电子科技大学 SAR image semantic segmentation method based on deep convolutional network and weak supervised learning
CN110108806B (en) * 2019-04-04 2022-03-22 广东电网有限责任公司广州供电局 Transformer oil chromatographic data representation method based on probability information compression

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102903102A (en) * 2012-09-11 2013-01-30 西安电子科技大学 Non-local-based triple Markov random field synthetic aperture radar (SAR) image segmentation method
CN106611422A (en) * 2016-12-30 2017-05-03 西安电子科技大学 Stochastic gradient Bayesian SAR image segmentation method based on sketch structure

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7961975B2 (en) * 2006-07-31 2011-06-14 Stc. Unm System and method for reduction of speckle noise in an image




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant