CN108090913A - Image semantic segmentation method based on object-level Gauss-Markov random field - Google Patents

Image semantic segmentation method based on object-level Gauss-Markov random field

Info

Publication number
CN108090913A
CN108090913A (application CN201711316006.XA); granted as CN108090913B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711316006.XA
Other languages
Chinese (zh)
Other versions
CN108090913B (en)
Inventor
郑晨
姚鸿泰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Qingchen Technology Co ltd
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University filed Critical Henan University
Priority to CN201711316006.XA priority Critical patent/CN108090913B/en
Publication of CN108090913A publication Critical patent/CN108090913A/en
Application granted granted Critical
Publication of CN108090913B publication Critical patent/CN108090913B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/143 Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes an image semantic segmentation method based on an object-level Gauss-Markov random field. The steps are: perform an initial over-segmentation of the pixel-level image to obtain an object-level image and a region adjacency graph, and define a neighborhood system, an observed feature field and a segmentation label field on the region adjacency graph; according to the object-level segmentation label field and the neighborhood system, apply Gauss-Markov modeling to the feature of each region of the observed feature field and the features of its neighborhood, and construct an object-level linear regression equation for each region; build probability models of the feature field and the label field respectively, obtain the posterior distribution of the segmentation label field according to the Bayes criterion, and obtain the final segmentation result according to the maximum a posteriori criterion. The present invention can be used in systems that perform semantic segmentation of images in bulk under complex semantics and high spatial resolution, and greatly improves working efficiency compared with manual inspection.

Description

Image semantic segmentation method based on object-level Gauss-Markov random field
Technical field
The present invention relates to the technical field of image semantic segmentation, and in particular to an image semantic segmentation method based on an object-level Gauss-Markov random field.
Background art
Image semantic segmentation refers to grouping the pixels of an image according to the semantics they express, a process carried out autonomously by a machine.
With the continuous development of modern sensor manufacturing and imaging technology, the spatial resolution of the images to be processed keeps increasing, and the number of acquired images grows exponentially, so manual segmentation is highly inefficient. Earlier pixel-level segmentation methods cannot take spatial information over a larger range into account, which wastes a large amount of information. In recent years, object-level geographic analysis techniques have become a popular way to extract image information; applied to image semantic segmentation, they can exploit spatial information over a larger range. However, they ignore the interactions between region features, so the segmentation accuracy still has room for improvement. An image semantic segmentation method is therefore needed that both makes full use of spatial information and takes the interactions between region features into account.
Summary of the invention
To address the problem that conventional image semantic segmentation methods cannot simultaneously make full use of spatial information and take the interactions between region features into account, the present invention proposes an image semantic segmentation method based on an object-level Gauss-Markov random field, which both ensures full use of spatial information and considers the interactions between region features.
In order to achieve the above object, the technical solution of the present invention is realized as follows: an image semantic segmentation method based on an object-level Gauss-Markov random field, whose steps are as follows.
Step 1: perform an initial over-segmentation of the input pixel-level image to obtain an object-level image composed of over-segmented regions and the corresponding object-level region adjacency graph RAG; on the RAG, define the neighborhood system N^O of the image, the object-level observed feature field Y^O and the object-level segmentation label field X^O.
Step 2: according to the object-level segmentation label field X^O and the neighborhood system N^O, apply Gauss-Markov modeling to the feature of each region r_i of the observed feature field Y^O and the features of its neighborhood, and construct an object-level linear regression equation for each region r_i, i = 1, ..., l.
Step 3: build probability models of the observed feature field Y^O and the segmentation label field X^O respectively, obtain the posterior distribution of the segmentation label field X^O according to the Bayes criterion, update the segmentation iteratively under the maximum a posteriori criterion, and thereby obtain the final segmentation.
Step 1 is implemented as follows (a code sketch of the adjacency-graph construction is given after this list):
1) for the input high-spatial-resolution three-channel image I(R, G, B), define the location index set and the pixel-level observed feature set; assuming the resolution of I(R, G, B) is m × n, we obtain the location index set S = {s_xy = (x, y) | 1 ≤ x ≤ m; 1 ≤ y ≤ n} and the pixel-level observed feature set Y^P = {y_s^P | s ∈ S}, where y_s^P denotes the observed feature value (the R, G and B components) of the pixel at location s, m is the length of the image, n is the width of the image, and (x, y) are the position coordinates of a pixel in the image;
2) perform an over-segmentation of the pixel-level image with the mean-shift method according to the specified minimum area: over-segment the image I(R, G, B) into l regions whose minimum area is s_min, assign a label to each region, and obtain the label matrix L_s = {l_s | s ∈ S}, where the element l_s ∈ {1, ..., l}, s ∈ S; from this, the location index set of the object-level image R = {r_1, r_2, ..., r_l} is obtained, where region r_i = {s | l_s = i};
3) from the over-segmentation, obtain the object-level region adjacency graph G = (R, E), where the location index set R contains the object-level elements, each element representing one over-segmented region, and E = {e_ij | 1 ≤ i, j ≤ l} represents the adjacency relations; the element e_ij is the number of pixels of region r_i adjacent to region r_j, and e_ij ≠ 0 if and only if regions r_i and r_j are adjacent;
4) on the region adjacency graph G, define the object-level observed feature field Y^O = {y_i^O | 1 ≤ i ≤ l} and the object-level segmentation label field X^O = {X_i^O | 1 ≤ i ≤ l}, where y_i^O denotes the observed feature of region r_i and |r_i| denotes the number of pixels in region r_i; X^O is a random field, X_i^O is a random variable with realization x_i^O ∈ K = {1, ..., k}, where K is the set of segmentation classes and k is the pre-specified number of segmentation classes;
5) derive the object-level neighborhood system N^O = {N_i^O | 1 ≤ i ≤ l} from the object-level region adjacency graph G = (R, E), where N_i^O consists of the regions r_j adjacent to r_i, i.e. those with e_ij ≠ 0.
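For illustration only, the following Python/NumPy sketch (an assumption of this write-up, not part of the patent text) builds the adjacency counts e_ij and the neighborhood sets N_i^O from a label matrix Ls produced by any over-segmentation. Here e_ij is approximated by the number of 4-adjacent pixel pairs crossing the boundary between r_i and r_j, and the function name region_adjacency is hypothetical.

```python
# Sketch: build the region adjacency graph G = (R, E) and the neighborhood
# system N^O from an over-segmentation label matrix Ls (values 1..l).
# e_ij is approximated by the number of 4-adjacent pixel pairs on the
# boundary between regions r_i and r_j (so E is symmetric).
import numpy as np

def region_adjacency(Ls: np.ndarray):
    l = int(Ls.max())                       # number of over-segmented regions
    E = np.zeros((l + 1, l + 1), dtype=np.int64)
    # horizontally adjacent pixel pairs with different labels
    a, b = Ls[:, :-1], Ls[:, 1:]
    m = a != b
    np.add.at(E, (a[m], b[m]), 1)
    np.add.at(E, (b[m], a[m]), 1)
    # vertically adjacent pixel pairs with different labels
    a, b = Ls[:-1, :], Ls[1:, :]
    m = a != b
    np.add.at(E, (a[m], b[m]), 1)
    np.add.at(E, (b[m], a[m]), 1)
    E = E[1:, 1:]                           # drop the unused label 0
    # N_i^O returned as lists of 1-based region labels adjacent to r_i
    neighbors = [np.flatnonzero(E[i]) + 1 for i in range(l)]
    return E, neighbors
```

With this, regions r_i and r_j are neighbors exactly when E[i-1, j-1] != 0, matching the definition of N_i^O above.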
Step 2 is implemented as follows:
1) from the location index set R in the region adjacency graph G = (R, E), the number of pixels contained in each over-segmented region is obtained and taken as the area parameter of the object-level element, giving the area matrix RS = {RS_i | 1 ≤ i ≤ l}, where RS_i = |r_i|;
2) let x^O be a realization of the object-level segmentation label field X^O; from x^O, obtain the feature mean and the feature covariance matrix of each class as follows:
(a) given the realization x^O of the object-level segmentation label field, compute the segmentation class of each pixel of the original image, i.e. the pixel-level segmentation label matrix x^P = {x_s | s ∈ S}, where x_s = x_i^O for s ∈ r_i;
(b) compute the feature means μ = {μ_h | 1 ≤ h ≤ k} and the feature covariance matrices Σ = {Σ_h | 1 ≤ h ≤ k} respectively:

\mu_h = \frac{\sum_{i:\,x_i^O = h}\ \sum_{s\in r_i} y_s^P}{\sum_{i:\,x_i^O = h}\ \sum_{s\in r_i} |r_i|},\qquad 1\le h\le k,

\Sigma_h = \frac{\sum_{i:\,x_i^O = h}\ \sum_{s\in r_i}\bigl(y_s^P-\mu_h\bigr)^{T}\bigl(y_s^P-\mu_h\bigr)}{\sum_{i:\,x_i^O = h}\ \sum_{s\in r_i} |r_i|},\qquad 1\le h\le k;
3) for each object-level element r_i, given that its segmentation label is realized as x_i^O, construct the linear regression equation as follows (a code sketch of this computation is given after this paragraph):

y_i^O - \mu_{x_i^O} = \sum_{r_j\in N_i^O}\theta_{ij}\,\bigl(y_j^O-\mu_{x_j^O}\bigr) + e_i,
\qquad
\theta_{ij} = \frac{e_{ij}}{\sum_{r_{j'}\in N_i^O} e_{ij'}}\cdot\frac{|r_j|}{\sum_{r_{j'}\in N_i^O}|r_{j'}|},

where e_i ~ N(0, Σ_h), h = x_i^O, is a Gaussian white-noise term.
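As an illustration of this step, the sketch below (an assumption of this write-up, not the patent's code) computes per-class statistics from a label realization and evaluates the regression weights θ_ij and the residual e_i for one region. The region feature y_i^O is assumed here to be the mean pixel feature of r_i, the class statistics are computed as plain per-pixel means and covariances rather than the exact weighted sums written above, and E and the region sizes come from the previous sketch.

```python
# Sketch: class statistics and the object-level regression residual e_i.
# Yp: (m, n, 3) pixel features; Ls: (m, n) labels 1..l; xO: (l,) labels 1..k.
import numpy as np

def class_statistics(Yp, Ls, xO, k):
    xP = xO[Ls.ravel() - 1]                          # x_s = x_i^O for every pixel s in r_i
    feats = Yp.reshape(-1, Yp.shape[-1]).astype(float)
    # assumes every class keeps at least a few pixels
    mu = np.stack([feats[xP == h].mean(axis=0) for h in range(1, k + 1)])
    Sigma = np.stack([np.cov(feats[xP == h].T, bias=True) for h in range(1, k + 1)])
    return mu, Sigma

def regression_residual(i, yO, xO, mu, E, sizes):
    """Residual e_i of region r_i (0-based index i); assumes r_i has neighbors."""
    nbrs = np.flatnonzero(E[i])                      # 0-based indices of N_i^O
    theta = (E[i, nbrs] / E[i, nbrs].sum()) * (sizes[nbrs] / sizes[nbrs].sum())
    pred = sum(t * (yO[j] - mu[xO[j] - 1]) for t, j in zip(theta, nbrs))
    return (yO[i] - mu[xO[i] - 1]) - pred
```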
Step 3 is implemented as follows:
1) for the object-level observed feature field Y^O, instead of building a joint probability model directly on the observed features, joint modeling is performed on the residual terms of the object-level linear regression equation constructed for each object-level element r_i, which yields the likelihood function of the feature field:

p(Y^O=y^O\mid X^O=x^O)=\prod_{i=1}^{l}\frac{1}{(2\pi)^{3/2}\,\bigl|\Sigma_{x_i^O}\bigr|^{1/2}}\exp\Bigl[-\tfrac{1}{2}\,e_i\,\bigl(\Sigma_{x_i^O}\bigr)^{-1}e_i^{T}\Bigr],
\qquad e_i = y_i^O-\mu_{x_i^O}-\sum_{r_j\in N_i^O}\theta_{ij}\bigl(y_j^O-\mu_{x_j^O}\bigr);
2) build a probability model of the object-level segmentation label field X^O: by the Markov-Gibbs equivalence, the object-level segmentation label field follows a Gibbs distribution, and the prior distribution of the label field is obtained as

P(X^O=x^O)=\frac{1}{Z}\exp\bigl(-U(x^O)\bigr),\qquad U(x^O)\approx\sum_{r_i\in R,\;r_j\in N_i^O}V_2\bigl(x_i^O,x_j^O\bigr),

where Z is the normalization constant, U(x^O) is the energy of the realization x^O of the segmentation field, K is the set of segmentation classes, and V_2(·) is the pairwise clique potential function given by the Potts model, i.e.

V_2\bigl(x_i^O,x_j^O\bigr)=\begin{cases}\beta, & x_i^O\ne x_j^O\\ 0, & x_i^O = x_j^O\end{cases},\qquad i\ne j;
3) the posterior distribution of the label field is obtained by the Bayes formula:

P(X^O=x^O\mid Y^O=y^O)=\frac{P(Y^O=y^O\mid X^O=x^O)\cdot P(X^O=x^O)}{P(Y^O=y^O)}.

Finding the optimal segmentation labels therefore reduces to maximizing the posterior distribution of the segmentation label field X^O, i.e.

\hat{x}^O=\arg\max_{x^O}P(X^O=x^O\mid Y^O=y^O)=\arg\max_{x^O}P(Y^O=y^O\mid X^O=x^O)\cdot P(X^O=x^O).

The segmentation labels are updated by loop iteration, and the final segmentation result is obtained (a code sketch of the local posterior score used in this update follows).
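The sketch below (an illustrative assumption, not the patented implementation) evaluates the local log-posterior score of a candidate label h for one region, combining the Gaussian likelihood of the regression residual with the Potts prior over the labels of its neighbors; it reuses regression_residual from the previous sketch.

```python
# Sketch: local MAP score for assigning class h to region r_i.
import numpy as np

def log_posterior_score(i, h, yO, xO, mu, Sigma, E, sizes, beta=0.5):
    xO_try = xO.copy()
    xO_try[i] = h                                   # candidate label for region r_i
    e_i = regression_residual(i, yO, xO_try, mu, E, sizes)
    S = Sigma[h - 1]
    # log N(e_i; 0, Sigma_h)
    log_lik = -0.5 * (np.log(np.linalg.det(S))
                      + e_i @ np.linalg.solve(S, e_i)
                      + len(e_i) * np.log(2 * np.pi))
    # Potts prior: energy beta for every neighbor whose label differs from h
    nbrs = np.flatnonzero(E[i])
    log_prior = -beta * np.sum(xO[nbrs] != h)
    return log_lik + log_prior

# The MAP update of region r_i picks the argmax over h = 1..k of this score.
```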
The loop iteration is implemented as follows (a code sketch of the whole loop is given after this list):
1) first run a pixel-level MRF method with the classical ICM algorithm to obtain the segmentation class of every pixel, i.e. the pixel-level segmentation field result x^P = {x_s | s ∈ S}; then obtain the initial realization of the object-level segmentation label field as x_i^{O(0)} = Mode{x_s | s ∈ r_i}, where Mode is the mode function;
2) from the realization x^{O(t)} of the object-level segmentation label field at step t, obtain the feature mean \mu_h^{(t)} and the feature covariance \Sigma_h^{(t)} of each class according to

\mu_h^{(t)}=\frac{\sum_{i:\,x_i^{O(t)}=h}\ \sum_{s\in r_i}y_s^P}{\sum_{i:\,x_i^{O(t)}=h}\ \sum_{s\in r_i}|r_i|},\qquad
\Sigma_h^{(t)}=\frac{\sum_{i:\,x_i^{O(t)}=h}\ \sum_{s\in r_i}\bigl(y_s^P-\mu_h\bigr)^{T}\bigl(y_s^P-\mu_h\bigr)}{\sum_{i:\,x_i^{O(t)}=h}\ \sum_{s\in r_i}|r_i|};

3) compute the object-level linear regression equation of each object-level element r_i:

y_i^O-\mu^{(t)}_{x_i^{(t)}}=\sum_{j:\,r_j\in N_i^O}\theta_{ij}\bigl(y_j^O-\mu^{(t)}_{x_j^{(t)}}\bigr)+e_i^{(t)};

4) compute the feature-field probability and the label-field probability of each object and update the segmentation label object by object, specifically:

x_i^{O(t)}=\arg\max_{x_i^O\in K}\;p\bigl(Y_i^O=y_i^O\mid X_i^O=x_i^O,\,y_j^O,\,x_j^{O(t+1)},\,r_j\in N_i^O\bigr)\cdot P\bigl(x_i^O\mid x_j^{O(t)},\,r_j\in N_i^O\bigr).
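Putting the pieces together, the sketch below (an editor's illustration under the same assumptions as the previous sketches, not the patented code) runs the object-level iteration; the pixel-level ICM initialization of the patent is replaced here by a simple nearest-class-mean assignment, plainly a stand-in.

```python
# Sketch: object-level iterative MAP segmentation, reusing class_statistics,
# regression_residual and log_posterior_score from the previous sketches.
import numpy as np

def segment(Yp, Ls, E, k, beta=0.5, n_iter=10, seed=0):
    l = int(Ls.max())
    sizes = np.bincount(Ls.ravel())[1:]                      # |r_i|, 0-based
    feats = Yp.reshape(-1, Yp.shape[-1]).astype(float)
    # region feature y_i^O taken as the mean pixel feature of r_i (assumption)
    yO = np.stack([feats[Ls.ravel() == i + 1].mean(axis=0) for i in range(l)])
    # stand-in pixel-level initialization: assign each pixel to the nearest of
    # k randomly chosen class means (instead of the pixel-level ICM of the patent)
    rng = np.random.default_rng(seed)
    mu0 = feats[rng.choice(len(feats), size=k, replace=False)]
    xP = 1 + np.argmin(((feats[:, None, :] - mu0[None]) ** 2).sum(-1), axis=1)
    # object-level initialization: label of r_i is the mode of its pixel labels
    xO = np.array([np.bincount(xP[Ls.ravel() == i + 1]).argmax() for i in range(l)])
    for _ in range(n_iter):
        mu, Sigma = class_statistics(Yp, Ls, xO, k)          # step 2)
        for i in range(l):                                   # steps 3) and 4), sequential update
            scores = [log_posterior_score(i, h, yO, xO, mu, Sigma, E, sizes, beta)
                      for h in range(1, k + 1)]
            xO[i] = 1 + int(np.argmax(scores))
    return xO                                                # object-level labels in 1..k
```

The pixel-level segmentation is then recovered by assigning x_s = x_i^O to every pixel s in r_i.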
Beneficial effects of the present invention: it provides a complete semantic segmentation method for high-spatial-resolution RGB images; it can be used for batch semantic segmentation of high-spatial-resolution RGB images, with a segmentation efficiency far higher than manual segmentation and higher than most existing object-level segmentation approaches; and directly assigning fixed values to the parameters to be estimated in the linear regression equation is simple and efficient while keeping the accuracy high.
Description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the flow chart of the present invention.
Fig. 2 is the flow chart of the initialization of the present invention.
Fig. 3 is an example of the initialization process of the present invention.
Fig. 4 is the flow chart of the construction of the linear regression equations of the present invention.
Fig. 5 is the flow chart of the joint modeling of the present invention.
Fig. 6 shows the experimental results of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, an image semantic segmentation method based on an object-level Gauss-Markov random field comprises the following steps.
Step 1: perform an initial over-segmentation of the input pixel-level image to obtain an object-level image composed of over-segmented regions and the corresponding object-level region adjacency graph RAG; on the RAG, define the neighborhood system N^O of the image, the object-level observed feature field Y^O and the object-level segmentation label field X^O.
In order to carry out object-level image analysis and to improve the efficiency of the algorithm, an initial over-segmentation must be performed to obtain the region adjacency graph RAG. The RAG is derived from the spatial relations between the over-segmented regions of the object-level image. The method used to over-segment the image initially is the mean-shift method with a minimum-area factor. Based on the pixel-level image features, the mean-shift method involving a minimum-area parameter (the number of pixels contained in an over-segmented region) yields the object-level image representation, from which the object-level image features are finally obtained. As shown in Fig. 2, the specific implementation steps are as follows:
1) for the input high-spatial-resolution three-channel image I(R, G, B), define the location index set and the pixel-level observed feature set; assuming the resolution of I(R, G, B) is m × n, the location index set S = {s_xy = (x, y) | 1 ≤ x ≤ m; 1 ≤ y ≤ n} and the pixel-level observed feature set Y^P = {y_s^P | s ∈ S} are obtained, where y_s^P denotes the observed feature value (the R, G and B components) of the pixel at location s, m is the length of the image, n is the width of the image, and (x, y) are the position coordinates of a pixel in the image.
2) perform an over-segmentation of the pixel-level image with the mean-shift method according to the specified minimum area: over-segment the image I(R, G, B) into l regions whose minimum area is s_min, assign a label to each region, and obtain the label matrix L_s = {l_s | s ∈ S}, where the element l_s ∈ {1, ..., l}, s ∈ S; from this, the location index set of the object-level image R = {r_1, r_2, ..., r_l} is obtained, where region r_i = {s | l_s = i}. After the result is converted to gray scale as shown in Fig. 3(a), the lines in the figure are the boundaries of the over-segmentation.
3) from the over-segmentation, obtain the object-level region adjacency graph G = (R, E), where the location index set R contains the object-level elements, each element representing one over-segmented region, and E = {e_ij | 1 ≤ i, j ≤ l} represents the adjacency relations; the element e_ij is the number of pixels of region r_i adjacent to region r_j, and e_ij ≠ 0 if and only if regions r_i and r_j are adjacent.
4) on the region adjacency graph G, define the object-level observed feature field Y^O = {y_i^O | 1 ≤ i ≤ l} and the object-level segmentation label field X^O = {X_i^O | 1 ≤ i ≤ l}, where y_i^O denotes the observed feature of region r_i and |r_i| denotes the number of pixels in region r_i; X^O is a random field and X_i^O is a random variable representing the segmentation class of the over-segmented region r_i, with realization x_i^O ∈ K = {1, ..., k}, where K is the set of segmentation classes and k is the pre-specified number of segmentation classes.
5) derive the object-level neighborhood system N^O = {N_i^O | 1 ≤ i ≤ l} from the object-level region adjacency graph G = (R, E), where N_i^O consists of the regions r_j adjacent to r_i (e_ij ≠ 0). A part of the rectangular box in Fig. 3(a) is enlarged to obtain Fig. 3(b), and the neighborhood of each region in Fig. 3(b) is marked as in Fig. 3(c). (A code sketch of the over-segmentation initialization follows this list.)
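For illustration only, the sketch below (an assumption of this write-up) produces an initial over-segmentation with a minimum-area constraint. skimage's quickshift, a mode-seeking method related to mean shift, stands in for the mean-shift over-segmentation named in the patent, and regions smaller than s_min are merged into the adjacent region sharing the longest boundary using region_adjacency() from the earlier sketch.

```python
# Sketch: initial over-segmentation with a minimum-area constraint (stand-in).
import numpy as np
from skimage.segmentation import quickshift

def _relabel(seg):
    """Map arbitrary integer labels to consecutive labels 1..l."""
    _, inv = np.unique(seg.ravel(), return_inverse=True)
    return inv.reshape(seg.shape) + 1

def initial_oversegmentation(img_rgb, s_min=50, kernel_size=5, max_dist=10):
    Ls = _relabel(quickshift(img_rgb, kernel_size=kernel_size, max_dist=max_dist))
    E, _ = region_adjacency(Ls)                      # from the earlier sketch
    sizes = np.bincount(Ls.ravel())                  # sizes[label]; index 0 unused
    for lab in range(1, len(sizes)):                 # single greedy pass; E is not updated
        if 0 < sizes[lab] < s_min:
            row = E[lab - 1]
            target = int(np.argmax(row)) + 1         # neighbor with longest shared boundary
            if row[target - 1] > 0:
                Ls[Ls == lab] = target
                sizes[target] += sizes[lab]
                sizes[lab] = 0
    return _relabel(Ls)                              # label matrix with values 1..l
```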
Step 2: according to the object-level segmentation label field X^O and the neighborhood system N^O, apply Gauss-Markov modeling to the feature of each region r_i of the observed feature field Y^O and the features of its neighborhood, and construct an object-level linear regression equation for each region r_i, i = 1, ..., l.
The object-level linear regression equation uses the sizes of the object-level elements and the boundary lengths as its parameters, and a regression equation is built for each object-level element. As shown in Fig. 4, the specific steps are as follows:
1) from the location index set R in the region adjacency graph G = (R, E), the number of pixels contained in each over-segmented region is obtained and taken as the area parameter of the object-level element, giving the area matrix RS = {RS_i | 1 ≤ i ≤ l}, where RS_i = |r_i|.
2) let x^O be a realization of the object-level segmentation label field X^O; from x^O, obtain the feature mean and the feature covariance matrix of each class as follows:
(a) given the realization x^O of the object-level segmentation label field, compute the segmentation class of each pixel of the original image, i.e. the pixel-level segmentation label matrix x^P = {x_s | s ∈ S}, where x_s = x_i^O for s ∈ r_i;
(b) compute the feature means μ = {μ_h | 1 ≤ h ≤ k} and the feature covariance matrices Σ = {Σ_h | 1 ≤ h ≤ k} respectively, using the formulas given in the summary above.
3) for each object-level element r_i, given that its segmentation label is realized as x_i^O, construct the linear regression equation as given above, where, for ease of computation, e_i ~ N(0, Σ_h), h = x_i^O, is assumed to be Gaussian white noise.
Step 3: build probability models of the observed feature field Y^O and the segmentation label field X^O respectively, obtain the posterior distribution of the segmentation label field X^O according to the Bayes criterion, update the segmentation iteratively under the maximum a posteriori criterion, and thereby obtain the final segmentation.
The probabilistic modeling consists of constructing a multivariate normal distribution of the observed feature field Y^O from the error terms of the object-level linear regression equations, and constructing the Gibbs distribution of the segmentation label field X^O with the Potts model. The final segmentation result is obtained by updating the segmentation iteratively using the Gibbs distribution and outputting the converged solution. As shown in Fig. 5, the concrete operations are as follows:
1) for the object-level observed feature field Y^O, instead of building a joint probability model directly on the observed features, joint modeling is performed on the residual terms of the object-level linear regression equation constructed for each object-level element r_i, which yields the likelihood function of the feature field given above.
2) build a probability model of the object-level segmentation label field X^O: since the label field has the Markov property, by the Markov-Gibbs equivalence it follows a Gibbs distribution, and its prior distribution is as given above, where Z is the normalization constant, U(x^O) is the energy of the realization x^O, and V_2(·) is the clique potential function provided by the Potts model.
3) the posterior distribution of the label field is obtained by the Bayes formula given above, so finding the optimal segmentation labels reduces to maximizing the posterior distribution of the segmentation label field X^O.
The segmentation labels are updated by loop iteration, and the result is finally obtained. The specific loop iteration process is as follows:
1) first run a pixel-level MRF (Markov random field) method with the classical ICM (iterated conditional modes) algorithm to obtain the segmentation class of every pixel, i.e. the pixel-level segmentation field result x^P = {x_s | s ∈ S}; then obtain the initial realization of the object-level segmentation label field, that is, for each over-segmented region r_i the segmentation label is the mode of the segmentation labels of its interior pixels.
2) from the realization x^{O(t)} of the object-level segmentation label field at step t, obtain the feature mean \mu_h^{(t)} and the feature covariance \Sigma_h^{(t)} of each class according to the formulas given above.
3) compute the object-level linear regression equation of each object-level element r_i as given above.
4) compute the feature-field probability and the label-field probability of each object, and update the segmentation label object by object as given above.
The operating platform of the present invention is an Intel Core CPU, 4 GB RAM, 64-bit Windows 10 and MATLAB R2015a. The color image of aerial image 1024_1 is shown in Fig. 6(a1) (the color image is shown in gray scale), and the manual ground-truth segmentation is shown in Fig. 6(a2). Applying the ICM method to image 1024_1 with β = 0.5 gives the segmentation shown in Fig. 6(a3); the GMRF method with β = 0.5 gives Fig. 6(a4); the MRMRF method with a three-level "Haar" wavelet decomposition and β = 0.5 gives Fig. 6(a5); the OMRF method with s = 256 and β = 0.5 gives Fig. 6(a6); and the method of the present invention (OGMRF-RC) with s = 256 and β = 0.5 gives Fig. 6(a7). The color image of aerial image 1024_2 is shown in Fig. 6(b1) and its manual ground-truth segmentation in Fig. 6(b2). For image 1024_2, ICM with β = 0.3 gives Fig. 6(b3); GMRF with β = 0.3 gives Fig. 6(b4); MRMRF with a three-level "Haar" wavelet decomposition and β = 0.3 gives Fig. 6(b5); OMRF with s = 144 and β = 0.3 gives Fig. 6(b6); and the present invention (OGMRF-RC) with s = 144 and β = 0.3 gives Fig. 6(b7). The Kappa coefficients of the segmentation results of aerial images 1024_1 and 1024_2 are given in Table 1, and the overall accuracy (OA) of the segmentation results is given in Table 2 (a sketch of how these two metrics are computed follows the tables).
Table 1. Kappa coefficients of the segmentation results
Table 2. Overall accuracy (OA) of the segmentation results
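For reference, the sketch below (not part of the patent) computes the overall accuracy and the Kappa coefficient reported in Tables 1 and 2 from a predicted label map and the manual ground-truth map via the confusion matrix.

```python
# Sketch: overall accuracy (OA) and Kappa coefficient of a segmentation result.
import numpy as np

def oa_and_kappa(pred, truth, k):
    pred, truth = np.asarray(pred).ravel(), np.asarray(truth).ravel()
    cm = np.zeros((k, k), dtype=np.int64)
    np.add.at(cm, (truth - 1, pred - 1), 1)               # labels assumed to be 1..k
    n = cm.sum()
    oa = np.trace(cm) / n                                 # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa
```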
As can be seen from the data in Fig. 6 and Tables 1-2, the segmentation accuracy of the present invention is the best. Aerial images contain rich texture information: the spectral values of sub-objects within the same class differ considerably, while sub-objects of different classes may have similar spectral values. For example, in the urban part, roofs and gardens have different spectral values, whereas the trees of the urban part and of the forest part have similar spectral values. For these reasons, the three pixel-based methods produce many fragmented misclassifications. Compared with the pixel-based methods, the object-based methods treat the over-segmented regions as elementary units and therefore improve the segmentation accuracy significantly. The OMRF method models the feature field with the probability distribution of the object features, whereas the OGMRF-RC method models the feature field with the probability distribution of the residual terms of the object-level linear regression equations. The advantage of the OGMRF-RC method is that it reduces the influence of within-class spectral variation on the segmentation during the iterations. For example, in the upper half of Fig. 6(a7), the large bare area and the forest are correctly assigned to the idle-land class, instead of being assigned to the house class as in the OMRF result of Fig. 6(a6).
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (5)

1. An image semantic segmentation method based on an object-level Gauss-Markov random field, characterized in that its steps are as follows:
Step 1: perform an initial over-segmentation of the input pixel-level image to obtain an object-level image composed of over-segmented regions and the corresponding object-level region adjacency graph RAG; on the RAG, define the neighborhood system N^O of the image, the object-level observed feature field Y^O and the object-level segmentation label field X^O;
Step 2: according to the object-level segmentation label field X^O and the neighborhood system N^O, apply Gauss-Markov modeling to the feature of each region r_i of the observed feature field Y^O and the features of its neighborhood, and construct an object-level linear regression equation for each region r_i, i = 1, ..., l;
Step 3: build probability models of the observed feature field Y^O and the segmentation label field X^O respectively, obtain the posterior distribution of the segmentation label field X^O according to the Bayes criterion, update the segmentation iteratively under the maximum a posteriori criterion, and thereby obtain the final segmentation.
2. The image semantic segmentation method based on an object-level Gauss-Markov random field according to claim 1, characterized in that step 1 is implemented as follows:
1) for the input high-spatial-resolution three-channel image I(R, G, B), define the location index set and the pixel-level observed feature set; assuming the resolution of I(R, G, B) is m × n, the location index set S = {s_xy = (x, y) | 1 ≤ x ≤ m; 1 ≤ y ≤ n} and the pixel-level observed feature set Y^P = {y_s^P | s ∈ S} are obtained, where y_s^P denotes the observed feature value (the R, G and B components) of the pixel at location s, m is the length of the image, n is the width of the image, and (x, y) are the position coordinates of a pixel in the image;
2) perform an over-segmentation of the pixel-level image with the mean-shift method according to the specified minimum area: over-segment the image I(R, G, B) into l regions whose minimum area is s_min, assign a label to each region, and obtain the label matrix L_s = {l_s | s ∈ S}, where the element l_s ∈ {1, ..., l}, s ∈ S; from this, the location index set of the object-level image R = {r_1, r_2, ..., r_l} is obtained, where region r_i = {s | l_s = i};
3) from the over-segmentation, obtain the object-level region adjacency graph G = (R, E), where the location index set R contains the object-level elements, each element representing one over-segmented region, and E = {e_ij | 1 ≤ i, j ≤ l} represents the adjacency relations; the element e_ij is the number of pixels of region r_i adjacent to region r_j, and e_ij ≠ 0 if and only if regions r_i and r_j are adjacent;
4) on the region adjacency graph G, define the object-level observed feature field Y^O = {y_i^O | 1 ≤ i ≤ l} and the object-level segmentation label field X^O = {X_i^O | 1 ≤ i ≤ l}, where y_i^O denotes the observed feature of region r_i and |r_i| denotes the number of pixels in region r_i; X^O is a random field, X_i^O is a random variable with realization x_i^O ∈ K = {1, ..., k}, where K is the set of segmentation classes and k is the pre-specified number of segmentation classes;
5) derive the object-level neighborhood system N^O = {N_i^O | 1 ≤ i ≤ l} from the object-level region adjacency graph G = (R, E), where N_i^O consists of the regions r_j adjacent to r_i (e_ij ≠ 0).
3. The image semantic segmentation method based on an object-level Gauss-Markov random field according to claim 1, characterized in that step 2 is as follows:
1) from the location index set R in the region adjacency graph G = (R, E), the number of pixels contained in each over-segmented region is obtained and taken as the area parameter of the object-level element, giving the area matrix RS = {RS_i | 1 ≤ i ≤ l}, where RS_i = |r_i|;
2) let x^O be a realization of the object-level segmentation label field X^O; from x^O, obtain the feature mean and the feature covariance matrix of each class as follows:
(a) given the realization x^O of the object-level segmentation label field, compute the segmentation class of each pixel of the original image, i.e. the pixel-level segmentation label matrix x^P = {x_s | s ∈ S}, where x_s = x_i^O for s ∈ r_i;
(b) compute the feature means μ = {μ_h | 1 ≤ h ≤ k} and the feature covariance matrices Σ = {Σ_h | 1 ≤ h ≤ k} respectively:

\mu_h = \frac{\sum_{i:\,x_i^O = h}\ \sum_{s\in r_i} y_s^P}{\sum_{i:\,x_i^O = h}\ \sum_{s\in r_i} |r_i|},\qquad 1\le h\le k,

\Sigma_h = \frac{\sum_{i:\,x_i^O = h}\ \sum_{s\in r_i}\bigl(y_s^P-\mu_h\bigr)^{T}\bigl(y_s^P-\mu_h\bigr)}{\sum_{i:\,x_i^O = h}\ \sum_{s\in r_i} |r_i|},\qquad 1\le h\le k;
3) for each object-level element r_i, given that its segmentation label is realized as x_i^O, construct the linear regression equation as follows:
y_i^O - \mu_{x_i^O} = \sum_{r_j\in N_i^O}\theta_{ij}\,\bigl(y_j^O-\mu_{x_j^O}\bigr) + e_i,

\theta_{ij} = \frac{e_{ij}}{\sum_{r_{j'}\in N_i^O} e_{ij'}}\cdot\frac{|r_j|}{\sum_{r_{j'}\in N_i^O}|r_{j'}|};
where e_i ~ N(0, \Sigma_{x_i^O}) is a Gaussian white-noise term.
4. The image semantic segmentation method based on an object-level Gauss-Markov random field according to claim 1, characterized in that the specific method of step 3 is as follows:
1) for the object-level observed feature field Y^O, instead of building a joint probability model directly on the observed features, joint modeling is performed on the residual terms of the object-level linear regression equation constructed for each object-level element r_i, which yields the likelihood function of the feature field, i.e.:
p\bigl(Y^O=y^O\mid X^O=x^O\bigr)=\prod_{i=1}^{l}p\bigl(Y_i^O=y_i^O\mid X_i^O=x_i^O,\,y_j^O,\,x_j^O,\,r_j\in N_i^O\bigr)

=\prod_{i=1}^{l}\frac{1}{(2\pi)^{3/2}\bigl|\Sigma_{x_i^O}\bigr|^{1/2}}\exp\Bigl[-\frac{1}{2}\Bigl(y_i^O-\mu_{x_i^O}-\sum_{r_j\in N_i^O}\theta_{ij}\bigl(y_j^O-\mu_{x_j^O}\bigr)\Bigr)\bigl(\Sigma_{x_i^O}\bigr)^{-1}\Bigl(y_i^O-\mu_{x_i^O}-\sum_{r_j\in N_i^O}\theta_{ij}\bigl(y_j^O-\mu_{x_j^O}\bigr)\Bigr)^{T}\Bigr]

=\prod_{i=1}^{l}\frac{1}{(2\pi)^{3/2}\bigl|\Sigma_{x_i^O}\bigr|^{1/2}}\exp\Bigl[-\frac{1}{2}\,e_i\bigl(\Sigma_{x_i^O}\bigr)^{-1}e_i^{T}\Bigr];
2) build a probability model of the object-level segmentation label field X^O; by the Markov-Gibbs equivalence, the object-level segmentation label field follows a Gibbs distribution, and the prior distribution of the label field is obtained as follows:
P\bigl(X^O=x^O\bigr)=\frac{1}{Z}\exp\bigl(-U(x^O)\bigr)

Z=\sum_{x^O}U\bigl(x^O\bigr)

U\bigl(x^O\bigr)\approx\sum_{r_i\in R,\;j\in N_i^O}V_2\bigl(x_i^O,x_j^O\bigr)

P\bigl(X_i^O=x_i^O\mid X_j^O=x_j^O,\,r_j\in R-\{r_i\}\bigr)=P\bigl(X_i^O=x_i^O\mid X_j^O=x_j^O,\,r_j\in N_i^O\bigr)

P\bigl(X^O=x^O\bigr)\approx\prod_{i=1}^{l}P\bigl(X_i^O=x_i^O\mid X_j^O=x_j^O,\,r_j\in N_i^O\bigr)

P\bigl(X_i^O=x_i^O\mid X_j^O=x_j^O,\,r_j\in N_i^O\bigr)=\frac{\exp\bigl(U(\{x_i^O\}\cup\{x_j^O,\,r_j\in N_i^O\})\bigr)}{\sum_{x_i^O\in K}\exp\bigl(U(\{x_i^O\}\cup\{x_j^O,\,r_j\in N_i^O\})\bigr)}=\frac{\exp\bigl(\sum_{(i,j):\,r_j\in N_i^O}V_2(x_i^O,x_j^O)\bigr)}{\sum_{x_i^O\in K}\exp\bigl(\sum_{(i,j):\,r_j\in N_i^O}V_2(x_i^O,x_j^O)\bigr)};
where Z is the normalization constant, U(x^O) is the energy of the realization x^O of the segmentation field, K is the set of segmentation classes, and V_2(·) is the clique potential function, given by the Potts model, i.e.:
V_2\bigl(x_i^O,x_j^O\bigr)=\begin{cases}\beta, & x_i^O\ne x_j^O\\ 0, & x_i^O = x_j^O\end{cases},\qquad i\ne j;
3) the posterior distribution of the label field is obtained by the Bayes formula:
P\bigl(X^O=x^O\mid Y^O=y^O\bigr)=\frac{P\bigl(Y^O=y^O\mid X^O=x^O\bigr)\cdot P\bigl(X^O=x^O\bigr)}{P\bigl(Y^O=y^O\bigr)};
Finding the optimal segmentation labels therefore reduces to maximizing the posterior distribution of the segmentation label field X^O, i.e.:
\hat{x}^O=\arg\max_{x^O}P\bigl(X^O=x^O\mid Y^O=y^O\bigr)=\arg\max_{x^O}P\bigl(Y^O=y^O\mid X^O=x^O\bigr)\cdot P\bigl(X^O=x^O\bigr);
The segmentation labels are updated by loop iteration, and the segmentation result is finally obtained.
5. The image semantic segmentation method based on an object-level Gauss-Markov random field according to claim 4, characterized in that the loop iteration is implemented as follows:
1) first run a pixel-level MRF method with the classical ICM algorithm to obtain the segmentation class of every pixel, i.e. the pixel-level segmentation field result x^P = {x_s | s ∈ S}; then obtain the initial realization of the object-level segmentation label field as x_i^{O(0)} = Mode{x_s | s ∈ r_i}, where Mode is the mode function;
2) from the realization x^{O(t)} of the object-level segmentation label field at step t, obtain the feature mean \mu_h^{(t)} and the feature covariance \Sigma_h^{(t)} of each class according to the following formulas:
\mu_h^{(t)}=\frac{\sum_{i:\,x_i^{O(t)}=h}\ \sum_{s\in r_i}y_s^P}{\sum_{i:\,x_i^{O(t)}=h}\ \sum_{s\in r_i}|r_i|},

\Sigma_h^{(t)}=\frac{\sum_{i:\,x_i^{O(t)}=h}\ \sum_{s\in r_i}\bigl(y_s^P-\mu_h\bigr)^{T}\bigl(y_s^P-\mu_h\bigr)}{\sum_{i:\,x_i^{O(t)}=h}\ \sum_{s\in r_i}|r_i|};
3) compute the object-level linear regression equation of each object-level element r_i:
y_i^O-\mu^{(t)}_{x_i^{(t)}}=\sum_{j:\,r_j\in N_i^O}\theta_{ij}\bigl(y_j^O-\mu^{(t)}_{x_j^{(t)}}\bigr)+e_i^{(t)};
4) compute the feature-field probability and the label-field probability of each object, and update the segmentation label object by object, specifically:
x_i^{O(t)}=\arg\max_{x_i^O\in K}\;p\bigl(Y_i^O=y_i^O\mid X_i^O=x_i^O,\,y_j^O,\,x_j^{O(t+1)},\,r_j\in N_i^O\bigr)\cdot P\bigl(x_i^O\mid x_j^{O(t)},\,r_j\in N_i^O\bigr).
CN201711316006.XA 2017-12-12 2017-12-12 Image semantic segmentation method based on object-level Gauss-Markov random field Active CN108090913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711316006.XA CN108090913B (en) 2017-12-12 2017-12-12 Image semantic segmentation method based on object-level Gauss-Markov random field

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711316006.XA CN108090913B (en) 2017-12-12 2017-12-12 Image semantic segmentation method based on object-level Gauss-Markov random field

Publications (2)

Publication Number Publication Date
CN108090913A true CN108090913A (en) 2018-05-29
CN108090913B CN108090913B (en) 2020-06-19

Family

ID=62173916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711316006.XA Active CN108090913B (en) 2017-12-12 2017-12-12 Image semantic segmentation method based on object-level Gauss-Markov random field

Country Status (1)

Country Link
CN (1) CN108090913B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830319A (en) * 2018-06-12 2018-11-16 北京合众思壮科技股份有限公司 A kind of image classification method and device
CN109615637A (en) * 2019-01-16 2019-04-12 中国科学院地理科学与资源研究所 A kind of improved remote sensing image Hybrid Techniques
CN110136143A (en) * 2019-05-16 2019-08-16 河南大学 Geneva based on ADMM algorithm multiresolution remote sensing image segmentation method off field
CN111210433A (en) * 2019-04-16 2020-05-29 河南大学 Markov field remote sensing image segmentation method based on anisotropic potential function

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295236A (en) * 2013-05-29 2013-09-11 湘潭大学 Method for building Markov multi-feature random field model and technology for segmenting brain MR (magnetic resonance) images on basis of model
CN106600611A (en) * 2016-12-23 2017-04-26 西安电子科技大学 SAR image segmentation method based on sparse triple Markov field
CN106951830A (en) * 2017-02-23 2017-07-14 北京联合大学 A kind of many object marking methods of image scene constrained based on priori conditions
CN107180434A (en) * 2017-05-23 2017-09-19 中国地质大学(武汉) Polarization SAR image segmentation method based on super-pixel and fractal net work evolution algorithmic


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chen Zheng, et al., "Semantic Segmentation of Remote Sensing Imagery Using Object-Based Markov Random Field Model With Region Penalties," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing *
Yang Yong, et al., "Segmentation of SAR imagery using the Gaussian Markov random field model," Proceedings of the 7th International Conference on Signal Processing *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830319A (en) * 2018-06-12 2018-11-16 北京合众思壮科技股份有限公司 A kind of image classification method and device
CN108830319B (en) * 2018-06-12 2022-09-16 北京合众思壮科技股份有限公司 Image classification method and device
CN109615637A (en) * 2019-01-16 2019-04-12 中国科学院地理科学与资源研究所 A kind of improved remote sensing image Hybrid Techniques
CN111210433A (en) * 2019-04-16 2020-05-29 河南大学 Markov field remote sensing image segmentation method based on anisotropic potential function
CN111210433B (en) * 2019-04-16 2023-03-03 河南大学 Markov field remote sensing image segmentation method based on anisotropic potential function
CN110136143A (en) * 2019-05-16 2019-08-16 河南大学 Geneva based on ADMM algorithm multiresolution remote sensing image segmentation method off field

Also Published As

Publication number Publication date
CN108090913B (en) 2020-06-19

Similar Documents

Publication Publication Date Title
Tyleček et al. Spatial pattern templates for recognition of objects with regular structure
CN108573276B (en) Change detection method based on high-resolution remote sensing image
Alshehhi et al. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images
CN104915636B (en) Remote sensing image road recognition methods based on multistage frame significant characteristics
CN103049763B (en) Context-constraint-based target identification method
CN108596055B (en) Airport target detection method of high-resolution remote sensing image under complex background
Ta et al. Graph-based tools for microscopic cellular image segmentation
CN108090913A (en) A kind of image, semantic dividing method based on object level Gauss-Markov random fields
CN103198333B (en) A kind of automatic semantic marker method of high-resolution remote sensing image
CN102496034B (en) High-spatial resolution remote-sensing image bag-of-word classification method based on linear words
CN107909015A Hyperspectral image classification method based on convolutional neural networks and spatial-spectral information fusion
Liu et al. Multiscale road centerlines extraction from high-resolution aerial imagery
CN102096821A (en) Number plate identification method under strong interference environment on basis of complex network theory
CN103426158B Method for change detection in two-phase remote sensing images
CN110163239A Weakly supervised image semantic segmentation method based on superpixels and conditional random field
Merabet et al. Building roof segmentation from aerial images using a line-and region-based watershed segmentation technique
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN106846322A SAR image segmentation method based on curvelet filters and convolutional structure learning
Masouleh et al. Fusion of deep learning with adaptive bilateral filter for building outline extraction from remote sensing imagery
CN105956610B Remote sensing image terrain classification method based on a multi-layer coding structure
CN107358249A Hyperspectral image classification method based on label consistency and Fisher discriminative dictionary learning
Weinmann et al. A hybrid semantic point cloud classification-segmentation framework based on geometric features and semantic rules
CN104346814A SAR (synthetic aperture radar) image segmentation method based on hierarchical visual semantics
Ouzounis et al. Interactive collection of training samples from the max-tree structure
CN102737232B (en) Cleavage cell recognition method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240117

Address after: Building 43, Zone A, Energy Conservation and Environmental Protection Innovation Park, No. 199 Hongwu Avenue, Tangqiao Town, Zhangjiagang City, Jiangsu Province, 215000

Patentee after: Suzhou Qingchen Technology Co.,Ltd.

Address before: No.85, Minglun street, Shunhe District, Kaifeng City, Henan Province

Patentee before: Henan University

TR01 Transfer of patent right