CN107527350A - Solid waste object segmentation method for images with degraded visual features - Google Patents

Solid waste object segmentation method for images with degraded visual features

Info

Publication number
CN107527350A
CN107527350A
Authority
CN
China
Prior art keywords
pixel
super-pixel
region
Prior art date
Legal status
Granted
Application number
CN201710559875.9A
Other languages
Chinese (zh)
Other versions
CN107527350B (en)
Inventor
刘盛 (Liu Sheng)
王超 (Wang Chao)
冯缘 (Feng Yuan)
尹科杰 (Yin Kejie)
陈胜勇 (Chen Shengyong)
Current Assignee
Hangzhou Shishang Technology Co ltd
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201710559875.9A priority Critical patent/CN107527350B/en
Publication of CN107527350A publication Critical patent/CN107527350A/en
Application granted granted Critical
Publication of CN107527350B publication Critical patent/CN107527350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/11 Region-based segmentation
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 7/0004 Industrial image inspection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/187 Segmentation; edge detection involving region growing, region merging or connected component labelling
    • G06T 7/194 Segmentation involving foreground-background separation
    • G06T 2207/10024 Color image
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20116 Active contour; active surface; snakes
    • G06T 2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A solid waste object segmentation method suitable for images with degraded visual features, relating mainly to fields such as robot vision and image segmentation. Because of degraded visual features and the adhesion and occlusion of solid waste objects, traditional image segmentation algorithms can hardly achieve high-precision segmentation results. The present invention obtains a background model through depth background modeling and compares the background model with the solid waste point cloud to extract a foreground mask. Local masks are extracted from the foreground mask, converting the segmentation of the whole image into multiple local mask segmentation problems. Within each local mask, adhering and occluded objects are separated by extracting fuzzy (confusion) regions, and the confusion regions are finally relabeled to obtain a high-precision segmentation result. The present invention achieves high segmentation precision, can effectively segment solid waste objects with severe color degradation, and its segmentation of adhering and occluded solid waste objects is also very satisfactory.

Description

Solid waste object segmentation method for images with degraded visual features
Technical field
The present invention relates to technical fields such as robot vision and image segmentation, and in particular to the segmentation of solid waste objects in images with degraded visual features.
Background art
Traditional image segmentation algorithms use color and contour features. However, industrial environments are rather complex: the conveyor belt surface is covered with dust, and the dust particles on the surfaces of solid waste objects cause severe degradation of visual features; in addition, solid waste objects adhere to and occlude one another. All of this strongly affects two-dimensional image segmentation, so traditional image segmentation methods are not suitable for such industrial scenes.
Summary of the invention
To solve the segmentation difficulties caused by degraded visual features, object adhesion and object occlusion, the present invention provides a solid waste object segmentation method for severely degraded visual features. The present invention obtains a background model through depth background modeling and compares the background model with the solid waste point cloud to extract a foreground mask. The mask is a binary map in which pixels of interest are set to 255 and the remaining pixels are set to 0. Each connected local mask in the foreground mask is segmented by extracting fuzzy (confusion) regions, and the confusion regions are finally relabeled to obtain a high-precision segmentation result.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A solid waste object segmentation method for images with degraded visual features, the segmentation method comprising the following steps:
1) A background depth Gaussian mixture model is established from the depth information of a series of background point clouds. Each pixel in the image is modeled by a Gaussian mixture distribution, and the probability that the depth of a pixel equals d is expressed by formula (1.1):

$$p(d)=\sum_{j=1}^{K}w_j\,\eta(d;\Theta_j),\qquad(1.1)$$

where $w_j$ is the weight of the j-th Gaussian distribution with $\sum_{j=1}^{K}w_j=1$, K is the total number of Gaussian distributions, and $\eta(d;\Theta_j)$ is the j-th Gaussian distribution, expressed by formula (1.2):

$$\eta(d;\Theta_k)=\eta(d;\mu_k,\Sigma_k)=\frac{1}{\sqrt{2\pi\lvert\Sigma_k\rvert}}\,e^{-\frac{1}{2}(d-\mu_k)^{T}\Sigma_k^{-1}(d-\mu_k)},\qquad(1.2)$$

where $\mu_k$ is the mean of the k-th Gaussian distribution, $\Sigma_k$ is its covariance matrix with $\Sigma_k=\sigma_k^{2}I$, I is the identity matrix, and $\sigma_k$ is the standard deviation of the k-th Gaussian distribution. The K Gaussian distributions are sorted by $w_k/\sigma_k$, and the first B distributions after sorting are taken as the background model, where B is obtained by formula (1.3):

$$B=\arg\min_b\Big(\sum_{j=1}^{b}w_j>T\Big),\qquad(1.3)$$

where T is a minimum weight threshold. For any pixel in the solid waste point cloud, the background model at the corresponding position is found; if the absolute difference between the pixel's depth value and the mean $\mu_k$ of every Gaussian distribution in the background model exceeds a set multiple of the standard deviation $\sigma_k$, the pixel is treated as a foreground pixel;
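The per-pixel foreground test of step 1) can be illustrated with a minimal sketch under stated assumptions: the mixture parameters are taken as already learned from background frames, and the values of T and of the standard-deviation multiple (2.5 in the embodiment described later) are illustrative choices, not the patented ones.

```python
# Minimal sketch of the per-pixel depth GMM background test (formulas (1.1)-(1.3)).
import numpy as np

T = 0.7            # minimum cumulative weight for the background model (assumed)
FG_MULTIPLE = 2.5  # foreground threshold in standard deviations (embodiment value)

def is_foreground(d, weights, means, sigmas):
    """weights, means, sigmas: shape-(K,) arrays of one pixel's mixture,
    already learned from a series of background depth frames."""
    # Sort the K Gaussians by w_k / sigma_k (most stable background first).
    order = np.argsort(-(weights / sigmas))
    w, mu, sg = weights[order], means[order], sigmas[order]
    # B = argmin_b (sum_{j<=b} w_j > T): smallest prefix whose weight exceeds T.
    B = int(np.searchsorted(np.cumsum(w), T, side="right")) + 1
    # Foreground if the depth matches none of the B background Gaussians.
    return not np.any(np.abs(d - mu[:B]) <= FG_MULTIPLE * sg[:B])
```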
2) The background model is compared with the solid waste point cloud to be processed: in a binary map, positions corresponding to foreground pixels are set to 255 and background pixels to 0, yielding the foreground mask. Each connected local mask $M_{local}$ is extracted from the foreground mask, together with the corresponding local RGB image, local mask contour map $F_c$ and local depth edge map $E_m$, converting the segmentation of the whole image into multiple local mask segmentation problems;
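As an illustration only, the local masks of step 2) can be obtained with a connected-component pass over the foreground mask; the sketch below assumes OpenCV, a 0/255 uint8 foreground mask, and the hypothetical helper name extract_local_masks.

```python
# Minimal sketch: split the foreground mask into connected local masks.
import cv2
import numpy as np

def extract_local_masks(foreground_mask):
    """foreground_mask: HxW uint8 binary map (255 = foreground)."""
    n, labels = cv2.connectedComponents(foreground_mask)
    # Label 0 is the background; each remaining label is one local mask M_local.
    return [np.where(labels == i, 255, 0).astype(np.uint8)
            for i in range(1, n)]
```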
3) For each local mask, superpixel segmentation is performed on the corresponding local RGB image, yielding the superpixel set $S=\{s_1,s_2,s_3,\dots,s_{n-1},s_n\}$, where $s_i$ denotes a single superpixel and is also a point set composed of pixels with similar features;
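The patent does not name a particular superpixel algorithm; the sketch below uses SLIC from scikit-image (the mask parameter requires version 0.17 or later) as a stand-in, with illustrative values for n_segments and compactness.

```python
# Minimal sketch of step 3): superpixel set S on the local RGB patch.
import numpy as np
from skimage.segmentation import slic

def superpixel_set(local_rgb, local_mask, n_segments=200):
    labels = slic(local_rgb, n_segments=n_segments, compactness=10,
                  mask=local_mask > 0, start_label=1)
    # S = {s_1, ..., s_n}: each s_i is the (row, col) point set of one superpixel.
    S = {i: np.argwhere(labels == i) for i in range(1, labels.max() + 1)}
    return S, labels
```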
4) From the local mask contour map $F_c$ and the local depth edge map $E_m$, the internal edge map $E_{inner}$ is obtained according to formula (4.1):

$$E_{inner}=E_m-F_c\oplus C_{2k+1},\qquad(4.1)$$

where $F_c\oplus C_{2k+1}$ denotes performing on $F_c$ a dilation with a kernel of size (2k+1). The edge pixel set $E_p$ is then extracted by formula (4.2):

$$E_p=\{p(x,y)\mid E_{inner}(x,y)=255\},\qquad(4.2)$$

where p(x, y) is a pixel satisfying the condition and $E_{inner}(x,y)$ is the pixel value at row y, column x of the $E_{inner}$ map;
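Formula (4.1) maps directly onto standard morphology; the following sketch assumes $E_m$ and $F_c$ are 0/255 uint8 maps and treats k as a tuning choice.

```python
# Minimal sketch of formulas (4.1)-(4.2): internal depth edges.
import cv2
import numpy as np

def internal_edges(E_m, F_c, k=2):
    kernel = np.ones((2 * k + 1, 2 * k + 1), np.uint8)
    dilated = cv2.dilate(F_c, kernel)       # F_c (+) C_{2k+1}
    E_inner = cv2.subtract(E_m, dilated)    # edges strictly inside the mask
    return E_inner                          # E_p = {p | E_inner(p) == 255}
```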
5) From the superpixel set S and the edge pixel set $E_p$, the edge superpixel set $B_{sp}$ is extracted according to formula (5.1):

$$B_{sp}=\{s_k\mid p\in s_k\ \text{and}\ p\in E_p\},\qquad(5.1)$$

where p is any pixel in the image and $s_k$ is a superpixel satisfying the condition. Adjoining superpixels in $B_{sp}$ are extracted as adjacent superpixel sets, and each adjacent superpixel set is defined as a boundary region $B_{region}$;
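A minimal sketch of formula (5.1) follows, assuming S maps superpixel ids to (row, col) pixel arrays as in the earlier sketch; grouping the adjoining members of $B_{sp}$ into boundary regions $B_{region}$ would be one further connected-component pass over the superpixel adjacency graph, which is not shown.

```python
# Minimal sketch of formula (5.1): superpixels that touch an internal edge.
import numpy as np

def edge_superpixels(S, E_inner):
    return [i for i, pix in S.items()
            if np.any(E_inner[pix[:, 0], pix[:, 1]] == 255)]
```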
6) Based on the boundary region $B_{region}$, the confusion region is generated by an iterative process, formula (6.1):

$$M_{obj}=M_{local}-B_{region}^{x_{th}},\qquad(6.1)$$

where $B_{region}^{x_{th}}$ is the boundary region after x expansions. Each expansion of the boundary region is completed by merging its adjoining superpixels, and $B_{region}^{(x+1)_{th}}$ is obtained by formula (6.2):

$$B_{region}^{(x+1)_{th}}=B_{region}^{x_{th}}\cup A_{sp}^{x_{th}},\qquad(6.2)$$

where $A_{sp}^{x_{th}}$ is the adjoining superpixel set of the boundary region $B_{region}^{x_{th}}$, and x starts at 0 and increases by 1 with each iteration. After one or more iterations, $M_{obj}$ may be divided into multiple independent blocks. An independent block containing at least a set number of superpixels is considered a valid part forming one object; otherwise it is considered an invalid part. When x exceeds a set threshold, or $M_{obj}$ possesses two or more mutually independent valid parts, the iteration stops and the confidence of the boundary region $B_{region}^{y_{th}}$ is calculated by formula (6.3):

$$\mathrm{Conf}\big(B_{region}^{y_{th}}\big)=f\cdot\Big(1-\frac{\mathrm{num}\big(B_{region}^{y_{th}}\big)}{\mathrm{num}(M_{local})}\Big),\qquad(6.3)$$

where $B_{region}^{y_{th}}$ is the finally generated boundary region, y is the number of boundary region expansions, and $\mathrm{num}(B_{region}^{y_{th}})$ denotes the number of pixels it possesses. The larger the proportion of the local mask occupied by the boundary region, the smaller the possibility that it becomes a confusion region; f = 1 indicates that $M_{obj}$ contains two or more mutually independent valid parts, and f = 0 indicates that it does not. If $\mathrm{Conf}(B_{region}^{y_{th}})$ exceeds a threshold C, the boundary region is selected as a confusion region, i.e. a region in which adhering and occluding objects are hard to distinguish. If a local mask has no confusion region, it is considered a single object; if a confusion region exists, the precise segmentation of step 7) is required;
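The iteration of formulas (6.1)-(6.3) is sketched below under stated assumptions: sp_labels is the superpixel label image, adj a superpixel adjacency dictionary, the helper names are hypothetical, the thresholds follow the embodiment (at least 7 superpixels per valid part, at most 4 expansions, C = 0.4), and the confidence expression implements the reconstruction of formula (6.3) given above.

```python
# Minimal sketch of the confusion-region iteration (formulas (6.1)-(6.3)).
import numpy as np
import cv2

def count_valid_parts(M_obj, sp_labels, min_sp=7):
    """Count independent blocks of M_obj covering >= min_sp superpixels."""
    n, cc = cv2.connectedComponents(M_obj)
    return sum(1 for i in range(1, n)
               if len(set(np.unique(sp_labels[cc == i])) - {0}) >= min_sp)

def confusion_region(M_local, B_region, sp_labels, adj, max_iter=4, C=0.4):
    region = set(B_region)          # superpixel ids of the boundary region
    f = 0
    for x in range(max_iter + 1):
        region_mask = np.isin(sp_labels, list(region)).astype(np.uint8) * 255
        M_obj = cv2.subtract(M_local, region_mask)            # formula (6.1)
        if count_valid_parts(M_obj, sp_labels) >= 2:
            f = 1
            break
        region |= {b for a in region for b in adj[a]}         # formula (6.2)
    # Reconstructed formula (6.3): large boundary regions get low confidence.
    conf = f * (1.0 - np.count_nonzero(region_mask)
                / max(np.count_nonzero(M_local), 1))
    return region if conf > C else None
```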
7) Precise segmentation is achieved by assigning labels to all pixels of the local mask. During primary labeling, distinct labels la = {1, 2, 3, …} are assigned to the pixels of the different valid parts, and 0 is assigned to the pixels of the confusion region and of the invalid parts of $M_{obj}$;
8) To precisely relabel the confusion region, its adjoining superpixel set is extracted and, according to the labels of these superpixels, divided into two or more adjacent superpixel sets. For each superpixel in a set, the means of the LAB color and of the depth, as well as the superpixel's center coordinates, are calculated. For any pixel with la = 0, its dissimilarity to a superpixel is calculated by formula (8.1):

$$d_i=\frac{w_{lab}\,d_{lab}+w_{depth}\,d_{depth}+w_{xy}\,d_{xy}}{w_{lab}+w_{depth}+w_{xy}},\qquad(8.1)$$

where $d_{lab}$ is the Euclidean distance in LAB color space, $d_{depth}$ is the Euclidean distance in depth, $d_{xy}$ is the Euclidean distance between coordinates in the image coordinate system, $w_{lab}$, $w_{depth}$ and $w_{xy}$ are the weights of the respective distances, and i is the index of a superpixel within the set. After the dissimilarities between the pixel and all superpixels of a set are obtained, the dissimilarity between the pixel and the adjacent superpixel set is calculated by formula (8.2):

$$d=\min_{0<i\le n}(d_i),\qquad(8.2)$$

where i is the index of a superpixel within the set and n is the number of superpixels in the adjacent superpixel set. A smaller d means the pixel is more similar to the adjoining superpixel set, so the label of the most similar superpixel set is assigned to the pixel. After all pixels with la = 0 have been relabeled, the segmentation of the local mask is complete. The result is then checked for isolated points or regions, and the segmentation result is optimized by assigning to such pixels the label possessed by the majority of their neighboring pixels.
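The relabeling distance of formulas (8.1)-(8.2) can be sketched as follows, assuming precomputed per-superpixel LAB/depth means and centers; the weights follow the embodiment ($w_{lab}=4$, $w_{depth}=3$, $w_{xy}=3$) and the function name is hypothetical.

```python
# Minimal sketch of formulas (8.1)-(8.2): relabel one pixel with la = 0.
import numpy as np

def relabel_pixel(p_lab, p_depth, p_xy, sp_stats, w=(4.0, 3.0, 3.0)):
    """sp_stats: list of (mean_lab, mean_depth, center_xy, label) tuples,
    one per superpixel in the adjoining superpixel sets."""
    w_lab, w_depth, w_xy = w
    best_label, best_d = None, np.inf
    for mean_lab, mean_depth, center_xy, label in sp_stats:
        d_lab = np.linalg.norm(p_lab - mean_lab)      # LAB color distance
        d_depth = abs(p_depth - mean_depth)           # depth distance
        d_xy = np.linalg.norm(p_xy - center_xy)       # image-plane distance
        d_i = (w_lab * d_lab + w_depth * d_depth + w_xy * d_xy) \
              / (w_lab + w_depth + w_xy)              # formula (8.1)
        if d_i < best_d:                              # formula (8.2): min_i d_i
            best_label, best_d = label, d_i
    return best_label
```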
The technical concept of the present invention is as follows: a depth background model is established and background subtraction is performed to obtain the foreground mask. Local masks are extracted from the foreground mask, converting the segmentation of the whole image into multiple local mask segmentation problems. Each local mask is segmented by extracting confusion regions, and the confusion regions are relabeled to obtain a high-precision segmentation result.
The beneficial effects of the present invention are mainly: high segmentation precision; solid waste objects with degraded visual features can be segmented effectively, and the segmentation of adhering and occluded solid waste objects is also very satisfactory. The final pixel-level relabeling yields more accurate object edges.
Brief description of the drawings
Fig. 1 is a local mask within the foreground mask.
Fig. 2 shows the extracted border superpixels.
Fig. 3 shows an extracted confusion region.
Fig. 4 is the result of the primary labeling of a local mask.
Fig. 5 shows the chosen adjoining superpixels, which can be divided into two sets according to their labels; the adjacent superpixel sets are chosen as the basis for relabeling.
Fig. 6 is the result of relabeling the confusion region: the confusion region is divided into two parts belonging to different objects, represented by different labels.
Fig. 7 is the flow chart of the solid waste object segmentation method for images with degraded visual features.
Embodiment
The invention is further described below with reference to the accompanying drawings.
With reference to Figs. 1 to 7, a solid waste object segmentation method for images with degraded visual features comprises the following steps:
1) A background depth Gaussian mixture model is established using the depth information of a series of background point clouds. Each pixel in the image is modeled by a Gaussian mixture distribution, and the probability that the depth of a pixel equals d is expressed by formula (1.1):

$$p(d)=\sum_{j=1}^{K}w_j\,\eta(d;\Theta_j),\qquad(1.1)$$

where $w_j$ is the weight of the j-th Gaussian distribution with $\sum_{j=1}^{K}w_j=1$, and K is the total number of Gaussian distributions (K = 5 is taken in the present invention); $\eta(d;\Theta_j)$ is the j-th Gaussian distribution, expressed by formula (1.2):

$$\eta(d;\Theta_k)=\eta(d;\mu_k,\Sigma_k)=\frac{1}{\sqrt{2\pi\lvert\Sigma_k\rvert}}\,e^{-\frac{1}{2}(d-\mu_k)^{T}\Sigma_k^{-1}(d-\mu_k)},\qquad(1.2)$$

where $\mu_k$ is the mean of the k-th Gaussian distribution, $\Sigma_k$ is its covariance matrix with $\Sigma_k=\sigma_k^{2}I$, I is the identity matrix, and $\sigma_k$ is the standard deviation of the k-th Gaussian distribution. The K Gaussian distributions are sorted by $w_k/\sigma_k$, and the first B distributions after sorting are taken as the background model, where B is obtained by formula (1.3):

$$B=\arg\min_b\Big(\sum_{j=1}^{b}w_j>T\Big),\qquad(1.3)$$

where T is a minimum weight threshold. For any pixel in the solid waste point cloud, the background model at the corresponding position is found; if the absolute difference between the pixel's depth value and the mean $\mu_k$ of every Gaussian distribution in the background model exceeds a set multiple of the standard deviation $\sigma_k$ (2.5 is taken), the pixel is treated as a foreground pixel;
2) The background model is compared with the solid waste point cloud to be processed: in a binary map, positions corresponding to foreground pixels are set to 255 and background pixels to 0, yielding the foreground mask. Each connected local mask $M_{local}$ is extracted from the foreground mask, together with the corresponding local RGB image, local mask contour map $F_c$ and local depth edge map $E_m$, converting the segmentation of the whole image into multiple local mask segmentation problems; Fig. 1 shows an example of a local mask;
3) For each local mask, superpixel segmentation is performed on the corresponding local RGB image, yielding the superpixel set $S=\{s_1,s_2,s_3,\dots,s_{n-1},s_n\}$, where $s_i$ denotes a single superpixel and is also a point set composed of pixels with similar features;
4) From the local mask contour map $F_c$ and the local depth edge map $E_m$, the internal edge map $E_{inner}$ is obtained according to formula (4.1):

$$E_{inner}=E_m-F_c\oplus C_{2k+1},\qquad(4.1)$$

where $F_c\oplus C_{2k+1}$ denotes performing on $F_c$ a dilation with a kernel of size (2k+1). The edge pixel set $E_p$ is then extracted by formula (4.2):

$$E_p=\{p(x,y)\mid E_{inner}(x,y)=255\},\qquad(4.2)$$

where p(x, y) is a pixel satisfying the condition and $E_{inner}(x,y)$ is the pixel value at row y, column x of the $E_{inner}$ map;
5) Since the information contained in the internal edges is rather cluttered and some edges may be missing, further processing is required. From the superpixel set S and the edge pixel set $E_p$, the edge superpixel set $B_{sp}$ is extracted according to formula (5.1):

$$B_{sp}=\{s_k\mid p\in s_k\ \text{and}\ p\in E_p\},\qquad(5.1)$$

where p is any pixel in the image and $s_k$ is a superpixel satisfying the condition. As shown in Fig. 2, the superpixels in $B_{sp}$ expand the internal edges while preserving their continuity, which manifests itself as the adjacency of superpixels. Adjoining superpixels in $B_{sp}$ are extracted as adjacent superpixel sets, and each adjacent superpixel set is defined as a boundary region $B_{region}$;
6) Based on the boundary region $B_{region}$, the confusion region is generated by an iterative process, formula (6.1):

$$M_{obj}=M_{local}-B_{region}^{x_{th}},\qquad(6.1)$$

where $B_{region}^{x_{th}}$ is the boundary region after x expansions. Each expansion of the boundary region is completed by merging its adjoining superpixels, and $B_{region}^{(x+1)_{th}}$ is obtained by formula (6.2):

$$B_{region}^{(x+1)_{th}}=B_{region}^{x_{th}}\cup A_{sp}^{x_{th}},\qquad(6.2)$$

where $A_{sp}^{x_{th}}$ is the adjoining superpixel set of the boundary region $B_{region}^{x_{th}}$, and x starts at 0 and increases by 1 with each iteration. After one or more iterations, $M_{obj}$ may be divided into multiple independent blocks. An independent block containing at least a set number of superpixels (7 is taken) is considered a valid part forming one object; otherwise it is considered an invalid part. When x exceeds a set threshold (4 is taken) or $M_{obj}$ possesses two or more mutually independent valid parts, the iteration stops and the confidence of the boundary region $B_{region}^{y_{th}}$ is calculated by formula (6.3):

$$\mathrm{Conf}\big(B_{region}^{y_{th}}\big)=f\cdot\Big(1-\frac{\mathrm{num}\big(B_{region}^{y_{th}}\big)}{\mathrm{num}(M_{local})}\Big),\qquad(6.3)$$

where $B_{region}^{y_{th}}$ is the finally generated boundary region, y is the number of boundary region expansions, and $\mathrm{num}(B_{region}^{y_{th}})$ denotes the number of pixels it possesses. The larger the proportion of the local mask occupied by the boundary region, the smaller the possibility that it becomes a confusion region; f = 1 indicates that $M_{obj}$ contains two or more mutually independent valid parts, and f = 0 indicates that it does not. If $\mathrm{Conf}(B_{region}^{y_{th}})$ exceeds a threshold C (C = 0.4 in the present invention), the boundary region is selected as a confusion region, i.e. a region in which adhering and occluding objects are hard to distinguish. If a local mask has no confusion region, it is considered a single object; if a confusion region exists, as shown in Fig. 3, the precise segmentation of step 7) is required;
7) Precise segmentation is achieved by assigning labels to all pixels of the local mask. As shown in Fig. 4, during primary labeling, distinct labels la = {1, 2, 3, …} are assigned to the pixels of the different object bodies, and 0 is assigned to the pixels of the confusion region and of the invalid parts of $M_{obj}$;
8) To precisely relabel the confusion region, its adjoining superpixel set is extracted and, as shown in Fig. 5, divided according to the labels of these superpixels into two or more adjacent superpixel sets. For each superpixel in a set, the means of the LAB color and of the depth, as well as the superpixel's center coordinates, are calculated. For any pixel with la = 0, its dissimilarity to a superpixel is calculated by formula (8.1):

$$d_i=\frac{w_{lab}\,d_{lab}+w_{depth}\,d_{depth}+w_{xy}\,d_{xy}}{w_{lab}+w_{depth}+w_{xy}},\qquad(8.1)$$

where $d_{lab}$ is the Euclidean distance in LAB color space, $d_{depth}$ is the Euclidean distance in depth, $d_{xy}$ is the Euclidean distance between coordinates in the image coordinate system, and $w_{lab}$, $w_{depth}$ and $w_{xy}$ are the weights of the respective distances ($w_{lab}=4$, $w_{depth}=3$, $w_{xy}=3$ in the present invention); i is the index of a superpixel within the set. After the dissimilarities between the pixel and all superpixels of a set are obtained, the dissimilarity between the pixel and the adjacent superpixel set is calculated by formula (8.2):

$$d=\min_{0<i\le n}(d_i),\qquad(8.2)$$

where i is the index of a superpixel within the set and n is the number of superpixels in the adjacent superpixel set. A smaller d means the pixel is more similar to the adjoining superpixel set, so the label of the most similar superpixel set is assigned to the pixel. After all pixels with la = 0 have been relabeled, the confusion region is divided into two parts as shown in Fig. 6, and the segmentation of the local mask is complete. The result is then checked for isolated points or regions, and the segmentation result is optimized by assigning to such pixels the label possessed by the majority of their neighboring pixels.
In the present embodiment, the confusion region extraction method separates a local mask containing adhering and occluded objects into multiple valid object parts, and a primary labeling of the local mask is performed. The adjoining superpixels of the confusion region are then chosen and, according to their primary labels, divided into two or more adjacent superpixel sets. Unclassified pixels are relabeled according to their dissimilarity to the adjacent superpixel sets, yielding a high-precision segmentation result.

Claims (1)

1. A solid waste object segmentation method for images with degraded visual features, the method comprising the following steps:
1) A background depth Gaussian mixture model is established from the depth information of a series of background point clouds. Each pixel in the image is modeled by a Gaussian mixture distribution, and the probability that the depth of a pixel equals d is expressed by formula (1.1):

$$p(d)=\sum_{j=1}^{K}w_j\,\eta(d;\Theta_j),\qquad(1.1)$$

where $w_j$ is the weight of the j-th Gaussian distribution with $\sum_{j=1}^{K}w_j=1$, K is the total number of Gaussian distributions, and $\eta(d;\Theta_j)$ is the j-th Gaussian distribution, expressed by formula (1.2):

$$\eta(d;\Theta_k)=\eta(d;\mu_k,\Sigma_k)=\frac{1}{\sqrt{2\pi\lvert\Sigma_k\rvert}}\,e^{-\frac{1}{2}(d-\mu_k)^{T}\Sigma_k^{-1}(d-\mu_k)},\qquad(1.2)$$

where $\mu_k$ is the mean of the k-th Gaussian distribution, $\Sigma_k$ is its covariance matrix with $\Sigma_k=\sigma_k^{2}I$, I is the identity matrix, and $\sigma_k$ is the standard deviation of the k-th Gaussian distribution; the K Gaussian distributions are sorted by $w_k/\sigma_k$, and the first B distributions after sorting are taken as the background model, where B is obtained by formula (1.3):

$$B=\arg\min_b\Big(\sum_{j=1}^{b}w_j>T\Big),\qquad(1.3)$$

where T is a minimum weight threshold; for any pixel in the solid waste point cloud, the background model at the corresponding position is found, and if the absolute difference between the pixel's depth value and the mean $\mu_k$ of every Gaussian distribution in the background model exceeds a set multiple of the standard deviation $\sigma_k$, the pixel is treated as a foreground pixel;
2) The background model is compared with the solid waste point cloud to be processed: in a binary map, positions corresponding to foreground pixels are set to 255 and background pixels to 0, yielding the foreground mask; each connected local mask $M_{local}$ is extracted from the foreground mask, together with the corresponding local RGB image, local mask contour map $F_c$ and local depth edge map $E_m$, converting the segmentation of the whole image into multiple local mask segmentation problems;
3) For each local mask, superpixel segmentation is performed on the corresponding local RGB image, yielding the superpixel set $S=\{s_1,s_2,s_3,\dots,s_{n-1},s_n\}$, where $s_i$ denotes a single superpixel and is also a point set composed of pixels with similar features;
4) From the local mask contour map $F_c$ and the local depth edge map $E_m$, the internal edge map $E_{inner}$ is obtained according to formula (4.1):

$$E_{inner}=E_m-F_c\oplus C_{2k+1},\qquad(4.1)$$

where $F_c\oplus C_{2k+1}$ denotes performing on $F_c$ a dilation with a kernel of size (2k+1); the edge pixel set $E_p$ is then extracted by formula (4.2):

$$E_p=\{p(x,y)\mid E_{inner}(x,y)=255\},\qquad(4.2)$$

where p(x, y) is a pixel satisfying the condition and $E_{inner}(x,y)$ is the pixel value at row y, column x of the $E_{inner}$ map;
5) From the superpixel set S and the edge pixel set $E_p$, the edge superpixel set $B_{sp}$ is extracted according to formula (5.1):

$$B_{sp}=\{s_k\mid p\in s_k\ \text{and}\ p\in E_p\},\qquad(5.1)$$

where p is any pixel in the image and $s_k$ is a superpixel satisfying the condition; adjoining superpixels in $B_{sp}$ are extracted as adjacent superpixel sets, and each adjacent superpixel set is defined as a boundary region $B_{region}$;
6) Based on the boundary region $B_{region}$, the confusion region is generated by an iterative process, formula (6.1):

$$M_{obj}=M_{local}-B_{region}^{x_{th}},\qquad(6.1)$$

where $B_{region}^{x_{th}}$ is the boundary region after x expansions; each expansion of the boundary region is completed by merging its adjoining superpixels, and $B_{region}^{(x+1)_{th}}$ is obtained by formula (6.2):

$$B_{region}^{(x+1)_{th}}=B_{region}^{x_{th}}\cup A_{sp}^{x_{th}},\qquad(6.2)$$

where $A_{sp}^{x_{th}}$ is the adjoining superpixel set of the boundary region $B_{region}^{x_{th}}$, and x starts at 0 and increases by 1 with each iteration; after one or more iterations, $M_{obj}$ may be divided into multiple independent blocks, and an independent block containing at least a set number of superpixels is considered a valid part forming one object, otherwise it is considered an invalid part; when x exceeds a set threshold, or $M_{obj}$ possesses two or more mutually independent valid parts, the iteration stops and the confidence of the boundary region $B_{region}^{y_{th}}$ is calculated by formula (6.3):

$$\mathrm{Conf}\big(B_{region}^{y_{th}}\big)=f\cdot\Big(1-\frac{\mathrm{num}\big(B_{region}^{y_{th}}\big)}{\mathrm{num}(M_{local})}\Big),\qquad(6.3)$$

where $B_{region}^{y_{th}}$ is the finally generated boundary region, y is the number of boundary region expansions, and $\mathrm{num}(B_{region}^{y_{th}})$ denotes the number of pixels it possesses; the larger the proportion of the local mask occupied by the boundary region, the smaller the possibility that it becomes a confusion region; f = 1 indicates that $M_{obj}$ contains two or more mutually independent valid parts, and f = 0 indicates that it does not; if $\mathrm{Conf}(B_{region}^{y_{th}})$ exceeds a threshold C, the boundary region is selected as a confusion region, i.e. a region in which adhering and occluding objects are hard to distinguish; if a local mask has no confusion region, it is considered a single object, and if a confusion region exists, the precise segmentation of step 7) is required;
7) Precise segmentation is achieved by assigning labels to all pixels of the local mask: during primary labeling, distinct labels la = {1, 2, 3, …} are assigned to the pixels of the different valid parts, and 0 is assigned to the pixels of the confusion region and of the invalid parts of $M_{obj}$;
8) To precisely relabel the confusion region, its adjoining superpixel set is extracted and, according to the labels of these superpixels, divided into two or more adjacent superpixel sets; for each superpixel in a set, the means of the LAB color and of the depth, as well as the superpixel's center coordinates, are calculated; for any pixel with la = 0, its dissimilarity to a superpixel is calculated by formula (8.1):

$$d_i=\frac{w_{lab}\,d_{lab}+w_{depth}\,d_{depth}+w_{xy}\,d_{xy}}{w_{lab}+w_{depth}+w_{xy}},\qquad(8.1)$$

where $d_{lab}$ is the Euclidean distance in LAB color space, $d_{depth}$ is the Euclidean distance in depth, $d_{xy}$ is the Euclidean distance between coordinates in the image coordinate system, $w_{lab}$, $w_{depth}$ and $w_{xy}$ are the weights of the respective distances, and i is the index of a superpixel within the set; after the dissimilarities between the pixel and all superpixels of a set are obtained, the dissimilarity between the pixel and the adjacent superpixel set is calculated by formula (8.2):

$$d=\min_{0<i\le n}(d_i),\qquad(8.2)$$

where i is the index of a superpixel within the set and n is the number of superpixels in the adjacent superpixel set; a smaller d means the pixel is more similar to the adjoining superpixel set, so the label of the most similar superpixel set is assigned to the pixel; after all pixels with la = 0 have been relabeled, the segmentation of the local mask is complete, the result is checked for isolated points or regions, and the segmentation result is optimized by assigning to such pixels the label possessed by the majority of their neighboring pixels.
CN201710559875.9A 2017-07-11 2017-07-11 Solid waste object segmentation method for images with degraded visual features Active CN107527350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710559875.9A CN107527350B (en) 2017-07-11 2017-07-11 Solid waste object segmentation method for images with degraded visual features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710559875.9A CN107527350B (en) 2017-07-11 2017-07-11 Solid waste object segmentation method for images with degraded visual features

Publications (2)

Publication Number Publication Date
CN107527350A true CN107527350A (en) 2017-12-29
CN107527350B CN107527350B (en) 2019-11-05

Family

ID=60748294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710559875.9A Active CN107527350B (en) 2017-07-11 2017-07-11 Solid waste object segmentation method for images with degraded visual features

Country Status (1)

Country Link
CN (1) CN107527350B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150023575A1 (en) * 2013-07-17 2015-01-22 Siemens Medical Solutions Usa, Inc. Anatomy Aware Articulated Registration for Image Segmentation
CN104463870A (en) * 2014-12-05 2015-03-25 中国科学院大学 Image salient region detection method
CN105957078A (en) * 2016-04-27 2016-09-21 浙江万里学院 Multi-view video segmentation method based on graph cut
CN106056155A (en) * 2016-05-30 2016-10-26 西安电子科技大学 Super-pixel segmentation method based on boundary information fusion
CN106886995A (en) * 2017-01-13 2017-06-23 北京航空航天大学 Polyteny example returns the notable object segmentation methods of image of device polymerization

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146894A (en) * 2018-08-07 2019-01-04 庄朝尹 A kind of model area dividing method of three-dimensional modeling
CN109635809A (en) * 2018-11-02 2019-04-16 浙江工业大学 A kind of superpixel segmentation method towards vision degraded image
CN109635809B (en) * 2018-11-02 2021-08-17 浙江工业大学 Super-pixel segmentation method for visual degradation image
CN109409376A (en) * 2018-11-05 2019-03-01 昆山紫东智能科技有限公司 For the image partition method, terminal and storage medium of solid waste object
CN109409376B (en) * 2018-11-05 2020-10-30 昆山紫东智能科技有限公司 Image segmentation method for solid waste object, computer terminal and storage medium
CN110542908A (en) * 2019-09-09 2019-12-06 阿尔法巴人工智能(深圳)有限公司 laser radar dynamic object perception method applied to intelligent driving vehicle
CN110542908B (en) * 2019-09-09 2023-04-25 深圳市海梁科技有限公司 Laser radar dynamic object sensing method applied to intelligent driving vehicle
CN111105443A (en) * 2019-12-26 2020-05-05 南京邮电大学 Video group figure motion trajectory tracking method based on feature association

Also Published As

Publication number Publication date
CN107527350B (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN107527350B (en) Solid waste object segmentation method for images with degraded visual features
CN103942794B (en) Confidence-based collaborative image matting method
CN104537676B (en) Gradual image segmentation method based on online learning
CN104063702B (en) Three-dimensional gait recognition based on occlusion recovery and partial similarity matching
CN107610141A (en) Remote sensing image semantic segmentation method based on deep learning
CN102938066A (en) Method for reconstructing outer outline polygon of building based on multivariate data
CN101770578B (en) Image characteristic extraction method
CN105975974A (en) ROI image extraction method in finger vein identification
CN104408733B (en) Object random walk-based visual saliency detection method and system for remote sensing image
CN102147867B (en) Method for identifying traditional Chinese painting images and calligraphy images based on subject
CN102622769A (en) Multi-target tracking method using depth as the leading cue in dynamic scenes
CN103544685B (en) Image composition beautification method and system based on subject adjustment
CN102521597B (en) Hierarchical strategy-based linear feature matching method for images
CN104913784B (en) Autonomous extraction method for planetary surface navigation features
CN103793930A (en) Pencil drawing image generation method and device
CN104517317A (en) Three-dimensional reconstruction method of vehicle-borne infrared images
CN102799646B (en) Semantic object segmentation method for multi-view video
CN104299009A (en) Plate number character recognition method based on multi-feature fusion
CN105374039A (en) Monocular image depth information estimation method based on contour sharpness
CN103971338A (en) Variable-block image repair method based on saliency map
CN106937120A (en) Object-based surveillance video condensation method
CN105469111A (en) Small sample set object classification method on basis of improved MFA and transfer learning
CN103093470A (en) Rapid multi-modal image co-segmentation method with scale-independent features
CN104574307A (en) Method for extracting primary colors of painting work image
CN104392462A (en) SAR image registration method based on salient division sub-region pair

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200611

Address after: Room 1504-2, Dikai International Center, Jianggan District, Hangzhou, Zhejiang Province

Patentee after: HANGZHOU SHISHANG TECHNOLOGY Co.,Ltd.

Address before: No. 18, Chaowang Road, Zhaohui Sixth District, Hangzhou City, Zhejiang Province 310014

Patentee before: ZHEJIANG University OF TECHNOLOGY