CN107527350B - A solid waste object segmentation method for visual-feature-degraded images - Google Patents

A solid waste object segmentation method for visual-feature-degraded images

Info

Publication number
CN107527350B
CN107527350B · CN201710559875.9A
Authority
CN
China
Prior art keywords
pixel
super
region
segmentation
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710559875.9A
Other languages
Chinese (zh)
Other versions
CN107527350A (en)
Inventor
刘盛
王超
冯缘
尹科杰
陈胜勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shishang Technology Co ltd
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201710559875.9A priority Critical patent/CN107527350B/en
Publication of CN107527350A publication Critical patent/CN107527350A/en
Application granted granted Critical
Publication of CN107527350B publication Critical patent/CN107527350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20116Active contour; Active surface; Snakes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A solid waste object segmentation method suitable for visual-feature-degraded images, relating mainly to the fields of robot vision and image segmentation. Because visual features are degraded and solid waste objects are adhered to and occlude one another, traditional image segmentation algorithms can hardly obtain high-precision segmentation results. The present invention obtains a background model by depth background modeling and compares the background model with the solid waste point cloud to extract a foreground mask. By extracting the local masks in the foreground mask, the segmentation of the whole image is converted into the segmentation of multiple local masks. For each local mask, adhered and occluded objects are separated by fuzzy region extraction, and finally the fuzzy region is relabeled to obtain a high-precision segmentation result. The segmentation precision of the invention is high; it can effectively segment solid waste objects with severe color degradation, and the segmentation of adhered and occluded solid waste objects is also very good.

Description

A solid waste object segmentation method for visual-feature-degraded images
Technical field
The present invention relates to the technical fields of robot vision and image segmentation, and in particular to solid waste object segmentation for visual-feature-degraded images.
Background art
Traditional image segmentation algorithms use color and contour features. However, the industrial environment is complicated: the conveyor belt surface is covered with dust, dust particles on the surfaces of solid waste objects cause severe visual feature degradation, and solid waste objects are adhered to and occlude one another. All of these strongly affect two-dimensional image segmentation, so traditional image segmentation methods are not suitable for industrial scenes.
Summary of the invention
To solve the segmentation difficulties caused by visual feature degradation, object adhesion, and object occlusion, the present invention provides a solid waste object segmentation method for severely degraded visual features. The invention obtains a background model by depth background modeling and compares the background model with the solid waste point cloud to extract a foreground mask. A mask is a binary map: pixels of interest are set to 255 and the remaining pixels are set to 0. Each connected local mask in the foreground mask is segmented by extracting fuzzy regions, and fuzzy region relabeling is finally executed to obtain a high-precision segmentation result.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A solid waste object segmentation method for visual-feature-degraded images, the segmentation method comprising the following steps:
1) A background depth Gaussian mixture model is established from the depth information of a series of background point cloud data. Each pixel in the image is modeled by a Gaussian mixture distribution, and the probability that the depth of a pixel equals d is expressed by formula (1.1),

P(d) = Σ_{j=1}^{K} w_j · η(d; Θ_j),   (1.1)

where w_j is the weight of the j-th Gaussian distribution, Σ_{j=1}^{K} w_j = 1, K is the total number of Gaussian distributions, and η(d; Θ_j) is the j-th Gaussian distribution, expressed by formula (1.2),
η(d; Θ_k) = (1 / ((2π)^{n/2} |Σ_k|^{1/2})) · exp(−½ (d − μ_k)ᵀ Σ_k⁻¹ (d − μ_k)),   (1.2)

where μ_k is the mean of the k-th Gaussian distribution, Σ_k is the covariance matrix of the k-th Gaussian distribution, Σ_k = σ_k²·I, I is the identity matrix, and σ_k is the standard deviation of the k-th Gaussian distribution. The K Gaussian distributions are sorted by w_k/σ_k, and the first B distributions are taken as the background model, where B is obtained by formula (1.3),

B = argmin_b ( Σ_{k=1}^{b} w_k > T ),   (1.3)
where T is the minimum weight threshold. For any pixel in the solid waste point cloud, the background model at the corresponding position is found; if the absolute difference between the pixel's depth value and the mean μ_k of every Gaussian distribution in the background model is greater than a set multiple of the standard deviation σ_k, the pixel is taken as a foreground pixel;
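For illustration only, the per-pixel background test of step 1) can be sketched in Python as follows. This is a minimal sketch, not the patented implementation: the fitting of the mixture parameters (for example by online EM over the background depth sequence) is assumed to have been done beforehand, and the default values of T and the σ multiple are placeholders in the spirit of the notation above.

```python
import numpy as np

# Minimal sketch of the per-pixel depth GMM background test of step 1).
# weights, means, stds: (H, W, K) arrays fitted from background depth maps.
def foreground_mask(depth, weights, means, stds, T=0.7, n_sigma=2.5):
    # Sort the K components of every pixel by w_k / sigma_k, descending.
    order = np.argsort(-(weights / stds), axis=2)
    w = np.take_along_axis(weights, order, axis=2)
    mu = np.take_along_axis(means, order, axis=2)
    sd = np.take_along_axis(stds, order, axis=2)
    # Formula (1.3): the first B components whose cumulative weight exceeds T
    # form the background model.
    cum = np.cumsum(w, axis=2)
    is_bg = (cum - w) < T
    # A pixel is foreground if it deviates from ALL background components
    # by more than n_sigma standard deviations.
    dev = np.abs(depth[..., None] - mu) > n_sigma * sd
    return (np.all(np.where(is_bg, dev, True), axis=2) * 255).astype(np.uint8)
```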
2) The background model is compared with the solid waste point cloud to be processed: in a binary map, positions corresponding to foreground pixels are set to 255 and background pixels are set to 0, giving the foreground mask. Each connected local mask M_local in the foreground mask is extracted, together with the corresponding local RGB image, the outer contour map F_c of the local mask, and the local depth edge map E_m; the segmentation of the whole image is thereby converted into the segmentation of multiple local masks;
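A minimal sketch of the local mask extraction of step 2), using OpenCV connected components; the minimum blob area and the morphological-gradient approximation of F_c are assumptions, not values or operators from the patent.

```python
import cv2
import numpy as np

# Sketch of step 2): split the foreground mask into connected local masks and
# crop the corresponding local RGB patch, outer contour F_c, and edge map E_m.
def extract_local_masks(fg_mask, rgb, depth_edges, min_area=100):
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg_mask, connectivity=8)
    parts = []
    for i in range(1, n):                              # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:                            # assumed noise threshold
            continue
        m_local = (labels[y:y+h, x:x+w] == i).astype(np.uint8) * 255
        # Outer contour map F_c, here approximated by a morphological gradient.
        f_c = cv2.morphologyEx(m_local, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
        parts.append({"mask": m_local,                      # M_local
                      "rgb": rgb[y:y+h, x:x+w],             # local RGB patch
                      "contour": f_c,                       # F_c
                      "edges": depth_edges[y:y+h, x:x+w]})  # E_m
    return parts
```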
3) For each local mask, superpixel segmentation is performed on the corresponding local RGB image to obtain the superpixel set S = {s_1, s_2, s_3, …, s_{n−1}, s_n}, where s_i denotes an individual superpixel and is also a point set, composed of pixels that are similar in multiple features;
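The patent does not name a particular superpixel algorithm; a minimal sketch of step 3) using scikit-image's SLIC (an assumption), restricted to the local mask, could look like this.

```python
from skimage.segmentation import slic

# Sketch of step 3): superpixel segmentation of the local RGB patch.
# Returns an (H, W) label map; pixels sharing a label form one superpixel s_i.
def superpixels(local_rgb, local_mask, n_segments=200):
    return slic(local_rgb, n_segments=n_segments, compactness=10,
                mask=local_mask > 0, start_label=1)
```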
4) From the outer contour map F_c of the local mask and the local edge map E_m, the internal edge map E_inner is obtained according to formula (4.1),

E_inner = E_m − F_c ⊕ C_{2k+1},   (4.1)

where F_c ⊕ C_{2k+1} denotes performing a dilation on F_c with a structuring element of size (2k+1); the edge pixel set E_p is then extracted by formula (4.2),
E_p = { p(x, y) | E_inner(x, y) = 255 },   (4.2)
where p(x, y) is a pixel satisfying the condition and E_inner(x, y) is the pixel value at row y, column x of the E_inner map;
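A sketch of formulas (4.1) and (4.2) with OpenCV, assuming F_c and E_m are binary 0/255 images of the same size:

```python
import cv2
import numpy as np

# Sketch of step 4): E_inner = E_m - F_c dilated with a (2k+1) structuring
# element, formula (4.1), then collect the surviving edge pixels E_p, (4.2).
def internal_edges(e_m, f_c, k=2):
    kernel = np.ones((2 * k + 1, 2 * k + 1), np.uint8)
    e_inner = cv2.subtract(e_m, cv2.dilate(f_c, kernel))  # strips the outer contour band
    ys, xs = np.nonzero(e_inner == 255)                   # E_p as (x, y) coordinates
    return e_inner, list(zip(xs, ys))
```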
5) From the superpixel set S and the edge pixel set E_p, the edge superpixel set B_sp is extracted according to formula (5.1),

B_sp = { s_k ∈ S | ∃ p ∈ E_p, p ∈ s_k },   (5.1)

where p is any pixel in the image and s_k is a superpixel satisfying the condition. The adjoining superpixels in B_sp are extracted as adjacent superpixel sets, and each adjacent superpixel set is defined as a boundary region B_region;
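A sketch of formula (5.1) and of the grouping into boundary regions. Here the adjacency of edge superpixels is approximated by pixel connectivity over their union, which is one plausible reading of the description, not necessarily the patented one.

```python
import numpy as np
from scipy import ndimage

# Sketch of step 5): B_sp = superpixels containing at least one edge pixel;
# adjoining edge superpixels are then grouped into boundary regions B_region.
def boundary_regions(sp_labels, e_inner):
    edge_ids = np.unique(sp_labels[e_inner == 255])
    edge_ids = edge_ids[edge_ids > 0]              # label 0 is outside the mask
    b_sp = np.isin(sp_labels, edge_ids)            # union of the edge superpixels
    region_map, n_regions = ndimage.label(b_sp)    # connected blobs of adjoining superpixels
    return [set(np.unique(sp_labels[region_map == r])) - {0}
            for r in range(1, n_regions + 1)]
```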
6) Based on the boundary region B_region, the fuzzy region is generated by an iteration, as in formula (6.1),

M_obj = M_local − B_region^x,   (6.1)

where B_region^x is the boundary region after x expansions; each expansion of the boundary region is completed by merging adjacent superpixels, and B_region^{x+1} is obtained by the expansion of formula (6.2),

B_region^{x+1} = B_region^x ∪ A(B_region^x),   (6.2)

where A(B_region^x) is the adjoining superpixel set of the boundary region B_region^x. x is initially 0 and increases by 1 at each iteration. After one or more iterations, M_obj may be divided into multiple independent pieces; an independent piece containing at least a set number of superpixels is considered a valid part forming an object, otherwise it is considered an invalid part. When x exceeds a set threshold or M_obj contains two or more mutually independent valid parts, the iteration stops, and the confidence of the boundary region B_region^y is calculated by formula (6.3),
Wherein,It is the borderline region ultimately generated, y is the number of ultimate bound zone broadening,It indicatesThe number of pixels possessed, the ratio that borderline region accounts for local mask is bigger, it becomes mould A possibility that pasting area is smaller, and f=1 indicates MobjContain two pieces or two pieces or more mutually independent live parts;F=0 is indicated MobjWithout two pieces or two pieces or more mutually independent live parts, ifGreater than one threshold value C, this boundary Region is just selected as confusion region, is adhesion and blocks the region being difficult to differentiate between object, if a part mask does not have Confusion region, then it is assumed that be single body;If there is confusion region, then Accurate Segmentation 7) is needed;
7) Precise segmentation is realized by assigning labels to all pixels of the local mask. During the initial labeling, different labels la ∈ {1, 2, 3, …} are assigned to the pixels of different valid parts, and 0 is assigned to the pixels of the fuzzy region and of the invalid parts of M_obj;
8) To precisely label the fuzzy region, the adjacent superpixel set of the fuzzy region is extracted and, according to the labels of these superpixels, divided into two or more adjacent superpixel sets. For each superpixel in a set, the mean LAB color, the mean depth, and the center coordinates of the superpixel are computed. For any pixel with la = 0, its dissimilarity to a superpixel is calculated by formula (8.1),

d_i = w_lab · d_lab + w_depth · d_depth + w_xy · d_xy,   (8.1)

where d_lab is the Euclidean distance in LAB color space, d_depth is the Euclidean distance in depth, d_xy is the Euclidean distance between coordinates in the image coordinate system, w_lab, w_depth and w_xy are the weights of the respective distances, and i is the index of the superpixel within the set. After the dissimilarity between the pixel and every superpixel in the set is obtained, the dissimilarity between the pixel and the adjacent superpixel set is calculated by formula (8.2),
d = min_{0 < i ≤ n} (d_i),   (8.2)
where i is the index of a superpixel within the set and n is the number of superpixels in the adjacent superpixel set. A smaller d indicates that the pixel is more similar to the adjacent superpixel set, so the label of the most similar superpixel set is assigned to the pixel. After all pixels with la = 0 have been relabeled, the segmentation of the local mask is complete. Finally, the result is checked for isolated points or regions, and the segmentation result is optimized by assigning to them the label possessed by the majority of their neighboring pixels.
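A sketch of the relabeling of formulas (8.1) and (8.2) for a single unlabeled pixel; the per-superpixel statistics (mean LAB color, mean depth, center coordinates, label) are assumed precomputed, and the default weights follow the embodiment values given below.

```python
import numpy as np

# Sketch of step 8): assign an la = 0 pixel the label of its most similar
# adjacent superpixel, formulas (8.1) and (8.2).
# sp_stats: iterable of dicts {"lab": (3,) array, "depth": float,
#                              "xy": (2,) array, "la": int}
def relabel_pixel(p_lab, p_depth, p_xy, sp_stats,
                  w_lab=4.0, w_depth=3.0, w_xy=3.0):
    best_la, best_d = 0, np.inf
    for s in sp_stats:
        d_i = (w_lab * np.linalg.norm(p_lab - s["lab"])    # d_lab term
               + w_depth * abs(p_depth - s["depth"])       # d_depth term
               + w_xy * np.linalg.norm(p_xy - s["xy"]))    # d_xy term, formula (8.1)
        if d_i < best_d:                                   # formula (8.2): minimum over i
            best_d, best_la = d_i, s["la"]
    return best_la
```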
The technical concept of the invention is as follows: a depth background model is established and background subtraction is performed to obtain the foreground mask. By extracting the local masks in the foreground mask, the segmentation of the whole image is converted into the segmentation of multiple local masks. Each local mask is segmented by fuzzy region extraction, and the fuzzy region is relabeled to obtain a high-precision segmentation result.
The beneficial effects of the invention are mainly: the segmentation precision is high; solid waste objects with degraded visual features can be segmented effectively, and the segmentation of adhered and occluded solid waste objects is also very good. The final pixel-level relabeling yields more accurate edges.
Brief description of the drawings
Fig. 1 shows a local mask within the foreground mask.
Fig. 2 shows the extracted boundary superpixels.
Fig. 3 shows the extracted fuzzy region.
Fig. 4 shows the result of the initial labeling of the local mask.
Fig. 5 shows the chosen adjacent superpixels, which divide into two sets according to the labels they carry; the chosen adjacent superpixel sets serve as the basis for relabeling.
Fig. 6 shows the result of relabeling the fuzzy region: the fuzzy region is divided into two parts that belong to different objects, indicated by different labels.
Fig. 7 is the flow chart of the solid waste object segmentation method for visual-feature-degraded images.
Specific embodiment
The invention is further described below with reference to the accompanying drawings.
Referring to Fig. 1 to Fig. 7, a solid waste object segmentation method for visual-feature-degraded images comprises the following steps:
1) A background depth Gaussian mixture model is established from the depth information of a series of background point cloud data. Each pixel in the image is modeled by a Gaussian mixture distribution, and the probability that the depth of a pixel equals d is expressed by formula (1.1),

P(d) = Σ_{j=1}^{K} w_j · η(d; Θ_j),   (1.1)

where w_j is the weight of the j-th Gaussian distribution, Σ_{j=1}^{K} w_j = 1, and K is the total number of Gaussian distributions; in the present invention K = 5 is taken. η(d; Θ_j) is the j-th Gaussian distribution, expressed by formula (1.2),
η(d; Θ_k) = (1 / ((2π)^{n/2} |Σ_k|^{1/2})) · exp(−½ (d − μ_k)ᵀ Σ_k⁻¹ (d − μ_k)),   (1.2)

where μ_k is the mean of the k-th Gaussian distribution, Σ_k is the covariance matrix of the k-th Gaussian distribution, Σ_k = σ_k²·I, I is the identity matrix, and σ_k is the standard deviation of the k-th Gaussian distribution. The K Gaussian distributions are sorted by w_k/σ_k, and the first B distributions are taken as the background model, where B is obtained by formula (1.3),

B = argmin_b ( Σ_{k=1}^{b} w_k > T ),   (1.3)
where T is the minimum weight threshold. For any pixel in the solid waste point cloud, the background model at the corresponding position is found; if the absolute difference between the pixel's depth value and the mean μ_k of every Gaussian distribution in the background model is greater than a set multiple (2.5 is taken here) of the standard deviation σ_k, the pixel is taken as a foreground pixel;
2) The background model is compared with the solid waste point cloud to be processed: in a binary map, positions corresponding to foreground pixels are set to 255 and background pixels are set to 0, giving the foreground mask. Each connected local mask M_local in the foreground mask is extracted, together with the corresponding local RGB image, the outer contour map F_c of the local mask, and the local depth edge map E_m; the segmentation of the whole image is thereby converted into the segmentation of multiple local masks. Fig. 1 shows an example local mask;
3) For each local mask, superpixel segmentation is performed on the corresponding local RGB image to obtain the superpixel set S = {s_1, s_2, s_3, …, s_{n−1}, s_n}, where s_i denotes an individual superpixel and is also a point set, composed of pixels that are similar in multiple features;
4) From the outer contour map F_c of the local mask and the local edge map E_m, the internal edge map E_inner is obtained according to formula (4.1),

E_inner = E_m − F_c ⊕ C_{2k+1},   (4.1)

where F_c ⊕ C_{2k+1} denotes performing a dilation on F_c with a structuring element of size (2k+1); the edge pixel set E_p is then extracted by formula (4.2),
E_p = { p(x, y) | E_inner(x, y) = 255 },   (4.2)
where p(x, y) is a pixel satisfying the condition and E_inner(x, y) is the pixel value at row y, column x of the E_inner map;
5) Since the information contained in the internal edges is cluttered and parts of the edges may be missing, further processing is needed. From the superpixel set S and the edge pixel set E_p, the edge superpixel set B_sp is extracted according to formula (5.1),

B_sp = { s_k ∈ S | ∃ p ∈ E_p, p ∈ s_k },   (5.1)

where p is any pixel in the image and s_k is a superpixel satisfying the condition. As shown in Fig. 2, the superpixels in B_sp are an expansion of the internal edges and preserve the continuity of the internal edges, which shows as adjacency between superpixels. The adjoining superpixels in B_sp are extracted as adjacent superpixel sets, and each adjacent superpixel set is defined as a boundary region B_region;
6) Based on the boundary region B_region, the fuzzy region is generated by an iteration, as in formula (6.1),

M_obj = M_local − B_region^x,   (6.1)

where B_region^x is the boundary region after x expansions; each expansion of the boundary region is completed by merging adjacent superpixels, and B_region^{x+1} is obtained by the expansion of formula (6.2),

B_region^{x+1} = B_region^x ∪ A(B_region^x),   (6.2)

where A(B_region^x) is the adjoining superpixel set of the boundary region B_region^x. x is initially 0 and increases by 1 at each iteration. After one or more iterations, M_obj may be divided into multiple independent pieces; an independent piece containing at least a set number of superpixels (7 is taken here) is considered a valid part forming an object, otherwise it is considered an invalid part. When x exceeds a set threshold (4 is taken here) or M_obj contains two or more mutually independent valid parts, the iteration stops, and the confidence of the boundary region B_region^y is calculated by formula (6.3),
Wherein,It is the borderline region ultimately generated, y is the number of ultimate bound zone broadening,It indicatesThe number of pixels possessed, the ratio that borderline region accounts for local mask is bigger, it becomes mould A possibility that pasting area is smaller, and f=1 indicates MobjContain two pieces or two pieces or more effective object parts;F=0 indicates MobjNot yet There are two pieces or two pieces or more effective object parts, ifGreater than one threshold value C, the present invention in C=0.4, this A borderline region is just selected as confusion region, is adhesion and blocks the region being difficult to differentiate between object, if a part Mask does not have confusion region, then it is assumed that is single body;If there is confusion region, as shown in figure 3, then needing Accurate Segmentation 7);
7) Precise segmentation is realized by assigning labels to all pixels of the local mask. As shown in Fig. 4, during the initial labeling, different labels la ∈ {1, 2, 3, …} are assigned to the pixels of different object bodies, and 0 is assigned to the pixels of the fuzzy region and of the invalid parts of M_obj;
8) To precisely label the fuzzy region, the adjacent superpixel set of the fuzzy region is extracted and, as shown in Fig. 5, divided according to the labels of these superpixels into two or more adjacent superpixel sets. For each superpixel in a set, the mean LAB color, the mean depth, and the center coordinates of the superpixel are computed. For any pixel with la = 0, its dissimilarity to a superpixel is calculated by formula (8.1),

d_i = w_lab · d_lab + w_depth · d_depth + w_xy · d_xy,   (8.1)

where d_lab is the Euclidean distance in LAB color space, d_depth is the Euclidean distance in depth, d_xy is the Euclidean distance between coordinates in the image coordinate system, and w_lab, w_depth and w_xy are the weights of the respective distances; in the present invention w_lab = 4, w_depth = 3, w_xy = 3, and i is the index of the superpixel within the set. After the dissimilarity between the pixel and every superpixel in the set is obtained, the dissimilarity between the pixel and the adjacent superpixel set is calculated by formula (8.2),
d = min_{0 < i ≤ n} (d_i),   (8.2)
where i is the index of a superpixel within the set and n is the number of superpixels in the adjacent superpixel set. A smaller d indicates that the pixel is more similar to the adjacent superpixel set, so the label of the most similar superpixel set is assigned to the pixel. After all pixels with la = 0 have been relabeled, the fuzzy region is divided into two pieces, as shown in Fig. 6, and the segmentation of the local mask is complete. The result is checked for isolated points or regions, and the segmentation result is optimized by assigning to them the label possessed by the majority of their neighboring pixels.
In the present embodiment, the fuzzy region extraction method separates local masks that contain adhesion and occlusion into multiple valid object parts, which provides the initial labeling of the local mask. The adjacent superpixels of the fuzzy region are then chosen and, according to their labels after the initial labeling, divided into two or more adjacent superpixel sets; unclassified pixels are relabeled according to their dissimilarity to the adjacent superpixel sets, so as to obtain a high-precision segmentation result.
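Putting the sketches above together, one hypothetical end-to-end pass over an RGB-D frame might be wired as follows; `adjacency` is an assumed helper that returns the superpixel adjacency graph, and none of these function names come from the patent itself.

```python
# Hypothetical wiring of the sketch functions for one RGB-D frame.
fg = foreground_mask(depth, weights, means, stds)
for part in extract_local_masks(fg, rgb, depth_edges):
    sp = superpixels(part["rgb"], part["mask"])
    e_inner, e_p = internal_edges(part["edges"], part["contour"])
    for region in boundary_regions(sp, e_inner):
        fuzzy, m_obj = grow_fuzzy_region(sp, region, adjacency(sp), part["mask"])
        if fuzzy is not None:
            pass  # initial-label the valid parts of m_obj, then relabel_pixel(...)
```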

Claims (1)

1. A solid waste object segmentation method for visual-feature-degraded images, the segmentation method comprising the following steps:
1) A background depth Gaussian mixture model is established from the depth information of a series of background point cloud data. Each pixel in the image is modeled by a Gaussian mixture distribution, and the probability that the depth of a pixel equals d is expressed by formula (1.1),

P(d) = Σ_{j=1}^{K} w_j · η(d; Θ_j),   (1.1)

where w_j is the weight of the j-th Gaussian distribution, Σ_{j=1}^{K} w_j = 1, K is the total number of Gaussian distributions, and η(d; Θ_j) is the j-th Gaussian distribution, expressed by formula (1.2),
η(d; Θ_k) = (1 / ((2π)^{n/2} |Σ_k|^{1/2})) · exp(−½ (d − μ_k)ᵀ Σ_k⁻¹ (d − μ_k)),   (1.2)

where μ_k is the mean of the k-th Gaussian distribution, Σ_k is the covariance matrix of the k-th Gaussian distribution, Σ_k = σ_k²·I, I is the identity matrix, and σ_k is the standard deviation of the k-th Gaussian distribution. The K Gaussian distributions are sorted by w_k/σ_k, and the first B distributions are taken as the background model, where B is obtained by formula (1.3),

B = argmin_b ( Σ_{k=1}^{b} w_k > T ),   (1.3)
where T is the minimum weight threshold. For any pixel in the solid waste point cloud, the background model at the corresponding position is found; if the absolute difference between the pixel's depth value and the mean μ_k of every Gaussian distribution in the background model is greater than a set multiple of the standard deviation σ_k, the pixel is taken as a foreground pixel;
2) The background model is compared with the solid waste point cloud to be processed: in a binary map, positions corresponding to foreground pixels are set to 255 and background pixels are set to 0, giving the foreground mask. Each connected local mask M_local in the foreground mask is extracted, together with the corresponding local RGB image, the outer contour map F_c of the local mask, and the local depth edge map E_m; the segmentation of the whole image is thereby converted into the segmentation of multiple local masks;
3) For each local mask, superpixel segmentation is performed on the corresponding local RGB image to obtain the superpixel set S = {s_1, s_2, s_3, …, s_{n−1}, s_n}, where s_i denotes an individual superpixel and is also a point set, composed of pixels that are similar in multiple features;
4) From the outer contour map F_c of the local mask and the local edge map E_m, the internal edge map E_inner is obtained according to formula (4.1),

E_inner = E_m − F_c ⊕ C_{2r+1},   (4.1)

where F_c ⊕ C_{2r+1} denotes performing a dilation on F_c with a structuring element of size (2r+1); the edge pixel set E_p is extracted by formula (4.2),
E_p = { p(x, y) | E_inner(x, y) = 255 },   (4.2)
where p(x, y) is a pixel satisfying the condition and E_inner(x, y) is the pixel value at row y, column x of the E_inner map;
5) From the superpixel set S and the edge pixel set E_p, the edge superpixel set B_sp is extracted according to formula (5.1),

B_sp = { s_k ∈ S | ∃ p ∈ E_p, p ∈ s_k },   (5.1)

where p is any pixel in the image and s_k is a superpixel satisfying the condition. The adjoining superpixels in B_sp are extracted as adjacent superpixel sets, and each adjacent superpixel set is defined as a boundary region B_region;
6) Based on the boundary region B_region, the fuzzy region is generated by an iteration, as in formula (6.1),

M_obj = M_local − B_region^t,   (6.1)

where B_region^t is the boundary region after t expansions; each expansion of the boundary region is completed by merging adjacent superpixels, and B_region^{t+1} is obtained by the expansion of formula (6.2),

B_region^{t+1} = B_region^t ∪ A(B_region^t),   (6.2)

where A(B_region^t) is the adjoining superpixel set of the boundary region B_region^t. t is initially 0 and increases by 1 at each iteration. After one or more iterations, M_obj may be divided into multiple independent pieces; an independent piece containing at least a set number of superpixels is considered a valid part forming an object, otherwise it is considered an invalid part. When t exceeds a set threshold or M_obj contains two or more mutually independent valid parts, the iteration stops, and the confidence of the boundary region B_region^y is calculated by formula (6.3),
Wherein,It is the borderline region ultimately generated, ft is the number of ultimate bound zone broadening,It indicatesThe number of pixels possessed, the ratio that borderline region accounts for local mask is bigger, it becomes A possibility that confusion region, is smaller, and f=1 indicates MobjContain two pieces or two pieces or more mutually independent live parts;F=0 is indicated MobjWithout two pieces or two pieces or more mutually independent live parts, ifGreater than one threshold value C, this boundary Region is just selected as confusion region, is adhesion and blocks the region being difficult to differentiate between object, if a part mask does not have Confusion region, then it is assumed that be single body;If there is confusion region, then Accurate Segmentation 7) is needed;
7) Precise segmentation is realized by assigning labels to all pixels of the local mask. During the initial labeling, different labels la ∈ {1, 2, 3, …} are assigned to the pixels of different valid parts, and 0 is assigned to the pixels of the fuzzy region and of the invalid parts of M_obj;
8) To precisely label the fuzzy region, the adjacent superpixel set of the fuzzy region is extracted and, according to the labels of these superpixels, divided into two or more adjacent superpixel sets. For each superpixel in a set, the mean LAB color, the mean depth, and the center coordinates of the superpixel are computed. For any pixel with la = 0, its dissimilarity to a superpixel is calculated by formula (8.1),

diff_i = w_lab · d_lab + w_depth · d_depth + w_xy · d_xy,   (8.1)

where d_lab is the Euclidean distance in LAB color space, d_depth is the Euclidean distance in depth, d_xy is the Euclidean distance between coordinates in the image coordinate system, w_lab, w_depth and w_xy are the weights of the respective distances, and i is the index of the superpixel within the set. After the dissimilarity between the pixel and every superpixel in the set is obtained, the dissimilarity between the pixel and the adjacent superpixel set is calculated by formula (8.2),
diff = min_{0 < i ≤ n} (diff_i),   (8.2)
where i is the index of a superpixel within the set and n is the number of superpixels in the adjacent superpixel set. A smaller diff indicates that the pixel is more similar to the adjacent superpixel set, and the label of the most similar superpixel set is assigned to the pixel. After all pixels with la = 0 have been relabeled, the segmentation of the local mask is complete. The result is checked for isolated points or regions, and the segmentation result is optimized by assigning to them the label possessed by the majority of their neighboring pixels.
CN201710559875.9A 2017-07-11 2017-07-11 A solid waste object segmentation method for visual-feature-degraded images Active CN107527350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710559875.9A CN107527350B (en) 2017-07-11 2017-07-11 A solid waste object segmentation method for visual-feature-degraded images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710559875.9A CN107527350B (en) 2017-07-11 2017-07-11 A solid waste object segmentation method for visual-feature-degraded images

Publications (2)

Publication Number Publication Date
CN107527350A CN107527350A (en) 2017-12-29
CN107527350B (en) 2019-11-05

Family

ID=60748294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710559875.9A Active CN107527350B (en) 2017-07-11 2017-07-11 A solid waste object segmentation method for visual-feature-degraded images

Country Status (1)

Country Link
CN (1) CN107527350B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146894A (en) * 2018-08-07 2019-01-04 庄朝尹 A model region segmentation method for three-dimensional modeling
CN109635809B (en) * 2018-11-02 2021-08-17 浙江工业大学 Superpixel segmentation method for visually degraded images
CN109409376B (en) * 2018-11-05 2020-10-30 昆山紫东智能科技有限公司 Image segmentation method for solid waste objects, computer terminal and storage medium
CN110542908B (en) * 2019-09-09 2023-04-25 深圳市海梁科技有限公司 Lidar dynamic object perception method applied to intelligent driving vehicles
CN111105443A (en) * 2019-12-26 2020-05-05 南京邮电大学 Video group figure motion trajectory tracking method based on feature association


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9741131B2 (en) * 2013-07-17 2017-08-22 Siemens Medical Solutions Usa, Inc. Anatomy aware articulated registration for image segmentation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463870A (en) * 2014-12-05 2015-03-25 中国科学院大学 Image salient region detection method
CN105957078A (en) * 2016-04-27 2016-09-21 浙江万里学院 Multi-view video segmentation method based on graph cut
CN106056155A (en) * 2016-05-30 2016-10-26 西安电子科技大学 Superpixel segmentation method based on boundary information fusion
CN106886995A (en) * 2017-01-13 2017-06-23 北京航空航天大学 Image salient object segmentation method based on multilinear example regressor aggregation

Also Published As

Publication number Publication date
CN107527350A (en) 2017-12-29

Similar Documents

Publication Title
CN107527350B (en) A solid waste object segmentation method for visual-feature-degraded images
CN102622769B (en) Multi-target tracking method taking depth as the dominant cue in dynamic scenes
CN103942794B (en) A confidence-based collaborative image matting method
CN102938066A (en) Method for reconstructing building outer contour polygons based on multivariate data
CN105261046B (en) A scene-adaptive tone transfer method
CN104050682A (en) Image segmentation method fusing color and depth information
CN104299263A (en) Method for modeling cloud scenes based on a single image
CN103955945B (en) Adaptive color image segmentation method based on binocular parallax and active contours
CN104517317A (en) Three-dimensional reconstruction method for vehicle-borne infrared images
CN102799646B (en) A semantic object segmentation method for multi-view video
CN106570874A (en) Image labeling method combining local image constraints and global target constraints
CN105374039A (en) Monocular image depth information estimation method based on contour sharpness
CN103093470A (en) Fast multi-modal image cooperative segmentation method with scale-independent features
CN102609950A (en) Two-dimensional video depth map generation process
CN103761734A (en) Binocular stereoscopic video scene fusion method preserving temporal consistency
CN109712143A (en) A fast image segmentation method based on superpixel multi-feature fusion
CN104408733A (en) Object random walk-based visual saliency detection method and system for remote sensing images
CN102129576A (en) Method for extracting the duty ratio parameter of all-sky aurora images
CN102147867A (en) Method for identifying traditional Chinese painting images and calligraphy images based on subject
CN103955942A (en) SVM-based depth map extraction method for 2D images
CN104913784A (en) Method for autonomously extracting navigation features on planetary surfaces
CN101765019A (en) Stereo matching algorithm for images with motion blur and illumination changes
CN101847259B (en) Infrared object segmentation method based on weighted information entropy and Markov random field
CN102903111A (en) Stereo matching algorithm for large low-texture areas based on image segmentation
CN104376312B (en) Face recognition method based on bag-of-words compressed sensing feature extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200611

Address after: Room 1504-2, Dikai International Center, Jianggan District, Hangzhou, Zhejiang Province

Patentee after: HANGZHOU SHISHANG TECHNOLOGY Co.,Ltd.

Address before: No. 18 Chaowang Road, Zhaohui District Six, Hangzhou City, Zhejiang Province, 310014

Patentee before: ZHEJIANG University OF TECHNOLOGY