CN106997591A - A variable-scale supervoxel segmentation method for RGB-D images - Google Patents

A variable-scale supervoxel segmentation method for RGB-D images

Info

Publication number
CN106997591A
CN106997591A (application CN201710168730.6A)
Authority
CN
China
Prior art keywords
point
supervoxel
RGB
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710168730.6A
Other languages
Chinese (zh)
Inventor
袁夏
徐鹏
周宏扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201710168730.6A
Publication of CN106997591A
Current legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a variable-scale supervoxel segmentation method for RGB-D image data. Seed points are selected from the data by Poisson-disk sampling, and the data points are then clustered iteratively according to the color distance and spatial distance between each data point and each seed point, giving an initial supervoxel segmentation result. An undirected graph is built with the initial supervoxels as vertices and the adjacency relations between supervoxels as edges, and the supervoxels are merged with a graph-theoretic method, so that supervoxel segmentation results of different scales are obtained within the same RGB-D image, realizing variable-scale supervoxel segmentation. The method of the invention belongs to data preprocessing: the segmentation yields large-scale supervoxels in regions of high data consistency and small-scale supervoxels in regions of poor data consistency, which better matches the characteristics of human visual cognition.

Description

A variable-scale supervoxel segmentation method for RGB-D images
Technical field
The present invention relates to a supervoxel segmentation method, and in particular to a variable-scale supervoxel segmentation method suitable for RGB-D images.
Background technology
With the progress of sensor technology, the acquisition cost of RGB-D images keeps falling, and how to preprocess RGB-D images more effectively has become an important research topic in computer vision in recent years. To make full use of the three-dimensional geometric information in RGB-D images, over-segmenting an RGB-D image into supervoxels, analogous to the superpixel over-segmentation of two-dimensional images, is an effective preprocessing approach that can markedly reduce the amount of data handled by subsequent algorithms.
With the supervoxel segmentation algorithms commonly used today, once the scale parameter is fixed the resulting supervoxels are fairly uniform in size. If multi-scale analysis is needed afterwards, the supervoxel size has to be controlled by setting different scale factors, so a separate single-scale supervoxel segmentation must be computed for every scale analyzed, which increases the amount of computation. Based on the idea of variable-scale analysis, the present invention segments regions of high data consistency into large-scale supervoxels and regions of poor data consistency into small-scale supervoxels during supervoxel segmentation, so that the RGB-D data are adaptively divided into supervoxels of different scales according to the local data distribution. Subsequent algorithms can then perform variable-scale computation on a single over-segmentation, reducing the amount of computation.
Summary of the invention
The object of the invention is to provide a variable-scale supervoxel segmentation method for RGB-D images that can better perform over-segmentation preprocessing of RGB-D images.
The technical solution that achieves the object of the invention is: a variable-scale supervoxel segmentation method for RGB-D images, comprising the following steps:
Step 1, seed point selection. Let a frame of RGB-D image data be P, with a resolution of m rows by n columns, where each data point contains 3 color channels (r, g, b) and 1 depth channel (d). A point p0 is selected at random from P as the initial seed point, a radius threshold R is set as the minimum distance between the seed points to be generated, and P is sampled with the Poisson-disk sampling algorithm to obtain the seed point set. Seed point selection specifically includes the following steps (a sketch of this sampling procedure follows step 1-3):
Step 1-1, randomly select a point p0 in the frame of RGB-D image data P as the initial seed point, initialize the active sample queue L1 to empty, add p0 to L1, and initialize the inactive sample queue L2 to empty;
Step 1-2, check whether the active sample queue L1 is empty. If L1 is not empty, dequeue a point pi from L1 and randomly select candidate sample points in the annular region centered at pi between radii R and 2R; a candidate whose distance to every existing seed point is greater than R is added to L1. If after K attempts no qualified candidate has been found, pi is removed from L1 and added to L2. Here R is expressed in pixel-coordinate units and K is a preset value;
Step 1-3, the points in L2 are the selected seed points.
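To make steps 1-1 to 1-3 concrete, the following is a minimal Python/NumPy sketch of the Poisson-disk seed sampling described above. The function name poisson_disk_seeds, the brute-force distance check, and the policy of re-queuing a point whose neighborhood is not yet saturated are illustrative assumptions rather than part of the patent text.

```python
import numpy as np

def poisson_disk_seeds(m, n, R, K=30, rng=None):
    """Sample seed pixels on an m x n image so that any two seeds are more
    than R pixels apart (steps 1-1 to 1-3); returns the L2 queue."""
    rng = np.random.default_rng() if rng is None else rng
    seeds = []        # all accepted seed points (row, col)
    active = []       # L1: active sample queue
    inactive = []     # L2: inactive (retired) samples

    p0 = (int(rng.integers(0, m)), int(rng.integers(0, n)))  # initial seed
    seeds.append(p0)
    active.append(p0)

    def far_enough(q):
        # candidate must be strictly more than R pixels from every existing seed
        return all((q[0] - s[0]) ** 2 + (q[1] - s[1]) ** 2 > R * R for s in seeds)

    while active:
        pi = active.pop(0)                    # dequeue a point from L1
        found = False
        for _ in range(K):                    # K candidates in the annulus [R, 2R)
            r = rng.uniform(R, 2 * R)
            a = rng.uniform(0.0, 2 * np.pi)
            q = (int(round(pi[0] + r * np.sin(a))),
                 int(round(pi[1] + r * np.cos(a))))
            if 0 <= q[0] < m and 0 <= q[1] < n and far_enough(q):
                seeds.append(q)
                active.append(q)
                found = True
        if found:
            active.append(pi)                 # pi stays in L1 and is retried later
        else:
            inactive.append(pi)               # after K failures pi is retired to L2
    return inactive                           # L2 holds the selected seed points
```

The brute-force far_enough check costs one pass over all accepted seeds per candidate; a practical implementation would add the usual background grid of cell size R/sqrt(2), but that acceleration is not required by the patent text.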
Step 2, supervoxel pre-segmentation. The color space of each point in the data is converted from RGB to Lab and the depth value d is converted to a three-dimensional space coordinate (x, y, z). With each seed point as a cluster center, the distances between non-seed points and seed points are computed iteratively from the color distance and the three-dimensional spatial distance, and each point is grouped into the class of the seed point nearest to it, giving the initial supervoxel segmentation result. Supervoxel pre-segmentation specifically includes the following steps (a sketch covering steps 2-1 to 2-3 is given after step 2-3):
Step 2-1, convert the depth values of the points in the RGB-D image to three-dimensional space coordinates. Let the point in row i, column j of P be p_ij, with depth value d_ij; the depth value is converted to a three-dimensional coordinate using formula (1):
$p_{ij}(x, y, z)^{T} = d_{ij}\left(\frac{i - c_x}{f},\ \frac{j - c_y}{f},\ 1\right)^{T}$ (1)
where f is the focal length of the camera and (c_x, c_y) is the center coordinate of the image. The RGB color of each point in P is converted to Lab color with the standard color-space conversion formula, and each point in P is then expressed as the 6-dimensional vector [l, a, b, x, y, z];
Step 2-2, use the seed points obtained in step 1 as initial cluster centers to perform a range-limited clustering: for each cluster center, compute the distance between the points in its 2R*2R neighborhood and that center, and assign each non-center point to the cluster center with the smallest feature distance to it, completing the first clustering pass. The feature distance between two points p_i and p_j in P is measured as follows:
$D(p_i, p_j) = d_{lab} + \lambda \frac{d_{xyz}}{R}$ (2)
$d_{lab} = \sqrt{(l_i - l_j)^2 + (a_i - a_j)^2 + (b_i - b_j)^2}$ (3)
$d_{xyz} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}$ (4)
In formula (2), d_lab is the color distance, d_xyz is the spatial distance, and λ is the weight balancing the color information against the spatial distance information: the larger λ is, the more important the spatial distance; the smaller λ is, the more important the color distance;
Step 2-3, iterative clustering. Based on the result of the first clustering pass in step 2-2, recompute the cluster center of each class: the new cluster-center feature value is the mean of the features of all points in the class, and the point in the class whose feature value is closest to this mean becomes the new cluster center point. Each non-center point is then reassigned to a class according to the feature distance and classification rule of step 2-2. The iteration stops after K iterations; when the iterative computation ends, the points in each class form one initial supervoxel.
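The sketch below is a non-authoritative Python/NumPy illustration of steps 2-1 to 2-3: it builds the 6-dimensional features of formula (1) and runs the SLIC-style range-limited clustering of formulas (2)-(4). The use of scikit-image for the standard RGB-to-Lab conversion, the (2R+1) x (2R+1) search window, and the function names are assumptions; pairing the row index i with c_x follows formula (1) literally.

```python
import numpy as np
from skimage.color import rgb2lab   # standard RGB -> Lab conversion

def build_features(rgb, depth, f, cx, cy):
    """Step 2-1: turn an m x n RGB-D frame into per-pixel [l, a, b, x, y, z]."""
    m, n = depth.shape
    lab = rgb2lab(rgb.astype(np.float64) / 255.0)            # (m, n, 3)
    i, j = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    x = depth * (i - cx) / f                                  # formula (1)
    y = depth * (j - cy) / f
    z = depth
    return np.stack([lab[..., 0], lab[..., 1], lab[..., 2], x, y, z], axis=-1)

def presegment(feats, seeds, R, lam=1.0, n_iter=10):
    """Steps 2-2 and 2-3: range-limited clustering around the seed points."""
    m, n, _ = feats.shape
    centers = np.array([feats[r, c] for r, c in seeds])      # (k, 6) feature centers
    pos = np.array(seeds, dtype=int)                          # pixel position of each center
    labels = np.full((m, n), -1, dtype=int)
    dist = np.full((m, n), np.inf)

    for _ in range(n_iter):
        labels[:] = -1
        dist[:] = np.inf
        for k, (r, c) in enumerate(pos):
            r0, r1 = max(r - R, 0), min(r + R + 1, m)
            c0, c1 = max(c - R, 0), min(c + R + 1, n)
            win = feats[r0:r1, c0:c1]
            d_lab = np.linalg.norm(win[..., :3] - centers[k, :3], axis=-1)  # formula (3)
            d_xyz = np.linalg.norm(win[..., 3:] - centers[k, 3:], axis=-1)  # formula (4)
            d = d_lab + lam * d_xyz / R                                     # formula (2)
            better = d < dist[r0:r1, c0:c1]
            dist[r0:r1, c0:c1][better] = d[better]
            labels[r0:r1, c0:c1][better] = k
        # step 2-3: move each center to the mean feature of its class,
        # then snap it to the member pixel closest to that mean
        for k in range(len(centers)):
            rows, cols = np.nonzero(labels == k)
            if rows.size == 0:
                continue
            mean = feats[rows, cols].mean(axis=0)
            centers[k] = mean
            nearest = np.argmin(np.linalg.norm(feats[rows, cols] - mean, axis=1))
            pos[k] = (rows[nearest], cols[nearest])
    return labels
```

Pixels that fall outside every seed's search window keep the label -1 in this sketch; how to handle them (for example by assigning them to the nearest supervoxel afterwards) is left open, as the patent text does not discuss it.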
Step 3, supervoxel fusion. With the initial supervoxels as vertices, edges are established between adjacent supervoxels to construct the undirected graph G = (V, E). The difference between adjacent supervoxels is measured with the Lab color-space distance and the normal-vector angle of the initial supervoxels, and the initial supervoxels are merged following the idea of minimizing intra-class difference while maximizing inter-class difference, giving the variable-scale supervoxels. Supervoxel fusion specifically includes the following steps (a sketch covering steps 3-1 and 3-2 is given after step 3-2):
Step 3-1, let the set of supervoxels obtained by the initial segmentation be C, each supervoxel being c_i. Build the vertex set V by taking each supervoxel c_i as a vertex v_i, and build the edge set E by creating an edge e_ij between adjacent supervoxels v_i, v_j that share a common boundary; the undirected graph G is constructed from V and E. From the three-dimensional coordinates of the points in P, the normal vector of each point is computed with the standard principal-component decomposition of the covariance matrix of the coordinates of its k nearest neighbors, and each edge of G is then assigned a weight w(v_i, v_j) by formula (5):
$w(v_i, v_j) = \alpha \frac{|\bar{l}_i - \bar{l}_j|}{l_{max}} + (1 - \alpha) \frac{\theta_{ij}}{\pi}$ (5)
$\theta_{ij} = \cos^{-1}\left(\frac{\vec{n}_i \cdot \vec{n}_j}{|\vec{n}_i|\,|\vec{n}_j|}\right)$ (6)
where l̄_i and l_max are respectively the mean brightness and the maximum brightness of all points in supervoxel c_i, θ_ij is the angle between the normal vectors of supervoxels c_i and c_j, the normal vector of a supervoxel is taken as the mean of the normal vectors of all its points, and α is a weighting factor;
Step 3-2, merge the initial supervoxels. Before merging, each supervoxel is initialized as a region A_i of its own. With v_k and v_l being two supervoxels in region A_i, the internal dissimilarity is defined as the largest edge of the minimum spanning tree of the region, as shown in formula (7):
Int(A_i) = max w(v_k, v_l), v_k, v_l ∈ A_i, (v_k, v_l) ∈ E (7)
The external dissimilarity between two regions A_i and A_j is defined by formula (8), where v_m is a supervoxel in region A_i and v_n is a supervoxel in region A_j:
Dif(A_i, A_j) = min w(v_m, v_n), v_m ∈ A_i, v_n ∈ A_j, (v_m, v_n) ∈ E (8)
MInt(A_i, A_j), the minimum internal dissimilarity of the two regions A_i and A_j, is then calculated with formula (9):
MInt(A_i, A_j) = min((Int(A_i) + τ(A_i)), (Int(A_j) + τ(A_j))) (9)
where τ(A_i) = e/|A_i|, |A_i| is the number of points contained in region A_i, and e is a preset constant;
The external dissimilarity and the internal dissimilarity of the two regions are then compared; the two regions are merged if formula (10) is satisfied, and otherwise they are not merged:
MInt(A_i, A_j) < Dif(A_i, A_j) (10)
The region-merging process is executed until no regions in P can be merged; the resulting regions are the variable-scale supervoxels.
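As an illustration of steps 3-1 and 3-2, the sketch below estimates a per-point normal by principal-component decomposition, evaluates the edge weight of formulas (5)-(6), and applies the merging criterion of formulas (7)-(10), which is essentially the Felzenszwalb-Huttenlocher graph-based criterion applied to the supervoxel adjacency graph. The union-find bookkeeping, the function names, and the assumption that the caller supplies the k-nearest-neighbor coordinates and the weighted edge list are illustrative choices, not requirements of the patent.

```python
import numpy as np

def point_normal(neigh_xyz):
    """Normal of a point from the coordinates of its k nearest neighbours:
    the eigenvector of the coordinate covariance matrix with the smallest
    eigenvalue (standard principal-component decomposition, step 3-1)."""
    centered = neigh_xyz - neigh_xyz.mean(axis=0)
    cov = centered.T @ centered / len(neigh_xyz)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return eigvecs[:, 0]

def edge_weight(l_mean_i, l_mean_j, l_max, n_i, n_j, alpha=0.5):
    """Edge weight between two adjacent supervoxels, formulas (5) and (6)."""
    cos_t = np.dot(n_i, n_j) / (np.linalg.norm(n_i) * np.linalg.norm(n_j))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))  # formula (6)
    return alpha * abs(l_mean_i - l_mean_j) / l_max + (1 - alpha) * theta / np.pi

def merge_supervoxels(point_counts, edges, e_const):
    """Step 3-2: merge regions while formula (10) holds.  point_counts[i] is the
    number of image points in initial supervoxel i; edges is a list of
    (weight, i, j) tuples whose weights come from formula (5)."""
    n = len(point_counts)
    parent = list(range(n))
    size = [int(c) for c in point_counts]   # |A|: number of points in the region
    internal = [0.0] * n                    # Int(A): largest MST edge inside the region

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for w, i, j in sorted(edges):           # cheapest edges first (Kruskal order)
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        # Dif(A_i, A_j) equals w here, because edges are visited in increasing
        # weight order; MInt follows formula (9) with tau(A) = e_const / |A|
        mint = min(internal[ri] + e_const / size[ri],
                   internal[rj] + e_const / size[rj])
        if w < mint:                        # merging criterion, formula (10)
            parent[rj] = ri
            size[ri] += size[rj]
            internal[ri] = w                # w is now the largest MST edge of the union
    return [find(v) for v in range(n)]      # merged region label for each supervoxel
```

In this reading, the constant e controls how far merging goes: τ(A) = e/|A| is large for small regions, so early merges pass easily, and as regions grow the criterion of formula (10) tightens and merging stops where the data are no longer consistent.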
Compared with the prior art, the remarkable advantages of the present invention are: (1) the invention selects initial seed points with the Poisson-disk sampling method, which, compared with uniform grid sampling, better matches the distribution of cone cells on the retina and helps to sample unevenly distributed three-dimensional point sets uniformly; (2) the invention fuses depth values, normal vectors, and other geometric information on top of color, making it better suited to RGB-D data; (3) the method of the invention obtains supervoxels of different scales on the same RGB-D frame in a single computation, which is clearly different from over-segmentation methods that can only obtain supervoxels of a roughly uniform scale on the same frame per computation, and is therefore more beneficial for reducing the computation of subsequent algorithms and for multi-scale analysis.
The present invention will be further described below in conjunction with the accompanying drawings.
Brief description of the drawings
Fig. 1 shows the pre-segmentation result of the algorithm of the present invention and the variable-scale supervoxel segmentation result after merging.
Fig. 2 is the flow chart of the variable-scale supervoxel segmentation method for RGB-D images of the present invention.
Embodiment
The variable-scale supervoxel segmentation method for RGB-D images of the present invention comprises the following steps:
Step 1, seed point selection. Let a frame of RGB-D image data be P, with a resolution of m rows by n columns, where the information of each data point contains 3 color channels (r, g, b) and 1 depth channel (d). A point p0 is selected at random from P as the initial seed point, a radius threshold R is set as the minimum distance between the seed points to be generated, and P is sampled with the Poisson-disk sampling algorithm to obtain the seed point set. This specifically includes the following steps:
Step 1-1, randomly select a point p0 in P as the initial seed point, initialize the active sample queue L1 to empty, add p0 to L1, and initialize the inactive sample queue L2 to empty;
Step 1-2, if L1 is not empty, dequeue a point pi from L1 and randomly select candidate sample points in the annular region centered at pi between radii R and 2R (R in pixel-coordinate units); a candidate whose distance to every existing seed point is greater than R is added to L1. If after K attempts no qualified candidate has been found, pi is removed from L1 and added to L2;
Step 1-3, the points in L2 are the selected seed points.
Step 2, supervoxel pre-segmentation. The color space of each point in P is converted from RGB to Lab with the standard color-space conversion method and the depth value d is converted to a three-dimensional space coordinate (x, y, z). With each seed point as a cluster center, the distances between non-seed points and seed points are computed iteratively from the color distance and the three-dimensional spatial distance, giving the initial supervoxel segmentation result. This specifically includes the following steps:
Step 2-1, convert the depth values of the points in the RGB-D image to three-dimensional space coordinates. Let the point in row i, column j of P be p_ij, with depth value d_ij; the depth value is converted to a three-dimensional coordinate using formula (1), where f is the focal length of the camera and (c_x, c_y) is the center coordinate of the image. The RGB color of each point in P is converted to Lab color with the standard color-space conversion formula. After the color-space and coordinate-space conversions, each point in P is expressed as the 6-dimensional vector [l, a, b, x, y, z];
Step 2-2, use each seed point obtained in step 1 as an initial cluster center to perform a range-limited clustering: for each cluster center, compute the distance between the points in its 2R*2R neighborhood and that center, and assign each non-center point to the cluster center with the smallest feature distance to it, completing the first clustering pass. The feature distance between two points p_i and p_j in P is measured by formulas (2)-(4). In formula (2), d_lab is the color distance, d_xyz is the spatial distance, and λ is the weight balancing the color information against the spatial distance information: the larger λ is, the more important the spatial distance; the smaller λ is, the more important the color distance. λ is set according to the specific application requirements.
Step 2-3, iterative clustering. Based on the result of the first clustering pass in step 2-2, recompute the cluster center of each class: the new cluster-center feature value is the mean of the features of all points in the class, and the point in the class whose feature value is closest to this mean becomes the new cluster center point. Each non-center point is then reassigned to a class according to the feature distance and classification rule of step 2-2. The iteration stops after K iterations; when the iterative computation ends, the points in each class form one initial supervoxel.
Step 3, supervoxel fusion. With the initial supervoxels as vertices and the adjacency relations between supervoxels as edges, the undirected graph G = (V, E) is constructed. The difference between adjacent supervoxels is measured with the Lab color-space distance and the normal-vector angle of the initial supervoxels, and the initial supervoxels are merged following the idea of minimizing intra-class difference while maximizing inter-class difference, giving the variable-scale supervoxels. This specifically includes the following steps:
Step 3-1, let the set of supervoxels obtained by the initial segmentation be C, each supervoxel being c_i. Build the vertex set V by taking each supervoxel c_i as a vertex v_i, and build the edge set E by creating an edge e_ij between adjacent supervoxels v_i, v_j that share a common boundary; the undirected graph G is constructed from V and E. From the three-dimensional coordinates of the points in P, the normal vector of each point is computed with the standard principal-component decomposition of the covariance matrix of the coordinates of its k nearest neighbors, and each edge of G is then assigned a weight w(v_i, v_j) by formula (5), where l̄_i and l_max are respectively the mean brightness and the maximum brightness of all points in supervoxel c_i, θ_ij is the angle between the normal vectors of supervoxels c_i and c_j, the normal vector of a supervoxel is taken as the mean of the normal vectors of all its points, and α is a weighting factor.
Step 3-2, merge the initial supervoxels. Before merging, each supervoxel is initialized as a region A_i of its own. With v_k and v_l being two supervoxels in region A_i, the internal dissimilarity is defined as the largest edge of the minimum spanning tree of the region, as shown in formula (7):
Int(A_i) = max w(v_k, v_l), v_k, v_l ∈ A_i, (v_k, v_l) ∈ E (7)
The external dissimilarity between two regions A_i and A_j is defined by formula (8), where v_m is a supervoxel in region A_i and v_n is a supervoxel in region A_j:
Dif(A_i, A_j) = min w(v_m, v_n), v_m ∈ A_i, v_n ∈ A_j, (v_m, v_n) ∈ E (8)
MInt(A_i, A_j), the minimum internal dissimilarity of the two regions A_i and A_j, is then calculated with formula (9):
MInt(A_i, A_j) = min((Int(A_i) + τ(A_i)), (Int(A_j) + τ(A_j))) (9)
where τ(A_i) = e/|A_i|, |A_i| is the number of points contained in region A_i, and e is a preset constant;
The external dissimilarity and the internal dissimilarity of the two regions are then compared; the two regions are merged if formula (10) is satisfied, and otherwise they are not merged:
MInt(A_i, A_j) < Dif(A_i, A_j) (10)
The region-merging process is executed until no regions in P can be merged; the resulting regions are the variable-scale supervoxels.
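The two steps above leave the actual construction of G = (V, E) from the pre-segmentation label image implicit. The following hypothetical sketch derives the supervoxel adjacency, the per-supervoxel mean brightness and mean normal, and the weighted edge list, reusing the edge_weight helper sketched earlier; the choice of 4-neighborhood pixel adjacency and of the global maximum brightness for l_max are interpretations of the text, not statements of the patent.

```python
import numpy as np

def build_supervoxel_graph(labels, feats, normals, alpha=0.5):
    """Derive vertices, point counts and weighted edges of G = (V, E) from the
    pre-segmentation label image (assumes every pixel carries a valid label)."""
    n_sv = int(labels.max()) + 1
    point_counts = np.bincount(labels.ravel(), minlength=n_sv)

    # per-supervoxel mean brightness (Lab l channel) and mean normal vector
    l_mean = np.zeros(n_sv)
    n_mean = np.zeros((n_sv, 3))
    for k in range(n_sv):
        mask = labels == k
        if mask.any():
            l_mean[k] = feats[mask][:, 0].mean()
            n_mean[k] = normals[mask].mean(axis=0)
    l_max = feats[..., 0].max()       # interpretation: global maximum brightness

    # adjacency: supervoxels whose pixels touch horizontally or vertically
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        touch = a != b
        lo = np.minimum(a, b)[touch]
        hi = np.maximum(a, b)[touch]
        pairs.update(zip(lo.tolist(), hi.tolist()))

    edges = [(edge_weight(l_mean[i], l_mean[j], l_max, n_mean[i], n_mean[j], alpha), i, j)
             for i, j in pairs]
    return point_counts, edges
```

The returned point_counts and edges can then be passed to merge_supervoxels(point_counts, edges, e) to carry out step 3-2.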
The invention selects initial seed points with the Poisson-disk sampling method, which, compared with the uniform grid sampling method, better matches the distribution of cone cells on the retina and helps to sample unevenly distributed three-dimensional point sets uniformly.
The present invention will be further described below with reference to a specific embodiment.
Embodiment 1
Fig. 1 is a schematic diagram of the process and result of the pre-segmentation and variable-scale segmentation performed by the present invention on one frame of RGB-D image acquired by a Kinect depth camera.
One frame of RGB-D image is segmented below with the variable-scale supervoxel over-segmentation method of the present invention, comprising the following steps:
Step 1, seed point selection. The frame of RGB-D image data is P, with a resolution of 480 rows by 640 columns, where each data point contains 3 color channels (r, g, b) and 1 depth channel (d). A point p0 is selected at random from P as the initial seed point, the radius threshold R = 20 is set as the minimum distance between the seed points to be generated, and P is sampled with the Poisson-disk sampling algorithm to obtain the seed point set. This specifically includes the following steps:
Step 1-1, randomly select a point p0 in P as the initial seed point, initialize the active sample queue L1 to empty, add p0 to L1, and initialize the inactive sample queue L2 to empty;
Step 1-2, if L1 is not empty, dequeue a point pi from L1 and randomly select candidate sample points in the annular region centered at pi between radii R and 2R; a candidate whose distance to every existing seed point is greater than R is added to L1. If after K = 30 attempts no qualified candidate has been found, pi is removed from L1 and added to L2;
Step 1-3, the points in L2 are the selected seed points.
Step 2, supervoxel pre-segmentation. The color space of each point in P is converted from RGB to Lab with the standard color-space conversion formula and the depth value d is converted to a three-dimensional space coordinate (x, y, z). With each seed point as a cluster center, the distances between non-seed points and seed points are computed iteratively from the color distance and the three-dimensional spatial distance, giving the initial supervoxel segmentation result. This specifically includes the following steps:
Step 2-1, convert the depth values of the points in the RGB-D image to three-dimensional space coordinates. Let the point in row i, column j of P be p_ij, with depth value d_ij; the depth value is converted to a three-dimensional coordinate using formula (1).
According to the development handbook of the Kinect depth camera, f = 570.3, c_x = 320, c_y = 240. The RGB color of each point in P is converted to Lab color with the standard color-space conversion formula. After the color-space and coordinate-space conversions, each point in P is expressed as the 6-dimensional vector [l, a, b, x, y, z];
Step 2-2, use the seed points obtained in step 1 as initial cluster centers to perform a range-limited clustering: for each cluster center, compute the distance between the points in its 2R*2R neighborhood and that center, and assign each non-center point to the cluster center with the smallest feature distance to it, completing the first clustering pass. The feature distance between two points p_i and p_j in P is measured by formulas (2)-(4).
In formula (2), d_lab is the color distance, d_xyz is the spatial distance, and λ = 1.
Step 2-3, iterative clustering. Based on the result of the first clustering pass in step 2-2, recompute the cluster center of each class: the new cluster-center feature value is the mean of the features of all points in the class, and the point in the class whose feature value is closest to this mean becomes the new cluster center point. Each non-center point is then reassigned to a class according to the feature distance and classification rule of step 2-2. The iteration stops after K = 10 iterations; when the iterative computation ends, the points in each class form one initial supervoxel, as shown by the pre-segmentation result in Fig. 1.
Step 3, supervoxel fusion. With the initial supervoxels as vertices and the adjacency relations between supervoxels as edges, the undirected graph G = (V, E) is constructed. The difference between adjacent supervoxels is measured with the Lab color-space distance and the normal-vector angle of the initial supervoxels, and the initial supervoxels are merged following the idea of minimizing intra-class difference while maximizing inter-class difference, giving the variable-scale supervoxels. This specifically includes the following steps:
Step 3-1, let the set of supervoxels obtained by the initial segmentation be C, each supervoxel being c_i. Build the vertex set V by taking each supervoxel c_i as a vertex v_i, and build the edge set E by creating an edge e_ij between adjacent supervoxels v_i, v_j that share a common boundary; the undirected graph G is constructed from V and E. From the three-dimensional coordinates of the points in P, the normal vector of each point is computed with the standard principal-component decomposition of the covariance matrix of the coordinates of its k nearest neighbors, and each edge of G is then assigned a weight w(v_i, v_j) by formula (5), where l̄_i and l_max are respectively the mean brightness and the maximum brightness of all points in supervoxel c_i, θ_ij is the angle between the normal vectors of supervoxels c_i and c_j, the normal vector of a supervoxel is taken as the mean of the normal vectors of all its points, and α = 0.5.
Step 3-2, merge the initial supervoxels. Before merging, each supervoxel is initialized as a region A_i of its own. With v_k and v_l being two supervoxels in region A_i, the internal dissimilarity is defined as the largest edge of the minimum spanning tree of the region, as shown in formula (7):
Int(A_i) = max w(v_k, v_l), v_k, v_l ∈ A_i, (v_k, v_l) ∈ E (7)
The external dissimilarity between two regions A_i and A_j is defined by formula (8), where v_m is a supervoxel in region A_i and v_n is a supervoxel in region A_j:
Dif(A_i, A_j) = min w(v_m, v_n), v_m ∈ A_i, v_n ∈ A_j, (v_m, v_n) ∈ E (8)
MInt(A_i, A_j), the minimum internal dissimilarity of the two regions A_i and A_j, is then calculated with formula (9):
MInt(A_i, A_j) = min((Int(A_i) + τ(A_i)), (Int(A_j) + τ(A_j))) (9)
where τ(A_i) = e/|A_i|, |A_i| is the number of points contained in region A_i, and e = 100.
The external dissimilarity and the internal dissimilarity of the two regions are then compared; the two regions are merged if formula (10) is satisfied, and otherwise they are not merged:
MInt(A_i, A_j) < Dif(A_i, A_j) (10)
The region-merging process is executed until no regions in P can be merged; each region finally obtained after merging is one of the variable-scale supervoxels produced by the segmentation, as shown by the variable-scale supervoxel segmentation result in Fig. 1.
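Purely for illustration, the sketch below strings together the hypothetical helpers from the earlier sketches with the concrete parameters of this embodiment; none of the function names come from the patent, and the per-point normal map feeding build_supervoxel_graph is assumed to have been computed beforehand (for example with point_normal over k-nearest-neighbor sets).

```python
def variable_scale_supervoxels(rgb, depth, normals):
    """Hypothetical end-to-end driver using the parameters of Embodiment 1."""
    m, n = 480, 640                        # frame resolution of the embodiment
    f, cx, cy = 570.3, 320.0, 240.0        # Kinect intrinsics from the handbook
    R, K = 20, 30                          # Poisson-disk radius and attempt count
    lam, n_iter = 1.0, 10                  # formula (2) weight, clustering iterations
    alpha, e = 0.5, 100.0                  # edge-weight factor, merge constant

    seeds = poisson_disk_seeds(m, n, R, K)                                 # step 1
    feats = build_features(rgb, depth, f, cx, cy)                          # step 2-1
    labels = presegment(feats, seeds, R, lam, n_iter)                      # steps 2-2, 2-3
    counts, edges = build_supervoxel_graph(labels, feats, normals, alpha)  # step 3-1
    regions = merge_supervoxels(counts, edges, e)                          # step 3-2
    return labels, regions    # pre-segmentation labels and merged region per supervoxel
```

In this configuration the pre-segmentation produces supervoxels of roughly uniform scale set by R, and the merge step is where the scale becomes variable: regions of consistent color and surface orientation keep merging into large supervoxels, while inconsistent regions stop early and stay small.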
The method of the invention obtains supervoxels of different scales by segmenting the same frame of RGB-D data in a single computation, which is clearly different from over-segmentation methods that can only obtain supervoxels of a roughly uniform scale on the same frame per computation, and is therefore more beneficial for reducing the computation of subsequent algorithms and for multi-scale analysis.

Claims (4)

1. A variable-scale supervoxel segmentation method for RGB-D images, characterized by comprising the following steps:
Step 1, seed point selection: let a frame of RGB-D image data be P, with a resolution of m rows by n columns, each data point containing 3 color channels (r, g, b) and 1 depth channel (d); randomly select a point p0 (the numeral 0 being a subscript) from P as the initial seed point, set a radius threshold R as the minimum distance between the seed points to be generated, and sample P with the Poisson-disk sampling algorithm to obtain the seed point set;
Step 2, pre-segment into supervoxels: convert the color space of each point in the data from RGB to Lab and convert the depth value d to a three-dimensional space coordinate (x, y, z); then, with each seed point as a cluster center, iteratively compute the distances between non-seed points and seed points from the color distance and the three-dimensional spatial distance, obtaining the initial supervoxel segmentation result;
Step 3, fuse the supervoxels: with the initial supervoxels as vertices and the adjacency relations between supervoxels as edges, construct the undirected graph G = (V, E); measure the difference between adjacent supervoxels with the Lab color-space distance and the normal-vector angle of the initial supervoxels, and obtain the variable-scale supervoxels by merging the initial supervoxels, completing the variable-scale supervoxel segmentation of the RGB-D image.
2. The variable-scale supervoxel segmentation method for RGB-D images according to claim 1, characterized in that the seed point selection in step 1 specifically includes the following steps:
Step 1-1, randomly select a point p0 in the frame of RGB-D image data P as the initial seed point, initialize the active sample queue L1 to empty, add p0 to L1, and initialize the inactive sample queue L2 to empty;
Step 1-2, check whether the active sample queue L1 is empty; if L1 is not empty, dequeue a point pi from L1 and randomly select candidate sample points in the annular region centered at pi between radii R and 2R; a candidate whose distance to every existing seed point is greater than R is added to L1; if after K attempts no qualified candidate has been found, pi is removed from L1 and added to L2; here R is expressed in pixel-coordinate units and K is a preset value;
Step 1-3, the points in L2 are the selected seed points.
3. The variable-scale supervoxel segmentation method for RGB-D images according to claim 1 or 2, characterized in that the supervoxel pre-segmentation in step 2 specifically includes the following steps:
Step 2-1, convert the depth values of the points in the RGB-D image to three-dimensional space coordinates: let the point in row i, column j of P be p_ij, with depth value d_ij; the depth value is converted to a three-dimensional coordinate using formula (1):
$p_{ij}(x, y, z)^{T} = d_{ij}\left(\frac{i - c_x}{f},\ \frac{j - c_y}{f},\ 1\right)^{T}$ (1)
where f is the focal length of the camera and (c_x, c_y) is the center coordinate of the image; convert the RGB color of each point in P to Lab color with the standard color-space conversion formula, and express each point in P as the 6-dimensional vector [l, a, b, x, y, z];
Step 2-2, use the seed points obtained in step 1 as initial cluster centers to perform a range-limited clustering: for each cluster center, compute the distance between the points in its 2R*2R neighborhood and that center, and assign each non-center point to the cluster center with the smallest feature distance to it, completing the first clustering pass; the feature distance between two points p_i and p_j in P is measured as follows:
$D(p_i, p_j) = d_{lab} + \lambda \frac{d_{xyz}}{R}$ (2)
$d_{lab} = \sqrt{(l_i - l_j)^2 + (a_i - a_j)^2 + (b_i - b_j)^2}$ (3)
$d_{xyz} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}$ (4)
In formula (2), d_lab is the color distance, d_xyz is the spatial distance, and λ is the weight balancing the color information against the spatial distance information: the larger λ is, the more important the spatial distance; the smaller λ is, the more important the color distance;
Step 2-3, iterative clustering: based on the result of the first clustering pass in step 2-2, recompute the cluster center of each class; the new cluster-center feature value is the mean of the features of all points in the class, and the point in the class whose feature value is closest to this mean becomes the new cluster center point; each non-center point is then reassigned to a class according to the feature distance and classification rule of step 2-2; the iteration stops after K iterations; when the iterative computation ends, the points in each class form one initial supervoxel.
4. The variable-scale supervoxel segmentation method for RGB-D images according to claim 1, characterized in that the supervoxel fusion in step 3 specifically includes the following steps:
Step 3-1, let the set of supervoxels obtained by the initial segmentation be C, each supervoxel being c_i; build the vertex set V by taking each supervoxel c_i as a vertex v_i, and build the edge set E by creating an edge e_ij between adjacent supervoxels v_i, v_j that share a common boundary; construct the undirected graph G from V and E; from the three-dimensional coordinates of the points in P, compute the normal vector of each point with the standard principal-component decomposition of the covariance matrix of the coordinates of its k nearest neighbors, and then assign each edge of G a weight w(v_i, v_j) by formula (5):
$w(v_i, v_j) = \alpha \frac{|\bar{l}_i - \bar{l}_j|}{l_{max}} + (1 - \alpha) \frac{\theta_{ij}}{\pi}$ (5)
$\theta_{ij} = \cos^{-1}\left(\frac{\vec{n}_i \cdot \vec{n}_j}{|\vec{n}_i|\,|\vec{n}_j|}\right)$ (6)
where l̄_i and l_max are respectively the mean brightness and the maximum brightness of all points in supervoxel c_i, θ_ij is the angle between the normal vectors of supervoxels c_i and c_j, the normal vector of a supervoxel is taken as the mean of the normal vectors of all its points, and α is a weighting factor;
Step 3-2, merge the initial supervoxels: before merging, initialize each supervoxel as a region A_i of its own; with v_k and v_l being two supervoxels in region A_i, the internal dissimilarity is defined as the largest edge of the minimum spanning tree of the region, as shown in formula (7)
Int(A_i) = max w(v_k, v_l), v_k, v_l ∈ A_i, (v_k, v_l) ∈ E (7)
The external dissimilarity between two regions A_i and A_j is defined by formula (8), where v_m is a supervoxel in region A_i and v_n is a supervoxel in region A_j
Dif(A_i, A_j) = min w(v_m, v_n), v_m ∈ A_i, v_n ∈ A_j, (v_m, v_n) ∈ E (8)
MInt(A_i, A_j), the minimum internal dissimilarity of the two regions A_i and A_j, is then calculated with formula (9)
MInt(A_i, A_j) = min((Int(A_i) + τ(A_i)), (Int(A_j) + τ(A_j))) (9)
where τ(A_i) = e/|A_i|, |A_i| is the number of points contained in region A_i, and e is a preset constant;
The external dissimilarity and the internal dissimilarity of the two regions are then compared; the two regions are merged if formula (10) is satisfied, and otherwise they are not merged
MInt(A_i, A_j) < Dif(A_i, A_j) (10)
The region-merging process is executed until no regions in P can be merged, obtaining the variable-scale supervoxels.
CN201710168730.6A 2017-03-21 2017-03-21 A variable-scale supervoxel segmentation method for RGB-D images Pending CN106997591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710168730.6A CN106997591A (en) 2017-03-21 2017-03-21 A variable-scale supervoxel segmentation method for RGB-D images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710168730.6A CN106997591A (en) 2017-03-21 2017-03-21 A variable-scale supervoxel segmentation method for RGB-D images

Publications (1)

Publication Number Publication Date
CN106997591A true CN106997591A (en) 2017-08-01

Family

ID=59431712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710168730.6A Pending CN106997591A (en) 2017-03-21 2017-03-21 A variable-scale supervoxel segmentation method for RGB-D images

Country Status (1)

Country Link
CN (1) CN106997591A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877128A (en) * 2009-12-23 2010-11-03 中国科学院自动化研究所 Method for segmenting different objects in three-dimensional scene
CN104616289A (en) * 2014-12-19 2015-05-13 西安华海盈泰医疗信息技术有限公司 Removal method and system for bone tissue in 3D CT (Three Dimensional Computed Tomography) image
CN105719295A (en) * 2016-01-21 2016-06-29 浙江大学 Intracranial hemorrhage area segmentation method based on three-dimensional super voxel and system thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PENG XU et al.: "Scale adaptive supervoxel segmentation of RGB-D image", 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146894A (en) * 2018-08-07 2019-01-04 庄朝尹 A kind of model area dividing method of three-dimensional modeling
CN109257543A (en) * 2018-11-29 2019-01-22 维沃移动通信有限公司 Screening-mode control method and mobile terminal
CN111488769A (en) * 2019-01-28 2020-08-04 北京工商大学 Unsupervised fusion point cloud superpixelization method based on light spot divergence size
CN109917419A (en) * 2019-04-12 2019-06-21 中山大学 A kind of depth fill-in congestion system and method based on laser radar and image
CN109917419B (en) * 2019-04-12 2021-04-13 中山大学 Depth filling dense system and method based on laser radar and image
CN110610505A (en) * 2019-09-25 2019-12-24 中科新松有限公司 Image segmentation method fusing depth and color information
CN110852949A (en) * 2019-11-07 2020-02-28 上海眼控科技股份有限公司 Point cloud data completion method and device, computer equipment and storage medium
CN112580641A (en) * 2020-11-23 2021-03-30 上海明略人工智能(集团)有限公司 Image feature extraction method and device, storage medium and electronic equipment
CN112580641B (en) * 2020-11-23 2024-06-04 上海明略人工智能(集团)有限公司 Image feature extraction method and device, storage medium and electronic equipment
CN112348816A (en) * 2021-01-07 2021-02-09 北京明略软件***有限公司 Brain magnetic resonance image segmentation method, storage medium, and electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170801)