CN106469446A - Depth image segmentation method and segmentation device - Google Patents

Info

Publication number
CN106469446A
CN106469446A (application CN201510520359.6A)
Authority
CN
China
Prior art keywords
area
segmentation target
pixel
selected region
selected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510520359.6A
Other languages
Chinese (zh)
Other versions
CN106469446B (en)
Inventor
吴小勇
刘洁
王维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201510520359.6A priority Critical patent/CN106469446B/en
Publication of CN106469446A publication Critical patent/CN106469446A/en
Application granted granted Critical
Publication of CN106469446B publication Critical patent/CN106469446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a depth image segmentation method and a segmentation device. The method includes: obtaining the depth values of the pixels in a first selected region of the depth image; estimating the depth value range of a segmentation target in the first selected region; determining the pixels in the first selected region whose depth values fall within the depth value range, and forming a depth-value-derived region from the determined pixels; and segmenting the segmentation target in the first selected region according to the depth-value-derived region, obtaining the segmentation target in the first selected region. With the depth image segmentation method and segmentation device provided by the disclosure, the desired object can be segmented accurately when the depth difference between the segmentation target and the background is large.

Description

Depth image segmentation method and segmentation device
Technical field
The present disclosure relates to the field of computer vision and, more particularly, to a depth image segmentation method and segmentation device.
Background
Image segmentation refers to the process of dividing a digital image into multiple image regions (sets of pixels). Traditional image segmentation methods operate on the pixel values in the image. They therefore perform well where the color difference between the object to be segmented and the background is large, but poorly where that color difference is small.
Summary
To overcome the problems in the related art, the present disclosure provides a depth image segmentation method and segmentation device.
The inventors observed that some image data now includes the depth information of the image. When the depth difference between the segmentation target and the background is large, performing image segmentation with the depth information can overcome the blurred boundaries produced by pixel-value-based segmentation in the related art, making the segmentation result more accurate.
According to a first aspect of the embodiments of the present disclosure, a depth image segmentation method is provided. The method includes: obtaining the depth values of the pixels in a first selected region of the depth image; estimating the depth value range of a segmentation target in the first selected region; determining the pixels in the first selected region whose depth values fall within the depth value range, and forming a depth-value-derived region from the determined pixels; and segmenting the segmentation target in the first selected region according to the depth-value-derived region, obtaining the segmentation target in the first selected region.
According to a second aspect of the embodiments of the present disclosure, a depth image segmentation device is provided. The device includes: a depth value acquisition module configured to obtain the depth values of the pixels in a first selected region of the depth image; a depth value range estimation module configured to estimate the depth value range of a segmentation target in the first selected region; a depth-value-derived region determination module configured to determine the pixels in the first selected region whose depth values fall within the depth value range and to form a depth-value-derived region from the determined pixels; and a first segmentation module configured to segment the segmentation target in the first selected region according to the depth-value-derived region, obtaining the segmentation target in the first selected region.
According to a third aspect of the embodiments of the present disclosure, a depth image segmentation device is provided. The device includes: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: obtain the depth values of the pixels in a first selected region of the depth image; estimate the depth value range of a segmentation target in the first selected region; determine the pixels in the first selected region whose depth values fall within the depth value range, and form a depth-value-derived region from the determined pixels; and segment the segmentation target in the first selected region according to the depth-value-derived region, obtaining the segmentation target in the first selected region.
The technical solutions provided by the embodiments of the disclosure may include the following beneficial effects:
The desired object is segmented from the image according to the depth information acquired during shooting, so the segmented image neither loses large blocks of useful information nor contains large blocks of redundancy, and the segmentation edges are finer. The depth image segmentation method and segmentation device provided by the disclosure can therefore accurately segment the desired target when the depth difference between the segmentation target and the background is large.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an image to be segmented, according to an exemplary embodiment;
Fig. 2 is a flowchart of a depth image segmentation method, according to an exemplary embodiment;
Fig. 3 is a schematic diagram of a first selected region of Fig. 1, according to an exemplary embodiment;
Fig. 4 is a flowchart of estimating the depth value range of the segmentation target in the first selected region, according to an exemplary embodiment;
Fig. 5 is a histogram of the distribution of the depth values of all pixels in Fig. 1;
Fig. 6 is a flowchart of estimating the depth value range of the segmentation target in the first selected region, according to another exemplary embodiment;
Fig. 7 is a schematic diagram of a reference region, according to an exemplary embodiment;
Fig. 8 is an interface diagram of a dialog box for estimating the depth value range of the segmentation target in the first selected region, according to an exemplary embodiment;
Fig. 9 is a flowchart of a depth image segmentation method, according to another exemplary embodiment;
Fig. 10 is a flowchart of a depth image segmentation method, according to a further exemplary embodiment;
Fig. 11 is a schematic diagram of a segmentation target formed in a depth image, according to an exemplary embodiment;
Fig. 12 is a schematic diagram of a segmentation target formed in a depth image, according to another exemplary embodiment;
Fig. 13 is a block diagram of a depth image segmentation device, according to an exemplary embodiment;
Fig. 14 is a block diagram of a depth value range estimation module, according to an exemplary embodiment;
Fig. 15 is a block diagram of a depth value range estimation module, according to another exemplary embodiment;
Fig. 16 is a block diagram of a depth image segmentation device, according to another exemplary embodiment;
Fig. 17 is a block diagram of a depth image segmentation device, according to another exemplary embodiment; and
Fig. 18 is a block diagram of a depth image segmentation device, according to an exemplary embodiment.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure, as detailed in the appended claims.
Fig. 1 is a schematic diagram of an image to be segmented, according to an exemplary embodiment. The image shown in Fig. 1 captures a cat walking on a roof, with blue sky and white clouds as the background. The task is to segment the cat from the whole image; that is, the segmentation target in Fig. 1 is the cat. If the cat is black or yellow, the related art segments it reasonably well. If the cat is white, however, the segmentation target (the white cat) and the background (the white clouds) share the same color, so segmentation by the related art produces blurred boundaries and a poor result.
So, when the colors of the segmentation target and the background are close, how can the two be distinguished to improve the segmentation? The inventors propose distinguishing them by means of the depth information in the image.
In the related art of computer vision, "motion-sensing cameras" (also called depth cameras) for motion capture have been developed. When shooting, such a camera can obtain information representing the distance between each point in the captured scene and the camera; examples include the ASUS Xtion PRO camera and the motion-sensing camera Microsoft applies in its Xbox ONE game console. In an image shot by such a camera (referred to as a depth image), each pixel carries not only a pixel value but also depth information. The depth information of a pixel represents the distance, at the time the image was shot, between that pixel's position in the scene and the camera.
That is, if the image in Fig. 1 is a depth image shot by a depth camera, the white cat and the white clouds can be distinguished by the difference in their depth information, achieving a better segmentation result. Based on this inventive concept, the inventors provide the depth image segmentation method and segmentation device described in this disclosure, which are detailed below.
Fig. 2 is a flowchart of a depth image segmentation method, according to an exemplary embodiment. As shown in Fig. 2, the method includes the following steps.
In step S11, the depth values of the pixels in a first selected region of the depth image are obtained.
First, a first selected region is chosen in the depth image. Its extent can be determined according to the circumstances: it can be the entire depth image or a part of it. When the approximate extent of the segmentation target in the depth image can be determined, a part of the depth image can be chosen as the first selected region such that the segmentation target is contained in it. Then only the pixels in the first selected region need to be processed: obtaining the segmentation target in the first selected region yields the segmentation target of the entire depth image.
For example, Fig. 3 is a schematic diagram of a first selected region of Fig. 1, according to an exemplary embodiment. As shown in Fig. 3, to segment the image of the white cat, the dashed rectangle, which contains the white cat to be segmented, can be chosen as the first selected region. Choosing a part of the depth image as the first selected region in this way narrows the image region to be processed, reduces the amount of computation, and speeds up the segmentation.
In a depth image shot by a depth camera, every pixel carries depth information. In theory, the distance (depth) between a pixel's position in the scene and the camera ranges from zero to infinity; in practice, the depth of each pixel can be normalized to a value within a fixed range. That is, in a depth image shot by a depth camera, each pixel carries a numerical depth value representing its depth information.
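As a minimal illustrative sketch (not part of the patent, and assuming the raw depths are already available as a NumPy array), the normalization might look like this in Python:

    import numpy as np

    def normalize_depth(raw_depth, out_max=200.0):
        """Linearly map raw depth readings (e.g., millimeters reported by a
        depth camera) onto [0, out_max]; zero readings are treated as invalid."""
        d = raw_depth.astype(np.float64)
        valid = d > 0
        lo, hi = d[valid].min(), d[valid].max()
        out = np.zeros_like(d)
        out[valid] = (d[valid] - lo) / max(hi - lo, 1e-12) * out_max
        return out

    # Depth values of the pixels in a first selected region (step S11):
    # depth = normalize_depth(raw); region = depth[y0:y1, x0:x1]

The out_max of 200 merely mirrors the 0-200 range used in the Fig. 5 example below; any fixed range would do.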
In step S12, the depth value range of the segmentation target in the first selected region is estimated.
To segment the segmentation target out of the first selected region, it is necessary to determine which pixels belong to the segmentation target and which do not. The depth value range of the pixels that make up the segmentation target can be estimated first, and step S13 can then find the pixels whose depth values fall within that estimated range.
As noted above, the normalized depth values can lie in any chosen range. The depth value range of the segmentation target can be estimated from the depth values of the pixels in the first selected region obtained in step S11.
Specifically, the distribution of the depth values of all pixels in the first selected region can be computed first, and the depth value range of the segmentation target in the first selected region judged from that distribution. Fig. 4 is a flowchart of estimating the depth value range of the segmentation target in the first selected region, according to an exemplary embodiment. As shown in Fig. 4, estimating the depth value range of the segmentation target in the first selected region (step S12) includes the following steps.
In step S121, the distribution of the depth values of all pixels in the first selected region is determined from the obtained depth values of the pixels in the first selected region.
The distribution of depth values reflects the proportion of each depth value within the first selected region. It can be expressed, for example, as the number of pixels at each depth value, or the number of pixels within each depth value interval, and it can be tallied in various forms, such as a curve or a histogram. With the distribution of the depth values of all pixels in the first selected region in hand, combined with the specifics of the depth image, the depth value range of the segmentation target in the first selected region can be estimated.
In step S122, the depth value range of the segmentation target in the first selected region is estimated from the determined distribution.
The estimate can rest on the estimated proportion of the first selected region occupied by the segmentation target, together with the depth difference between the segmentation target and the background in the first selected region.
For example, Fig. 5 is a histogram of the distribution of the depth values of all pixels in Fig. 1. As shown in Fig. 5, the X axis represents the depth values of the pixels, distributed between 0 and 200 after normalization, and the Y axis represents the number or proportion of pixels. The scene of Fig. 1 contains a cat and a sky background. By estimation, the area occupied by the cat is about half that of the sky, and the depth value of the sky should be the maximum depth value. When the entire depth image is chosen as the first selected region, it can be determined that the depth values in the first selected region concentrate essentially in two intervals: the depth values of the cat's pixels concentrate in one interval, and those of the sky concentrate in another (near the maximum depth value). From the histogram in Fig. 5 it can be judged that the depth values of the pixels corresponding to the cat in Fig. 1 should lie between 40 and 80, while most of the remaining area (the sky and clouds) corresponds to depth values close to 200.
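A hedged sketch of such a histogram-based estimate follows (an illustration only; the patent does not prescribe a particular algorithm). It assumes a roughly bimodal foreground/background distribution like Fig. 5, splits the two modes at the Otsu threshold, and reports the span of the nearer mode as the target's range:

    import numpy as np

    def estimate_target_range(depths, bins=50):
        """Split a bimodal depth histogram at the Otsu threshold (maximum
        between-class variance) and return the span of the nearer mode,
        assuming the segmentation target is closer to the camera than the sky."""
        hist, edges = np.histogram(depths, bins=bins)
        p = hist / hist.sum()
        w = np.cumsum(p)                          # cumulative weight of class 0
        mu = np.cumsum(p * np.arange(bins))       # cumulative mean of class 0
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1 - w) + 1e-12)
        split = edges[np.argmax(sigma_b) + 1]
        near = depths[depths < split]
        return near.min(), near.max()

    # For the Fig. 5 scene this would return roughly (40, 80) for the cat.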
Alternatively, in step S12, a reference region can first be chosen within the first selected region, and the depth value range of the segmentation target in the first selected region estimated from the depth value range of the chosen reference region. Fig. 6 is a flowchart of estimating the depth value range of the segmentation target in the first selected region, according to another exemplary embodiment. As shown in Fig. 6, estimating the depth value range of the segmentation target in the first selected region (step S12) includes the following steps.
In step S123, a reference region is chosen in the first selected region.
The reference region is chosen so that the depth value range of the segmentation target in the first selected region can be estimated from the reference region's depth value range. A region closely related to the segmentation target in depth value can therefore be chosen as the reference region; for example, a part of the segmentation target in the first selected region can serve as the reference region.
Fig. 7 is a schematic diagram of a reference region, according to an exemplary embodiment. As shown in Fig. 7, the whole image can be chosen as the first selected region, and a partial region I of the cat chosen as reference region I. The user establishes that reference region I is a part of the segmentation target (the cat). The depth value range of the whole cat is then determined from the depth value range of reference region I, so that the whole cat can be segmented from the image; the subsequent steps are described in more detail below.
In step S124, the depth value range of the reference region is determined.
From the depth values of the pixels in the first selected region of the depth image obtained in step S11, the depth value range of any chosen region within the first selected region can be obtained.
In step S125, the depth value range of the segmentation target in the first selected region is estimated from the depth value range of the reference region.
That is, the estimate combines the depth value range of the reference region with the scene of the image (the relation between the reference region and the segmentation target). For example, from the depth range 50-55 of reference region I on the cat in Fig. 7, the depth value range of the whole cat (the segmentation target in the first selected region) is judged.
When the chosen reference region is a partial region of the segmentation target in the first selected region, the depth value range of the segmentation target can be determined according to specifics such as the position and size of the reference region within the segmentation target. In Fig. 7, for example, reference region I is a patch on the cat with depth range 50-55; since that patch lies comparatively near the front of the cat with respect to the camera, the depth value range of the whole cat (the segmentation target) can accordingly be judged to be, say, 40-80.
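As a purely illustrative sketch (the margins below are assumptions chosen to reproduce the 40-80 example, not values taken from the patent), widening the reference region's measured span by position-dependent margins could look like:

    def expand_reference_range(ref_lo, ref_hi, front_margin, back_margin):
        """Widen the reference patch's depth span toward the camera (front)
        and away from it (back) to cover the whole target."""
        return ref_lo - front_margin, ref_hi + back_margin

    # The reference patch near the front of the cat measured 50-55 (Fig. 7);
    # a small front margin and a larger back margin cover the whole cat:
    lo, hi = expand_reference_range(50, 55, front_margin=10, back_margin=25)  # (40, 80)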
In a software interface for depth image segmentation, an interactive control can be provided for estimating the depth value range of the segmentation target in the first selected region from the depth value range of the reference region, with the selection made by the user. Fig. 8 is an interface diagram of a dialog box for estimating the depth value range of the segmentation target in the first selected region, according to an exemplary embodiment. As shown in Fig. 8, this dialog box can pop up when the depth value range of the segmentation target is being determined from the depth value range of the reference region in step S125. The determined depth value range of the reference region can be marked with a box, with slidable arrows in front of and behind it so that the user can input the estimated depth value range of the segmentation target in the first selected region by sliding them. In the dialog box shown in Fig. 8, the depth range of the reference region is 50-55, and the estimated depth value range of the segmentation target is 40-80.
Image segmentation is often performed on person images. In the embodiment shown in Fig. 6, when the segmentation target in the first selected region is a person image, a skin-color region can be chosen as the reference region. That is, the skin-colored pixels in the first selected region can be detected, and the detected skin-color region containing those pixels used as the reference region.
For example, a "select skin color" button can be provided in the toolbar of a software interface for depth image segmentation. Clicking the button makes the computer automatically perform skin-color recognition (of yellow skin tones, for example) on the first selected region.
Once the skin-color region in the first selected region has been identified, it is used as the reference region. According to the position of the skin-color region within the person image, the depth value range of the person image can be estimated in the dialog box shown in Fig. 8, and the overall region of the person image in the first selected region thereby determined.
Choosing a skin-color region as the reference region not only allows automatic detection by the computer, saving the trouble of choosing a reference region manually, but also yields a reference region that is typically representative of the segmentation target, so the depth value range determined in the next step is more accurate and the resulting segmentation of the person image more precise.
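One common way to detect such skin pixels (an illustrative choice; the patent does not fix a color model) is thresholding the chroma channels of YCrCb space:

    import numpy as np

    def skin_mask(rgb):
        """Rough skin detection: convert RGB to the Cr/Cb chroma channels and
        keep pixels inside a conventional skin box (thresholds are assumed)."""
        r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
        cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
        cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b
        return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)

    # Reference region's depth range: depths of the pixels where the mask is True
    # ref_lo, ref_hi = depth[skin_mask(image)].min(), depth[skin_mask(image)].max()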
Once the depth value range of the segmentation target in the first selected region has been estimated, the corresponding pixels can be found according to the estimated range and the image segmented.
In step S13, the pixels in the first selected region whose depth values fall within the depth value range are determined, and a depth-value-derived region is formed from the determined pixels.
Here, the depth-value-derived region is the region derived by way of the depth values. In the user interface, the dialog box shown in Fig. 8 and the depth image can be displayed on screen at the same time; while the user slides the arrows in the dialog box, the determined depth-value-derived region is shown in the depth image, making it convenient for the user to adjust the depth value range of the segmentation target according to the displayed region and so obtain the best segmentation result.
In step S14, the segmentation target in the first selected region is segmented according to the depth-value-derived region, obtaining the segmentation target in the first selected region. That is, the depth-value-derived region is split out of the depth image to form the segmentation target in the first selected region.
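Steps S13-S14 amount to masking by the estimated interval and cutting the masked pixels out; a minimal sketch under the same assumptions as the snippets above:

    import numpy as np

    def segment_by_depth(image, depth, lo, hi):
        """Form the depth-value-derived region (pixels whose depth falls in
        [lo, hi]) and extract the corresponding image pixels (step S14)."""
        mask = (depth >= lo) & (depth <= hi)      # depth-value-derived region (S13)
        target = np.zeros_like(image)
        target[mask] = image[mask]                # everything else stays blank
        return mask, target

    # mask, cat = segment_by_depth(image, depth, 40, 80)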
The disclosure segments the desired object from the image according to the depth information acquired during shooting, so the segmented image neither loses large blocks of useful information nor contains large blocks of redundancy, and the segmentation edges are finer. The depth image segmentation method provided by the disclosure can therefore accurately segment the desired target when the depth difference between the segmentation target and the background is large.
Optionally, the depth image segmentation method provided by the above embodiments of the disclosure can be combined with traditional segmentation methods, drawing on the advantages of each; the resulting segmentation is more accurate than that of either method alone.
Fig. 9 is a flowchart of a depth image segmentation method, according to another exemplary embodiment. As shown in Fig. 9, on the basis of the embodiment shown in Fig. 2, the method further includes the following steps.
In step S15, the pixel values of the pixels in a second selected region of the depth image are obtained.
The second selected region is chosen in the depth image so that a traditional segmentation method can be applied in the subsequent steps. The second selected region can be the entire depth image or a part of it, and it may or may not overlap the first selected region. For example, to segment the entire depth image with a traditional method, the entire depth image can be chosen as the second selected region; the segmented region obtained by the traditional segmentation method can then be combined with the depth-value-derived region obtained in steps S11-S14, giving a more accurate result than either method alone. Several embodiments that merge the two methods are described in detail below.
In step S16, the pixels of the segmentation target in the second selected region are determined from the pixel values of the pixels in the second selected region, and a pixel-value-derived region is formed from the determined pixels.
Here, the pixel-value-derived region is the region derived by way of the pixel values; traditional segmentation methods obtain it by processing pixel values. Step S16 is simply segmentation by a traditional method and includes any one of the following: determining the pixels of the segmentation target in the second selected region by a threshold-based segmentation method, and forming a pixel-value-derived region from the determined pixels; determining those pixels by an edge-based segmentation method, and forming a pixel-value-derived region from them; determining those pixels by a region-based segmentation method, and forming a pixel-value-derived region from them; determining those pixels by a graph-theory-based segmentation method, and forming a pixel-value-derived region from them; or determining those pixels by an energy-functional-based segmentation method, and forming a pixel-value-derived region from them. All of these methods are well known to those skilled in the art and are not detailed here.
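As one illustrative instance of the region-based option (a sketch only; any of the listed families would do, and the patent prescribes no specific algorithm), seeded region growing over gray values yields a pixel-value-derived region:

    import numpy as np
    from collections import deque

    def region_grow(gray, seed, tol=10):
        """Region-based segmentation: grow from a seed pixel, absorbing
        4-connected neighbors whose gray level is within tol of the seed's."""
        h, w = gray.shape
        mask = np.zeros((h, w), dtype=bool)
        ref = int(gray[seed])
        queue = deque([seed])
        mask[seed] = True
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                        and abs(int(gray[ny, nx]) - ref) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
        return mask  # pixel-value-derived region

    # pv_mask = region_grow(gray_region, seed=(120, 160))  # seed inside the target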
In step S17, the segmentation target in the second selected region is segmented according to the pixel-value-derived region, obtaining the segmentation target in the second selected region.
As described above, two different regions can be chosen in the depth image, and the segmentation targets in the two regions segmented with the depth value derivation method (steps S11-S14) and the pixel value derivation method (steps S15-S17) respectively. The segmentation target in the depth image can then be determined from the segmentation targets in the two regions.
It should be understood that although steps S11-S14 and steps S15-S17 are illustrated in the order of Fig. 9, the method is not limited to this order; steps S15-S17 may also be performed before steps S11-S14.
Fig. 10 is a flowchart of a depth image segmentation method, according to a further exemplary embodiment. As shown in Fig. 10, on the basis of the embodiment shown in Fig. 9, the method can further include step S18.
In step S18, the segmentation target in the depth image is determined from the segmentation target in the first selected region and the segmentation target in the second selected region.
Since the depth value derivation method (steps S11-S14) and the pixel value derivation method (steps S15-S17) each have their own advantages, the segmentation targets of the two regions can be combined as circumstances require to form the segmentation target of the depth image.
Several specific embodiments that combine the two methods for segmentation are described below.
1) When the first selected region and the second selected region do not overlap, step S18 can be: merging the segmentation target in the first selected region with the segmentation target in the second selected region, to form the segmentation target in the depth image.
In this embodiment, the segmentation target in the depth image can be divided between two non-overlapping regions, with the depth value segmentation method applied in one region and the traditional method (pixel value segmentation) in the other, as circumstances require. For example, the region where the depth difference between the background and the segmentation target is large is chosen as the first selected region, and the region where the color difference between them is large is chosen as the second selected region. After the segmentation targets in the two regions are obtained, merging them forms the segmented region of the depth image.
Fig. 11 is a schematic diagram of a segmentation target formed in a depth image, according to an exemplary embodiment. As shown in Fig. 11, the image of a person is to be segmented out of the depth image. The depth image can be divided into two regions. In the upper region the person is relatively far from the background, which suits segmentation by the method of steps S11-S14, so the upper region is chosen as the first selected region. In the lower region the person is close to the background and the color difference is large, which suits segmentation by the method of steps S15-S17, so the lower region is chosen as the second selected region. After the two regions are segmented separately, the segmentation targets obtained in them are merged, forming the complete person image; a sketch of this merge follows.
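A minimal sketch of the non-overlapping merge, assuming both results are boolean masks on the full image grid (as in the snippets above):

    import numpy as np

    def merge_disjoint(depth_mask, pixel_mask, image):
        """Union of the two results; since the regions do not overlap, each
        target pixel comes from exactly one method (e.g., Fig. 11's two halves)."""
        mask = depth_mask | pixel_mask
        target = np.zeros_like(image)
        target[mask] = image[mask]
        return target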
2) When the first selected region and the second selected region overlap, forming an overlapping region, step S18 can be: merging the part of the segmentation target in the first selected region outside the overlapping region, the part of the segmentation target in the second selected region outside the overlapping region, and either one of the following, to form the segmentation target in the depth image:
the part of the segmentation target in the first selected region within the overlapping region; or
the part of the segmentation target in the second selected region within the overlapping region.
That is, the parts outside the overlapping region are both retained; within the overlapping region, the result whose segmentation effect is better is selected and retained; and merging the retained regions forms the segmentation target of the depth image.
For example, after the segmentation target in the first selected region has been segmented, if part of the result is considered poor, a second selected region can be chosen in the depth image and segmented again with a traditional segmentation method, and the corresponding part of the first selected region's result replaced with the re-segmented one. That is, the part of the segmentation target in the second selected region within the overlapping region can replace the part of the segmentation target in the first selected region within the overlapping region.
Fig. 12 is a schematic diagram of a segmentation target formed in a depth image, according to another exemplary embodiment. As shown in Fig. 12, to segment the person image in the depth image on the left of Fig. 11, the entire image of Fig. 11 can be chosen as the first selected region and segmented first by the depth segmentation method, giving the result on the left of Fig. 12. If the segmentation result in the lower-right corner is then observed to be unsatisfactory, the region inside the lower-right dashed box can be chosen as the second selected region (the second selected region is then contained in the first selected region and is also the overlapping region), and the person image in the second selected region segmented with a traditional segmentation method. The person image in the second selected region (that is, the part of the segmentation target in the second selected region within the overlapping region) can then be merged with the part of the person image in the first selected region outside the second selected region (that is, the part of the segmentation target in the first selected region outside the overlapping region) to form the whole person image (the segmentation target of the depth image). It will be understood that in this case the segmentation target in the second selected region has no part outside the overlapping region.
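A sketch of this replacement, assuming the overlapping region is given as a boolean window on the image grid (here it would mark the lower-right dashed box of Fig. 12):

    def replace_in_overlap(depth_mask, pixel_mask, overlap):
        """Keep the depth-derived result outside the overlapping region and the
        pixel-value-derived result inside it (the second option of step S18)."""
        return (depth_mask & ~overlap) | (pixel_mask & overlap)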
The embodiments described above all apply two segmentation methods (the depth value segmentation method and a traditional segmentation method), so the advantages of each can be brought into play and the segmentation result is better than with either method alone.
The disclosure segments the desired object from the image according to the depth information acquired during shooting, so the segmented image neither loses large blocks of useful information nor contains large blocks of redundancy, and the segmentation edges are finer. The depth image segmentation method provided by the disclosure can therefore accurately segment the desired target when the depth difference between the segmentation target and the background is large.
The disclosure also provides a depth image segmentation device. Fig. 13 is a block diagram of a depth image segmentation device, according to an exemplary embodiment. As shown in Fig. 13, the device includes a depth value acquisition module 11, a depth value range estimation module 12, a depth-value-derived region determination module 13, and a first segmentation module 14.
The depth value acquisition module 11 is configured to obtain the depth values of the pixels in a first selected region of the depth image.
The depth value range estimation module 12 is configured to estimate the depth value range of the segmentation target in the first selected region.
The depth-value-derived region determination module 13 is configured to determine the pixels in the first selected region whose depth values fall within the depth value range, and to form a depth-value-derived region from the determined pixels.
The first segmentation module 14 is configured to segment the segmentation target in the first selected region according to the depth-value-derived region, obtaining the segmentation target in the first selected region.
Fig. 14 is a block diagram of the depth value range estimation module 12, according to an exemplary embodiment. As shown in Fig. 14, the depth value range estimation module 12 includes a distribution determination unit 121 and a first depth value range estimation unit 122.
The distribution determination unit 121 is configured to determine the distribution of the depth values of all pixels in the first selected region from the obtained depth values of the pixels in the first selected region.
The first depth value range estimation unit 122 is configured to estimate the depth value range of the segmentation target in the first selected region from the determined distribution.
Fig. 15 is a block diagram of the depth value range estimation module 12, according to another exemplary embodiment. As shown in Fig. 15, the depth value range estimation module 12 includes a reference region selection unit 123, a reference depth value range determination unit 124, and a second depth value range estimation unit 125.
The reference region selection unit 123 is configured to choose a reference region in the first selected region.
The reference depth value range determination unit 124 is configured to determine the depth value range of the reference region.
The second depth value range estimation unit 125 is configured to estimate the depth value range of the segmentation target in the first selected region from the depth value range of the reference region.
Optionally, the segmentation target is a person image, and the reference region is a skin-color region.
Fig. 16 is a block diagram of a depth image segmentation device, according to another exemplary embodiment. As shown in Fig. 16, on the basis of Fig. 13, the device further includes a pixel value acquisition module 15, a pixel-value-derived region determination module 16, and a second segmentation module 17.
The pixel value acquisition module 15 is configured to obtain the pixel values of the pixels in a second selected region of the depth image.
The pixel-value-derived region determination module 16 is configured to determine the pixels of the segmentation target in the second selected region from the pixel values of the pixels in the second selected region, and to form a pixel-value-derived region from the determined pixels.
The second segmentation module 17 is configured to segment the segmentation target in the second selected region according to the pixel-value-derived region, obtaining the segmentation target in the second selected region.
The pixel-value-derived region determination module 16 includes any one of the following:
a threshold derivation unit, configured to determine the pixels of the segmentation target in the second selected region by a threshold-based segmentation method, and to form a pixel-value-derived region from the determined pixels;
an edge derivation unit, configured to determine the pixels of the segmentation target in the second selected region by an edge-based segmentation method, and to form a pixel-value-derived region from the determined pixels;
a region derivation unit, configured to determine the pixels of the segmentation target in the second selected region by a region-based segmentation method, and to form a pixel-value-derived region from the determined pixels;
a graph-theory derivation unit, configured to determine the pixels of the segmentation target in the second selected region by a graph-theory-based segmentation method, and to form a pixel-value-derived region from the determined pixels; or
an energy functional derivation unit, configured to determine the pixels of the segmentation target in the second selected region by an energy-functional-based segmentation method, and to form a pixel-value-derived region from the determined pixels.
Fig. 17 is a block diagram of a depth image segmentation device, according to another exemplary embodiment. As shown in Fig. 17, on the basis of Fig. 16, the device further includes a segmentation target determination module 18.
The segmentation target determination module 18 is configured to determine the segmentation target in the depth image from the segmentation target in the first selected region and the segmentation target in the second selected region.
Optionally, the first selected region and the second selected region do not overlap; and
the segmentation target determination module 18 is configured to merge the segmentation target in the first selected region with the segmentation target in the second selected region, to form the segmentation target in the depth image.
Optionally, the first selected region and the second selected region overlap, forming an overlapping region; and
the segmentation target determination module 18 is configured to merge the part of the segmentation target in the first selected region outside the overlapping region, the part of the segmentation target in the second selected region outside the overlapping region, and either one of the following, to form the segmentation target in the depth image:
the part of the segmentation target in the first selected region within the overlapping region; or
the part of the segmentation target in the second selected region within the overlapping region.
With regard to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
The disclosure segments the desired object from the image according to the depth information acquired during shooting, so the segmented image neither loses large blocks of useful information nor contains large blocks of redundancy, and the segmentation edges are finer. The depth image segmentation device provided by the disclosure can therefore accurately segment the desired target when the depth difference between the segmentation target and the background is large.
Fig. 18 is a block diagram of a depth image segmentation device 1800, according to an exemplary embodiment. For example, the device 1800 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, exercise equipment, a personal digital assistant, or the like.
Referring to Fig. 18, the device 1800 can include one or more of the following components: a processing component 1802, a memory 1804, a power component 1806, a multimedia component 1808, an audio component 1810, an input/output (I/O) interface 1812, a sensor component 1814, and a communication component 1816.
The processing component 1802 typically controls the overall operation of the device 1800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 1802 can include one or more processors 1820 to execute instructions so as to perform all or part of the steps of the above depth image segmentation method. In addition, the processing component 1802 can include one or more modules that facilitate interaction between the processing component 1802 and other components; for example, a multimedia module to facilitate interaction between the multimedia component 1808 and the processing component 1802.
The memory 1804 is configured to store various types of data to support operation at the device 1800. Examples of such data include instructions for any application or method operated on the device 1800, contact data, phonebook data, messages, images, video, and so on. The memory 1804 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 1806 provides power for the various components of the device 1800. The power component 1806 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1800.
The multimedia component 1808 includes a screen providing an output interface between the device 1800 and the user. In some embodiments, the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors can sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments, the multimedia component 1808 includes a front camera and/or a rear camera. When the device 1800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 1810 is configured to output and/or input audio signals. For example, the audio component 1810 includes a microphone (MIC) configured to receive external audio signals when the device 1800 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal can be further stored in the memory 1804 or transmitted via the communication component 1816. In some embodiments, the audio component 1810 also includes a speaker for outputting audio signals.
The I/O interface 1812 provides an interface between the processing component 1802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1814 includes one or more sensors for providing status assessments of various aspects of the device 1800. For example, the sensor component 1814 can detect the open/closed state of the device 1800 and the relative positioning of components (for example, the display and keypad of the device 1800); the sensor component 1814 can also detect a change in position of the device 1800 or of a component of the device 1800, the presence or absence of user contact with the device 1800, the orientation or acceleration/deceleration of the device 1800, and a change in temperature of the device 1800. The sensor component 1814 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1814 can also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1814 can also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1816 is configured to facilitate wired or wireless communication between the device 1800 and other devices. The device 1800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1816 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1800 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above depth image segmentation method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1804 including instructions, executable by the processor 1820 of the device 1800 to perform the above depth image segmentation method. For example, the non-transitory computer-readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practicing the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (17)

1. A depth image segmentation method, characterized in that the method includes:
obtaining the depth values of the pixels in a first selected region of the depth image;
estimating the depth value range of a segmentation target in the first selected region;
determining the pixels in the first selected region whose depth values fall within the depth value range, and forming a depth-value-derived region from the determined pixels; and
segmenting the segmentation target in the first selected region according to the depth-value-derived region, obtaining the segmentation target in the first selected region.
2. The method according to claim 1, characterized in that the step of estimating the depth value range of the segmentation target in the first selected region includes:
determining the distribution of the depth values of all pixels in the first selected region from the obtained depth values of the pixels in the first selected region; and
estimating the depth value range of the segmentation target in the first selected region from the determined distribution.
3. The method according to claim 1, characterized in that the step of estimating the depth value range of the segmentation target in the first selected region includes:
choosing a reference region in the first selected region;
determining the depth value range of the reference region; and
estimating the depth value range of the segmentation target in the first selected region from the depth value range of the reference region.
4. The method according to claim 3, characterized in that the segmentation target is a person image, and the reference region is a skin-color region.
5. The method according to claim 1, characterized in that the method further includes:
obtaining the pixel values of the pixels in a second selected region of the depth image;
determining the pixels of a segmentation target in the second selected region from the pixel values of the pixels in the second selected region, and forming a pixel-value-derived region from the determined pixels; and
segmenting the segmentation target in the second selected region according to the pixel-value-derived region, obtaining the segmentation target in the second selected region.
6. The method according to claim 5, characterized in that the step of determining the pixels of the segmentation target in the second selected region from the pixel values of the pixels in the second selected region and forming a pixel-value-derived region from the determined pixels includes any one of the following:
determining the pixels of the segmentation target in the second selected region by a threshold-based segmentation method, and forming a pixel-value-derived region from the determined pixels;
determining the pixels of the segmentation target in the second selected region by an edge-based segmentation method, and forming a pixel-value-derived region from the determined pixels;
determining the pixels of the segmentation target in the second selected region by a region-based segmentation method, and forming a pixel-value-derived region from the determined pixels;
determining the pixels of the segmentation target in the second selected region by a graph-theory-based segmentation method, and forming a pixel-value-derived region from the determined pixels; or
determining the pixels of the segmentation target in the second selected region by an energy-functional-based segmentation method, and forming a pixel-value-derived region from the determined pixels.
7. The method according to claim 5, characterized in that the method further includes:
determining the segmentation target in the depth image from the segmentation target in the first selected region and the segmentation target in the second selected region.
8. The method according to claim 7, characterized in that:
the first selected region and the second selected region do not overlap, and
the step of determining the segmentation target in the depth image from the segmentation target in the first selected region and the segmentation target in the second selected region is: merging the segmentation target in the first selected region with the segmentation target in the second selected region, to form the segmentation target in the depth image;
or,
the first selected region and the second selected region overlap, forming an overlapping region, and
the step of determining the segmentation target in the depth image from the segmentation target in the first selected region and the segmentation target in the second selected region is: merging the part of the segmentation target in the first selected region outside the overlapping region, the part of the segmentation target in the second selected region outside the overlapping region, and either one of the following, to form the segmentation target in the depth image:
the part of the segmentation target in the first selected region within the overlapping region; or
the part of the segmentation target in the second selected region within the overlapping region.
9. A depth image segmentation device, characterized in that the device includes:
a depth value acquisition module, configured to obtain the depth values of the pixels in a first selected region of the depth image;
a depth value range estimation module, configured to estimate the depth value range of a segmentation target in the first selected region;
a depth-value-derived region determination module, configured to determine the pixels in the first selected region whose depth values fall within the depth value range, and to form a depth-value-derived region from the determined pixels; and
a first segmentation module, configured to segment the segmentation target in the first selected region according to the depth-value-derived region, obtaining the segmentation target in the first selected region.
10. The device according to claim 9, characterized in that the depth value scope estimation module comprises:
a distribution situation determining unit, configured to determine the distribution of the depth values of all pixels in the first chosen area according to the obtained depth values of the pixels in the first chosen area; and
a first depth value scope estimation unit, configured to estimate the depth value scope of the cutting object in the first chosen area according to the determined distribution.
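For illustration, the distribution of claim 10 can be computed as a histogram of the depth values in the first chosen area, with the cutting object assumed to occupy the dominant peak. The peak-growing rule (keep neighbouring bins holding at least 10% of the peak count) is a heuristic chosen for this sketch, not a value given by the patent:

import numpy as np

def estimate_depth_value_scope(depths, bins=64):
    # Histogram the depth values of all pixels in the first chosen area.
    hist, edges = np.histogram(depths, bins=bins)
    peak = int(np.argmax(hist))
    lo = hi = peak
    # Expand around the dominant peak while adjacent bins stay well populated.
    while lo > 0 and hist[lo - 1] >= 0.1 * hist[peak]:
        lo -= 1
    while hi < bins - 1 and hist[hi + 1] >= 0.1 * hist[peak]:
        hi += 1
    return edges[lo], edges[hi + 1]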
11. The device according to claim 9, characterized in that the depth value scope estimation module comprises:
a reference area choosing unit, configured to choose a reference area within the first chosen area;
a reference depth value scope determining unit, configured to determine the depth value scope of the reference area; and
a second depth value scope estimation unit, configured to estimate the depth value scope of the cutting object in the first chosen area according to the depth value scope of the reference area.
12. The device according to claim 11, characterized in that the cutting object is a human figure image, and the reference area is a skin color area.
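As one illustrative reading of claims 11 and 12: detect skin-colored pixels in the first chosen area, read off their depth values, and widen that range by a margin so that the whole figure (clothing included) falls inside the estimated scope. The HSV bounds and the margin are rule-of-thumb assumptions of this sketch, not values taken from the patent:

import cv2
import numpy as np

def depth_scope_from_skin(bgr_area, depth_area, margin=200):
    # Locate the skin color reference area in HSV space.
    hsv = cv2.cvtColor(bgr_area, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255)) > 0
    skin_depths = depth_area[skin]
    if skin_depths.size == 0:
        raise ValueError("no skin color reference area found")
    # The figure's depth scope is the skin depth range plus a margin,
    # in the same units as depth_area (e.g. millimetres).
    return skin_depths.min() - margin, skin_depths.max() + margin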
13. The device according to claim 9, characterized in that the device further comprises:
a pixel value acquisition module, configured to obtain the pixel values of the pixels in a second chosen area in the depth image;
a pixel value export area determining module, configured to determine the pixels of the cutting object in the second chosen area according to the pixel values of the pixels in the second chosen area, and to form a pixel value export area from the determined pixels; and
a second segmentation module, configured to segment the cutting object in the second chosen area according to the pixel value export area, obtaining the cutting object in the second chosen area.
14. The device according to claim 13, characterized in that the pixel value export area determining module comprises any one of the following:
a threshold export unit, configured to determine the pixels of the cutting object in the second chosen area by a threshold-based dividing method, and to form the pixel value export area from the determined pixels;
an edge export unit, configured to determine the pixels of the cutting object in the second chosen area by an edge-based dividing method, and to form the pixel value export area from the determined pixels;
a region export unit, configured to determine the pixels of the cutting object in the second chosen area by a region-based dividing method, and to form the pixel value export area from the determined pixels;
a graph theory export unit, configured to determine the pixels of the cutting object in the second chosen area by a graph-theory-based dividing method, and to form the pixel value export area from the determined pixels; or
an energy functional export unit, configured to determine the pixels of the cutting object in the second chosen area by an energy-functional-based dividing method, and to form the pixel value export area from the determined pixels.
15. The device according to claim 13, characterized in that the device further comprises:
a cutting object determining module, configured to determine the cutting object in the depth image according to the cutting object in the first chosen area and the cutting object in the second chosen area.
16. The device according to claim 15, characterized in that
the first chosen area and the second chosen area do not overlap, and the cutting object determining module is configured to merge the cutting object in the first chosen area with the cutting object in the second chosen area to form the cutting object in the depth image;
or,
the first chosen area and the second chosen area overlap to form an overlapping region, and the cutting object determining module is configured to merge the part of the cutting object in the first chosen area outside the overlapping region, the part of the cutting object in the second chosen area outside the overlapping region, and any one of the following, to form the cutting object in the depth image:
the part of the cutting object in the first chosen area within the overlapping region; or
the part of the cutting object in the second chosen area within the overlapping region.
17. A segmenting device for a depth image, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain the depth values of the pixels in a first chosen area in the depth image;
estimate the depth value scope of the cutting object in the first chosen area;
determine the pixels in the first chosen area whose depth values fall within the depth value scope, and form a depth value export area from the determined pixels; and
segment the cutting object in the first chosen area according to the depth value export area, obtaining the cutting object in the first chosen area.
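Putting the processor steps of claim 17 together, a minimal end-to-end sketch (reusing estimate_depth_value_scope from the sketch under claim 10): pixels whose depth values fall within the estimated scope form the depth value export area, and the largest connected component of that area is kept as the cutting object. The connected-component step is an assumption of this sketch; the patent leaves the final segmentation strategy open:

import cv2
import numpy as np

def segment_first_chosen_area(depth_area):
    # Steps 1-2: obtain the depth values and estimate the depth value scope.
    low, high = estimate_depth_value_scope(depth_area.ravel())
    # Step 3: pixels within the scope form the depth value export area.
    export_area = (depth_area >= low) & (depth_area <= high)
    # Step 4: segment by keeping the largest connected in-range region.
    n, labels = cv2.connectedComponents(export_area.astype(np.uint8))
    if n <= 1:  # nothing found besides the background
        return export_area
    sizes = np.bincount(labels.ravel())[1:]  # component sizes, skipping label 0
    return labels == (1 + int(np.argmax(sizes)))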
CN201510520359.6A 2015-08-21 2015-08-21 Depth image segmentation method and segmentation device Active CN106469446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510520359.6A CN106469446B (en) 2015-08-21 2015-08-21 Depth image segmentation method and segmentation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510520359.6A CN106469446B (en) 2015-08-21 2015-08-21 Depth image segmentation method and segmentation device

Publications (2)

Publication Number Publication Date
CN106469446A 2017-03-01
CN106469446B CN106469446B (en) 2021-04-20

Family

ID=58229055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510520359.6A Active CN106469446B (en) 2015-08-21 2015-08-21 Depth image segmentation method and segmentation device

Country Status (1)

Country Link
CN (1) CN106469446B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103907123A (en) * 2011-09-30 2014-07-02 Intel Corporation Human head detection in depth images
CN103903246A (en) * 2012-12-26 2014-07-02 Ricoh Company, Ltd. Object detection method and device
WO2014125502A2 (en) * 2013-02-18 2014-08-21 Tata Consultancy Services Limited Segmenting objects in multimedia data
CN104112275A (en) * 2014-07-15 2014-10-22 Qingdao Hisense Electric Co., Ltd. Image segmentation method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215043A (en) * 2017-06-30 2019-01-15 Beijing Xiaomi Mobile Software Co., Ltd. Image recognition method and device, and computer-readable storage medium
CN109977834A (en) * 2019-03-19 2019-07-05 Tsinghua University Method and apparatus for segmenting a human hand and an interacting object from a depth image
CN109977834B (en) * 2019-03-19 2021-04-06 Tsinghua University Method and device for segmenting human hand and interactive object from depth image
CN110490891A (en) * 2019-08-23 2019-11-22 Hangzhou Yitu Medical Technology Co., Ltd. Method, device and computer-readable storage medium for segmenting an object of interest in an image

Also Published As

Publication number Publication date
CN106469446B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
US9953506B2 (en) Alarming method and device
US10534972B2 (en) Image processing method, device and medium
US10007841B2 (en) Human face recognition method, apparatus and terminal
US10115019B2 (en) Video categorization method and apparatus, and storage medium
US10284773B2 (en) Method and apparatus for preventing photograph from being shielded
CN105631408A (en) Video-based face album processing method and processing device
CN106528879A (en) Picture processing method and device
CN107025419B (en) Fingerprint template inputting method and device
WO2017071065A1 (en) Area recognition method and apparatus
CN106651955A (en) Method and device for positioning object in picture
CN106682736A (en) Image identification method and apparatus
CN104125396A (en) Image shooting method and device
CN105631803B (en) The method and apparatus of filter processing
CN106408603A (en) Camera method and device
EP2998960A1 (en) Method and device for video browsing
CN105472239A (en) Photo processing method and photo processing device
CN104700353A (en) Image filter generating method and device
US20170339287A1 (en) Image transmission method and apparatus
CN106600530A (en) Photograph synthetic method and apparatus
CN105335714B (en) Photo processing method, device and equipment
CN106131441A (en) Photographic method and device, electronic equipment
CN105528765A (en) Method and device for processing image
CN107967459A (en) convolution processing method, device and storage medium
CN107091704A (en) Pressure detection method and device
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant