CN102855459B - Method and system for detecting and verifying specific foreground objects - Google Patents

Method and system for detecting and verifying specific foreground objects

Info

Publication number
CN102855459B
CN102855459B (application CN201110181505.9A)
Authority
CN
China
Prior art keywords
depth
depth map
current environment
background
pixel
Prior art date
Legal status
Active
Application number
CN201110181505.9A
Other languages
Chinese (zh)
Other versions
CN102855459A (en)
Inventor
王鑫 (Wang Xin)
范圣印 (Fan Shengyin)
Current Assignee
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date
Filing date
Publication date
Application filed by Ricoh Co Ltd
Priority to CN201110181505.9A
Publication of CN102855459A
Application granted
Publication of CN102855459B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method and apparatus for detecting and verifying multiple specific foreground objects in an image. The method may comprise: building a background model according to the trend of change of each pixel's distance in the image; subtracting the background model from the current image to obtain a foreground region; segmenting the foreground region; and verifying, for each foreground object, whether it is a specific object of a given class. The specific object may be, for example, a person or a chair. The invention achieves fast detection, improves detection accuracy, and reduces the false detection rate.

Description

Method and system for detecting and verifying specific foreground objects
Technical field
The present invention relates to a method and system for detecting and verifying foreground objects in images, and more particularly to a method and system for detecting and verifying specific objects using captured depth images.
Background art
In applications such as human-machine interaction, games, and intelligent systems, the participating persons usually need to be monitored by video, and an important part of video surveillance is detecting the participants in the video images, a detection process that requires image processing of the video containing them. Such image processing is usually performed on video captured with ordinary cameras. In practice, techniques that detect participants in ordinary-camera video face many problems, such as a low detection rate, a high false detection rate, and an inability to run in real time. The usual causes of these problems are the complexity of the participants' behavior, relatively dark scenes, and abrupt changes in scene illumination. To solve these problems, many solutions have been proposed; for example, equipping the participants in the scene with sensors. Such a method, however, is applicable only to certain specific scenarios, and the participants' (users') experience is poor.
In addition, with the advent of depth cameras, which capture the range information of a scene, attempts have been made to detect participants using range information; however, depth-camera-based algorithms are still immature. Several detection techniques aimed at people have already been proposed.
U.S. Patent Application US20090210193A1 proposes a method of detecting and locating a person. The method uses the distances of objects in the space output by a TOF range-image sensor and, given a change in distance, detects the region containing that change; a segmentation module then segments specific shapes of the participant from the detected distance-change region, thereby locating the person's direction. This application thus detects objects from distance changes and verifies a person using concrete features of a person, such as the torso and legs.
In addition, in view of the three-dimensional character of ordinary objects, European Patent Application EP1983484A1 proposes a method of detecting a three-dimensional object by capturing it with an acquisition device and computing disparity component data. The method builds a model of the three-dimensional object in advance, computes a set of grayscale maps of two-dimensional projections of the model seen from different viewpoints, and defines this set of grayscale maps as object templates; the templates are then compared with the captured image regions, and when some region of the captured image has the highest correlation value with one of the object templates, that region is deemed to contain the three-dimensional object. Since the three-dimensional object model uses grayscale maps, it cannot be normalized.
In view of participants' facial features, U.S. Patent Application US20100158387A1 proposes a face detection method. An image processing module computes range information from multiple images and segments foreground and background regions according to that range information; a face detection module then scales the foreground region according to distance and detects faces in the scaled image. This application, however, can only detect faces, and therefore cannot detect other parts of a person, or other objects, appearing in the scene.
Summary of the invention
To solve the above-mentioned problems of the prior art, the present invention proposes a method and system for detecting and verifying specific foreground objects.
Specifically, the invention provides a method for detecting and verifying specific foreground objects, comprising: acquiring depth information of the current environment with a depth camera, and creating a depth map of the current environment based on the acquired depth information; comparing the depth of each pixel of the created depth map of the current environment with that of an initial background depth map, to update a background depth-map model; subtracting the updated background depth-map model from a depth map of the current environment newly captured by the depth camera, thereby obtaining a depth map of the foreground region in front of the background; numbering one or more connected components in the obtained foreground depth map and, when there are multiple connected components, segmenting them as multiple candidate specific foreground objects; and verifying, by template matching, whether each segmented specific foreground object belongs to the specific object class of the matched template.
According to the method of the present invention, comparing the depth of each pixel of the created depth map of the current environment with that of the initial background depth map to update the background depth-map model comprises: performing noise-reduction filtering on several consecutive frames using median filtering.
According to the method of the present invention, comparing the depth of each pixel of the created depth map of the current environment with that of the initial background depth map to update the background depth-map model comprises repeatedly performing the following steps: comparing the depth of each pixel of the created depth map of the current environment with that of the previously existing background depth map and, when the current depth of a pixel of the current depth map is found to be greater than the depth of the corresponding pixel in the background depth map, updating the depth of that pixel in the background depth map to the current depth value of the pixel of the current depth map; and repeating the above steps until the number of pixels so updated within a predetermined time threshold is less than a predetermined quantity threshold.
According to the method of the present invention, the method further comprises: before verifying whether a segmented specific foreground object belongs to the specific object class of the matched template, building a specific-object template.
According to the method of the present invention, the specific-object template is a depth map of the specific object, having a fixed size and depth values corresponding to a fixed distance from such a specific object to a specified camera.
According to the method of the present invention, the method further comprises: before the step of verifying by template matching whether a segmented specific foreground object belongs to the specific object class of the matched template, resizing the depth map of the foreground object to be verified according to the range information contained in its depth map.
According to the method of the present invention, the step of resizing the foreground-object depth map according to the range information it contains comprises: computing the mean depth of the depth values of the pixels of the foreground-object depth map; computing a scaling ratio for the foreground-object depth map based on the fixed depth value specified in the specific-object template and the computed mean depth of the foreground-object depth map; and resizing the foreground-object depth map according to the computed scaling ratio.
According to the method of the present invention, the step of verifying by template matching whether a segmented specific foreground object belongs to the specific object class of the matched template is performed by template matching between the specific-object template and the resized foreground-object depth map using the normalized correlation coefficient (NCC) algorithm.
According to another aspect of the present invention, a system for detecting and verifying specific foreground objects is provided, comprising: a depth-map acquisition device, which acquires depth information of the current environment and creates a depth map of the current environment based on the acquired depth information; a background modeling unit, which compares the depth of each pixel of the created depth map of the current environment with that of an initial background depth map to update a background depth-map model; a background subtraction unit, which subtracts the updated background depth-map model from a depth map of the current environment newly captured by the depth camera, thereby obtaining a depth map of the foreground region in front of the background; a foreground-object segmentation unit, which numbers one or more connected components in the obtained foreground depth map and, when there are multiple connected components, segments them as multiple candidate specific foreground objects; and a foreground-object verification unit, which verifies by template matching whether each segmented specific foreground object belongs to the specific object class of the matched template.
The present invention works with depth maps alone, and the feature it detects is the contour of the specific object, which makes it more robust.
Brief description of the drawings
Fig. 1 is a schematic view of a scene employing the foreground-object detection and verification method and system of the present invention.
Fig. 2 is a flowchart of the foreground-object detection and verification method according to the present invention.
Fig. 3 is a flowchart of the background modeling step according to the present invention.
Fig. 4 is a flowchart of the background subtraction step and the foreground-object segmentation step according to the present invention.
Fig. 5 is a schematic diagram illustrating the background subtraction step and the foreground-object segmentation step according to the present invention.
Fig. 6 is a flowchart of the foreground-object verification step according to the present invention.
Fig. 7-1 is a flowchart of the depth-map resizing process according to the present invention.
Fig. 7-2 is a schematic diagram of the depth-map resizing process according to the present invention.
Fig. 8-1 is a schematic diagram of the head-shoulder template according to the present invention.
Fig. 8-2 is a schematic diagram of a grayscale template, shown in contrast to the head-shoulder template of the present invention.
Fig. 9 is a flowchart of the step of matching object shapes against the template according to the present invention.
Fig. 10 is a schematic visualization of an NCC matching result according to the present invention.
Fig. 11 is a block diagram of the system according to the present invention.
Detailed description of embodiments
Below, specific embodiments of the invention are described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic view of a scene employing the foreground-object detection and verification method and system of the present invention. The scene is captured by a stereo (depth) camera, and the invention processes the captured data at the same time. The output can be shown on a display device.
Fig. 2 is a flowchart of the foreground-object detection and verification method according to the present invention. First, at step 11, a depth-map acquisition unit U10 acquires depth information of the current environment and creates a depth map of the current environment based on the acquired depth information. Next, at step 12, a background modeling unit U13 compares the depth of each pixel of the created depth map of the current environment with that of an initial background depth map and updates the background depth-map model. Then, at step 13, a background subtraction unit U14 subtracts the updated background depth-map model from a depth map of the current environment newly captured by the depth camera, thereby obtaining a depth map of the foreground region in front of the background. At step 14, a foreground-object segmentation unit U15 numbers one or more connected components in the obtained foreground depth map and, when there are multiple connected components, segments them as multiple candidate specific foreground objects; a foreground-object verification unit U16 then verifies by template matching whether each segmented specific foreground object belongs to the specific object class of the matched template. Finally, at step 15, the verified objects are output.
The depth map can be obtained by a stereo camera such as a PrimeSense camera. A so-called depth map is produced as follows: the depth camera photographs the environment in front of its lens and computes the distance from each pixel of the captured environment to the depth camera; the distance between the object represented by each pixel and the depth camera is recorded with, for example, a 16-bit value, so that these per-pixel 16-bit values form an image representing the distance between each pixel and the camera. A depth map 10 is thus an image in which the meaning of each pixel value is the distance of that position from the camera. The absolute distance values cannot be visualized directly, so the data must be processed to satisfy the numeric constraints of a digital image; hence the name depth map. The depth map 10 referred to in the following description contains the original distance values rather than the processed, visualizable pixel values.
Fig. 3 is a flowchart of the background modeling step according to the present invention. First, at step S110, an initialization model is input. The initial background model may use the depth map of the first frame (or the mean of the first several frames) as the initial background; afterwards the background model is continuously and dynamically updated. So that the invention can be used in any scene, the background model must be updated in real time. To this end, at step S111, the depth camera continuously acquires N frames of depth maps of the scene in which the invention is applied. Since the depth of each frame may be unstable, noise reduction is performed on the acquired N frames at step S112. For example, the noise-reduction method is: acquire N depth frames and, for the N depth values of the pixel at the same position across those frames, apply a noise-reduction function. The noise-reduction function may be a median filter, whose expression is:
$$\bar{d}(x, y) = \operatorname{median}_{1 \le i \le N} \big( d_i(x, y) \big) \qquad (1)$$
where $\bar{d}(x, y)$ is the noise-reduced depth value at position (x, y), $d_i(x, y)$ is the depth value there in frame i, and N is the number of frames.
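For illustration, a minimal sketch of this temporal median filter in Python/NumPy follows; the function name and the assumption that the N frames arrive as a list of equal-sized 16-bit depth arrays are ours, not the patent's:

```python
import numpy as np

def temporal_median(frames):
    """Per-pixel median over N consecutive depth frames, expression (1).

    frames: list of N HxW uint16 depth maps from the depth camera.
    Returns the noise-reduced depth map d_bar.
    """
    stack = np.stack(frames, axis=0)                  # shape (N, H, W)
    return np.median(stack, axis=0).astype(np.uint16)
```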
After the above processing has been applied to the depth values at each position of the acquired depth maps, a depth map containing the noise-reduced depth values is output at step S113. Then, at step S115, the noise-reduced depth map is used to update the initial background model. The update process is as follows: the previously existing background model is input at step S114, and the depth values of the background-model depth map existing before the update are compared with those of the noise-reduced depth map. If, at some pixel position common to the two depth maps, the depth value of the noise-reduced depth map is greater than the depth value of the existing background-model depth map, that pixel of the noise-reduced depth map is farther from the depth camera; this shows that the pixel was occluded by some foreground object when the pre-update background model was formed, and that the foreground object had been removed by the time the N depth frames were captured. That pixel of the noise-reduced depth map should therefore become part of the background, so its depth value replaces the depth value of the corresponding pixel in the background-model depth map. The update expression is:
$$d_B(x, y) = \begin{cases} \bar{d}(x, y), & \bar{d}(x, y) > d_B(x, y) \\ d_B(x, y), & \text{otherwise} \end{cases} \qquad (2)$$
where $\bar{d}$ denotes the depth value in the noise-reduced depth map and $d_B$ denotes the depth value in the background model.
A scene in which the invention is used will not be changing constantly; on the contrary, after an initial period of change it usually reaches a steady state. In a meeting room, for example, once the people have settled down, the scene changes little. To reduce the computation spent on updating, the invention may further specify when updating stops. To this end, the background modeling process of the present invention also includes step S116. At step S116, a termination condition stops the background-model update process; the termination condition is defined as: within a specified time T, the number of pixels updated in background-update step S115 is less than a given threshold Count_th.
In this way, the background-model depth map is kept in a dynamic state, so the invention can be used in real time without being affected by environmental changes.
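As a sketch of how the update rule (2) and the termination test of step S116 might be combined, assuming the `temporal_median` helper above and hypothetical names (`count_th` for Count_th, `period_s` for the time T, `frame_source` for an iterator yielding groups of N consecutive frames):

```python
import time
import numpy as np

def update_background(d_bg, d_bar):
    """One pass of expression (2): keep the farther (larger) depth per pixel.
    Returns the updated background and the number of pixels that changed."""
    mask = d_bar > d_bg                      # pixels now seen farther away
    return np.where(mask, d_bar, d_bg), int(mask.sum())

def build_background(frame_source, d_bg, count_th, period_s):
    """Repeat the update until fewer than count_th pixels change within a
    window of period_s seconds (the termination condition of step S116)."""
    window_start, window_count = time.time(), 0
    for frames in frame_source:              # each item: N consecutive frames
        d_bg, n = update_background(d_bg, temporal_median(frames))
        window_count += n
        if time.time() - window_start >= period_s:
            if window_count < count_th:
                break                        # background has stabilized
            window_start, window_count = time.time(), 0
    return d_bg
```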
Fig. 4 is a flowchart of the background subtraction step and the foreground-object segmentation step according to the present invention. After the background model is updated (or in parallel with the update), the depth map of the current background model is subtracted from a newly acquired depth map, yielding a depth map of the possible foreground objects in the newly acquired frame. The detailed procedure is as follows:
First, at step S120, one depth frame is acquired from the depth camera. Then, at step S121, the input background model is received (this background model may be the initial one or one that has just been updated), the depth map of the input background model is subtracted from the acquired depth map, and a foreground depth map 122 is output.
The subtraction strategy is given by expression (3):
$$d_F(x, y) = \begin{cases} 0, & |d(x, y) - d_B(x, y)| < Th_{Sub} \\ d(x, y), & \text{otherwise} \end{cases} \qquad (3)$$
where $d_B$ is the pixel value of the background-model depth map, $d$ is the pixel value of the input depth map, $d_F$ is the pixel value of the foreground depth map, and $Th_{Sub}$ is a predefined threshold.
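A minimal sketch of the subtraction rule (3) under the same NumPy assumptions as above; the cast to a signed type (our addition) avoids unsigned underflow when differencing 16-bit depth maps:

```python
import numpy as np

def subtract_background(d, d_bg, th_sub):
    """Expression (3): zero out pixels whose depth is close to the background
    model; keep the original depth elsewhere (the foreground depth map d_F)."""
    diff = np.abs(d.astype(np.int32) - d_bg.astype(np.int32))
    return np.where(diff < th_sub, 0, d).astype(d.dtype)
```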
Then, at step S130, the foreground depth map is segmented into multiple foreground objects according to depth-based connected components, and output as a foreground-object set 131. The segmentation algorithm used is depth-based connected component analysis (DCCA). The specific algorithm can be found in U.S. Patent Application US20090183125(A1) filed by PrimeSense, the content of which is hereby incorporated by reference.
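The actual DCCA algorithm is the one described in the incorporated PrimeSense application; purely as an illustration of depth-based connectivity, the following simplified stand-in labels 4-connected foreground pixels whose depth values differ by less than a threshold (the flood-fill formulation and the name `max_step` are our assumptions, not the cited algorithm):

```python
from collections import deque
import numpy as np

def depth_connected_components(d_f, max_step):
    """Label map for the foreground depth map d_F (0 = background).
    Two neighboring foreground pixels (depth > 0) join the same component
    only if their depth values differ by less than max_step."""
    h, w = d_f.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if d_f[sy, sx] == 0 or labels[sy, sx] != 0:
                continue
            next_label += 1                      # start a new component
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] == 0 and d_f[ny, nx] != 0
                            and abs(int(d_f[ny, nx]) - int(d_f[y, x])) < max_step):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels
```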
Fig. 5 is a schematic diagram illustrating the background subtraction step and the foreground-object segmentation step according to the present invention. The data format of the objects in the foreground-object set 131 is the same as that of a depth map; the final result in the diagram lists only some of the foreground objects.
Fig. 6 is a flowchart of the foreground-object verification step according to the present invention. The foreground-object verification performed by the invention uses template matching. The detailed procedure is as follows.
First, at step S141, any one foreground object is selected from the input foreground-object set 131. Then, at step S142, the mean depth of the selected foreground object is computed. The expression for computing the mean depth is:
$$d_{avg} = \sum_{(x, y) \in Obj} d(x, y) \,/\, size \qquad (4)$$
where Obj denotes the foreground object, d the depth value, and size the area (pixel count) of the foreground object.
After the mean depth value of the foreground object is obtained, the size of the selected foreground-object depth map is changed at step S143 based on the computed mean depth value. Fig. 7-1 is a flowchart of this depth-map resizing process.
As shown in Fig. 7-1, the resizing process S143 comprises: a depth-value recomputation step S1430, a scaling-ratio computation step S1431, and a foreground-object scaling step S1432.
At step S1430, the depth values are recomputed. This step ensures the consistency of the depth values of the depth map before and after scaling. Unlike a grayscale image, which can be scaled without changing the gray values of its pixels, the depth values of the foreground object must be recomputed, because the value of a depth-map pixel represents range information and is therefore tied to the map's size. Expression (5) gives the recomputation:
$$d'(x, y) = D_{Norm} \cdot d(x, y) / d_{avg} \qquad (5)$$
where d is the depth value of the foreground object, $d_{avg}$ is computed by expression (4), and $D_{Norm}$ is a normalization parameter indicating the distance to which all foreground objects are scaled. In the following description, $D_{Norm} = 3$ (meters) is used as an example.
Then, at step S1431, the scaling ratio is computed. It is given by expression (6):
$$ratio = d_{avg} / D_{Norm} \qquad (6)$$
where $d_{avg}$ is the mean depth value computed by expression (4) and $D_{Norm}$ is the normalization parameter.
Finally, at step S1432, the selected foreground object is scaled according to the computed ratio. The scaling parameters are computed by expression (7):
$$h = H / ratio, \qquad w = W / ratio \qquad (7)$$
where H and W are the height and width of the original foreground-object image, h and w are the height and width of the scaled foreground-object image, and ratio is the scaling ratio computed by expression (6). Template matching is performed on the foreground object after scaling step S1432; since the template size is fixed, the computation and time needed for matching are reduced.
Fig. 7-2 is a schematic diagram of the depth-map resizing process according to the present invention.
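Expressions (4) to (7) could be sketched as follows; expressing D_Norm in millimetres (3000 for the 3-meter example) and using OpenCV's `cv2.resize` for the actual scaling are our assumptions:

```python
import numpy as np
import cv2  # OpenCV, assumed available for image resizing

def normalize_foreground(obj_depth, d_norm=3000):
    """Renormalize a foreground object's depth map to the template distance.
    Returns the resized depth map and the scaling ratio."""
    pixels = obj_depth[obj_depth > 0]
    d_avg = float(pixels.mean())                     # (4) mean object depth
    d_new = np.where(obj_depth > 0,
                     d_norm * obj_depth.astype(np.float32) / d_avg,  # (5)
                     np.float32(0))
    ratio = d_avg / d_norm                           # (6)
    h0, w0 = obj_depth.shape
    h, w = max(1, round(h0 / ratio)), max(1, round(w0 / ratio))      # (7)
    return cv2.resize(d_new, (w, h), interpolation=cv2.INTER_NEAREST), ratio
```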
Returning to Fig. 6, at step S144 the existing shape template is received as input, and the shape of the scaled foreground object is matched against the template of a given type.
A generic template-matching method finds the best-matching position in a given image by repeatedly changing the template size, because the actual size of an object in the image varies with its position; such matching requires a great deal of computation time, since every possible template size must be tried to complete one match. The speed-up adopted in the present invention is to normalize the foreground object and the template to the same scale according to depth value. Fig. 8-1 is a schematic diagram of a head-shoulder template instance according to the invention, and Fig. 8-2 shows, for contrast, a grayscale template different from the head-shoulder template of Fig. 8-1. The head and shoulders of a person form a stable and robust "Ω" shape, and this stability can serve as the feature for template matching. The template of Fig. 8-1 is a normalized three-dimensional template with two main characteristics. First, it is three-dimensional: the template is a depth map whose pixel values represent distance. Second, it is normalized: the template size is fixed and tied to the $D_{Norm}$ of expression (6); concretely, the template is acquired when the object is at distance $D_{Norm}$ (for example, 3 meters) from the camera. Compared with the grayscale template of Fig. 8-2, the head-shoulder template of Fig. 8-1 has the advantage of improving matching accuracy and reducing noise. Even if some object's appearance has a similar "Ω" shape, if its three-dimensional surface does not have the shape of an ellipsoid, the normalized three-dimensional head-shoulder template can tell that the object is not a person's head and shoulders, a technical effect the grayscale template of Fig. 8-2 cannot achieve.
Fig. 9 is a flowchart of the step of matching object shapes against the template according to the present invention. The matching and verification process comprises: step S1440, performing NCC template matching; step S1441, thresholding the maximum matching value; and step S1442, computing the actual position of the head and shoulders.
At step S1440, NCC template matching is performed; that is, the normalized correlation coefficient (NCC) is used for template matching. The correlation computation is essentially similar to a convolution. Expression (8) is the formula for the correlation coefficient:
$$R_{ccoeff}(x, y) = \Big[ \sum_{x', y'} T(x', y') \cdot I(x + x', y + y') \Big]^2 \qquad (8)$$
where T denotes the template image and I denotes the target image, i.e., the foreground object in the present invention.
Expression (9) gives the normalization coefficient, and expression (10) the NCC computation:
$$Z(x, y) = \sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I(x + x', y + y')^2 \qquad (9)$$
$$R_{ccoeff\_normed}(x, y) = \frac{R_{ccoeff}(x, y)}{Z(x, y)} \qquad (10)$$
where Z(x, y) is computed by expression (9).
Suppose the template's width and height are w and h, and the foreground object's width and height are W and H; the NCC result is then a (W − w + 1) × (H − h + 1) two-dimensional array. The NCC value represents the degree of match; results range from 0 to 1, with 1 being a perfect match.
Fig. 10 is a schematic visualization of an NCC matching result according to the present invention.
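A direct, unoptimized sketch of expressions (8) to (10); the nested-loop form is chosen for clarity rather than speed, and the guard against a zero denominator is our addition:

```python
import numpy as np

def ncc_match(template, target):
    """Squared normalized correlation surface of size (H-h+1) x (W-w+1);
    values lie in [0, 1], with 1 meaning a perfect match."""
    th, tw = template.shape
    H, W = target.shape
    T = template.astype(np.float64)
    t_energy = (T ** 2).sum()
    result = np.zeros((H - th + 1, W - tw + 1))
    for y in range(result.shape[0]):
        for x in range(result.shape[1]):
            patch = target[y:y + th, x:x + tw].astype(np.float64)
            z = t_energy * (patch ** 2).sum()              # (9)
            if z > 0:
                result[y, x] = (T * patch).sum() ** 2 / z  # (8) and (10)
    return result
```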
Then, at step S1441, the maximum matching value is thresholded: the maximum matching value $V_{Max}$ and its position $(x_0, y_0)$ are found in the NCC result, and whether it is a head-shoulder is decided by the following strategy:
$$Location = \begin{cases} (x_0, y_0), & V_{Max} > Match_{th} \\ \text{NULL}, & \text{otherwise} \end{cases} \qquad (11)$$
where $Match_{th}$ is a predefined matching threshold.
Subsequently, at step S1442, the actual position of the head and shoulders is computed; that is, its position in the input image is computed from the matched head-shoulder result.
First, by the NCC result and expression (11), the head-shoulder region in the scaled foreground object is a rectangular region, denoted RECT(x0, y0, w, h), where x0 and y0 are the coordinates at which the maximum matching value is obtained and w and h are the width and height of the template. The head-shoulder region in the original input image is then RECT(x0·ratio + Δx, y0·ratio + Δy, w·ratio, h·ratio), where ratio is the scaling ratio computed by expression (6), and Δx and Δy give the position of the foreground object relative to the full depth map.
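Expression (11) together with the mapping back to input-image coordinates could be sketched as follows; the parameter names (`dx`, `dy` for Δx and Δy) are ours:

```python
import numpy as np

def locate_head_shoulder(ncc, match_th, ratio, dx, dy, tw, th):
    """Threshold the maximum NCC value (expression (11)); if it passes,
    map RECT(x0, y0, w, h) from the scaled object back to the original
    input image. dx, dy: position of the foreground object in the full map."""
    y0, x0 = np.unravel_index(np.argmax(ncc), ncc.shape)
    if ncc[y0, x0] <= match_th:
        return None                          # NULL: no head-shoulder found
    return (int(x0 * ratio + dx), int(y0 * ratio + dy),
            int(tw * ratio), int(th * ratio))
```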
Finally, as shown in Fig. 2, the verified head-shoulder region is output at step 15 as a verified foreground object for subsequent processing.
Although the invention has been described using only a person's head and shoulders as the specific object, those skilled in the art will understand that the invention can be applied to any specific object, for example various animals such as tigers and lions, or vehicles; the only difference is that different matching templates must be built.
Fig. 11 is a block diagram of the system according to the present invention. As shown, the system comprises: a central processing unit U11; a storage device U12; a display device U17; a depth-map acquisition device U10, which acquires depth information of the current environment and creates a depth map of the current environment based on the acquired depth information; a background modeling unit U13, which compares the depth of each pixel of the created depth map with that of an initial background depth map to update a background depth-map model; a background subtraction unit U14, which subtracts the updated background depth-map model from a depth map of the current environment newly captured by the depth camera, thereby obtaining a depth map of the foreground region in front of the background; a foreground-object segmentation unit U15, which numbers one or more connected components in the obtained foreground depth map and, when there are multiple connected components, segments them as multiple candidate specific foreground objects; and a foreground-object verification unit U16, which verifies by template matching whether each segmented specific foreground object belongs to the specific object class of the matched template.
The method of the present invention can be executed on a single computer (processor), or executed in a distributed manner by multiple computers. In addition, the program can be transferred to a remote computer and executed there.
Those skilled in the art will understand that various modifications, combinations, sub-combinations, and substitutions may occur depending on design requirements and other factors, insofar as they fall within the scope of the appended claims or their equivalents.

Claims (8)

1. A method for detecting and verifying specific foreground objects, comprising:
acquiring depth information of the current environment with a depth camera, and creating a depth map of the current environment based on the acquired depth information of the current environment;
comparing the depth of each pixel of the created depth map of the current environment with that of an initial background depth map, to update a background depth-map model;
subtracting the updated background depth-map model from a depth map of the current environment newly captured by the depth camera, thereby obtaining a depth map of the foreground region in front of the background of the current environment;
numbering one or more connected components in the obtained foreground depth map and, when there are multiple connected components, segmenting them as multiple candidate specific foreground objects; and
verifying, by template matching, whether each segmented specific foreground object belongs to the specific object class of the matched template,
wherein comparing the depth of each pixel of the created depth map of the current environment with that of the initial background depth map to update the background depth-map model comprises repeatedly performing the following steps:
comparing the depth of each pixel of the created depth map of the current environment with that of the previously existing background depth map and, when the current depth of a pixel of the current depth map is found to be greater than the depth of the corresponding pixel in the background depth map, updating the depth of that pixel in the background depth map to the current depth value of the pixel of the current depth map;
repeating the above steps until the number of pixels so updated within a predetermined time threshold is less than a predetermined quantity threshold.
2. The method of claim 1, wherein comparing the depth of each pixel of the created depth map of the current environment with that of the initial background depth map to update the background depth-map model comprises:
before the depth comparison, performing noise-reduction filtering on several consecutive frames using median filtering.
3. The method of claim 1, further comprising:
before verifying whether a segmented specific foreground object belongs to the specific object class of the matched template, building a specific-object template.
4. The method of claim 3, wherein the specific-object template is a depth map of the specific object, having a fixed size and depth values corresponding to a fixed distance from the specific object to a specified camera.
5. The method of claim 4, further comprising: before the step of verifying by template matching whether a segmented specific foreground object belongs to the specific object class of the matched template, resizing the depth map of the foreground object to be verified according to the range information contained in its depth map.
6. The method of claim 5, wherein resizing the foreground-object depth map according to the range information contained in it comprises:
computing the mean depth of the depth values of the pixels of the foreground-object depth map;
computing a scaling ratio for the foreground-object depth map based on the fixed depth value specified in the specific-object template and the computed mean depth of the foreground-object depth map; and
resizing the foreground-object depth map according to the computed scaling ratio.
7. The method of claim 6, wherein the step of verifying by template matching whether a segmented specific foreground object belongs to the specific object class of the matched template is performed by template matching between the specific-object template and the resized foreground-object depth map using the normalized correlation coefficient (NCC) algorithm.
8. A system for detecting and verifying specific foreground objects, comprising:
a depth-map acquisition device, which acquires depth information of the current environment and creates a depth map of the current environment based on the acquired depth information of the current environment;
a background modeling unit, which compares the depth of each pixel of the created depth map of the current environment with that of an initial background depth map to update a background depth-map model;
a background subtraction unit, which subtracts the updated background depth-map model from a depth map of the current environment newly captured by the depth camera, thereby obtaining a depth map of the foreground region in front of the background of the current environment;
a foreground-object segmentation unit, which numbers one or more connected components in the obtained foreground depth map and, when there are multiple connected components, segments them as multiple candidate specific foreground objects; and
a foreground-object verification unit, which verifies by template matching whether each segmented specific foreground object belongs to the specific object class of the matched template,
wherein comparing, by the background modeling unit, the depth of each pixel of the created depth map of the current environment with that of the initial background depth map to update the background depth-map model comprises repeatedly performing the following steps:
comparing the depth of each pixel of the created depth map of the current environment with that of the previously existing background depth map and, when the current depth of a pixel of the current depth map is found to be greater than the depth of the corresponding pixel in the background depth map, updating the depth of that pixel in the background depth map to the current depth value of the pixel of the current depth map;
repeating the above steps until the number of pixels so updated within a predetermined time threshold is less than a predetermined quantity threshold.
CN201110181505.9A 2011-06-30 2011-06-30 Method and system for detecting and verifying specific foreground objects Active CN102855459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110181505.9A CN102855459B (en) 2011-06-30 2011-06-30 Method and system for detecting and verifying specific foreground objects


Publications (2)

Publication Number Publication Date
CN102855459A CN102855459A (en) 2013-01-02
CN102855459B true CN102855459B (en) 2015-11-25

Family

ID=47402038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110181505.9A Active CN102855459B (en) 2011-06-30 2011-06-30 Method and system for detecting and verifying specific foreground objects

Country Status (1)

Country Link
CN (1) CN102855459B (en)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105225217B (en) * 2014-06-23 2018-04-10 株式会社理光 Background model update method and system based on depth
CN105447895A (en) * 2014-09-22 2016-03-30 酷派软件技术(深圳)有限公司 Hierarchical picture pasting method, device and terminal equipment
CN105678696B (en) * 2014-11-20 2019-03-29 联想(北京)有限公司 A kind of information processing method and electronic equipment
KR102360424B1 (en) * 2014-12-24 2022-02-09 삼성전자주식회사 Method of detecting face, method of processing image, face detection device and electronic system including the same
US10277888B2 (en) * 2015-01-16 2019-04-30 Qualcomm Incorporated Depth triggered event feature
CN105760846B (en) * 2016-03-01 2019-02-15 北京正安维视科技股份有限公司 Target detection and localization method and system based on depth data
CN107636727A (en) * 2016-12-30 2018-01-26 深圳前海达闼云端智能科技有限公司 Target detection method and device
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation
DE102018206848A1 (en) * 2018-05-03 2019-11-07 Robert Bosch Gmbh Method and apparatus for determining a depth information image from an input image
CN109165339A (en) * 2018-07-12 2019-01-08 西安艾润物联网技术服务有限责任公司 Service push method and Related product
CN109658433B (en) * 2018-12-05 2020-08-28 青岛小鸟看看科技有限公司 Image background modeling and foreground extracting method and device and electronic equipment
CN110136174B (en) * 2019-05-22 2021-06-22 北京华捷艾米科技有限公司 Target object tracking method and device
CN110135382B (en) * 2019-05-22 2021-07-27 北京华捷艾米科技有限公司 Human body detection method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236599A (en) * 2007-12-29 2008-08-06 浙江工业大学 Human face recognition detection device based on multi- video camera information integration
CN101657825A (en) * 2006-05-11 2010-02-24 普莱姆传感有限公司 Modeling of humanoid forms from depth maps

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8867820B2 (en) * 2009-10-07 2014-10-21 Microsoft Corporation Systems and methods for removing a background of an image


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705552A (en) * 2019-10-11 2020-01-17 沈阳民航东北凯亚有限公司 Luggage tray identification method and device
CN110705552B (en) * 2019-10-11 2022-05-06 沈阳民航东北凯亚有限公司 Luggage tray identification method and device

Also Published As

Publication number Publication date
CN102855459A (en) 2013-01-02


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant