CN102034247B - Motion capture method for binocular vision image based on background modeling - Google Patents

Motion capture method for binocular vision image based on background modeling

Info

Publication number
CN102034247B
CN102034247B
Authority
CN
China
Prior art keywords
background
binocular vision
binocular
image
foreground
Prior art date
Legal status
Expired - Fee Related
Application number
CN 201010602544
Other languages
Chinese (zh)
Other versions
CN102034247A (en)
Inventor
王阳生
时岭
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN 201010602544 priority Critical patent/CN102034247B/en
Publication of CN102034247A publication Critical patent/CN102034247A/en
Application granted granted Critical
Publication of CN102034247B publication Critical patent/CN102034247B/en

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a motion capture method for binocular vision images based on background modeling. The method segments the human body as foreground and simultaneously captures the motion of the upper torso, enabling man-machine interaction. The steps are as follows: on the basis of background modeling, a Gaussian model is built for a clean background acquired by the camera; newly acquired video is compared against this background model; using the depth information acquired by the binocular camera, each pixel in the scene is assigned a probability of belonging to the foreground or the background; and the scene is segmented into foreground and background with a graph cut algorithm. When the segmented foreground is the upper torso of a human body, a basic skeleton model of the body is obtained by thinning the foreground contour, de-noising, and locating key points, thereby completing the motion capture process.

Description

A motion capture method for binocular vision images based on background modeling
Technical field
The invention belongs to the fields of computer vision technology and interactive digital entertainment, and relates to a background segmentation and motion capture process carried out with a binocular camera and background modeling techniques.
Background technology
Motion capture technology refers to capturing the motion of the human body in real time and accurately by means of computer vision or other techniques. With the development of computer software and hardware and the growing demands of computer users, motion capture plays an increasingly visible role in fields such as digital entertainment, video surveillance, and motion analysis.
However, the development of motion capture technology is subject to various constraints and limitations, such as lighting changes, complex backgrounds, and occlusion during motion. These factors make the motion capture process more difficult. Yet if background segmentation is performed by a binocular vision method, so that the only foreground in the scene is the human body, the motion capture problem reduces to analyzing the foreground contour of the scene, which greatly simplifies the computation. Meanwhile, in interactive digital entertainment, motion capture as a video interaction technique has been a research hotspot of human-computer interaction in games in recent years. Cameras have become standard equipment on PCs, and natural, immersive human-computer interaction is increasingly the focus of digital entertainment research. Binocular vision motion capture based on background segmentation therefore has broad application prospects.
Summary of the invention
The objective of the invention is to segment the foreground and background of a scene captured by a binocular camera, and on this basis to complete the motion capture process. The method first trains on a clean background, collecting a set number of background frames to build the background model. On this basis, the color difference between newly acquired images and the background model, together with the depth information of binocular vision, is used to build the graph cut network, and the scene is segmented into foreground and background by a dynamic graph cut method. On the basis of this segmentation, the foreground human body is structurally analyzed to locate the parts of the upper torso, thereby completing the motion capture process.
To achieve the above objective, the invention provides a motion capture method for binocular vision images based on background modeling, comprising the following steps:
Step S1: fix the position of the binocular camera, turn off white balance, and acquire binocular vision images;
Step S2: from the acquired binocular vision images, perform background modeling over a set number of clean background frames to obtain the background model;
Step S3: using the binocular depth information obtained by computer binocular vision, compute the probability that each pixel belongs to the foreground or the background;
Step S4: using the binocular depth information, the background model data, and a dynamic graph cut algorithm, segment the binocular vision image into foreground and background and extract the foreground contour;
Step S5: thin the foreground contour, locate the human body key points, and complete the motion capture.
Beneficial effects of the present invention:
The present invention uses computer vision and image processing techniques to naturally separate the foreground human body from the scene and to capture the motion of the upper torso, thereby achieving natural human-computer interaction. Traditional interaction is characterized by hand contact, such as the mouse and keyboard. With the development of computer vision technology, more and more systems accomplish human-computer interaction naturally through a camera: users can experience the interaction more conveniently through vision, and as a game interface it gives the player a stronger sense of immersion.
In addition, the present invention combines binocular vision acquisition with background modeling. Binocular vision is adopted mainly to take full advantage of depth information: the foreground usually lies in the region close to the camera, and depth avoids segmentation errors caused by shadows and occlusion. Building a background model also allows the segmentation cost to be computed more accurately, while the dynamic graph cut method makes the segmentation faster.
Description of drawings
Figure 1A is the overall flow chart of the present invention;
Fig. 1 shows the binocular vision images of the present invention;
Fig. 2 shows the left image, the right image, and the disparity obtained by binocular vision;
Fig. 3 is the max-flow/min-cut network flow graph of the graph cut algorithm of the present invention;
Fig. 4 is a flow chart of the present invention;
Fig. 5 shows a group of video background segmentation results of the present invention;
Fig. 6 is a schematic diagram of the edge smoothing of the background segmentation result of the present invention;
Fig. 7 shows the contour thinning and key-position extraction results of the present invention.
Embodiment
The present invention is described in detail below in conjunction with the accompanying drawings. The described embodiments are intended only to facilitate understanding of the invention and do not limit it in any way.
The operating process of the motion capture method based on background modeling is further illustrated below by an example.
All code in this example is written in C++ and runs under Microsoft Visual Studio 2005; other software and hardware environments can also be used, and are not detailed here.
Figure 1A shows the overall flow chart of the motion capture method for binocular vision images based on background modeling of the present invention.
The motion capture method of the present invention for binocular vision images based on background modeling, itself built on binocular vision and background segmentation, comprises the following steps:
Step S1: fix the position of the binocular camera, turn off white balance, and acquire binocular vision images;
Step S2: from the acquired binocular vision images, perform background modeling over a set number of clean background frames to obtain the background model;
Step S3: using the binocular depth information obtained by computer binocular vision, compute the probability that each pixel belongs to the foreground or the background;
Step S4: using the binocular depth information, the background model data, and a dynamic graph cut algorithm, segment the binocular vision image into foreground and background and extract the foreground contour;
Step S5: thin the foreground contour, locate the human body key points, and complete the motion capture.
The acquisition of the binocular vision images described in step S2 comprises the following steps:
Step S211: ensure that the camera position is fixed and that there is no obvious lighting change in the scene;
Step S212: turn off the camera's automatic white balance; camera hardware generally provides automatic exposure and automatic white balance functions to adjust image quality automatically when scene lighting changes, but for background modeling the white balance parameters must be fixed;
Step S213: collect a fixed number of clean background frames (100 frames here) and store them in memory.
The background modeling over the set number of clean background frames described in step S2 comprises the following steps:
Step S221: apply the Gaussian background model to the color image of each frame of the binocular vision images, where R, G, and B denote the red, green, and blue channel values, each in the range 0-255;
Step S222: over the N images obtained during background modeling, each of 320 × 240 pixels, compute the brightness I and the chromaticity (r, g) of each pixel, where r = R/(R+G+B), g = G/(R+G+B), and R, G, B are the red, green, and blue channel values respectively;
Step S223: build the pixel-level fused background model: compute the mean and variance of the brightness and chromaticity of each pixel over the N images and store them in memory;
Step S224: build the feature background model in brightness space and the colorimetric model in chromaticity space, and store the resulting chromaticity and brightness background models in memory.
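As an illustration, a minimal C++ sketch of this per-pixel Gaussian statistics computation is given below. The data structures and function names are illustrative assumptions, not the patent's actual implementation, and brightness is taken here as I = R + G + B (the patent does not give the exact formula).

```cpp
#include <vector>
#include <cmath>

struct RGB { double R, G, B; };

// Per-pixel Gaussian background statistics over N clean frames.
// Chromaticity follows the definitions above:
// r = R / (R + G + B), g = G / (R + G + B).
struct PixelModel {
    double meanI = 0, varI = 0;   // brightness mean / variance
    double meanR = 0, varR = 0;   // chromaticity r mean / variance
    double meanG = 0, varG = 0;   // chromaticity g mean / variance
};

// frames[n][p] holds the (R,G,B) of pixel p in frame n.
std::vector<PixelModel> buildBackgroundModel(
        const std::vector<std::vector<RGB>>& frames) {
    const size_t N = frames.size();
    const size_t P = frames[0].size();          // 320 * 240 in the text
    std::vector<PixelModel> model(P);

    for (size_t p = 0; p < P; ++p) {
        // First pass: means of I, r, g over the N frames.
        double sI = 0, sR = 0, sG = 0;
        for (size_t n = 0; n < N; ++n) {
            const RGB& c = frames[n][p];
            double sum = c.R + c.G + c.B + 1e-9;  // avoid division by zero
            sI += sum;
            sR += c.R / sum;
            sG += c.G / sum;
        }
        model[p].meanI = sI / N;
        model[p].meanR = sR / N;
        model[p].meanG = sG / N;

        // Second pass: variances around those means.
        double vI = 0, vR = 0, vG = 0;
        for (size_t n = 0; n < N; ++n) {
            const RGB& c = frames[n][p];
            double sum = c.R + c.G + c.B + 1e-9;
            vI += std::pow(sum - model[p].meanI, 2);
            vR += std::pow(c.R / sum - model[p].meanR, 2);
            vG += std::pow(c.G / sum - model[p].meanG, 2);
        }
        model[p].varI = vI / N;
        model[p].varR = vR / N;
        model[p].varG = vG / N;
    }
    return model;
}
```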
The depth data cost of each pixel in the binocular vision image described in step S3 is computed to obtain each pixel's depth cost, thereby introducing the binocular depth information. The concrete steps are:
Step S231: collect and save the binocular vision images, denoted the left image and the right image respectively;
Step S232: assign each pixel of the left image a depth value, represented by the disparity between the left and right images;
Step S233: for each depth value, compute the difference cost between the left and right images;
Step S234: collect the cost values of the left image and divide them into four groups according to their magnitude;
Step S235: for each group, update the foreground and background costs of the pixel, where the foreground cost decreases exponentially with disparity and the background cost increases exponentially with disparity.
The segmentation of the binocular vision image into foreground and background using the binocular depth information, the background model data, and the dynamic graph cut algorithm described in step S4, together with extraction of the foreground contour, comprises the following concrete steps:
Step S41: after background modeling is complete, read in the newly acquired binocular vision image, comprising a left image and a right image;
Step S42: from the result of the binocular matching cost computation, obtain the data cost of the binocular information;
Step S43: compare the pixels of the left image against the background model to obtain the color-based cost values, and build the max-flow/min-cut network following the principle of the graph cut algorithm;
Step S44: combine the two data cost values obtained in steps S42 and S43 into the data cost of the graph cut algorithm;
Step S45: assign the smoothness term of the graph cut algorithm from the contrast between neighboring pixels of the left image;
Step S46: apply the dynamic graph cut algorithm to segment the pixel-level video stream into two parts, one being foreground and the other background;
Step S47: store the segmented foreground/background as 0s and 1s in an image of the same size, and extract the edge contour from this binary image;
Step S48: denoise the edge by high-frequency filtering so that it becomes smoother;
Step S49: correct segmentation errors in the segmented region using the data of the previous several frames.
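As an illustration of step S45, the C++ sketch below shows one plausible contrast-dependent assignment of the smoothness weight. The exponential form and the parameter values are standard choices from the graph cut literature and are assumptions here; the patent states only that the term is assigned from the contrast between neighboring pixels.

```cpp
#include <cmath>

// Contrast-sensitive smoothness weight between two neighboring left-image
// pixels p and q. Low contrast -> large weight -> cutting here is
// expensive, so the segmentation boundary is pushed toward strong edges.
double smoothnessWeight(double rp, double gp, double bp,   // color of p
                        double rq, double gq, double bq,   // color of q
                        double lambda = 10.0,              // assumed value
                        double sigma  = 15.0) {            // assumed value
    double d2 = (rp - rq) * (rp - rq)
              + (gp - gq) * (gp - gq)
              + (bp - bq) * (bp - bq);
    return lambda * std::exp(-d2 / (2.0 * sigma * sigma));
}
```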
The denoising and thinning of the image according to step S5, which yields the key points of the torso and thereby achieves motion capture, comprises the following steps:
Step S51: scale the post-processed human body contour;
Step S52: thin the scaled human body contour;
Step S53: enlarge the thinned human body contour back to its original size;
Step S54: thin the contour again;
Step S55: find the nodes with more than 2 neighboring skeleton pixels and take their centroid as the human body's center of gravity;
Step S56: search upward and downward from the center of gravity for nodes, and set them as the head and the waist;
Step S57: search left and right from the center of gravity to find the left and right arms, and determine the elbows and shoulders proportionally using eccentricity;
Step S58: compare the 9 determined key points with those of the previous several frames to obtain relatively stable and accurate torso positions.
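A minimal C++ sketch of the branch-point search of step S55 follows. The image layout and function name are assumptions for illustration only: skeleton pixels with more than two 8-connected skeleton neighbors are treated as nodes, and their centroid is taken as the body's center of gravity.

```cpp
#include <vector>

// Binary skeleton image: 1 = skeleton pixel, 0 = background.
struct Mask { std::vector<int> px; int w, h; };

// Step S55: among skeleton pixels, find the nodes with more than two
// skeleton neighbors (branch points) and return their centroid.
bool skeletonCentroid(const Mask& m, double& cx, double& cy) {
    long sx = 0, sy = 0, count = 0;
    for (int y = 1; y < m.h - 1; ++y) {
        for (int x = 1; x < m.w - 1; ++x) {
            if (!m.px[y * m.w + x]) continue;
            int neighbors = 0;                      // count 8-neighborhood
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (dx || dy) neighbors += m.px[(y + dy) * m.w + x + dx];
            if (neighbors > 2) { sx += x; sy += y; ++count; }  // branch point
        }
    }
    if (!count) return false;
    cx = double(sx) / count;   // centroid = center of gravity of the body
    cy = double(sy) / count;
    return true;
}
```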
As shown in Fig. 1, the first step is image acquisition. The method uses binocular video input. In the figure, (x, y, z) is a coordinate in the world coordinate system, and (x_L, y_L) and (x_R, y_R) are the pixel coordinates of the same object in the left and right images.
(1) Digital image processing deals mostly with two-dimensional information, and the amount of data to process is very large. An image is represented here by a two-dimensional function f(x, y), where x and y are the two-dimensional coordinates and f(x, y) is the color at point (x, y). The camera gathers all the optical information entering its lens from the scene; once this enters the computer it is converted to a color model meeting computer standards and processed by the program, guaranteeing the continuity and real-time performance of the video. Each acquired image of 320 × 240 pixels, 76800 pixels in total, is processed; the initial effect of the captured video is shown in Fig. 1. All subsequent operations and computations in this project are based on these 320 × 240 pixels of each frame. In binocular vision, the same point images at different positions in the left and right views, and the magnitude of this positional difference reflects the depth of the imaged point. The relative shift of the two pixels can be computed by pixel matching, and the method of the present invention uses this information to assist in segmenting foreground from background. As shown in Fig. 2, the binocular information is exploited through the matching cost between the left and right images, where P is the position of a pixel in the left image, P+d is the position of the same pixel in the right image, and d is the disparity of the pixel.
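To make the matching cost concrete, the following C++ sketch computes, for each left-image pixel, the disparity d minimizing a sum-of-absolute-differences (SAD) cost against the right image. The 5 × 5 window and the SAD measure are assumptions (the patent does not specify the matching cost); the maximum disparity of 32 is the value used later in this description.

```cpp
#include <vector>
#include <cstdlib>
#include <limits>

// Grayscale image stored row-major, width w, height h.
struct Gray { std::vector<unsigned char> px; int w, h; };

// For each pixel of the left image, find the disparity d in [0, maxD)
// minimizing a 5x5 SAD cost against the right image.
std::vector<int> bestDisparity(const Gray& L, const Gray& R, int maxD = 32) {
    const int win = 2;  // half-width of the 5x5 window
    std::vector<int> disp(L.w * L.h, 0);
    for (int y = win; y < L.h - win; ++y) {
        for (int x = win; x < L.w - win; ++x) {
            long best = std::numeric_limits<long>::max();
            for (int d = 0; d < maxD && x - d >= win; ++d) {
                long sad = 0;  // cost of matching P in the left image
                for (int dy = -win; dy <= win; ++dy)         // to P+d in
                    for (int dx = -win; dx <= win; ++dx)     // the right
                        sad += std::abs(L.px[(y + dy) * L.w + x + dx] -
                                        R.px[(y + dy) * R.w + x + dx - d]);
                if (sad < best) { best = sad; disp[y * L.w + x] = d; }
            }
        }
    }
    return disp;  // these disparities are later bucketed into four groups
}
```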
(2) The use of the binocular depth information in the present invention consists of two parts.
Step 1: the matching costs computed at pixel x_i are divided into four groups according to the disparity value (the maximum disparity d is set to 32):
Group A: pixel x_i has an optimal matching disparity d > 16, indicating that the pixel very likely belongs to the foreground;
Group B: pixel x_i has an optimal matching disparity 12 < d ≤ 16, indicating that the pixel quite likely belongs to the foreground;
Group C: pixel x_i has an optimal matching disparity 5 < d ≤ 12, indicating that the pixel quite likely belongs to the background;
Group D: pixel x_i has an optimal matching disparity d ≤ 5, indicating that the pixel very likely belongs to the background.
Under this assumption, the present invention needs less time, since the pixels are divided into four groups rather than each pixel being tested against 32 possible disparity hypotheses.
Step 2: set suitable data cost values for the graph cut algorithm. The data term of the present invention comprises the costs of a pixel belonging to the foreground and to the background, denoted D_i(F) and D_i(B) respectively. The larger a pixel's disparity, the more likely it belongs to the foreground, so D_i(F) decreases correspondingly while D_i(B) increases correspondingly. The present invention expresses this correspondence with the following formula:

D_{i,t}^s(B) = D_i(B) + λ_t·e^(−d/c_t),   D_{i,t}^s(F) = D_i(F) − λ_t·e^(−d/c_t)

for all t = A, B, C, D, with λ_t > 0. Here D_{i,t}^s(B) and D_{i,t}^s(F) are the background and foreground data terms incorporating the binocular information, for the four groups t = A, B, C, D; D_i(B) is the background segmentation data term of monocular vision; λ_t is the parameter of the binocular data cost; i is the pixel coordinate; d is the disparity value; and c_t is the parameter controlling the influence of d.
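As a concrete illustration, a minimal C++ sketch of this cost update is given below. The numeric values of λ_t and c_t are placeholders, since the patent does not disclose them, and the function name is hypothetical.

```cpp
#include <cmath>

// Disparity groups from step 1 above.
enum Group { A = 0, B = 1, C = 2, D = 3 };

// Update the background/foreground data costs of one pixel according to
// its disparity d and group t, following the formula in the text:
//   D^s(B) = D(B) + lambda_t * exp(-d / c_t)
//   D^s(F) = D(F) - lambda_t * exp(-d / c_t)
void fuseBinocularCost(double d, Group t, double& costB, double& costF) {
    static const double lambda[4] = {4.0, 2.0, 2.0, 4.0};  // λ_t > 0, assumed
    static const double c[4]      = {8.0, 8.0, 8.0, 8.0};  // c_t, assumed
    double delta = lambda[t] * std::exp(-d / c[t]);
    costB += delta;   // background cost increases
    costF -= delta;   // foreground cost decreases
}
```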
Fig. 3 shows the max-flow/min-cut network flow graph of the graph cut algorithm, in which p and q denote two adjacent pixels. Fig. 4 shows the flow chart of the graph cut algorithm, comprising the front-end assignment part and the back-end segmentation part.
(3) The graph cut algorithm is an important component of background segmentation. Its essence is to use the max-flow/min-cut principle to partition the pixels of the image along a certain cut, computing which pixels belong to the foreground and which to the background.
The foreground/background segmentation problem in the image can be regarded as a binary labeling problem in computer vision. If pixel i belongs to the foreground, it is labeled f_i = F, where F denotes foreground; likewise, if the pixel belongs to the background it is labeled f_i = B. For this binary labeling problem, the label set contains only two labels, and the weighted graph constructed by the graph cut algorithm contains two corresponding terminal vertices s and t. In Fig. 3, the left diagram is the weighted graph G = ⟨V, ε⟩ constructed from a 3 × 3 source image, where the vertex set V consists of the ordinary nodes plus two special nodes, the source node S and the terminal node T, which represent the binary labels of foreground and background respectively; ε is the set of edges connecting the vertices, with edge weights indicated by line thickness in the figure.
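For reference, the energy that the max-flow/min-cut computation minimizes can be written in the standard form used throughout the graph cut literature; the patent implies but does not write out this expression:

$$E(f) \;=\; \sum_{i \in V} D_i(f_i) \;+\; \sum_{(p,q) \in \varepsilon} V_{p,q}(f_p, f_q), \qquad f_i \in \{F, B\}$$

where D_i is the data term assembled in step S44 and V_{p,q} is the smoothness term assigned in step S45; the minimum cut of the weighted graph G corresponds to the labeling f that minimizes E(f).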
The flow of the dynamic graph cut is shown in Fig. 4. The energy function contains a data term and a smoothness term, whose settings directly affect the final segmentation result of the graph cut algorithm. Fig. 5 shows several groups of video segmentation results of the present invention: the left 3 images are left-view frames of the input video, and the right 3 images are the results after segmentation.
(4) The present invention designs a low-pass filter in the frequency domain to smooth the boundary. The edge smoothing process along the boundary curve C is shown in Fig. 6: the top-left image is the input source image, the top-right is the segmentation result, the bottom-left is the foreground/background edge to be smoothed, and the bottom-right is the result after smoothing. Sampling along the boundary curve C at fixed intervals yields the point sequence z(i) = [x(i), y(i)], whose complex representation is:
z(i)=x(i)+jy(i)
The discrete Fourier transform of z(i) is:

f(u) = (1/K) Σ_{i=0}^{K−1} z(i) e^(−j2πui/K)
In this formula, j, u, and K denote the complex unit, the frequency, and a constant, respectively; f(u), the Fourier transform of z(i), is called the Fourier descriptor of the boundary, i.e., the representation of the boundary point sequence in the frequency domain. By Fourier transform theory, high-frequency components carry detail while low-frequency components determine the global shape. A curve is rough because it is jagged, and these rough regions contain the high-frequency components; filtering out the high-frequency part of f(u) therefore yields a smooth curve. The present invention defines the low-frequency energy ratio so as to filter out 5% of the high-frequency energy:

r(l) = Σ_{u=0}^{l} |f(u)|² / Σ_{u=0}^{K−1} |f(u)|²
where |·| denotes the modulus. The smallest value of l for which r(l) > 0.95 is taken as the cutoff frequency of the low-pass filter. Using the conjugate-symmetry property of the Fourier coefficients (with f* the complex conjugate of f), the high-frequency components of f(u) in the range from l to K−1−l are set to zero; applying the inverse Fourier transform then smooths the abrupt parts of the curve.
Fig. 7 shows the motion capture result of the present invention: the left column contains two frames of the left video image, and the right column shows the key points and skeleton extracted from the segmentation result. Key points are drawn as circles, and the skeleton as lines.
(5) The motion capture of the present invention, performed on the basis of the segmentation, comprises three steps.
Step 1: post-process the segmentation result to obtain a relatively smooth and stable contour region. Since the segmentation only concerns the contour, the boundary does not need to be computed precisely; provided there are no large holes, the skeleton motion tracking required here can be accomplished well.
Step 2: locate the segmented contour and determine the basic configuration of nine points A1 through A9. A1, A2, A3 represent the three points of the head and trunk, while A4, A5, A6 and A7, A8, A9 represent the three points of the left arm and the right arm respectively.
Step 3: connect the nine points in skeleton contour order to complete the motion capture.
The above is only an embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any transformation or replacement that a person familiar with this technology can readily conceive within the technical scope disclosed by the present invention shall be covered by the scope of protection of the claims of the present invention.

Claims (3)

1. A motion capture method for binocular vision images based on background modeling, being a method based on binocular vision and background segmentation, characterized in that it comprises the following steps:
Step S1: fix the position of the binocular camera, turn off white balance, and acquire binocular vision images;
Step S2: from the acquired binocular vision images, perform background modeling over a set number of clean background frames to obtain the background model;
Step S3: using the binocular depth information obtained by computer binocular vision, compute the cost of each pixel belonging to the foreground or the background;
Step S4: using the binocular depth information, the background model data, and a dynamic graph cut algorithm, segment the binocular vision image into foreground and background and extract the foreground contour;
Step S5: thin the foreground contour, locate the human body key points, and complete the motion capture.
2. The motion capture method for binocular vision images based on background modeling according to claim 1, characterized in that the step of obtaining the binocular vision images described in step S1 comprises the following:
Step S11: ensure that the camera position is fixed and that there is no obvious lighting change in the scene;
Step S12: turn off the camera's automatic white balance; camera hardware generally provides automatic exposure and automatic white balance functions to adjust image quality automatically when scene lighting changes, but for background modeling the white balance parameters must be fixed;
Step S13: collect a fixed number of clean background frames and store them in memory.
3. The motion capture method for binocular vision images based on background modeling according to claim 1, characterized in that the key points of the torso are obtained by image denoising and thinning, thereby achieving the motion capture effect, comprising the following steps:
Step S51: scale the post-processed foreground contour;
Step S52: thin the scaled foreground contour;
Step S53: enlarge the thinned foreground contour back to its original size;
Step S54: thin the foreground contour again;
Step S55: find the nodes with more than 2 neighboring skeleton pixels and take their centroid as the human body's center of gravity;
Step S56: search upward and downward from the center of gravity for nodes, and set them as the head and the waist;
Step S57: search left and right from the center of gravity to find the left and right arms, and determine the elbows and shoulders proportionally using eccentricity;
Step S58: compare the 9 determined key points with those of the previous several frames to obtain relatively stable and accurate torso positions, the 9 key points being the center of gravity, head, waist, left arm, right arm, left elbow, right elbow, left shoulder, and right shoulder.
CN 201010602544 2010-12-23 2010-12-23 Motion capture method for binocular vision image based on background modeling Expired - Fee Related CN102034247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010602544 CN102034247B (en) 2010-12-23 2010-12-23 Motion capture method for binocular vision image based on background modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010602544 CN102034247B (en) 2010-12-23 2010-12-23 Motion capture method for binocular vision image based on background modeling

Publications (2)

Publication Number Publication Date
CN102034247A CN102034247A (en) 2011-04-27
CN102034247B true CN102034247B (en) 2013-01-02

Family

ID=43887100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010602544 Expired - Fee Related CN102034247B (en) 2010-12-23 2010-12-23 Motion capture method for binocular vision image based on background modeling

Country Status (1)

Country Link
CN (1) CN102034247B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184008A (en) * 2011-05-03 2011-09-14 北京天盛世纪科技发展有限公司 Interactive projection system and method
CN102927652B (en) * 2012-10-09 2015-06-24 清华大学 Intelligent air conditioner control method based on positions of indoor persons and objects
JP2014238731A (en) 2013-06-07 2014-12-18 株式会社ソニー・コンピュータエンタテインメント Image processor, image processing system, and image processing method
CN103826071A (en) * 2014-03-11 2014-05-28 深圳市中安视科技有限公司 Three-dimensional camera shooting method for three-dimensional identification and continuous tracking
CN105516579B (en) * 2014-09-25 2019-02-05 联想(北京)有限公司 A kind of image processing method, device and electronic equipment
CN105374043B (en) * 2015-12-02 2017-04-05 福州华鹰重工机械有限公司 Visual odometry filtering background method and device
CN106056056B (en) * 2016-05-23 2019-02-22 浙江大学 A kind of non-contacting baggage volume detection system and its method at a distance
US10122969B1 (en) * 2017-12-07 2018-11-06 Microsoft Technology Licensing, Llc Video capture systems and methods
CN109064511B (en) * 2018-08-22 2022-02-15 广东工业大学 Method and device for measuring height of center of gravity of human body and related equipment
CN109214996B (en) * 2018-08-29 2021-11-12 深圳市元征科技股份有限公司 Image processing method and device
CN110490877B (en) * 2019-07-04 2021-10-22 西安理工大学 Target segmentation method for binocular stereo image based on Graph Cuts

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344965A (en) * 2008-09-04 2009-01-14 上海交通大学 Tracking system based on binocular camera shooting
CN101389004A (en) * 2007-09-13 2009-03-18 中国科学院自动化研究所 Moving target classification method based on on-line study

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7321386B2 (en) * 2002-08-01 2008-01-22 Siemens Corporate Research, Inc. Robust stereo-driven video-based surveillance
US7720282B2 (en) * 2005-08-02 2010-05-18 Microsoft Corporation Stereo image segmentation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101389004A (en) * 2007-09-13 2009-03-18 中国科学院自动化研究所 Moving target classification method based on on-line study
CN101344965A (en) * 2008-09-04 2009-01-14 上海交通大学 Tracking system based on binocular camera shooting

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vladimir Kolmogorov et al., "Probabilistic Fusion of Stereo with Color and Contrast for Bilayer Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 9, 2006. *
Xiaoyu Wu et al., "Video Background Segmentation Using Adaptive Background Models," LNCS, vol. 5716, 2009. *

Also Published As

Publication number Publication date
CN102034247A (en) 2011-04-27

Similar Documents

Publication Publication Date Title
CN102034247B (en) Motion capture method for binocular vision image based on background modeling
CN111797716B (en) Single target tracking method based on Siamese network
CN102567727B (en) Method and device for replacing background target
CN103871076B (en) Extracting of Moving Object based on optical flow method and super-pixel segmentation
CN102289948B (en) Multi-characteristic fusion multi-vehicle video tracking method under highway scene
CN107204010A (en) A kind of monocular image depth estimation method and system
CN113240691A (en) Medical image segmentation method based on U-shaped network
CN103856727A (en) Multichannel real-time video splicing processing system
CN105869178A (en) Method for unsupervised segmentation of complex targets from dynamic scene based on multi-scale combination feature convex optimization
CN103455984A (en) Method and device for acquiring Kinect depth image
CN102184551A (en) Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN106952286A (en) Dynamic background Target Segmentation method based on motion notable figure and light stream vector analysis
CN112464847B (en) Human body action segmentation method and device in video
CN109712247B (en) Live-action training system based on mixed reality technology
CN111462027B (en) Multi-focus image fusion method based on multi-scale gradient and matting
CN107194948B (en) Video significance detection method based on integrated prediction and time-space domain propagation
CN104063871B (en) The image sequence Scene Segmentation of wearable device
CN105929962A (en) 360-DEG holographic real-time interactive method
CN103440662A (en) Kinect depth image acquisition method and device
CN103413323B (en) Based on the object tracking methods of component-level apparent model
CN108596923A (en) Acquisition methods, device and the electronic equipment of three-dimensional data
CN110070574A (en) A kind of binocular vision Stereo Matching Algorithm based on improvement PSMNet
CN101339661A (en) Real time human-machine interaction method and system based on moving detection of hand held equipment
CN106251348A (en) A kind of self adaptation multi thread towards depth camera merges background subtraction method
Liu et al. Stereo video object segmentation using stereoscopic foreground trajectories

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130102

Termination date: 20151223

EXPY Termination of patent right or utility model