CN102917224B - Mobile background video object extraction method based on novel crossed diamond search and five-frame background alignment


Publication number: CN102917224B
Application number: CN201210398165.XA
Authority: CN (China)
Prior art keywords: frame, block, distortion, point, sigma
Legal status: Active
Application number: CN201210398165.XA
Other languages: Chinese (zh)
Other versions: CN102917224A
Inventors: 祝世平, 郭智超
Current assignee: Shenzhen Xiaolajiao Technology Co., Ltd.
Original assignee: Beihang University
Application filed by Beihang University
Publication of CN102917224A (application), CN102917224B (grant)
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a moving-background video object extraction method based on a novel crossed diamond search and five-frame background alignment, comprising the following steps: first, the (K-2)th, (K-1)th, Kth (reference), (K+1)th and (K+2)th frames are each divided into 8×8 blocks, and all blocks are screened; block matching is then performed on the retained blocks with a novel crossed diamond motion estimation method to obtain the motion vector fields of the (K-2)th, (K-1)th, (K+1)th and (K+2)th frames relative to the reference frame, and the global motion parameters are computed by least squares; motion compensation is applied to the (K-2)th, (K-1)th, (K+1)th and (K+2)th frames so that each is aligned with the background of the reference frame, yielding the reconstructed frames of the four frames; edge information is extracted from the reconstructed frames and the reference frame with the Prewitt operator, the frame differences between each reconstructed frame's edges and the reference frame's edges are computed, and each difference is binarized with the maximum between-class variance threshold; the binarized frame differences of the first two and the last two of the five consecutive frames are combined with AND operations; finally, an OR operation and post-processing achieve fast and effective segmentation of video objects under a moving background.

Description

Moving-background video object extraction based on a novel cross-diamond search and five-frame background alignment
Technical field:
The present invention relates to a video segmentation processing method, in particular to a moving-background video object extraction method based on a novel cross-diamond search and five-frame background alignment.
Background technology:
For the extraction of moving objects from video sequences with a moving background, the global motion produced by the camera makes segmentation methods designed for a static background, such as frame differencing or background subtraction, inapplicable: they cannot extract the moving object accurately. For segmentation under a moving background, the influence of the camera-induced global motion must therefore be eliminated first. Global motion estimation and compensation convert the problem into segmentation under a static background, after which the many well-established static-background segmentation methods can be applied to achieve accurate and effective segmentation under a moving background.
Global motion estimation means estimating the motion of the background region caused by camera movement and solving for the parameters of a corresponding mathematical motion model. Global motion compensation uses the estimated global motion parameters to apply a mapping transformation that aligns the background of the current frame with that of the previous frame. After accurate compensation, methods such as frame differencing or background subtraction can eliminate the background region and highlight the foreground regions of interest that exhibit local motion (see Yang Wenming. Video object segmentation with spatio-temporal fusion [D]. Zhejiang: Zhejiang University, 2006).
Many researchers worldwide have studied the motion segmentation problem under a moving background. One approach uses an improved watershed algorithm to divide the motion-compensated video frame into regions of different gray levels, obtains the motion information of the sequence by optical flow computation, and finally combines the motion information with the segmented regions under a given criterion to obtain the object template, achieving accurate localization of the video object (see Zhang Qingli. A video object segmentation algorithm based on moving background. Journal of Shanghai University (Natural Science Edition), 2005, 11(2): 111-115). Another approach establishes a four-parameter affine model to describe the global motion, estimates the parameters by block matching, detects moving targets with the Horn-Schunck algorithm, and tracks information such as the centroid positions of the targets with a Kalman filter, achieving detection and tracking of moving objects in dynamic scenes (see Shi Jiadong. Detection and tracking of moving objects in dynamic scenes. Journal of Beijing Institute of Technology, 2009, 29(10): 858-876). A further method uses nonparametric kernel density estimation: a matching-weighted global motion estimation and compensation algorithm first eliminates the influence of background motion in the dynamic scene, then the probability density of each pixel belonging to foreground and background is estimated and combined with morphological and related algorithms, achieving accurate and effective segmentation of moving objects under a moving background (see Ma Zhiqiang. A new motion segmentation algorithm for dynamic scenes. Computer Engineering and Science, 2012, 34(4): 43-46).
To solve the segmentation problem under a moving background, the inventive method implements global motion estimation and compensation using macroblock pre-judgment, block matching, a six-parameter camera affine model and least squares, and achieves moving-background segmentation through five-frame background alignment combined with edge information. Experiments show that the method extracts video objects from moving-background video sequences with significantly improved accuracy.
Summary of the invention:
The technical problem to be solved by the present invention is: how to reduce the running time of block matching, and how to achieve accurate extraction of video objects under a moving background.
The technical solution adopted by the present invention is a moving-background video object extraction method based on a novel cross-diamond search and five-frame background alignment, comprising the following steps:
(1) The (K-2)th, (K-1)th, reference Kth, (K+1)th and (K+2)th frames are each divided into 8×8 macroblocks; all macroblocks in these five frames are pre-judged and screened according to their texture information;
(2) Block matching with the SAD criterion and the novel cross-diamond search strategy (NCDS) is applied to the retained macroblocks, taking the (K-2)th, (K-1)th, (K+1)th and (K+2)th frames in turn as the current frame and the Kth frame as the reference frame, to obtain the motion vector field of each of the four frames relative to the reference frame; the global motion parameters are then computed by least squares, yielding the six-parameter camera model;
(3) Motion compensation is applied to the (K-2)th frame to align its background with the Kth frame, yielding the reconstructed frame K-2'; in the same way, motion compensation of the (K-1)th, (K+1)th and (K+2)th frames aligns each with the Kth frame's background and yields the reconstructed frames K-1', K+1' and K+2';
(4) Edge information is extracted from the reconstructed frames K-2', K-1', K+1', K+2' and the reference frame K with the Prewitt operator, the frame differences d1, d2, d3, d4 of each reconstructed frame's edges relative to the reference frame's edges are computed, and each is binarized with the maximum between-class variance threshold;
(5) The binarized frame differences of the first two and the last two of the five consecutive frames are each combined with an AND operation; the two AND results are combined with an OR operation and post-processed with morphology, median filtering and the like, achieving fast and effective segmentation of video objects under a moving background.
Step (1) pre-judges and screens the 8×8 macroblocks of the current (K-2)th, (K-1)th, (K+1)th, (K+2)th frames and the reference Kth frame, as follows:
When least squares is later applied to compute the global motion parameters, many high-error macroblocks are simply deleted; if these could be rejected before the least-squares computation, the running speed would improve significantly and the amount of computation would be reduced. The key factor determining a macroblock's error, and hence the accuracy of the computation, is its texture information, i.e. its gradient information. The macroblock pre-judgment and screening method proposed here therefore starts from the gradient information of each macroblock and screens or retains it against a set threshold: if the information content of a macroblock is below the threshold, the macroblock is screened out and does not participate in the subsequent block matching; if it is above the threshold, the macroblock is retained and participates as a valid feature block in the subsequent motion estimation and related computations.
The main steps are as follows:
Step 1: Divide each frame into 8×8 sub-blocks. Experiments show that with 16×16 sub-blocks the amount of computation is excessive, while with 4×4 sub-blocks methods such as block matching are not accurate enough; 8×8 sub-blocks are therefore adopted.
Step 2: Obtain the gradient map of each frame with the Sobel operator; the gradient information serves as the basis for rejecting macroblocks:

$|\nabla f(x,y)| = \mathrm{mag}(\nabla f(x,y)) = \sqrt{G_x^2 + G_y^2}$

where $|\nabla f(x,y)|$ is the gradient magnitude at the point and $G_x$, $G_y$ are the partial derivatives in the two directions.
Step 3: Compute the gradient content of each macroblock; for an 8×8 sub-block the gradient information content is:

$|\nabla f(x,y)|_{8\times 8} = \sum_{i=1}^{8}\sum_{j=1}^{8} |\nabla f(x,y)|$

Step 4: Determine the pre-judgment threshold. In general 40% of all macroblocks are retained: sort the gradient contents of all macroblocks and determine the optimal threshold T that retains 40% of them.
Step 5: Complete the screening: if a macroblock's gradient information content is greater than T, retain it as a valid feature block for the subsequent motion estimation and related computations; if it is less than T, screen it out so that it does not participate in the subsequent block matching.
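The five screening steps above can be sketched in Python/NumPy. This is an illustrative sketch, not the patent's code: the function names, the explicit 3×3 Sobel masks and the per-frame application of the 40% retention ratio are assumptions.

```python
import numpy as np

def sobel_gradient(frame):
    """Gradient-magnitude map via the two 3x3 Sobel masks (borders left at 0)."""
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy_k = gx_k.T
    f = frame.astype(float)
    h, w = f.shape
    grad = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = f[y - 1:y + 2, x - 1:x + 2]
            grad[y, x] = np.hypot((win * gx_k).sum(), (win * gy_k).sum())
    return grad

def screen_blocks(frame, block=8, keep_ratio=0.4):
    """Return top-left corners of the keep_ratio fraction of blocks that are
    richest in gradient content; the threshold T is implicit in the cutoff."""
    grad = sobel_gradient(frame)
    h, w = frame.shape
    blocks = [((y, x), grad[y:y + block, x:x + block].sum())
              for y in range(0, h - block + 1, block)
              for x in range(0, w - block + 1, block)]
    blocks.sort(key=lambda b: b[1], reverse=True)
    n_keep = max(1, int(round(len(blocks) * keep_ratio)))
    return [pos for pos, _ in blocks[:n_keep]]
```

Calling `screen_blocks(frame)` returns the retained 8×8 blocks; only these participate in the block matching of step (2).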
In step (2), the (K-2)th, (K-1)th, (K+1)th and (K+2)th frames are taken in turn as the current frame and the Kth frame as the reference frame; block matching with the SAD criterion and the NCDS search strategy is applied to the retained macroblocks, and the motion vector field obtained by block matching is fed to least squares to obtain the six-parameter camera model. The concrete steps are as follows:
(i) The SAD block matching criterion
The SAD block matching criterion is adopted here; it not only finds the optimal match point, but also requires little computation and time:

$\mathrm{SAD}(i,j) = \sum_{m=1}^{M}\sum_{n=1}^{N} |f_k(m,n) - f_{k-1}(m+i, n+j)|$

where (i, j) is the displacement, f_k and f_{k-1} are the gray values of the current and previous frames respectively, and M×N is the macroblock size. If SAD(i, j) reaches its minimum at some point, that point is the optimal match point sought.
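As a concrete illustration of the SAD criterion, the following Python/NumPy sketch computes SAD for one displacement and picks the minimum-SAD displacement of a block. The helper names are hypothetical, and a small exhaustive search window stands in here for the NCDS strategy described next.

```python
import numpy as np

def sad(cur, ref, top, left, dy, dx, block=8):
    """Sum of absolute differences between a block of the current frame and
    the block of the reference frame displaced by (dy, dx)."""
    a = cur[top:top + block, left:left + block].astype(int)
    b = ref[top + dy:top + dy + block, left + dx:left + dx + block].astype(int)
    return int(np.abs(a - b).sum())

def best_match(cur, ref, top, left, radius=2, block=8):
    """Exhaustively search a (2*radius+1)^2 window; the minimum-SAD
    displacement is the motion vector of the block."""
    best = (None, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (0 <= top + dy and top + dy + block <= ref.shape[0]
                    and 0 <= left + dx and left + dx + block <= ref.shape[1]):
                s = sad(cur, ref, top, left, dy, dx, block)
                if s < best[1]:
                    best = ((dy, dx), s)
    return best
```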
(ii) The novel cross-diamond search strategy (NCDS)
The novel cross-diamond motion estimation search method used here has two kinds of patterns, cross patterns and diamond patterns, as shown in Fig. 2: the cross patterns comprise a large cross pattern and a small cross pattern, and the diamond patterns comprise a large diamond pattern and a small diamond pattern. The first two steps of this cross-diamond search use the small cross pattern; unlike the conventional cross-diamond search, which begins with the large cross pattern, this allows static and quasi-static blocks to find their match with fewer search points. The points not covered by the large cross pattern and the points in the quasi-static region not yet searched are then examined, providing a more accurate search direction for the subsequent diamond search. Fig. 3 shows a cross-diamond search of this embodiment; the concrete steps are as follows:
Step 1 (small cross pattern): Among the 5 search points of the small cross pattern, apply the improved partial block distortion criterion to find the minimum block distortion (MBD) point. If the MBD point is at the center of the small cross pattern, the search stops after this one step and the final motion vector is MV(0, 0); otherwise go to step 2.
Step 2 (small cross pattern): Construct a new small cross pattern centered on the MBD point found in step 1 and search its 3 new points, applying the improved partial block distortion criterion to find the new MBD point. If this point is at the center of the small cross pattern, the search stops after these two steps and the final motion vector is MV(±1, 0) or MV(0, ±1); otherwise go to step 3.
Step 3 (large cross pattern): Search the 3 points of the large cross pattern not yet examined, applying the improved partial block distortion criterion to find the new MBD point, which serves as the center of the next step's search.
Step 4 (large diamond pattern): Construct a large diamond search pattern centered on the MBD point of step 3 and apply the improved partial block distortion criterion to find the new MBD point. If this point is at the center of the large diamond, go to step 5; otherwise repeat step 4.
Step 5 (small diamond pattern): Construct a small diamond search pattern centered on the MBD point of step 4 and apply the improved partial block distortion criterion to find the new MBD point. The vector corresponding to this point is the final motion vector.
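The search structure above can be sketched as follows. This is a simplified, hypothetical rendition: it uses plain SAD rather than the improved partial block distortion criterion, and it folds the large-cross step into the diamond descent, so it illustrates the small-cross early exit and the diamond refinement rather than the patent's exact NCDS.

```python
import numpy as np

SMALL_CROSS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
LARGE_DIAMOND = [(-2, 0), (2, 0), (0, -2), (0, 2), (-1, -1), (-1, 1), (1, -1), (1, 1)]
SMALL_DIAMOND = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def block_sad(cur, ref, top, left, dy, dx, block=8):
    h, w = ref.shape
    if not (0 <= top + dy and top + dy + block <= h
            and 0 <= left + dx and left + dx + block <= w):
        return float("inf")  # out-of-frame candidates are never the MBD point
    a = cur[top:top + block, left:left + block].astype(int)
    b = ref[top + dy:top + dy + block, left + dx:left + dx + block].astype(int)
    return int(np.abs(a - b).sum())

def ncds(cur, ref, top, left, block=8):
    """Simplified cross-diamond search: small cross first (cheap exit for
    static blocks), then large-diamond descent, then one small-diamond refine."""
    center = (0, 0)
    # Steps 1-2: small cross; stop early if the center stays the best point.
    for _ in range(2):
        cands = [(center[0] + dy, center[1] + dx) for dy, dx in SMALL_CROSS]
        best = min(cands, key=lambda v: block_sad(cur, ref, top, left, *v))
        if best == center:
            return center
        center = best
    # Step 4: large diamond until the center is the minimum-distortion point.
    while True:
        cands = [center] + [(center[0] + dy, center[1] + dx) for dy, dx in LARGE_DIAMOND]
        best = min(cands, key=lambda v: block_sad(cur, ref, top, left, *v))
        if best == center:
            break
        center = best
    # Step 5: final small-diamond refinement.
    cands = [center] + [(center[0] + dy, center[1] + dx) for dy, dx in SMALL_DIAMOND]
    return min(cands, key=lambda v: block_sad(cur, ref, top, left, *v))
```

A static block exits after the first small-cross check with MV(0, 0), which is the point of starting with the small pattern.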
The MBD points above are found with an improved partial block distortion criterion, which is as follows:
In block matching (BMA), the improved partial block distortion criterion uses only part of the pixels of a block and still measures the distortion well.
Let the block size be 16×16. The distortion measure (SAD) between the block whose top-left corner is (m, n) in the nth frame and the block whose top-left corner is (m+p, n+q) in the (n-1)th frame is given by:

$\mathrm{SAD}(m,n;p,q) = \sum_{i=0}^{15}\sum_{j=0}^{15} |f_n(m+i, n+j) - f_{n-1}(m+p+i, n+q+j)|$

where f_n(m+i, n+j) is the pixel value at coordinates (m+i, n+j) in the nth frame.
The distortion measure SAD(m, n; p, q) is divided into 16 partial distortion measures sad_k(m, n; p, q) (k = 1, 2, ..., 16). The kth partial distortion measure is defined as:

$\mathrm{sad}_k(m,n;p,q) = \sum_{i=0}^{3}\sum_{j=0}^{3} |f_n(m+4i+s_k, n+4j+t_k) - f_{n-1}(m+p+4i+s_k, n+q+4j+t_k)|$

where s_k and t_k are the horizontal and vertical offsets, relative to the top-left corner of the block, of the top-left pixel used by the kth partial distortion measure. The computation order of the partial distortion measures sad_k(m, n; p, q) (k = 1, 2, ..., 16) is given by the numbers in the squares of Fig. 4.
The kth incremental partial distortion measure is defined as:

$\mathrm{SAD}_k(m,n;p,q) = \sum_{i=1}^{k} \mathrm{sad}_i(m,n;p,q)$

If the kth incremental partial distortion measure satisfies

$16 \times \mathrm{SAD}_k(m,n;p,q) > k \times \min(\mathrm{SAD})$

where min(SAD) is the minimum distortion obtained so far in the search and k is a preset integer in the range 3 ≤ k ≤ 16, the point is judged unable to be the match point. Otherwise, the (k+1)th incremental partial distortion measure SAD_{k+1}(m, n; p, q) is computed and the comparison is repeated.
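The incremental early-exit test can be sketched in Python/NumPy as follows. The visiting order of the 16 (s_k, t_k) offsets is given by Fig. 4, which is not reproduced here, so a plain raster order is used as a stand-in; the function and parameter names are hypothetical.

```python
import numpy as np

# Stand-in visiting order for the 16 (s_k, t_k) offsets (Fig. 4 gives the
# patent's dithered order; raster order is used here for illustration).
OFFSETS = [(s, t) for s in range(4) for t in range(4)]

def sad_with_early_exit(cur, ref, m, n, p, q, min_sad, k_start=3):
    """Incremental partial-distortion SAD over a 16x16 block.
    Rejects candidate (p, q) as soon as 16*SAD_k > k*min_sad for k >= k_start;
    returns the full SAD if the candidate survives all 16 partial measures."""
    total = 0
    for k, (s, t) in enumerate(OFFSETS, start=1):
        # kth partial measure: the 4x4 subsample at stride 4 starting at (s, t).
        a = cur[m + s:m + 16:4, n + t:n + 16:4].astype(int)
        b = ref[m + p + s:m + p + 16:4, n + q + t:n + q + 16:4].astype(int)
        total += int(np.abs(a - b).sum())
        if k >= k_start and 16 * total > k * min_sad:
            return None  # cannot beat the current best match
    return total
```

Rejected candidates cost only a few of the 16 partial measures, which is where the speedup over full SAD comes from.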
(iii) Least squares for the six-parameter camera model
From the current frames ((K-2)th, (K-1)th, (K+1)th, (K+2)th) selected in step (i), the sub-blocks on both sides are taken as feature blocks. The motion vectors obtained through steps (i) and (ii) are substituted into the six-parameter camera model below, and the parameters m0, m1, m2, n0, n1, n2 are estimated by least squares. The six-parameter affine model can describe translation, rotation and zooming, and is defined as:

$x' = m_0 + m_1 x + m_2 y, \qquad y' = n_0 + n_1 x + n_2 y$

where m0 and n0 are the translation magnitudes of a pixel in the x and y directions respectively, and the four parameters m1, n1, m2, n2 describe zooming and rotation.
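The least-squares estimation of the six parameters can be sketched as follows; `fit_affine_6` and the use of `numpy.linalg.lstsq` are illustrative choices of this sketch, not the patent's implementation. Block centers are given as (x, y) pairs and their motion vectors as (dx, dy) displacements.

```python
import numpy as np

def fit_affine_6(points, vectors):
    """Least-squares fit of x' = m0 + m1*x + m2*y, y' = n0 + n1*x + n2*y
    from feature-block centers `points` (x, y) and their block-matching
    motion vectors `vectors` (dx, dy)."""
    pts = np.asarray(points, dtype=float)
    dst = pts + np.asarray(vectors, dtype=float)   # matched positions (x', y')
    # Design matrix rows [1, x, y]; solve the two linear systems separately.
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    (m0, m1, m2), *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    (n0, n1, n2), *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return (m0, m1, m2), (n0, n1, n2)
```

With noisy vectors from real block matching, the least-squares solution averages out the per-block errors that survive the screening of step (1).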
In step (3), the reconstructed frames K-2', K-1', K+1', K+2' of the current (K-2)th, (K-1)th, (K+1)th, (K+2)th frames are obtained by motion compensation, as follows:
For each point of the current (K-2)th, (K-1)th, (K+1)th and (K+2)th frames, its corresponding position in the reference frame K is computed from the camera model obtained above and the pixel value is assigned there. This realizes global motion compensation of the four frames, aligning the backgrounds of the compensated reconstructed frames K-2', K-1', K+1', K+2' with the reference frame K, and so prepares for the following moving-background video segmentation based on novel cross-diamond motion estimation and five-frame background alignment, combining edge information with the adaptive maximum between-class variance threshold.
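The forward-mapping compensation described above can be sketched as follows; rounding to the nearest pixel and leaving unmapped positions at 0 are simplifications of this sketch, not details given by the patent.

```python
import numpy as np

def reconstruct(cur, m, n):
    """Forward-map every pixel of the current frame into reference-frame
    coordinates via the six-parameter model and assign its value there.
    m = (m0, m1, m2), n = (n0, n1, n2); x is the column index, y the row."""
    (m0, m1, m2), (n0, n1, n2) = m, n
    h, w = cur.shape
    out = np.zeros_like(cur)
    for y in range(h):
        for x in range(w):
            xp = int(round(m0 + m1 * x + m2 * y))
            yp = int(round(n0 + n1 * x + n2 * y))
            if 0 <= xp < w and 0 <= yp < h:
                out[yp, xp] = cur[y, x]   # holes stay 0 in this sketch
    return out
```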
In step (4), edge information is extracted with the Prewitt operator, differenced against the reference frame K's edges, and binarized with the maximum between-class variance threshold, as follows:
(i) Prewitt edge extraction and differencing against the reference frame's edges
Of the many edge detection operators, the Prewitt operator is selected here to extract the edge features of the reconstructed frames K-2', K-1', K+1', K+2' and the reference frame K.
The Prewitt operator is realized by mask convolution:

$f_s(x,y) = |f(x,y) * G_x| + |f(x,y) * G_y|$

where:

$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix}$

Applying the Prewitt operator to the reconstructed frames K-2', K-1', K+1', K+2' and the reference frame K yields the edge images f_{K-2'}(x, y), f_{K-1'}(x, y), f_{K+1'}(x, y), f_{K+2'}(x, y) and f_K(x, y).
Image differencing of each reconstructed frame's edges against the Kth frame's edges gives the frame differences d1, d2, d3, d4:

$d_1 = |f_{K-2'}(x,y) - f_K(x,y)|, \qquad d_2 = |f_{K-1'}(x,y) - f_K(x,y)|$
$d_3 = |f_{K+1'}(x,y) - f_K(x,y)|, \qquad d_4 = |f_{K+2'}(x,y) - f_K(x,y)|$
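The Prewitt convolution and the edge frame differencing can be sketched as follows; leaving border pixels at 0 is an assumption of this sketch.

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
PREWITT_Y = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])

def prewitt_edges(img):
    """f_s = |f * Gx| + |f * Gy| with the two Prewitt masks (borders left 0)."""
    f = img.astype(int)
    h, w = f.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = f[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = abs((win * PREWITT_X).sum()) + abs((win * PREWITT_Y).sum())
    return out

def edge_frame_diff(recon, ref):
    """d = |edges(recon) - edges(ref)|, the input to the thresholding step."""
    return np.abs(prewitt_edges(recon) - prewitt_edges(ref))
```

For well-aligned backgrounds the edge difference is near zero everywhere except at the moving object, which is what makes the subsequent binarization effective.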
(ii) Binarization with the maximum between-class variance threshold
The maximum between-class variance threshold is an adaptive threshold selection method: it splits the image histogram into two groups at an optimal threshold, chosen where the variance between the two groups is maximal. This method is adopted here to binarize the edge-difference results.
Let the gray values of an image be the levels 0 to m-1 and let n_i be the number of pixels with gray value i. The total pixel count is:

$N = \sum_{i=0}^{m-1} n_i$

The probability of each value is $p_i = n_i / N$.
Let the optimal threshold be T; it divides the pixels into two groups C_0 = {0, ..., T-1} and C_1 = {T, ..., m-1}. The probabilities and mean values of C_0 and C_1 are given by:

probability of C_0: $w_0 = \sum_{i=0}^{T-1} p_i = w(T)$
probability of C_1: $w_1 = \sum_{i=T}^{m-1} p_i = 1 - w_0$
mean of C_0: $\mu_0 = \sum_{i=0}^{T-1} \frac{i p_i}{w_0} = \frac{\mu(T)}{w(T)}$
mean of C_1: $\mu_1 = \sum_{i=T}^{m-1} \frac{i p_i}{w_1} = \frac{\mu - \mu(T)}{1 - w(T)}$

where $\mu = \sum_{i=0}^{m-1} i p_i$ and $\mu(T) = \sum_{i=0}^{T-1} i p_i$.
The mean gray value of all samples is then $\mu = w_0 \mu_0 + w_1 \mu_1$, and the variance between the two groups is:

$\delta^2(T) = w_0(\mu_0 - \mu)^2 + w_1(\mu_1 - \mu)^2 = w_0 w_1 (\mu_1 - \mu_0)^2 = \frac{[\mu \cdot w(T) - \mu(T)]^2}{w(T)[1 - w(T)]}$

The T in 1 to m-1 that maximizes this expression is the optimal threshold.
Each edge-difference result is binarized with its optimal threshold T; the binarization results are denoted OtusBuf1, OtusBuf2, OtusBuf3 and OtusBuf4 respectively.
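The optimal threshold can be computed directly from the formulas above by maximizing $[\mu \cdot w(T) - \mu(T)]^2 / (w(T)[1 - w(T)])$ over T. The incremental accumulation of w(T) and μ(T) is an implementation choice of this sketch.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Maximum between-class variance (Otsu) threshold: pixels below the
    returned T fall in class C0, pixels at or above T in class C1."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    mu = (np.arange(levels) * p).sum()   # global mean
    best_t, best_var = 0, -1.0
    w = 0.0      # w(T), accumulated incrementally
    mu_t = 0.0   # mu(T), accumulated incrementally
    for t in range(1, levels):
        w += p[t - 1]
        mu_t += (t - 1) * p[t - 1]
        if w <= 0 or w >= 1:
            continue  # one class empty: between-class variance undefined
        var = (mu * w - mu_t) ** 2 / (w * (1 - w))
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Binarizing a frame difference d is then, for example, `np.where(d >= otsu_threshold(d), 255, 0)`.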
In step (5), the binarized frame differences of the first two and the last two of the five consecutive frames are each combined with an AND operation, followed by an OR operation, filtering and other post-processing.
The binarization results OtusBuf1, OtusBuf2, OtusBuf3, OtusBuf4 are combined with AND operations as follows:

DifferBuf1(i) = 255 if (OtusBuf1(i) == 255 && OtusBuf2(i) == 255), 0 otherwise
DifferBuf2(i) = 255 if (OtusBuf3(i) == 255 && OtusBuf4(i) == 255), 0 otherwise

where DifferBuf1(i) is the AND result of the binarized, motion-compensated first two frames K-2 and K-1 of the five frames, and DifferBuf2(i) is the AND result of the binarized, motion-compensated last two frames K+1 and K+2; OtusBuf1(i), OtusBuf2(i), OtusBuf3(i), OtusBuf4(i) are the binarization results of the frame differences d1, d2, d3, d4 respectively.
The two AND results above are then combined with an OR operation:

DifferBuf(i) = 255 if (DifferBuf1(i) == 255 || DifferBuf2(i) == 255), 0 otherwise

where DifferBuf(i) is the final result of the OR operation.
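The AND and OR combination of the four binarized masks can be sketched as follows; the function name is hypothetical and the masks are assumed to use the 0/255 convention of the text.

```python
import numpy as np

def combine_masks(b1, b2, b3, b4):
    """AND the two earlier-frame masks and the two later-frame masks,
    then OR the two results, per step (5)."""
    d1 = np.where((b1 == 255) & (b2 == 255), 255, 0)   # DifferBuf1
    d2 = np.where((b3 == 255) & (b4 == 255), 255, 0)   # DifferBuf2
    return np.where((d1 == 255) | (d2 == 255), 255, 0)  # DifferBuf
```

The AND suppresses binarization noise that appears in only one frame pair, while the OR recovers object parts visible in either pair.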
Compared with the prior art, the advantage of the present invention is: pre-judging the macroblocks before block matching effectively reduces the block matching time, and by aligning the backgrounds of five consecutive frames through motion estimation and motion compensation and then processing the five images, the video object under a moving background can be segmented accurately.
Brief description of the drawings:
Fig. 1 is the flow chart of the moving-background video object extraction method of the present invention based on the novel cross-diamond search and five-frame background alignment;
Fig. 2 is a schematic diagram of the search patterns used in the cross-diamond search of the method;
Fig. 3 is an example of the cross-diamond search of the method;
Fig. 4 is a schematic diagram of the search points used by the improved partial distortion criterion of the method;
Fig. 5 shows the video object extraction results for the 139th frame of the Coastguard video sequence after compensation by the inventive method, where: (a)-(e) show the 137th to 141st frames of the Coastguard sequence; (f)-(j) show the preprocessing results of the 137th to 141st frames; (k) shows the Prewitt edge detection result of the reconstructed frame of the 137th frame; (l) shows the Prewitt edge detection result of the 138th frame; (m)-(o) show the Prewitt edge detection results of the reconstructed frames of the 139th to 141st frames; (p) shows the binary video object plane extracted for the 139th frame by the inventive method after five-frame background alignment with motion estimation and compensation; (q) shows the video object plane extracted for the 139th frame by the inventive method after five-frame background alignment with motion estimation and compensation.
Embodiment:
The present invention is described in further detail below in conjunction with the drawings and the specific embodiments.
The moving-background video object extraction method of the present invention, based on the novel cross-diamond search and five-frame background alignment, comprises the following steps (as shown in Fig. 1):
Step 1. Grayscale transformation and morphological preprocessing.
The YUV-format video sequence is first converted to grayscale: since the Y component carries the luminance information, the Y component is extracted from the sequence. Because noise interference inevitably appears in video, morphological opening-closing by reconstruction is applied to every frame to remove noise and smooth away small edges, simplifying the image. The preprocessing results can be seen in Fig. 5 (f)(g)(h)(i)(j).
Step 2. The (K-2)th, (K-1)th, reference Kth, (K+1)th and (K+2)th frames are divided into 8×8 macroblocks, and all macroblocks in these five frames are pre-judged and screened according to their texture information.
When least squares is later applied to compute the global motion parameters, many high-error macroblocks are simply deleted; if these could be rejected before the least-squares computation, the running speed would improve significantly and the amount of computation would be reduced. The key factor determining a macroblock's error, and hence the accuracy of the computation, is its texture information, i.e. its gradient information. The macroblock pre-judgment and screening method proposed here therefore starts from the gradient information of each macroblock and screens or retains it against a set threshold: if the information content of a macroblock is below the threshold, the macroblock is screened out and does not participate in the subsequent block matching; if it is above the threshold, the macroblock is retained and participates as a valid feature block in the subsequent motion estimation and related computations.
The main steps are as follows:
Step 1: Divide each frame into 8×8 sub-blocks. Experiments show that with 16×16 sub-blocks the amount of computation is excessive, while with 4×4 sub-blocks methods such as block matching are not accurate enough; 8×8 sub-blocks are therefore adopted.
Step 2: Obtain the gradient map of each frame with the Sobel operator; the gradient information serves as the basis for rejecting macroblocks:

$|\nabla f(x,y)| = \mathrm{mag}(\nabla f(x,y)) = \sqrt{G_x^2 + G_y^2}$

where $|\nabla f(x,y)|$ is the gradient magnitude at the point and $G_x$, $G_y$ are the partial derivatives in the two directions.
Step 3: Compute the gradient content of each macroblock; for an 8×8 sub-block the gradient information content is:

$|\nabla f(x,y)|_{8\times 8} = \sum_{i=1}^{8}\sum_{j=1}^{8} |\nabla f(x,y)|$

Step 4: Determine the pre-judgment threshold. In general 40% of all macroblocks are retained: sort the gradient contents of all macroblocks and determine the optimal threshold T that retains 40% of them.
Step 5: Complete the screening: if a macroblock's gradient information content is greater than T, retain it as a valid feature block for the subsequent motion estimation and related computations; if it is less than T, screen it out so that it does not participate in the subsequent block matching.
Step 3. Block matching is applied to the screened macro blocks using the SAD criterion and the novel cross-diamond search strategy (NCDS): frames K-2, K-1, K+1 and K+2 are taken in turn as the current frame and frame K as the reference frame, the motion vector field of each of the four frames relative to reference frame K is obtained, and the global motion parameters are computed by the least-squares method, yielding the six-parameter camera model.
Commonly used block matching criteria include the mean absolute difference MAD (Mean Absolute Difference), the mean square error MSE (Mean Square Error) and the sum of absolute differences SAD (Sum of Absolute Differences).
This part adopts the SAD block matching criterion, which not only finds the optimal matching point but is also computationally cheap and fast.
SAD(i, j) = Σ(m=1..M) Σ(n=1..N) |f_k(m, n) − f_(k−1)(m+i, n+j)|
where (i, j) is the displacement, f_k and f_(k−1) are the grey values of the current frame and the previous frame respectively, and M × N is the size of the macro block. If SAD(i, j) reaches its minimum at some point, that point is the optimal matching point sought.
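As an illustration of the SAD criterion just defined, the following sketch evaluates SAD for one candidate displacement; the helper name `sad` and its argument layout are our own assumptions.

```python
import numpy as np

def sad(cur, ref, bx, by, i, j, M=8, N=8):
    """SAD between the MxN block of `cur` whose top-left corner is
    (bx, by) and the block of `ref` displaced by (i, j), per the
    formula above. Caller must keep the displaced block in bounds."""
    a = cur[bx:bx+M, by:by+N].astype(np.int64)
    b = ref[bx+i:bx+i+M, by+j:by+j+N].astype(np.int64)
    return int(np.abs(a - b).sum())
```

At the true displacement of a block copied from the reference frame, the SAD is exactly zero.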
The novel cross-diamond motion estimation search method uses two kinds of patterns, cross patterns and diamond patterns, as shown in Figure 2: the cross patterns comprise a large cross pattern and a small cross pattern, and the diamond patterns comprise a large diamond pattern and a small diamond pattern. The first two steps of the cross-diamond search method used here employ the small cross pattern, rather than starting with the large cross pattern as in the conventional cross-diamond search, so that for static and quasi-static blocks the matching block is found with fewer search points. The points of the large cross pattern not yet searched, together with the unsearched points of the quasi-static region, are then examined to provide a more accurate search direction for the subsequent diamond search. Figure 3 shows the cross-diamond search method of this embodiment; the concrete steps are as follows:
The first step (small cross pattern): among the 5 search points of the small cross pattern, apply the improved partial block distortion criterion to find the minimum block distortion (MBD) point. If the MBD point is at the centre of the small cross pattern, the search stops after one step and the final motion vector is MV(0, 0); otherwise, go to the second step;
Second step (small cross pattern): construct a new small cross pattern centred on the MBD point found in the first step and evaluate its 3 new search points with the improved partial block distortion criterion to find the new MBD point. If this point is at the centre of the small cross pattern, the search stops after two steps and the final motion vector is MV(±1, 0) or MV(0, ±1); otherwise, go to the third step;
3rd step (large cross pattern): search the points of the large cross pattern not yet examined, applying the improved partial block distortion criterion to find the new MBD point, which becomes the centre of the next search step;
4th step (large diamond pattern): construct a large diamond search pattern centred on the MBD point of the third step and apply the improved partial block distortion criterion to find the new MBD point. If this point is at the centre of the large diamond, go to the fifth step; otherwise, repeat the fourth step;
5th step (small diamond pattern): construct a small diamond search pattern centred on the MBD point of the fourth step and apply the improved partial block distortion criterion to find the new MBD point. The vector corresponding to this point is the final motion vector sought.
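The five search steps can be sketched as follows. This is a deliberately simplified reading of the NCDS procedure: plain full SAD replaces the improved partial block distortion criterion, the large-cross step is approximated by four ±2 offsets around the current MBD point, and bounds handling is naive.

```python
import numpy as np

def full_sad(cur, ref, bx, by, dx, dy, B=8):
    """Full SAD between the BxB block of `cur` at (bx, by) and the
    block of `ref` displaced by (dx, dy)."""
    a = cur[bx:bx+B, by:by+B].astype(np.int64)
    b = ref[bx+dx:bx+dx+B, by+dy:by+dy+B].astype(np.int64)
    return int(np.abs(a - b).sum())

def ncds(cur, ref, bx, by, B=8, search_range=7):
    """Simplified cross-diamond search following the five steps above."""
    SMALL_CROSS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    LARGE_CROSS = [(2, 0), (-2, 0), (0, 2), (0, -2)]
    LARGE_DIAMOND = [(2, 0), (-2, 0), (0, 2), (0, -2),
                     (1, 1), (1, -1), (-1, 1), (-1, -1)]
    SMALL_DIAMOND = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    h, w = ref.shape

    def ok(dx, dy):
        return (abs(dx) <= search_range and abs(dy) <= search_range
                and 0 <= bx + dx and bx + dx + B <= h
                and 0 <= by + dy and by + dy + B <= w)

    def best(center, pattern):
        # MBD point among the pattern points around `center`, plus the center
        cands = [(center[0] + dx, center[1] + dy) for dx, dy in pattern]
        cands = [c for c in cands if ok(*c)] + [center]
        return min(cands, key=lambda c: full_sad(cur, ref, bx, by, *c))

    mbd = best((0, 0), SMALL_CROSS)          # step 1: small cross
    if mbd == (0, 0):
        return mbd                           # MV(0, 0), static block
    nxt = best(mbd, SMALL_CROSS)             # step 2: small cross again
    if nxt == mbd:
        return mbd                           # quasi-static block
    mbd = best(nxt, LARGE_CROSS)             # step 3: remaining large-cross points
    while True:                              # step 4: large diamond until centred
        nxt = best(mbd, LARGE_DIAMOND)
        if nxt == mbd:
            break
        mbd = nxt
    return best(mbd, SMALL_DIAMOND)          # step 5: small diamond refinement
```

For a static block the search terminates after the first five points, which is the point of trying the small cross first.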
The MBD points above are found with the improved partial block distortion criterion, which is specified as follows:
In the block matching algorithm (BMA), the improved partial block distortion criterion gives a good measure of the distortion while using only part of the pixels of the block.
Let the block size be 16 × 16. The distortion measure (SAD value) between the block of the n-th frame with top-left coordinate (m, n) and the block of the (n−1)-th frame with top-left coordinate (m+p, n+q) is given by:
SAD(m, n; p, q) = Σ(i=0..15) Σ(j=0..15) |f_n(m+i, n+j) − f_(n−1)(m+p+i, n+q+j)|
where f_n(m+i, n+j) is the pixel value at coordinate (m+i, n+j) in the n-th frame.
The distortion measure SAD(m, n; p, q) is divided into 16 partial distortion measures sad_k(m, n; p, q) (k = 1, 2, …, 16). The k-th partial distortion measure is defined by:
sad_k(m, n; p, q) = Σ(i=0..3) Σ(j=0..3) |f_n(m+4i+s_k, n+4j+t_k) − f_(n−1)(m+p+4i+s_k, n+q+4j+t_k)|
where s_k and t_k are the horizontal and vertical offsets, relative to the top-left corner of the block, of the top-left pixel used by the k-th partial distortion measure. The computation order of the partial distortion measures sad_k(m, n; p, q) (k = 1, 2, …, 16) is shown by the numbers in the squares of Figure 4.
The k-th incremental partial distortion measure is defined by:
SAD_k(m, n; p, q) = Σ(i=1..k) sad_i(m, n; p, q)
If the k-th incremental partial distortion measure satisfies
16 × SAD_k(m, n; p, q) > k × min(SAD)
where min(SAD) is the minimum distortion obtained so far in the search and k is a user-set integer in the range 3 ≤ k ≤ 16, then the point is judged unable to be the matching point; otherwise, the (k+1)-th incremental partial distortion measure SAD_(k+1)(m, n; p, q) is computed and compared in the same way.
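A small sketch of the incremental rejection test above. The subset visiting order here is a simple raster order over the 4 × 4 offsets (s_k, t_k), not the Figure 4 order; the function name and its return convention are our own.

```python
import numpy as np

def partial_sad_reject(cur_blk, ref_blk, min_sad):
    """Improved partial block distortion check for one 16x16 candidate.

    The 256 pixels are split into 16 interleaved 4x4 subsets sad_k; after
    accumulating k of them (k >= 3) the candidate is rejected as soon as
        16 * SAD_k > k * min(SAD).
    Returns (rejected, accumulated_sad).
    """
    a = cur_blk.astype(np.int64)
    b = ref_blk.astype(np.int64)
    acc = 0
    k = 0
    for s in range(4):
        for t in range(4):
            # sad_k over pixels (4i + s, 4j + t), i, j = 0..3
            acc += int(np.abs(a[s::4, t::4] - b[s::4, t::4]).sum())
            k += 1
            if 3 <= k < 16 and 16 * acc > k * min_sad:
                return True, acc   # cannot beat the current best: stop early
    return False, acc              # acc is now the full SAD of the block
```

A poor candidate is discarded after as few as 3 of the 16 partial sums, which is where the speed-up over full SAD comes from.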
According to the above SAD criterion and the novel cross-diamond search strategy (NCDS), the screened macro blocks of frames K-2, K-1, K+1 and K+2 are block-matched against reference frame K, yielding the motion vector fields of current frames K-2, K-1, K+1 and K+2 relative to reference frame K.
Step 4. The camera motion is found by the least-squares method.
Sub-blocks on the two sides of the current frames K-2, K-1, K+1 and K+2 obtained in step 2 are selected as feature blocks; the motion vectors obtained by block matching and motion estimation are substituted into the six-parameter camera model (given below), and the parameters m0, m1, m2, n0, n1, n2 are estimated by least squares. The six-parameter affine transform model can describe translation, rotation and scaling, and is defined as follows:
x′ = m0 + m1·x + m2·y
y′ = n0 + n1·x + n2·y
where m0 and n0 represent the translation amplitudes of a pixel in the x and y directions respectively, and the four parameters m1, n1, m2, n2 describe scaling and rotation.
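The least-squares fit of the six parameters can be illustrated as follows. This is a sketch assuming block centres and their motion vectors are already available; the feature-block selection and error-based outlier rejection described above are omitted.

```python
import numpy as np

def estimate_affine6(points, vectors):
    """Least-squares fit of the six-parameter model
        x' = m0 + m1*x + m2*y,   y' = n0 + n1*x + n2*y
    from block centres `points` [(x, y), ...] and their motion vectors
    `vectors` [(dx, dy), ...], so that (x', y') = (x, y) + (dx, dy).
    Returns (m0, m1, m2, n0, n1, n2)."""
    pts = np.asarray(points, dtype=np.float64)
    dst = pts + np.asarray(vectors, dtype=np.float64)
    # design matrix [1, x, y] shared by both equations
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    mx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)   # m0, m1, m2
    nx, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)   # n0, n1, n2
    return (*mx, *nx)
```

With at least three non-collinear feature blocks the system is determined; more blocks over-determine it and least squares averages out motion-vector noise.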
Step 5. The reconstructed frames K-2′, K-1′, K+1′ and K+2′ of current frames K-2, K-1, K+1 and K+2 are obtained by motion compensation.
For each point of the current frames K-2, K-1, K+1 and K+2, the corresponding position in reference frame K is computed from the camera model obtained above and the pixel value is assigned there, thereby achieving global motion compensation for frames K-2, K-1, K+1 and K+2. The compensated reconstructed frames K-2′, K-1′, K+1′ and K+2′ are thus background-aligned with reference frame K, which enables the following moving-background video segmentation based on novel cross-diamond motion estimation and five-frame background alignment, combining edge information with an adaptive threshold.
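A minimal sketch of this global motion compensation step, assuming the six parameters have been estimated. Nearest-neighbour forward assignment is used; interpolation and hole filling, which a practical implementation would need, are omitted.

```python
import numpy as np

def compensate(frame, params):
    """Warp `frame` toward the reference frame with the six-parameter
    model above: each pixel (x, y) is assigned to its corresponding
    position (x', y') in the reference frame (rounded to the nearest
    pixel); positions mapping outside the image are dropped."""
    m0, m1, m2, n0, n1, n2 = params
    h, w = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]              # ys = row (y), xs = column (x)
    xp = np.rint(m0 + m1*xs + m2*ys).astype(int)
    yp = np.rint(n0 + n1*xs + n2*ys).astype(int)
    ok = (0 <= xp) & (xp < w) & (0 <= yp) & (yp < h)
    out[yp[ok], xp[ok]] = frame[ys[ok], xs[ok]]
    return out
```

For a pure translation (m1 = n2 = 1, m2 = n1 = 0) the warp reduces to a pixel shift by (m0, n0).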
Step 6. The Prewitt operator is used to extract edge information, each edge map is differenced against the edge of reference frame K, and binarisation is performed with the maximum-variance threshold.
Among the many edge detection operators, the Prewitt edge detection operator is selected here to extract the edge features of the reconstructed frames K-2′, K-1′, K+1′, K+2′ and reference frame K.
The Prewitt operator can be implemented by mask convolution:
f_s(x, y) = |f(x, y) * G_x| + |f(x, y) * G_y|
where:
G_x = [ -1 0 1 ; -1 0 1 ; -1 0 1 ]    G_y = [ 1 1 1 ; 0 0 0 ; -1 -1 -1 ]
Applying the Prewitt operator to the reconstructed frames K-2′, K-1′, K+1′, K+2′ and reference frame K gives the edge maps f_(K-2′)(x, y), f_(K-1′)(x, y), f_(K+1′)(x, y), f_(K+2′)(x, y) and f_K(x, y); the results can be seen in Figure 5 (k)(l)(m)(n)(o).
Image difference operations are carried out between the edge of frame K and the edges of the reconstructed frames K-2′, K-1′, K+1′ and K+2′, giving the frame differences d1, d2, d3, d4, where:
d1 = |f_(K-2′)(x, y) − f_K(x, y)|,   d2 = |f_(K-1′)(x, y) − f_K(x, y)|
d3 = |f_(K+1′)(x, y) − f_K(x, y)|,   d4 = |f_(K+2′)(x, y) − f_K(x, y)|
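The Prewitt masks and the edge frame difference can be sketched as follows; the two masks are transcribed from the formulas above, while the shift-and-accumulate correlation and zero border handling are our own choices.

```python
import numpy as np

PREWITT_GX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
PREWITT_GY = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])

def prewitt_edges(f):
    """f_s(x, y) = |f * Gx| + |f * Gy| (interior pixels only; the
    one-pixel border is left at zero)."""
    f = f.astype(np.int64)
    h, w = f.shape
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            window = f[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            gx[1:-1, 1:-1] += PREWITT_GX[dy + 1, dx + 1] * window
            gy[1:-1, 1:-1] += PREWITT_GY[dy + 1, dx + 1] * window
    return np.abs(gx) + np.abs(gy)

def edge_frame_diff(recon, ref):
    """Frame difference d = |f_recon' - f_K| between the two edge maps."""
    return np.abs(prewitt_edges(recon) - prewitt_edges(ref))
```

A vertical step edge of height 100 gives a response of 300 on the two columns flanking the step (three rows times the 100-level jump), and zero in flat regions.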
The maximum-variance threshold is an adaptive thresholding method: it divides the histogram of the image into two groups at an optimal threshold and decides the threshold at the point where the variance between the two groups is maximal. This method is therefore adopted here to binarise the edge-image difference results.
Let the grey values of an image be the levels 0 to m−1 and let n_i be the number of pixels with grey value i; then the total number of pixels is:
N = Σ(i=0..m−1) n_i
The probability of each grey value is: p_i = n_i / N
Let the optimal threshold be T; threshold T divides the pixels into two groups C0 = {0 … T−1} and C1 = {T … m−1}. The probabilities and mean values of C0 and C1 are given by the following formulas:
probability of C0:  w0 = Σ(i=0..T−1) p_i = w(T)
probability of C1:  w1 = Σ(i=T..m−1) p_i = 1 − w0
mean of C0:  μ0 = Σ(i=0..T−1) i·p_i / w0 = μ(T) / w(T)
mean of C1:  μ1 = Σ(i=T..m−1) i·p_i / w1 = (μ − μ(T)) / (1 − w(T))
where: μ = Σ(i=0..m−1) i·p_i,   μ(T) = Σ(i=0..T−1) i·p_i
The average grey value of all samples is then: μ = w0·μ0 + w1·μ1
Variance between two groups:
δ²(T) = w0·(μ0 − μ)² + w1·(μ1 − μ)² = w0·w1·(μ1 − μ0)² = [μ·w(T) − μ(T)]² / (w(T)·[1 − w(T)])
The T in 1 … m−1 that maximises the above expression is the optimal threshold.
The frame differences d1, d2, d3, d4 are each binarised with the optimal threshold T so obtained; the binarisation results are OtusBuf1, OtusBuf2, OtusBuf3, OtusBuf4 respectively.
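The maximum-variance (Otsu-style) threshold selection above can be sketched as follows; the function names and the 256-level assumption are ours.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Maximum between-class variance threshold, per the formulas above:
    maximise  δ²(T) = [μ·w(T) − μ(T)]² / (w(T)·(1 − w(T)))  over T."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()                      # p_i = n_i / N
    i = np.arange(levels)
    mu = (i * p).sum()                         # global mean μ
    best_T, best_var = 1, -1.0
    for T in range(1, levels):
        wT = p[:T].sum()                       # w(T), probability of C0
        if wT <= 0.0 or wT >= 1.0:
            continue                           # one class empty: skip
        muT = (i[:T] * p[:T]).sum()            # μ(T)
        var = (mu * wT - muT) ** 2 / (wT * (1.0 - wT))
        if var > best_var:
            best_T, best_var = T, var
    return best_T

def binarize(img):
    """Binarise with the optimal threshold: 255 where img >= T, else 0."""
    T = otsu_threshold(img)
    return np.where(img >= T, 255, 0).astype(np.uint8)
```

On a bimodal frame-difference image the threshold lands between the two modes, separating moving-edge pixels from the background.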
Step 7. AND operation, OR operation and post-processing.
The above binarisation results OtusBuf1, OtusBuf2, OtusBuf3, OtusBuf4 are combined pairwise by AND operations:
DifferBuf1(i) = 255 if (OtusBuf1(i) == 255 && OtusBuf2(i) == 255), 0 otherwise
DifferBuf2(i) = 255 if (OtusBuf3(i) == 255 && OtusBuf4(i) == 255), 0 otherwise
where DifferBuf1 is the AND result of the motion-compensated, binarised frame differences of the front two frames K-2 and K-1 of the five frames, DifferBuf2 is the AND result of the motion-compensated, binarised frame differences of the back two frames K+1 and K+2, and OtusBuf1(i), OtusBuf2(i), OtusBuf3(i), OtusBuf4(i) are the binarisation results of the frame differences d1, d2, d3, d4 respectively.
The two AND results above are then combined by an OR operation:
DifferBuf(i) = 255 if (DifferBuf1(i) == 255 || DifferBuf2(i) == 255), 0 otherwise
where DifferBuf(i) is the final result of the OR operation.
Because noise interference is unavoidable in a video sequence, some post-processing is still needed after the AND operations to remove isolated small regions and small gaps; the post-processing results are shown in Figure 5 (p). To this end, median filtering is first applied to remove interfering noise, and then image morphology, chiefly erosion and dilation operations, is applied, which both removes noise and smooths the image. The erosion operation eliminates boundary points and shrinks the boundary inward, while the dilation operation merges all background points in contact with the object into the object and expands the boundary outward.
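The AND/OR combination and a minimal post-processing pass can be sketched as follows. The median filter is implemented as a binary majority vote over the 3 × 3 neighbourhood (equivalent for 0/255 masks), and the morphology is a single 3 × 3 opening; a real implementation would tune the structuring elements.

```python
import numpy as np

def combine_masks(b1, b2, b3, b4):
    """AND the front pair and the back pair of binarised frame
    differences, then OR the two results, as in the formulas above."""
    front = np.where((b1 == 255) & (b2 == 255), 255, 0)   # K-2 / K-1 pair
    back = np.where((b3 == 255) & (b4 == 255), 255, 0)    # K+1 / K+2 pair
    return np.where((front == 255) | (back == 255), 255, 0).astype(np.uint8)

def _shifted_stack(m):
    """Stack the nine 3x3-neighbourhood shifts of binary array m."""
    pad = np.pad(m, 1)
    return np.stack([pad[i:i + m.shape[0], j:j + m.shape[1]]
                     for i in range(3) for j in range(3)])

def postprocess(mask):
    """3x3 median filter, then one erosion and one dilation (opening)."""
    m = (mask > 0).astype(np.uint8)
    med = (_shifted_stack(m).sum(axis=0) >= 5).astype(np.uint8)       # majority = binary median
    eroded = (_shifted_stack(med).sum(axis=0) == 9).astype(np.uint8)  # shrink boundary inward
    dilated = (_shifted_stack(eroded).sum(axis=0) > 0).astype(np.uint8)  # expand boundary outward
    return (dilated * 255).astype(np.uint8)
```

Isolated single-pixel noise is removed by the median vote, while compact object regions survive the opening.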

Claims (1)

1. A moving-background video object extraction method based on novel cross-diamond search and five-frame background alignment, characterised by comprising the following steps:
(1) dividing frame K-2, frame K-1, reference frame K, frame K+1 and frame K+2 into 8 × 8 macro blocks respectively, and pre-judging and screening all macro blocks of the five frames according to their texture information; the concrete steps are as follows:
The first step: divide each frame into 8 × 8 sub-blocks;
Second step: apply the Sobel operator to obtain the gradient map of each frame, using the gradient information as the basis for rejecting macro blocks;
|∇f(x, y)| = mag(∇f(x, y)) = √(Gx² + Gy²)
where ∇f(x, y) represents the gradient information at (x, y) and Gx, Gy are the partial derivatives respectively;
3rd step: calculate the gradient amount of each macro block; for an 8 × 8 sub-block the gradient information amount is:
|∇f(x, y)|_(8×8) = Σ(i=1..8) Σ(j=1..8) |∇f(x, y)|
4th step: determine the pre-judging threshold so as to retain 40% of all macro blocks; sort the gradient amounts of all macro blocks accordingly and take the optimal threshold T that retains 40% of the blocks after screening;
5th step: complete the screening of the macro blocks: if a block's gradient information amount is > T, retain it so that it takes part in the subsequent motion estimation as a valid feature block; if its gradient information amount is < T, screen it out so that it does not take part in the block matching of the following steps;
(2) applying the SAD criterion and the novel cross-diamond search strategy to block-match the screened macro blocks, taking frames K-2, K-1, K+1 and K+2 in turn as the current frame and frame K as the reference frame, obtaining the motion vector field of each of the four frames relative to reference frame K, and computing the global motion parameters by the least-squares method to obtain the six-parameter camera model; the concrete steps are as follows:
(i) the SAD block matching criterion
The specific calculation formula is as follows:
SAD(i, j) = Σ(m=1..M) Σ(n=1..N) |f_k(m, n) − f_(k−1)(m+i, n+j)|
where (i, j) is the displacement, f_k and f_(k−1) are the grey values of the current frame and the previous frame respectively, and M × N is the size of the macro block; if SAD(i, j) reaches its minimum at some point, that point is the optimal matching point sought;
(ii) the novel cross-diamond search strategy
The novel cross-diamond motion estimation search method has two kinds of patterns, cross patterns and diamond patterns, wherein the cross patterns comprise a large cross pattern and a small cross pattern and the diamond patterns comprise a large diamond pattern and a small diamond pattern; the first two steps of the cross-diamond search method employ the small cross pattern, rather than starting with the large cross pattern as in the conventional cross-diamond search, so that for static and quasi-static blocks the matching block is found with fewer search points; the points of the large cross pattern not yet searched, together with the unsearched points of the quasi-static region, are then examined to provide a more accurate search direction for the subsequent diamond search; the concrete steps are as follows:
The first step (small cross pattern): among the 5 search points of the small cross pattern, apply the improved partial block distortion criterion to find the minimum block distortion MBD point; if the MBD point is at the centre of the small cross pattern, the search stops after one step and the final motion vector is MV(0, 0); otherwise, go to the second step;
Second step (small cross pattern): construct a new small cross pattern centred on the MBD point found in the first step and evaluate its 3 new search points with the improved partial block distortion criterion to find the new MBD point; if this point is at the centre of the small cross pattern, the search stops after two steps and the final motion vector is MV(±1, 0) or MV(0, ±1); otherwise, go to the third step;
3rd step (large cross pattern): search the points of the large cross pattern not yet examined, applying the improved partial block distortion criterion to find the new MBD point, which becomes the centre of the next search step;
4th step (large diamond pattern): construct a large diamond search pattern centred on the MBD point of the third step and apply the improved partial block distortion criterion to find the new MBD point; if this point is at the centre of the large diamond, go to the fifth step; otherwise, repeat the fourth step;
5th step (small diamond pattern): construct a small diamond search pattern centred on the MBD point of the fourth step and apply the improved partial block distortion criterion to find the new MBD point; the vector corresponding to this point is the final motion vector sought;
The MBD points above are found with the improved partial block distortion criterion, which is specified as follows:
In the block matching algorithm (BMA), the improved partial block distortion criterion gives a good measure of the distortion while using only part of the pixels of the block;
Let the block size be 16 × 16; the distortion measure (SAD value) between the block of the n-th frame with top-left coordinate (m, n) and the block of the (n−1)-th frame with top-left coordinate (m+p, n+q) is given by:
SAD(m, n; p, q) = Σ(i=0..15) Σ(j=0..15) |f_n(m+i, n+j) − f_(n−1)(m+p+i, n+q+j)|
where f_n(m+i, n+j) is the pixel value at coordinate (m+i, n+j) in the n-th frame;
The distortion measure SAD(m, n; p, q) is divided into 16 partial distortion measures sad_k(m, n; p, q) (k = 1, 2, …, 16); the k-th partial distortion measure is defined by:
sad_k(m, n; p, q) = Σ(i=0..3) Σ(j=0..3) |f_n(m+4i+s_k, n+4j+t_k) − f_(n−1)(m+p+4i+s_k, n+q+4j+t_k)|
where s_k and t_k are the horizontal and vertical offsets, relative to the top-left corner of the block, of the top-left pixel used by the k-th partial distortion measure;
The k-th incremental partial distortion measure is defined by:
SAD_k(m, n; p, q) = Σ(i=1..k) sad_i(m, n; p, q)
If the k-th incremental partial distortion measure satisfies
16 × SAD_k(m, n; p, q) > k × min(SAD)
where min(SAD) is the minimum distortion obtained so far in the search and k is a user-set integer in the range 3 ≤ k ≤ 16, then the point is judged unable to be the matching point; otherwise, the (k+1)-th incremental partial distortion measure SAD_(k+1)(m, n; p, q) is computed and compared in the same way;
(iii) obtaining the six-parameter camera model by least squares
Sub-blocks on the two sides of the current frames K-2, K-1, K+1 and K+2 obtained in step (i) are selected as feature blocks; the motion vectors obtained through steps (i) and (ii) are substituted into the six-parameter camera model, and the parameters m0, m1, m2, n0, n1, n2 are estimated by least squares; the six-parameter affine transform model can describe translation, rotation and scaling, and is defined as follows:
x′ = m0 + m1·x + m2·y
y′ = n0 + n1·x + n2·y
where m0 and n0 represent the translation amplitudes of a pixel in the x and y directions respectively, and the four parameters m1, n1, m2, n2 describe scaling and rotation;
(3) performing motion compensation on frame K-2 so that frame K-2 is background-aligned with frame K, obtaining reconstructed frame K-2′; performing motion compensation on frames K-1, K+1 and K+2 in the same way so that they are each background-aligned with frame K, obtaining reconstructed frames K-1′, K+1′ and K+2′; the particular content is as follows:
For each point of the current frames K-2, K-1, K+1 and K+2, the corresponding position in reference frame K is computed from the camera model obtained above and the pixel value is assigned there, thereby achieving global motion compensation for frames K-2, K-1, K+1 and K+2; the compensated reconstructed frames K-2′, K-1′, K+1′ and K+2′ are thus background-aligned with reference frame K, which enables the following moving-background video segmentation based on novel cross-diamond motion estimation and five-frame background alignment, combining edge information with the adaptive maximum-variance threshold;
(4) applying the Prewitt operator to extract the edge information of the reconstructed frames K-2′, K-1′, K+1′, K+2′ and reference frame K respectively, computing their frame differences d1, d2, d3, d4 relative to the edge of reference frame K, and binarising with the maximum-variance threshold method; the concrete steps are as follows:
(i) extracting edge information with the Prewitt operator and differencing against the edge of reference frame K
Among the many edge detection operators, the Prewitt edge detection operator is selected here to extract the edge features of the reconstructed frames K-2′, K-1′, K+1′, K+2′ and reference frame K;
The Prewitt operator can be implemented by mask convolution:
f_s(x, y) = |f(x, y) * G_x| + |f(x, y) * G_y|
where:
G_x = [ -1 0 1 ; -1 0 1 ; -1 0 1 ]    G_y = [ 1 1 1 ; 0 0 0 ; -1 -1 -1 ]
Applying the Prewitt operator to the reconstructed frames K-2′, K-1′, K+1′, K+2′ and reference frame K gives the edge maps f_(K-2′)(x, y), f_(K-1′)(x, y), f_(K+1′)(x, y), f_(K+2′)(x, y) and f_K(x, y);
Image difference operations are carried out between the edge of frame K and the edges of the reconstructed frames K-2′, K-1′, K+1′ and K+2′, giving the frame differences d1, d2, d3, d4, where:
d1 = |f_(K-2′)(x, y) − f_K(x, y)|,   d2 = |f_(K-1′)(x, y) − f_K(x, y)|
d3 = |f_(K+1′)(x, y) − f_K(x, y)|,   d4 = |f_(K+2′)(x, y) − f_K(x, y)|
(ii) binarising with the maximum-variance threshold
The maximum-variance threshold is an adaptive thresholding method: it divides the histogram of the image into two groups at an optimal threshold and decides the threshold at the point where the variance between the two groups is maximal; this method is therefore adopted here to binarise the edge-image difference results;
Let the grey values of an image be the levels 0 to m−1 and let n_i be the number of pixels with grey value i; then the total number of pixels is:
N = Σ(i=0..m−1) n_i
The probability of each grey value is: p_i = n_i / N
Let the optimal threshold be T; threshold T divides the pixels into two groups C0 = {0 … T−1} and C1 = {T … m−1}; the probabilities and mean values of C0 and C1 are given by the following formulas:
probability of C0:  w0 = Σ(i=0..T−1) p_i = w(T)
probability of C1:  w1 = Σ(i=T..m−1) p_i = 1 − w0
mean of C0:  μ0 = Σ(i=0..T−1) i·p_i / w0 = μ(T) / w(T)
mean of C1:  μ1 = Σ(i=T..m−1) i·p_i / w1 = (μ − μ(T)) / (1 − w(T))
where: μ = Σ(i=0..m−1) i·p_i,   μ(T) = Σ(i=0..T−1) i·p_i
The average grey value of all samples is then: μ = w0·μ0 + w1·μ1
Variance between the two groups:
δ²(T) = w0·(μ0 − μ)² + w1·(μ1 − μ)² = w0·w1·(μ1 − μ0)² = [μ·w(T) − μ(T)]² / (w(T)·[1 − w(T)])
The T in 1 … m−1 that maximises the above expression is the optimal threshold;
Binarisation is carried out on the edge-difference results according to the obtained optimal threshold T; the binarisation results are OtusBuf 1, OtusBuf 2, OtusBuf 3, OtusBuf 4 respectively;
(5) performing AND operations on the binarised frame differences obtained from the front two frames and the back two frames of the five consecutive frames, combining the AND results by an OR operation, and post-processing with morphology and median filtering, thereby achieving fast and effective segmentation of the video object under a moving background;
The binarisation results OtusBuf 1, OtusBuf 2, OtusBuf 3, OtusBuf 4 are combined pairwise by AND operations:
DifferBuf1(i) = 255 if (OtusBuf 1(i) == 255 && OtusBuf 2(i) == 255), 0 otherwise
DifferBuf2(i) = 255 if (OtusBuf 3(i) == 255 && OtusBuf 4(i) == 255), 0 otherwise
where DifferBuf1 is the AND result of the motion-compensated, binarised frame differences of the front two frames K-2 and K-1 of the five frames, DifferBuf2 is the corresponding AND result of the back two frames K+1 and K+2, and OtusBuf 1(i), OtusBuf 2(i), OtusBuf 3(i), OtusBuf 4(i) are the binarisation results of the frame differences d1, d2, d3, d4 respectively;
The two AND results above are combined by an OR operation:
DifferBuf(i) = 255 if (DifferBuf1(i) == 255 || DifferBuf2(i) == 255), 0 otherwise
where DifferBuf(i) is the final result of the OR operation.
CN201210398165.XA 2012-10-18 2012-10-18 Mobile background video object extraction method based on novel crossed diamond search and five-frame background alignment Active CN102917224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210398165.XA CN102917224B (en) 2012-10-18 2012-10-18 Mobile background video object extraction method based on novel crossed diamond search and five-frame background alignment


Publications (2)

Publication Number Publication Date
CN102917224A CN102917224A (en) 2013-02-06
CN102917224B true CN102917224B (en) 2015-06-17

Family

ID=47615434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210398165.XA Active CN102917224B (en) 2012-10-18 2012-10-18 Mobile background video object extraction method based on novel crossed diamond search and five-frame background alignment

Country Status (1)

Country Link
CN (1) CN102917224B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096309B (en) * 2015-05-22 2018-11-23 广东正业科技股份有限公司 A kind of edge detection method and device based on X-ray
CN113744137B (en) * 2020-05-27 2024-05-31 合肥君正科技有限公司 Frame difference smoothing method of spiral matrix

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1960491A (en) * 2006-09-21 2007-05-09 上海大学 Real time method for segmenting motion object based on H.264 compression domain
CN101098465A (en) * 2007-07-20 2008-01-02 哈尔滨工程大学 Moving object detecting and tracing method in video monitor
CN102075757A (en) * 2011-02-10 2011-05-25 北京航空航天大学 Video foreground object coding method by taking boundary detection as motion estimation reference
CN102163334A (en) * 2011-03-04 2011-08-24 北京航空航天大学 Method for extracting video object under dynamic background based on fisher linear discriminant analysis




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20170106

Address after: 518000 Guangdong city of Shenzhen province Nanshan District Shahe Street Xueyuan Road No. 1001 Nanshan Chi Park A7 building 4 floor

Patentee after: SHENZHEN XIAOLAJIAO TECHNOLOGY Co.,Ltd.

Address before: 100191 Haidian District, Xueyuan Road, No. 37,

Patentee before: BEIHANG University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220628

Address after: 518000 4th floor, building A7, Nanshan Zhiyuan, No. 1001, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Skylark Software Technology Co.,Ltd.

Address before: 518000, 4, A7 building, Nanshan Zhiyuan 1001, Shahe Road, Nanshan District, Shenzhen, Guangdong.

Patentee before: SHENZHEN XIAOLAJIAO TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240205

Address after: 518000 4th Floor, Building A7, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN XIAOLAJIAO TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: 518000 4th floor, building A7, Nanshan Zhiyuan, No. 1001, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province

Patentee before: Shenzhen Skylark Software Technology Co.,Ltd.

Country or region before: China