CN102013022B - Selective feature background subtraction method aiming at thick crowd monitoring scene - Google Patents

Selective feature background subtraction method aiming at thick crowd monitoring scene

Info

Publication number
CN102013022B
Authority
CN
China
Prior art keywords: background, scene, model, characteristic, frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201010555962A
Other languages
Chinese (zh)
Other versions
CN102013022A (en)
Inventor
黄铁军 (Tiejun Huang)
胡志鹏 (Zhipeng Hu)
田永鸿 (Yonghong Tian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201010555962A
Publication of CN102013022A
Application granted
Publication of CN102013022B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a selective feature background subtraction method for dense-crowd monitoring scenes. Traditional background subtraction methods suffer high miss and false-detection rates in dense-crowd scenes. To address this, the disclosed method comprises the following steps: building a scene sparsity model; selecting video frames highly similar to the sparsity model as training samples and computing an initial feature background by batch principal component analysis; updating the scene sparsity model and, for video frames highly similar to the updated model, updating the feature background by incremental principal component analysis; selectively reconstructing the background at the pixel level; and computing an adaptive threshold to threshold the difference image and obtain the foreground image. In dense-crowd scenes with relatively stable illumination, the method reliably detects slowly moving and stationary foreground objects while maintaining a low false-detection rate.

Description

A selective feature background subtraction method for dense-crowd monitoring scenes
Technical field
The present invention relates to an image and video processing method, and in particular to a selective feature background subtraction method for dense-crowd monitoring scenes.
Background technology
Traditional surveillance video analysis relies on human viewing. Yet as video surveillance systems proliferate, this manpower-centered approach faces the practical problem of coping with massive volumes of video data, making intelligent video surveillance increasingly necessary. Built on computer vision techniques, an intelligent video surveillance system separates the background of a scene from the targets and then detects and tracks objects that appear in the camera's view. The user can preset different event rules for different monitoring scenes and applications; once a target violates a predefined rule, the system detects this as an anomalous event and automatically raises an alarm.
From this working mechanism it is clear that separating targets from the background of the scene is a crucial step. This separation process is called background subtraction. Background subtraction directly yields the foreground objects in the scene, or its result can be passed as a preprocessing step to subsequent object detection and tracking modules, narrowing their search range so the system can run in real time.
Commonly used background subtraction methods fall into three classes. The first, represented by the paper "Adaptive background mixture models for real-time tracking", builds a Gaussian mixture model for each pixel to describe its multimodal variation. The second, represented by "Nonparametric model for background subtraction", is kernel density estimation: it makes no assumption about the form of the probability density function that the observations obey, instead approximating the density by a kernel-weighted linear combination of the observed data rather than by parameters. The third, represented by "Real-time foreground background segmentation using codebook model", builds a background codebook for each pixel in a training stage and obtains the foreground/background segmentation in the subtraction stage by comparing measurements against the codebook.
In practice, monitoring dense-crowd scenes is of great security significance. In an airport departure hall, for example, a person suddenly running or a bag left behind for a long time may indicate terrorist activity. An important characteristic of dense-crowd scenes is that pedestrians move slowly; many people are even motionless. In that case, all three classes of background subtraction methods gradually absorb parts of the static crowd into the background during the update process, producing many false detections and causing subsequent pedestrian detection and tracking algorithms to fail.
An effective way to detect slowly moving and stationary objects is the feature background (Eigenbackgrounds) method proposed in "A Bayesian computer vision system for modeling human interactions". It comprises a training stage and a subtraction stage. The training stage computes the feature background that describes the background's characteristics; the subtraction stage first reconstructs the background of the current frame from the feature background and then thresholds the difference between the reconstruction and the current frame to obtain the foreground image. In scenes with a sparse crowd, this method produces good background subtraction results, detecting slowly moving and stationary people. In dense-crowd scenes, however, the trained feature background captures not only background characteristics but also those of part of the foreground, so the traditional Eigenbackgrounds method still produces noticeable missed and false detections.
From the above analysis, the special characteristics of dense-crowd scenes make it difficult to apply existing background subtraction techniques directly. To detect as many foreground objects as possible in dense-crowd scenes while reducing false detections, the present invention proposes a selective feature background subtraction method.
Summary of the invention
The object of the present invention is to provide a background subtraction method for dense-crowd monitoring scenes. The overall workflow comprises a training stage and a subtraction stage. The training stage builds a scene sparsity model and selects suitable video frames to compute the feature background; the subtraction stage updates the scene sparsity model and the feature background, then reconstructs the background of the current frame from the updated model to perform background subtraction.
The technical scheme of the present invention is as follows:
A selective feature background subtraction method for dense-crowd monitoring scenes, characterized in that the method comprises the following steps:
1) build a scene sparsity model from the existing training video frames;
the scene sparsity model may be a single Gaussian model or a Gaussian mixture model;
2) automatically select training video frames with sparse foreground objects according to the established scene sparsity model, and compute the initial feature background by batch principal component analysis;
the training frames are selected as follows: compute the similarity between each video frame vector and the scene sparsity model, and choose frames whose similarity exceeds a specified threshold as training frames;
3) update the scene sparsity model, and use the updated model to judge whether the crowd in the current frame is sparse; if so, update the feature background by incremental principal component analysis; otherwise leave the feature background unchanged;
this is implemented as follows:
a) assign weight coefficients to the current video frame and past video frames, and update the scene sparsity model by a running average;
b) compute the similarity between the current frame vector and the scene sparsity model; if the similarity exceeds a specified threshold, update the feature background by incremental principal component analysis; otherwise leave the feature background unchanged;
4) selectively reconstruct the background of the current video frame at the pixel level using principal component analysis;
this is implemented as follows: for each pixel, select the feature background whose corresponding element has the smallest absolute value among the feature background vectors, and use it for background reconstruction;
5) compute the difference image between the current frame and the reconstructed background frame, and threshold it with an adaptive threshold to obtain the foreground image of the current frame.
Compared with existing background subtraction methods, the innovations of the present invention are the following three points:
1) Training video frames with sparse foreground objects are selected automatically. Most existing methods assume that no foreground objects appear in the opening segment of the surveillance video and train the background model directly on that segment, or they sample frames at a fixed step and train on the sampled frames. In sparse-crowd scenes this is effective, but in dense-crowd scenes these assumptions do not hold, so the background model built this way contains considerable foreground information. The present method builds a scene sparsity model and uses it to automatically select frames with sparse foreground, overcoming this defect of conventional methods.
2) Suitable video frames are selected for updating the feature background in the subtraction stage. Traditional feature background methods reconstruct the background in the subtraction stage from a fixed feature background. Methods that do update the background model update it blindly: every frame is used regardless of whether it contains foreground objects. The present method introduces a selective update mechanism, updating the background model only with frames that contain sparse foreground objects, which guarantees that the model always reflects the characteristics of the background.
3) Pixel-level background reconstruction. Traditional feature background methods reconstruct the background directly at the frame level, so the reconstruction is optimal for the frame as a whole but not guaranteed ideal for every pixel. The present method reconstructs the background at the pixel level: each pixel selects the feature background that reflects its own background characteristics, so every pixel obtains an optimal result.
Given a surveillance video of a real scene, the background subtraction method of the present invention produces good foreground detection results in real time on servers and PCs of any platform. The invention also applies to other data captured by fixed cameras, such as specific television broadcasts obtained through a TV card. By preprocessing such data and applying the present background subtraction method, the foreground of a video can be extracted automatically during video editing for later compositing. In dense-crowd scenes with relatively stable illumination, the invention reliably detects slowly moving and stationary foreground objects while maintaining a low false detection rate.
Description of drawings
Fig. 1: framework of the selective feature background subtraction method;
Fig. 2: sample filtering on the London Gatwick airport dataset. Top row: video frames randomly chosen from the original training set; bottom row: video frames randomly chosen from the training set after sample filtering;
Fig. 3: experiments on camera 3 of the London Gatwick airport dataset. First row: original video frames; second row: background subtraction results of C-PCA; third row: results after adding sample filtering; fourth row: results after adding the adaptive threshold (FS-PCA); fifth row: results with pixel-level background reconstruction (PS-PCA);
Fig. 4: ROC analysis on camera 3;
Fig. 5: background subtraction results on camera 1. First column: original video frames; second column: C-PCA results; third column: FS-PCA results; fourth column: PS-PCA results;
Fig. 6: ROC analysis on camera 1.
Embodiment
Fig. 1 shows the framework of the present invention. Its implementation is described in detail below.
1. Batch and incremental principal component analysis
In the present invention, batch principal component analysis (PCA) computes the initial feature background in the training stage, and incremental PCA updates the feature background in real time during the subtraction stage. Batch and incremental PCA are therefore the foundation of the invention.
Batch PCA is described by the following formulas:
$C_x u_i = \lambda_i u_i$  (1)
$C_x = \frac{1}{N} \sum_{i=1}^{N} [x(i) - \bar{x}][x(i) - \bar{x}]^T$  (2)
$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x(i)$  (3)
where $X = [x(1), x(2), \ldots, x(N)]$ is the set of training samples and $U = [u_1, u_2, \ldots, u_{M_p}]$ is the set of eigenvectors corresponding to the $M_p$ largest eigenvalues of the covariance matrix $C_x$. In the background subtraction field, the eigenvectors $U$ are known as the feature background (eigen-backgrounds).
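As a concrete illustration, the following minimal NumPy sketch computes the mean frame and the feature background from vectorized training frames; the function name and arguments are illustrative, not part of the patent.

```python
import numpy as np

def batch_eigenbackgrounds(X, num_components):
    """Batch PCA over training frames, formulas (1)-(3).

    X: (d, N) array whose columns are vectorized frames.
    Returns the mean frame and the num_components leading eigenvectors
    of the covariance matrix C_x (the feature background).
    """
    x_mean = X.mean(axis=1, keepdims=True)        # formula (3)
    Xc = X - x_mean                               # centered samples
    # The left singular vectors of Xc are the eigenvectors of
    # C_x = (1/N) * Xc @ Xc.T, formulas (1)-(2); SVD avoids forming
    # the d x d covariance matrix explicitly.
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return x_mean.ravel(), U[:, :num_components]  # U = [u_1, ..., u_Mp]
```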
To update the background model with new background measurements during the subtraction stage, the present invention introduces incremental PCA. To reach real-time processing speed, the candid covariance-free incremental PCA (CCIPCA) of the paper "Candid covariance-free incremental principal component analysis" is adopted. Let $x(n)$ be the $n$-th input frame vector and define $v_i(n) = \lambda_i(n)\, u_i(n)$, where $\lambda_i(n)$ and $u_i(n)$ are the $i$-th eigenvalue and eigenvector after updating with $x(n)$. The covariance-free incremental PCA update is then:
$\bar{x}(n) = \frac{n-1}{n} \bar{x}(n-1) + \frac{1}{n} x(n)$  (4)
$\mu_1(n) = x(n) - \bar{x}(n)$  (5)
$v_i(n) = \frac{n-1-\zeta}{n} v_i(n-1) + \frac{1+\zeta}{n} \mu_i(n) \mu_i^T(n) \frac{v_i(n-1)}{\|v_i(n-1)\|}$  (6)
$\mu_{i+1}(n) = \mu_i(n) - \mu_i^T(n) \frac{v_i(n)}{\|v_i(n)\|} \frac{v_i(n)}{\|v_i(n)\|}$  (7)
where $\zeta$ is the update-rate parameter and $\mu_i(n)$ is the input vector when updating the $i$-th eigenvector. Formulas (6) and (7) describe an iterative process: each iteration updates one eigenvector, and higher-order eigenvectors are updated in the complementary space of the lower-order ones.
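A sketch of one CCIPCA step, continuing the NumPy example above; it assumes `V` holds the scaled eigenvectors $v_i = \lambda_i u_i$ as columns (e.g. initialized from the batch stage) and that the `zeta` default is an illustrative choice for $\zeta$.

```python
def ccipca_update(x, x_mean, V, n, zeta=2.0):
    """One CCIPCA step, formulas (4)-(7), on the n-th frame vector x.

    Each column of V is v_i = lambda_i * u_i: the unit eigenvector is
    v_i / ||v_i|| and the eigenvalue estimate is ||v_i||.
    """
    x_mean = (n - 1) / n * x_mean + x / n                 # formula (4)
    mu = x - x_mean                                       # formula (5)
    for i in range(V.shape[1]):
        v = V[:, i]
        nv = np.linalg.norm(v) + 1e-12
        # formula (6): blend the old direction with the new observation
        V[:, i] = (n - 1 - zeta) / n * v + (1 + zeta) / n * mu * (mu @ v) / nv
        u = V[:, i] / (np.linalg.norm(V[:, i]) + 1e-12)
        mu = mu - (mu @ u) * u    # formula (7): residual for the next order
    return x_mean, V
```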
2. Data dimensionality reduction
In the background subtraction stage, the feature background is updated only when the current frame contains few foreground objects. A statistical model describing the sparsity of foreground objects in the scene must therefore be established first, by statistical analysis of the training samples. One difficulty is that frame-image vectors have very high dimensionality with strong correlation between dimensions, so the samples must first be reduced in dimension. Owing to its low computational complexity and its consistency with the feature background method, the present invention uses PCA for dimensionality reduction. All input data are fed into formula (2), and formula (1) yields the eigenvector matrix $U_d$ consisting of $M_p$ eigenvectors; projecting a sample vector $x$ onto the low-dimensional space spanned by $U_d$ gives the reduced data $x'$:
$x' = U_d^T (x - \bar{x})$  (8)
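In code, the projection of formula (8) is a single line; this sketch assumes `U_d` holds the $M_p$ retained eigenvectors as columns.

```python
def reduce_dim(x, x_mean, U_d):
    """Formula (8): project a frame vector into the PCA subspace."""
    return U_d.T @ (x - x_mean)
```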
3. Building the scene sparsity model
A probabilistic model describing the sparsity of foreground objects in the scene can be estimated from the dimension-reduced data. The present invention uses the following single Gaussian model as this probabilistic model:
$f(x') = \frac{1}{(2\pi)^{d/2} |C_{x'}|^{1/2}} \exp\{-\frac{1}{2} d^2(x')\}$  (9)
$d(x') = \sqrt{(x' - \bar{x}')^T C_{x'}^{-1} (x' - \bar{x}')}$  (10)
where $d(x')$ is the Mahalanobis distance between the vector $x'$ and the model mean $\bar{x}'$, and $C_{x'}$ is the covariance matrix computed from all dimension-reduced data $x'$. The model's mean vector and covariance matrix are updated during the background subtraction stage.
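The following sketch wraps the single Gaussian model of formulas (9)-(10); the class name, the constructor interface, and the small ridge term that keeps the covariance invertible are illustrative assumptions.

```python
class SparsityModel:
    """Single Gaussian scene sparsity model over reduced vectors x'."""

    def __init__(self, Xp):
        # Xp: (d', M) array of dimension-reduced training vectors.
        self.mean = Xp.mean(axis=1)
        self.cov = np.cov(Xp)                 # C_{x'}
        self._refresh_inverse()

    def _refresh_inverse(self):
        # A small ridge keeps C_{x'} invertible (an implementation
        # choice, not part of the patent).
        eye = np.eye(self.cov.shape[0])
        self.cov_inv = np.linalg.inv(self.cov + 1e-6 * eye)

    def mahalanobis(self, xp):
        """Formula (10): Mahalanobis distance from xp to the model mean."""
        diff = xp - self.mean
        return float(np.sqrt(diff @ self.cov_inv @ diff))
```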
4. Sample filtering
Because the crowd in the scene is relatively dense, the background is often occluded by foreground for long periods. If all samples were used to compute the feature background, it would inevitably contain considerable foreground information. To obtain a comparatively "pure" and stable background model, the samples must be filtered to remove frames containing many foreground objects. The fewer foreground objects a frame contains, the more similar its vector is to the established scene sparsity model, where similarity is measured by the Mahalanobis distance of formula (10). If the distance between a video frame vector and the mean of the established Gaussian model is below a specified threshold $T$, that frame vector is used to compute the initial feature background in the training stage, or to update the feature background in the subtraction stage. Fig. 2 shows the sample filtering results of the training stage: the sample filtering method of the invention automatically selects frames containing few foreground objects.
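A sketch of the filtering rule, reusing the `SparsityModel` helper above; returning indices rather than frames is an arbitrary choice.

```python
def filter_samples(reduced_frames, model, T):
    """Keep frames whose reduced vector lies within Mahalanobis
    distance T of the sparsity model, i.e. frames assumed to contain
    few foreground objects."""
    return [i for i, xp in enumerate(reduced_frames)
            if model.mahalanobis(xp) < T]
```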
5. Model updating
In the subtraction stage, the current input frame vector $x(t)$ is first used to run CCIPCA and update the eigenvector matrix $U_d$ used for dimensionality reduction. The updated eigenvectors reduce the input frame $x(t)$ to the vector $x'(t)$, after which the mean vector and covariance matrix of the Gaussian model are updated by a running average:
$\bar{x}'(t) = (1-\alpha) \bar{x}'(t-1) + \alpha x'(t)$  (11)
$C_{x'}(t) = (1-\alpha) C_{x'}(t-1) + \alpha [x'(t) - \bar{x}'(t)][x'(t) - \bar{x}'(t)]^T$  (12)
where $\alpha$ is the update-rate parameter; the larger its value, the faster the model updates. With the updated Gaussian model, the Mahalanobis distance between $x'(t)$ and the mean vector $\bar{x}'(t)$ is computed. If this distance is below the specified threshold $T$, the current frame vector $x(t)$ is used to run CCIPCA and update the feature background. In summary, the scene sparsity model is updated blindly, that is, the Gaussian parameters are updated with every frame vector; the feature background is updated selectively, using only frames that contain few foreground objects.
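The two update rules combine as in the following sketch, reusing the helpers above; the function signature and the `alpha` default are illustrative.

```python
def update_models(model, x, xp, V, x_mean, n, T, alpha=0.01):
    """Blind update of the Gaussian, formulas (11)-(12), then a
    selective CCIPCA update of the feature background for
    background-like frames."""
    model.mean = (1 - alpha) * model.mean + alpha * xp           # formula (11)
    diff = (xp - model.mean).reshape(-1, 1)
    model.cov = (1 - alpha) * model.cov + alpha * (diff @ diff.T)  # formula (12)
    model._refresh_inverse()
    if model.mahalanobis(xp) < T:     # selective feature background update
        x_mean, V = ccipca_update(x, x_mean, V, n)
    return x_mean, V
```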
6. Background reconstruction
When the feature background method reconstructs the background, the input frame is stretched into a vector of dimension $d = H \times W$, where $H$ and $W$ are the height and width of the frame; the computed feature background vectors also have dimension $d$. The $j$-th element $u_{ij}$ of the $i$-th feature background $u_i$ therefore corresponds to the $j$-th pixel of the frame and describes that pixel's appearance. In some feature backgrounds the $j$-th element describes the appearance of the $j$-th pixel's background, while in others it describes the appearance of that pixel's foreground.
Background reconstruction in the traditional feature background subtraction method is described by formula (13):
$B(x) = U U^T (x - \bar{x}) + \bar{x}$  (13)
where $U = [u_1, u_2, \ldots, u_{M_p}]$ is the matrix of all feature backgrounds, $x$ is the current input frame vector, and $\bar{x}$ is the mean of the frame vectors. The traditional feature background method thus reconstructs the background at the frame level, using all feature backgrounds. From the analysis above, for any given pixel the corresponding element in some feature backgrounds describes its foreground, so the reconstructed background necessarily contains foreground, causing missed and false detections.
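For contrast with the pixel-level method below, a sketch of the traditional frame-level reconstruction of formula (13):

```python
def reconstruct_frame_level(x, x_mean, U):
    """Formula (13): project onto all feature backgrounds and map back."""
    return U @ (U.T @ (x - x_mean)) + x_mean
```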
The absolute value of each element of a feature background measures the dispersion of the corresponding pixel across frames. Background pixel values disperse less than foreground ones, so the smaller the absolute value of an element, the better it describes the background characteristics of its pixel. On this basis, the present invention proposes a pixel-level background reconstruction method: to reconstruct the background value of pixel $i$, the feature vector whose $i$-th element has the smallest absolute value is chosen from all feature vectors, and the background value of pixel $i$ is read from the background reconstructed with it. The algorithm is described as follows:
$B(i) = \psi_{k_i}(i)$  (14)
$\psi_{k_i} = u_{k_i} u_{k_i}^T x$  (15)
$k_i = \arg\min_j |u_j(i)|$  (16)
where $B(i)$ is the reconstructed background value of pixel $i$, $\psi_{k_i}$ is the background frame reconstructed for pixel $i$, $u_{k_i}$ is the feature background chosen for pixel $i$, and $u_j(i)$ is the $i$-th element of the $j$-th feature background.
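A vectorized sketch of formulas (14)-(16); note that, following formula (15) as written, the projection here omits mean subtraction.

```python
def reconstruct_pixel_level(x, U):
    """Formulas (14)-(16): pixel i takes its background value from the
    feature background whose i-th element has the smallest magnitude."""
    d = U.shape[0]
    coeffs = U.T @ x                       # u_j^T x for every j
    k = np.abs(U).argmin(axis=1)           # formula (16), one j per pixel
    return U[np.arange(d), k] * coeffs[k]  # formulas (14)-(15)
```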
Background subtraction with an adaptive threshold:
In traditional feature-background-based subtraction, the difference image between the current frame and the reconstructed background is thresholded to obtain the background subtraction result, normally with a fixed threshold $T$. White pixels in the thresholded image denote foreground; black pixels denote background. The drawback of this approach is that it is hard to find a single threshold that yields a desirable result for every frame: with a fixed threshold, some frames show more false detections while others show more missed detections. The present invention therefore performs background subtraction with an adaptive threshold, that is, a different threshold is chosen for each frame. The adaptive threshold is computed with the method of "Thresholding for change detection":
$T_t = \max_k \left\{ \frac{\sum_{i=k}^{PixRange} i^2\, bin_t(i)}{\sum_{i=k}^{PixRange} bin_t(i)} - \left[ \frac{\sum_{i=k}^{PixRange} i\, bin_t(i)}{\sum_{i=k}^{PixRange} bin_t(i)} \right]^2 \right\}$  (17)
$bin_t(i) = \sum_{j=1}^{PixNumber} s_{ti}(j)$  (18)
$s_{ti}(j) = \begin{cases} 1 & |x_t(j) - B_t(j)| = i \\ 0 & \text{otherwise} \end{cases}$  (19)
where $T_t$ is the threshold for the $t$-th frame, $PixRange$ is the maximum pixel value, and $x_t(j)$ and $B_t(j)$ are the value of pixel $j$ and its reconstructed background value, respectively.
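A sketch of the computation, taking formula (17) literally: the histogram of formulas (18)-(19) is built from the absolute difference image, and the variance of the histogram tail is maximized over its starting bin.

```python
def adaptive_threshold(abs_diff, pix_range=255):
    """Formulas (17)-(19) over an absolute difference image |x_t - B_t|."""
    vals = np.clip(abs_diff, 0, pix_range).astype(np.int64).ravel()
    hist = np.bincount(vals, minlength=pix_range + 1).astype(float)  # (18)-(19)
    i = np.arange(pix_range + 1, dtype=float)
    best = 0.0
    for k in range(pix_range + 1):
        total = hist[k:].sum()
        if total == 0:
            break                       # the tail stays empty from here on
        m1 = (i[k:] * hist[k:]).sum() / total
        m2 = (i[k:] ** 2 * hist[k:]).sum() / total
        best = max(best, m2 - m1 * m1)  # formula (17): tail variance
    return best
```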
To verify the effectiveness of the algorithm, experiments were conducted on the London Gatwick airport dataset from the TRECVID 2009 event detection task. The dataset comprises surveillance videos of five scenes shot by five cameras. Owing to dense crowds, complex scenes, and severe occlusion, it is one of the most difficult datasets in the video analysis field.
For brevity, the traditional feature background method is called C-PCA: it computes the initial feature background by batch PCA and updates it by covariance-free incremental PCA, with no selective operations whatsoever. To verify the effectiveness of pixel-level background reconstruction, the proposed algorithm is split into an incomplete and a complete variant. The former, called FS-PCA, reconstructs the background at the frame level with all feature backgrounds, like the traditional method; the latter, called PS-PCA, chooses the best feature background for each pixel. In the experiments, 5 feature backgrounds are used, and no morphological operations are applied after obtaining the background subtraction result.
For quantitative evaluation, part of the dataset was annotated manually. Following the evaluation method of "Background Subtraction in Varying Illuminations Using an Ensemble Based on an Enlarged Feature Set", video frames were sampled at random and the foreground objects in them (pedestrians, bags, etc.) were labeled with rectangular boxes. After background subtraction, a labeled object is considered detected if more than 30% of the pixels inside its box are classified as foreground. The detection rate is the number of detected objects divided by the number of labeled objects, and the false detection rate is the number of foreground pixels outside the labeled boxes divided by the total number of detected foreground pixels.
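A sketch of this evaluation protocol on a binary foreground mask; representing boxes as (x0, y0, x1, y1) tuples is an assumed convention.

```python
def evaluate(fg_mask, boxes):
    """Detection rate: boxes with >30% foreground pixels count as detected.
    False detection rate: foreground outside all boxes / all foreground."""
    box_mask = np.zeros_like(fg_mask, dtype=bool)
    detected = 0
    for x0, y0, x1, y1 in boxes:
        box_mask[y0:y1, x0:x1] = True
        if fg_mask[y0:y1, x0:x1].mean() > 0.30:
            detected += 1
    detection_rate = detected / max(len(boxes), 1)
    outside = np.logical_and(fg_mask, ~box_mask).sum()
    false_rate = outside / max(fg_mask.sum(), 1)
    return detection_rate, false_rate
```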
The necessity and effectiveness of the proposed sample filtering are verified first. Sample filtering covers both choosing suitable training frames to compute the initial feature background and choosing suitable input frames to update it. The second and third rows of Fig. 3 compare the results before and after sample filtering. Before filtering, the background subtraction results contain many obvious missed and false detections, because every frame is used to compute and update the feature background, which therefore contains a large amount of foreground information. After filtering, many false and missed detections are removed.
Next, the adaptive threshold is added to threshold the difference image between the current frame and the reconstructed background; this is precisely the FS-PCA algorithm. The fourth row of Fig. 3 shows its background subtraction results. Compared with the results in the third row, the adaptive threshold removes much noise, especially in the last three frames.
Then FS-PCA and PS-PCA are compared. The last row of Fig. 3 shows the PS-PCA results: false detections are further removed, and many previously missed objects are detected. This is because each pixel selects the feature background that represents its own background characteristics, which effectively reduces the foreground information in the feature background.
To produce ROC curves that quantitatively compare C-PCA, FS-PCA, and PS-PCA, the adaptive threshold is removed from the algorithms; varying the subtraction threshold then produces the ROC curves of Fig. 4. FS-PCA outperforms C-PCA by a large margin: at a false detection rate of about 20%, the detection rate of FS-PCA remains near 90%, while that of C-PCA drops to 65%. By choosing the best feature background for each pixel, PS-PCA further improves on FS-PCA.
Fig. 5 shows the background subtraction results on camera 1, and Fig. 6 the ROC comparison of the three algorithms. The same conclusion follows: PS-PCA performs best, and C-PCA is unsatisfactory in dense-crowd scenes. On this data in particular, PS-PCA improves greatly over FS-PCA.

Claims (4)

1. A selective feature background subtraction method for dense-crowd monitoring scenes, characterized in that the method comprises the following steps:
1) building a scene sparsity model from the existing training video frames;
2) automatically selecting training video frames with sparse foreground objects according to the established scene sparsity model, and computing the initial feature background by batch principal component analysis;
3) updating the scene sparsity model, and using the updated model to judge whether the crowd in the current video frame is sparse; if so, updating the feature background by incremental principal component analysis; otherwise leaving the feature background unchanged;
4) selectively reconstructing the background of the current video frame at the pixel level using principal component analysis;
5) computing the difference image between the current video frame and the reconstructed background frame, and thresholding it with an adaptive threshold to obtain the foreground image of the current video frame;
wherein the training video frames in step 2) are selected by computing the similarity between each video frame vector and the scene sparsity model and choosing frames whose similarity exceeds a specified threshold as training frames;
step 4) is implemented by selecting, for each pixel, the feature background whose corresponding element has the smallest absolute value among the feature background vectors and using it for background reconstruction;
and the adaptive threshold in step 5) means that a different threshold is chosen for each frame when performing background subtraction.
2. The background subtraction method of claim 1, characterized in that the scene sparsity model in step 1) is a single Gaussian model or a Gaussian mixture model.
3. The background subtraction method of claim 1, characterized in that step 3) is implemented as follows:
a) assigning weight coefficients to the current video frame and past video frames, and updating the scene sparsity model by a running average;
b) computing the similarity between the current video frame vector and the scene sparsity model; if the similarity exceeds a specified threshold, updating the feature background by incremental principal component analysis; otherwise leaving the feature background unchanged.
4. The background subtraction method of claim 1, characterized in that the adaptive threshold in step 5) is computed as
$T_t = \max_k \left\{ \frac{\sum_{i=k}^{PixRange} i^2\, bin_t(i)}{\sum_{i=k}^{PixRange} bin_t(i)} - \left[ \frac{\sum_{i=k}^{PixRange} i\, bin_t(i)}{\sum_{i=k}^{PixRange} bin_t(i)} \right]^2 \right\}$
where $bin_t(i) = \sum_{j=1}^{PixNumber} s_{ti}(j)$ and $s_{ti}(j) = 1$ if $|x_t(j) - B_t(j)| = i$, 0 otherwise; $T_t$ is the threshold for the $t$-th frame, $PixRange$ is the maximum pixel value, and $x_t(j)$ and $B_t(j)$ are the value of pixel $j$ and its reconstructed background value, respectively.
CN201010555962A 2010-11-23 2010-11-23 Selective feature background subtraction method aiming at thick crowd monitoring scene Expired - Fee Related CN102013022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010555962A CN102013022B (en) 2010-11-23 2010-11-23 Selective feature background subtraction method aiming at thick crowd monitoring scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010555962A CN102013022B (en) 2010-11-23 2010-11-23 Selective feature background subtraction method aiming at thick crowd monitoring scene

Publications (2)

Publication Number Publication Date
CN102013022A CN102013022A (en) 2011-04-13
CN102013022B true CN102013022B (en) 2012-10-10

Family

ID=43843194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010555962A Expired - Fee Related CN102013022B (en) 2010-11-23 2010-11-23 Selective feature background subtraction method aiming at thick crowd monitoring scene

Country Status (1)

Country Link
CN (1) CN102013022B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750712B (en) * 2012-06-07 2015-06-17 中山大学 Moving object segmenting method based on local space-time manifold learning
CN103576990A (en) * 2012-07-20 2014-02-12 中国航天科工集团第三研究院第八三五八研究所 Optical touch method based on single Gaussian model
US8913835B2 (en) * 2012-08-03 2014-12-16 Kodak Alaris Inc. Identifying key frames using group sparsity analysis
CN104156939B (en) * 2014-04-17 2016-10-05 四川大学 A kind of remnant object detection method based on SOBS and GMM
US9734588B2 (en) 2015-08-19 2017-08-15 Tata Consultancy Services Limited Method and system for compressing a video using background subtraction and change detection in videos
CN105741315B (en) * 2015-12-30 2019-04-02 电子科技大学 A kind of statistics background subtraction method based on down-sampled strategy
CN105844671B (en) * 2016-04-12 2018-06-22 河北大学 A kind of fast background relief method under the conditions of change illumination
CN106682814B (en) * 2016-11-28 2021-01-01 华北电力大学 Wind turbine generator fault intelligent diagnosis method based on fault knowledge base
CN106874862B (en) * 2017-01-24 2021-06-04 复旦大学 Crowd counting method based on sub-model technology and semi-supervised learning
CN110188597B (en) * 2019-01-04 2021-06-15 北京大学 Crowd counting and positioning method and system based on attention mechanism cyclic scaling
SG10202000965YA (en) * 2020-02-03 2020-03-30 Alipay Labs Singapore Pte Ltd Authentication System And Method
CN111325135B (en) * 2020-02-17 2022-11-29 天津中科智能识别产业技术研究院有限公司 Novel online real-time pedestrian tracking method based on deep learning feature template matching
CN111402208A (en) * 2020-03-03 2020-07-10 天津理工大学 Incremental-based weld surface defect feature extraction and classification identification method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7136525B1 (en) * 1999-09-20 2006-11-14 Microsoft Corporation System and method for background maintenance of an image sequence
CN101609552B (en) * 2009-03-30 2012-12-19 浙江工商大学 Method for detecting characteristics of video object in finite complex background
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
CN101635852B (en) * 2009-08-26 2011-08-31 北京航空航天大学 Method for detecting real-time moving object based on adaptive background modeling

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pi Wenkai et al., "Omnidirectional vision human motion detection based on an adaptive background model," Journal of Peking University (Natural Science Edition), vol. 40, no. 3, 2004, pp. 458-464. *

Also Published As

Publication number Publication date
CN102013022A (en) 2011-04-13

Similar Documents

Publication Publication Date Title
CN102013022B (en) Selective feature background subtraction method aiming at thick crowd monitoring scene
CN109154976B (en) System and method for training object classifier through machine learning
Younis et al. Real-time object detection using pre-trained deep learning models MobileNet-SSD
US9230175B2 (en) System and method for motion detection in a surveillance video
US10198657B2 (en) All-weather thermal-image pedestrian detection method
EP3035239B1 (en) Adapted vocabularies for matching image signatures with fisher vectors
Kalsotra et al. A comprehensive survey of video datasets for background subtraction
EP3096292A1 (en) Multi-object tracking with generic object proposals
US8724850B1 (en) Small object detection using meaningful features and generalized histograms
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
Mu et al. A fast recognition algorithm for suspicious behavior in high definition videos
Hong et al. Fast multi-feature pedestrian detection algorithm based on histogram of oriented gradient using discrete wavelet transform
CN103118220A (en) Keyframe pick-up algorithm based on multi-dimensional feature vectors
Banerjee et al. Human motion detection and tracking for video surveillance
Kryjak et al. Real-time foreground object detection combining the PBAS background modelling algorithm and feedback from scene analysis module
Roy et al. Foreground segmentation using adaptive 3 phase background model
Malav et al. DHSGAN: An end to end dehazing network for fog and smoke
Vijayan et al. A universal foreground segmentation technique using deep-neural network
Breckon et al. Consistency in multimodal automated target detection using temporally filtered reporting
Mantini et al. Camera Tampering Detection using Generative Reference Model and Deep Learned Features.
Muniruzzaman et al. Deterministic algorithm for traffic detection in free-flow and congestion using video sensor
US11908178B2 (en) Verification of computer vision models
Neto et al. Computer-vision-based surveillance of intelligent transportation systems
Hsia et al. Low-complexity range tree for video synopsis system
Tank et al. A fast moving object detection technique in video surveillance system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20121010
Termination date: 20201123