CN107895379A - An improved foreground-extraction algorithm for video surveillance - Google Patents
An improved foreground-extraction algorithm for video surveillance
- Publication number
- CN107895379A CN107895379A CN201711001978.XA CN201711001978A CN107895379A CN 107895379 A CN107895379 A CN 107895379A CN 201711001978 A CN201711001978 A CN 201711001978A CN 107895379 A CN107895379 A CN 107895379A
- Authority
- CN
- China
- Prior art keywords
- frame
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an improved foreground-extraction algorithm for video surveillance, comprising the steps of: Step (1), pre-processing the captured color video frames: first converting the color image to grayscale, then denoising the image with a median filter, and finally applying histogram equalization to improve image contrast; Step (2), processing the pre-processed video frames with an improved five-frame difference method; Step (3), simultaneously performing GMM modeling on the pre-processed video frames to extract a background model; Step (4), processing the video frames with an improved background-subtraction method; Step (5), combining the video frames obtained in steps (2) and (4) with a logical OR operation; Step (6), applying morphological processing to the video frame obtained in step (5) to finally extract the complete foreground.
Description
Technical field
The present invention relates to the field of video surveillance, and in particular to an improved foreground-extraction algorithm for video surveillance.
Background art
With the development of computer technology, video surveillance has become increasingly intelligent. One of the main tasks of an intelligent video-surveillance system is to detect, identify, and track targets or regions of interest in the video image. Correct foreground extraction is the prerequisite for video surveillance, and the quality of the extracted foreground directly affects the accuracy and robustness of subsequent target identification and tracking. In real application scenarios, unavoidable background factors such as changes in illumination, swaying leaves, and shaking of the camera itself can degrade moving-object detection.
There are many foreground-extraction methods. Common algorithms include the frame-difference method (which compares video frames at fixed intervals and adapts well to dynamically changing environments, but produces large holes, so the integrity of the extracted targets is poor), the background-subtraction method (which performs a difference operation between the current video frame and a background frame; it can extract moving objects fairly completely, but is strongly affected by changes in illumination and background), and the optical-flow method (whose heavy computation makes it difficult to meet the real-time requirements of motion detection).
Summary of the invention
The object of the present invention is to overcome the above deficiencies of the prior art by providing an improved foreground-extraction algorithm for video surveillance. The algorithm combines the complementary strengths of several methods to further increase the accuracy of foreground detection. Specifically, it fuses an improved five-frame difference method with a GMM-based foreground-extraction algorithm, so that foreground extraction can be completed in complex background environments with illumination changes and background disturbances. By establishing a GMM background model and continuously updating it, moving targets can be detected accurately even under illumination changes and leaf disturbance, improving the quality of foreground detection and enabling effective extraction of foreground targets.
The object of the present invention is achieved through the following technical solutions:
An improved foreground-extraction algorithm for video surveillance comprises the following steps:
Step (1): pre-process the captured color video frames: first convert the color image to grayscale, then denoise the image with a median filter, and finally apply histogram equalization to improve image contrast;
Step (2): process the pre-processed video frames with the improved five-frame difference method;
Step (3): simultaneously perform GMM modeling on the pre-processed video frames and extract the background model;
Step (4): process the video frames with the improved background-subtraction method;
Step (5): combine the video frames obtained in steps (2) and (4) with a logical OR operation;
Step (6): apply morphological processing to the video frame obtained in step (5) and finally extract the complete foreground.
The improved five-frame difference method of step (2) specifically comprises the following steps:
Step (201): choose 5 consecutive frames f1(x, y), f2(x, y), f3(x, y), f4(x, y), f5(x, y) from the test video, and compute the absolute difference of the first 2 of these 5 frames, d2(x, y) = |f2(x, y) − f1(x, y)|;
Step (202): binarize the result to obtain the binary image D2(x, y) of the first two frames; perform the same operation on the 2nd and 3rd frames to obtain the binary image D3(x, y), and similarly obtain the binary images D4(x, y) and D5(x, y);
Step (203): combine D2(x, y) and D3(x, y) with an addition (logical OR) operation to obtain an image g1(x, y) containing the approximate extent of the moving object; apply the same operation to D4(x, y) and D5(x, y) to obtain another image g2(x, y) containing the extent of the moving object;
Step (204): apply a logical AND operation to g1(x, y) and g2(x, y) to obtain the final result I(x, y), i.e. the image of the moving-object region in the intermediate frame of the 5 adjacent frames.
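The five-frame difference of steps (201)-(204) can be sketched as follows; the binarization threshold `thresh` is an illustrative assumption, not a value fixed by the text:

```python
import numpy as np

def five_frame_difference(frames, thresh=25):
    """Improved five-frame difference on 5 consecutive grayscale frames.

    frames: sequence of 5 equal-shaped arrays f1..f5.
    Returns the binary mask of the moving-object region in the middle frame.
    """
    # Cast to a signed type so the differences do not wrap around.
    f1, f2, f3, f4, f5 = [f.astype(np.int32) for f in frames]
    # Steps (201)-(202): absolute differences of adjacent frames, binarized.
    d2 = np.abs(f2 - f1) > thresh
    d3 = np.abs(f3 - f2) > thresh
    d4 = np.abs(f4 - f3) > thresh
    d5 = np.abs(f5 - f4) > thresh
    # Step (203): OR adjacent difference masks to get two coarse motion extents.
    g1 = d2 | d3
    g2 = d4 | d5
    # Step (204): AND the two extents -> motion region of the middle frame f3.
    return g1 & g2
```

With a bright square moving 2 pixels per frame, the AND of the two extents isolates the square's position in the middle frame.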
The simultaneous GMM modeling of the pre-processed video frames in step (3) specifically comprises the following steps:
Step (301): establish K (3 ≤ K ≤ 5) Gaussian distributions for each pixel of the background model to be built;
Step (302): for a given pixel (x0, y0) with history {X1, X2, ..., Xt} = {I(x0, y0, i) | 1 ≤ i ≤ t}, the probability of the currently observed pixel value is P(X_t) = Σ_{i=1}^{K} ω_{i,t}·η(X_t, μ_{i,t}, Σ_{i,t}), where η(X_t, μ_{i,t}, Σ_{i,t}) is the probability density of the i-th Gaussian distribution (mean μ_{i,t}, covariance matrix Σ_{i,t}), ω_{i,t} is the weight of the corresponding distribution, the mean of each Gaussian distribution is μ_{i,t} with variance σ_{i,t}, and the covariance matrix is approximated as Σ_{i,t} ≈ σ_{i,t}²·I (assuming the R, G, and B channels are mutually independent; I is the identity matrix);
Step (303): sort the K Gaussian distributions by priority ρ_{i,t} = ω_{i,t}/σ_i;
Step (304): take the first B Gaussian distributions as the background distributions, B = argmin_b (Σ_{k=1}^{b} ω_k > T);
Step (305): judge whether the current value matches an existing distribution by |X_t − μ_{i,t−1}| ≤ 2.5 σ_{i,t−1}, where X_t is the gray value of the pixel, μ_{i,t−1} is the mean vector of the i-th Gaussian distribution of the mixture model at time t−1, and σ_{i,t−1} is its standard deviation; match each pixel of the current video frame against the existing Gaussian distributions; if a match is found, go to step (306), otherwise go to steps (307), (308), and (309);
Step (306): if a pixel matches a Gaussian distribution, update the parameters of the matched distribution: ω_{i,t+1} = (1 − α)·ω_{i,t} + α, μ_t = (1 − β)·μ_{t−1} + β·X_t, σ_t² = (1 − β)·σ_{t−1}² + β·(X_t − μ_t)ᵀ(X_t − μ_t);
Step (307): for the other, unmatched distributions only the weight is changed, updated by ω_{i,t+1} = (1 − α)·ω_{i,t};
Step (308): if no distribution matches and the current number of distributions is less than K, add a new Gaussian distribution;
Step (309): if no distribution matches and the current number of distributions equals K, replace the lowest-priority Gaussian distribution with a new one, with X_t as its mean, initialized with a large variance and a small weight;
Step (310): sort the weights of the model to obtain the background model.
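A minimal per-pixel sketch of the update scheme in steps (301)-(310), restricted to scalar gray values, might look as follows. The learning rates `alpha` and `beta`, the weight threshold `T`, and the initial variance are illustrative assumptions; a practical system would vectorize over all pixels (OpenCV's `cv2.createBackgroundSubtractorMOG2` implements a refined variant of the same idea):

```python
import numpy as np

class PixelGMM:
    """Per-pixel Gaussian mixture background model for scalar gray values."""

    def __init__(self, K=3, alpha=0.05, beta=0.05, T=0.7, init_var=900.0):
        self.K, self.alpha, self.beta, self.T = K, alpha, beta, T
        self.init_var = init_var
        self.w = np.array([])    # weights  omega_i
        self.mu = np.array([])   # means    mu_i
        self.var = np.array([])  # variances sigma_i^2

    def update(self, x):
        """Feed one gray value; return True if x belongs to the background."""
        if self.w.size:
            sd = np.sqrt(self.var)
            # Step (305): |X_t - mu_{i,t-1}| <= 2.5 sigma_{i,t-1}
            match = np.where(np.abs(x - self.mu) <= 2.5 * sd)[0]
        else:
            match = np.array([], dtype=int)
        if match.size:
            i = match[0]
            # Steps (306)-(307): decay all weights, boost the matched one.
            self.w = (1 - self.alpha) * self.w
            self.w[i] += self.alpha
            self.mu[i] = (1 - self.beta) * self.mu[i] + self.beta * x
            self.var[i] = (1 - self.beta) * self.var[i] \
                + self.beta * (x - self.mu[i]) ** 2
        elif self.w.size < self.K:
            # Step (308): add a new distribution centered on x.
            self.w = np.append(self.w * (1 - self.alpha), self.alpha)
            self.mu = np.append(self.mu, float(x))
            self.var = np.append(self.var, self.init_var)
        else:
            # Step (309): replace the lowest-priority distribution.
            i = np.argmin(self.w / np.sqrt(self.var))
            self.w[i], self.mu[i], self.var[i] = 0.05, float(x), self.init_var
        self.w /= self.w.sum()
        # Steps (303)-(304): background = top-priority distributions whose
        # cumulative weight first exceeds T.
        order = np.argsort(-self.w / np.sqrt(self.var))
        csum, bg = 0.0, set()
        for j in order:
            bg.add(j)
            csum += self.w[j]
            if csum > self.T:
                break
        return bool(match.size) and match[0] in bg
```

Feeding a stable value long enough makes it background; a sudden different value is reported as foreground until its distribution gains weight.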
Step (4) specifically comprises the following steps:
Step (401): take the difference between the 3rd frame and the extracted background image to obtain M;
Step (402): apply the Canny operator to M to extract the edges, obtaining the edge information of the moving object;
Step (403): binarize the result to obtain the foreground edge map of the moving target.
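The background-subtraction step (401)-(403) can be sketched as follows. A simple gradient-magnitude edge detector stands in for the Canny operator (in practice OpenCV's `cv2.Canny` would be used); the `edge_thresh` value is an illustrative assumption:

```python
import numpy as np

def subtract_and_edge(frame, background, edge_thresh=50):
    """Difference with the background model, then an edge map, then binarization."""
    # Step (401): absolute difference with the background image.
    M = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    # Step (402): gradient magnitude via forward differences
    # (a crude stand-in for the Canny operator).
    gx = np.zeros_like(M)
    gy = np.zeros_like(M)
    gx[:, :-1] = M[:, 1:] - M[:, :-1]
    gy[:-1, :] = M[1:, :] - M[:-1, :]
    grad = np.abs(gx) + np.abs(gy)
    # Step (403): binarize to get the foreground edge map.
    return grad > edge_thresh
```

On a frame containing a bright square over an empty background, only the square's boundary survives, not its interior.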
Compared with the prior art, the technical solution of the present invention has the following beneficial effects: the invention incorporates the improved five-frame difference method into the algorithm and fuses it with the Gaussian mixture model, so that large and slowly moving foreground targets can be extracted correctly. If a foreground target suddenly becomes stationary and remains so for some time, the foreground will not be misclassified as background, and the foreground target can still be detected accurately.
Brief description of the drawings
Fig. 1 is the overall structural block diagram of the improved foreground-extraction algorithm for video surveillance of the present invention.
Embodiment
The invention will be further described below in conjunction with the accompanying drawings.
As shown in Fig. 1, the overall diagram of the improved foreground-extraction algorithm for video surveillance comprises the following steps:
Step 101: pre-process the input color video frames: first convert the color image to grayscale, then denoise the image with a median filter, and finally apply histogram equalization to improve image contrast.
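The pre-processing of step 101 can be sketched as follows; the BT.601 luma weights and the 3x3 median window are standard choices assumed here, since the text does not fix the conversion coefficients or the filter size:

```python
import numpy as np

def preprocess(frame_rgb):
    """Grayscale conversion -> 3x3 median filter -> histogram equalization."""
    # Color -> grayscale (ITU-R BT.601 luma weights, an assumed convention).
    gray = (frame_rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
    # 3x3 median filter; image borders are handled by edge-padding.
    h, w = gray.shape
    p = np.pad(gray, 1, mode='edge')
    stack = np.stack([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    denoised = np.median(stack, axis=0).astype(np.uint8)
    # Histogram equalization via the cumulative distribution function.
    hist = np.bincount(denoised.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return cdf[denoised].astype(np.uint8)
```

Equalization stretches the remaining gray levels across the full 0-255 range, which raises the contrast the later differencing steps rely on.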
Step 102: process the pre-processed video frames with the improved five-frame difference method. Choose 5 consecutive frames f1(x, y), f2(x, y), f3(x, y), f4(x, y), f5(x, y) from the experimental video and compute the absolute difference of the first 2 frames; then binarize the result to obtain the binary image D2(x, y) of the first two frames; perform the same operation on the 2nd and 3rd frames to obtain the binary image D3(x, y), and similarly obtain the binary images D4(x, y) and D5(x, y). Combine D2(x, y) and D3(x, y) with an addition (logical OR) operation to obtain an image g1(x, y) containing the approximate extent of the moving object; apply the same operation to D4(x, y) and D5(x, y) to obtain another image g2(x, y) containing the extent of the moving object. Finally, apply a logical AND operation to g1(x, y) and g2(x, y) to obtain the final result I(x, y), i.e. the image of the moving-object region in the intermediate frame of the 5 adjacent frames.
Step 103: simultaneously perform GMM modeling on the pre-processed video frames and extract the background model. Establish K (3 ≤ K ≤ 5) Gaussian distributions for each pixel of the background model to be built. For a given pixel (x0, y0) with history {X1, X2, ..., Xt} = {I(x0, y0, i) | 1 ≤ i ≤ t}, the probability of the currently observed pixel value is P(X_t) = Σ_{i=1}^{K} ω_{i,t}·η(X_t, μ_{i,t}, Σ_{i,t}), where η(X_t, μ_{i,t}, Σ_{i,t}) is the probability density of the i-th Gaussian distribution (mean μ_{i,t}, covariance matrix Σ_{i,t}), ω_{i,t} is the weight of the corresponding distribution, and the covariance matrix can be approximated as Σ_{i,t} ≈ σ_{i,t}²·I (assuming the R, G, and B channels are mutually independent; I is the identity matrix). Sort the K Gaussian distributions by priority ρ_{i,t} = ω_{i,t}/σ_i, and take the first B distributions as the background, with B = argmin_b (Σ_{k=1}^{b} ω_k > T). Judge whether the current value matches an existing distribution by |X_t − μ_{i,t−1}| ≤ 2.5 σ_{i,t−1}, where X_t is the gray value of the pixel, μ_{i,t−1} is the mean vector of the i-th Gaussian distribution of the mixture model at time t−1, and σ_{i,t−1} is its standard deviation. Match each pixel of the current frame against the existing Gaussian distributions. If a pixel matches a Gaussian distribution, update the parameters of the matched distribution: ω_{i,t+1} = (1 − α)·ω_{i,t} + α, μ_t = (1 − β)·μ_{t−1} + β·X_t, σ_t² = (1 − β)·σ_{t−1}² + β·(X_t − μ_t)ᵀ(X_t − μ_t); for the other, unmatched distributions only the weight is updated: ω_{i,t+1} = (1 − α)·ω_{i,t}. If no distribution matches and the current number of distributions is less than K, add a new Gaussian distribution; if no distribution matches and the number equals K, replace the lowest-priority distribution with a new one, with X_t as its mean, initialized with a large variance and a small weight. Finally, sort the weights of the model to obtain the background model.
Step 104: process the video frames with the improved background-subtraction method. In the present embodiment, the improvement over traditional background subtraction is the addition of the Canny operator for edge extraction on the foreground image, which yields the edge information of the moving object and allows the foreground target and its contour to be extracted more accurately. Take the difference between the 3rd frame and the extracted background image to obtain M; apply the Canny operator to M to obtain the edge information of the moving object; then binarize the result to obtain the foreground edge map of the moving target.
Step 105: apply a logical OR operation to the output images of steps 102 and 104, obtaining the foreground target extracted by combining the five-frame difference method with the improved background-subtraction method.
Step 106: apply morphological processing to the image obtained in step 105. Repeated opening and closing operations make the original foreground binary image more complete and remove discontinuities and holes at the edges.
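Steps 105 and 106 can be sketched as follows, with 3x3 binary opening and closing implemented directly in NumPy; the structuring-element size and the number of passes are illustrative assumptions:

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    h, w = mask.shape
    p = np.pad(mask, 1)  # pad with False
    return np.any(np.stack([p[i:i + h, j:j + w]
                            for i in range(3) for j in range(3)]), axis=0)

def erode(mask):
    """Binary erosion with a 3x3 structuring element."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=True)  # pad with True
    return np.all(np.stack([p[i:i + h, j:j + w]
                            for i in range(3) for j in range(3)]), axis=0)

def combine_and_clean(mask_fd, mask_bs, n=1):
    """Step 105: OR the five-frame-difference and background-subtraction
    masks; step 106: n rounds of opening (erode-dilate) and closing
    (dilate-erode) to remove isolated noise and fill small holes."""
    mask = mask_fd | mask_bs
    for _ in range(n):
        mask = dilate(erode(mask))   # opening: removes small speckles
        mask = erode(dilate(mask))   # closing: fills small gaps
    return mask
```

Opening first deletes isolated noise pixels that survive the OR; closing then restores the body of the foreground region.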
Step 107: through the above processing, the complete foreground image is finally obtained.
The present invention is not limited to the embodiments described above. The above description of the embodiments is intended to describe and illustrate the technical solution of the present invention; the embodiments are only illustrative, not restrictive. Without departing from the purpose of the invention and the scope of the claimed protection, one of ordinary skill in the art may, under the inspiration of the present invention, make many specific variations, all of which fall within the protection scope of the present invention.
Claims (4)
1. An improved foreground-extraction algorithm for video surveillance, characterized in that it comprises the following steps:
Step (1): pre-process the captured color video frames: first convert the color image to grayscale, then denoise the image with a median filter, and finally apply histogram equalization to improve image contrast;
Step (2): process the pre-processed video frames with the improved five-frame difference method;
Step (3): simultaneously perform GMM modeling on the pre-processed video frames and extract the background model;
Step (4): process the video frames with the improved background-subtraction method;
Step (5): combine the video frames obtained in steps (2) and (4) with a logical OR operation;
Step (6): apply morphological processing to the video frame obtained in step (5) and finally extract the complete foreground.
2. The improved foreground-extraction algorithm for video surveillance of claim 1, characterized in that the improved five-frame difference method of step (2) specifically comprises the following steps:
Step (201): choose 5 consecutive frames f1(x, y), f2(x, y), f3(x, y), f4(x, y), f5(x, y) from the test video, and compute the absolute difference of the first 2 of these 5 frames, d2(x, y) = |f2(x, y) − f1(x, y)|;
Step (202): binarize the result to obtain the binary image D2(x, y) of the first two frames; perform the same operation on the 2nd and 3rd frames to obtain the binary image D3(x, y), and similarly obtain the binary images D4(x, y) and D5(x, y);
Step (203): combine D2(x, y) and D3(x, y) with an addition (logical OR) operation to obtain an image g1(x, y) containing the approximate extent of the moving object; apply the same operation to D4(x, y) and D5(x, y) to obtain another image g2(x, y) containing the extent of the moving object;
Step (204): apply a logical AND operation to g1(x, y) and g2(x, y) to obtain the final result I(x, y), i.e. the image of the moving-object region in the intermediate frame of the 5 adjacent frames.
3. The improved foreground-extraction algorithm for video surveillance of claim 1, characterized in that the simultaneous GMM modeling of the pre-processed video frames in step (3) specifically comprises the following steps:
Step (301): establish K (3 ≤ K ≤ 5) Gaussian distributions for each pixel of the background model to be built;
Step (302): for a given pixel (x0, y0) with history {X1, X2, ..., Xt} = {I(x0, y0, i) | 1 ≤ i ≤ t}, the probability of the currently observed pixel value is:
$$P(X_t)=\sum_{i=1}^{K}\omega_{i,t}\times\eta\left(X_t,\mu_{i,t},\Sigma_{i,t}\right)$$

$$\eta\left(X_t,\mu_{i,t},\Sigma_{i,t}\right)=\frac{1}{(2\pi)^{\frac{n}{2}}\left|\Sigma_i\right|^{\frac{1}{2}}}\,e^{-\frac{1}{2}(X_t-\mu_t)^{T}\Sigma^{-1}(X_t-\mu_t)}$$
where η(X_t, μ_{i,t}, Σ_{i,t}) is the probability density of the i-th Gaussian distribution (mean μ_{i,t}, covariance matrix Σ_{i,t}), ω_{i,t} is the weight of the corresponding distribution, the mean of each Gaussian distribution is μ_{i,t} with variance σ_{i,t}, and the covariance matrix is approximated as Σ_{i,t} ≈ σ_{i,t}²·I (assuming the R, G, and B channels are mutually independent; I is the identity matrix);
Step (303): sort the K Gaussian distributions by priority ρ_{i,t} = ω_{i,t}/σ_i;
Step (304): take the first B Gaussian distributions as the background distributions:
$$B=\arg\min_{b}\left(\sum_{k=1}^{b}\omega_k>T\right);$$
Step (305): judge whether the current value matches an existing distribution by the following formula:
|X_t − μ_{i,t−1}| ≤ 2.5 σ_{i,t−1}
where X_t is the gray value of the pixel, μ_{i,t−1} is the mean vector of the i-th Gaussian distribution of the mixture model at time t−1, and σ_{i,t−1} is the standard deviation of the i-th Gaussian distribution; match each pixel of the current video frame against the existing Gaussian distributions; if a match is found, go to step (306); if not, go to steps (307), (308), and (309);
Step (306): if a pixel matches a Gaussian distribution, update the parameters of the matched distribution:
$$\begin{aligned}\omega_{i,t+1}&=(1-\alpha)\cdot\omega_{i,t}+\alpha\\ \mu_t&=(1-\beta)\,\mu_{t-1}+\beta X_t\\ \sigma_t^{2}&=(1-\beta)\,\sigma_{t-1}^{2}+\beta\,(X_t-\mu_t)^{T}(X_t-\mu_t)\end{aligned};$$
Step (307): for the other, unmatched distributions only the weight is changed, updated according to the rule:
ω_{i,t+1} = (1 − α) ω_{i,t};
Step (308): if no distribution matches and the current number of distributions is less than K, add a new Gaussian distribution;
Step (309): if no distribution matches and the current number of distributions equals K, replace the lowest-priority Gaussian distribution with a new one, with X_t as its mean, initialized with a large variance and a small weight;
Step (310): sort the weights of the model to obtain the background model.
4. The improved foreground-extraction algorithm for video surveillance of claim 1, characterized in that step (4) specifically comprises the following steps:
Step (401): take the difference between the 3rd frame and the extracted background image to obtain M;
Step (402): apply the Canny operator to M to extract the edges, obtaining the edge information of the moving object;
Step (403): binarize the result to obtain the foreground edge map of the moving target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711001978.XA CN107895379A (en) | 2017-10-24 | 2017-10-24 | The innovatory algorithm of foreground extraction in a kind of video monitoring |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107895379A true CN107895379A (en) | 2018-04-10 |
Family
ID=61802909
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711001978.XA Pending CN107895379A (en) | 2017-10-24 | 2017-10-24 | The innovatory algorithm of foreground extraction in a kind of video monitoring |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107895379A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140112547A1 (en) * | 2009-10-07 | 2014-04-24 | Microsoft Corporation | Systems and methods for removing a background of an image |
CN106504273A (en) * | 2016-10-28 | 2017-03-15 | 天津大学 | A kind of innovatory algorithm based on GMM moving object detections |
CN107154053A (en) * | 2017-05-11 | 2017-09-12 | 南宁市正祥科技有限公司 | Moving target detecting method under static background |
- 2017-10-24: CN CN201711001978.XA patent/CN107895379A/en active Pending
Non-Patent Citations (2)
Title |
---|
Pan Zhengrong et al.: "Moving target detection combining an improved background-subtraction method with the five-frame difference method", 《自动化与仪表》 (Automation & Instrumentation) *
Guo Wei et al.: "Improved moving target detection algorithm based on the Gaussian mixture model", 《计算机工程与应用》 (Computer Engineering and Applications) *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108205891B (en) * | 2018-01-02 | 2019-03-05 | 霍*** | A kind of vehicle monitoring method of monitoring area |
CN109448397B (en) * | 2018-11-20 | 2020-11-13 | 山东交通学院 | Group fog monitoring method based on big data |
CN109448397A (en) * | 2018-11-20 | 2019-03-08 | 山东交通学院 | A kind of group's mist monitoring method based on big data |
CN109684996A (en) * | 2018-12-22 | 2019-04-26 | 北京工业大学 | Real-time vehicle based on video passes in and out recognition methods |
CN110348305B (en) * | 2019-06-06 | 2021-06-25 | 西北大学 | Moving object extraction method based on monitoring video |
CN110348305A (en) * | 2019-06-06 | 2019-10-18 | 西北大学 | A kind of Extracting of Moving Object based on monitor video |
CN111524082A (en) * | 2020-04-26 | 2020-08-11 | 上海航天电子通讯设备研究所 | Target ghost eliminating method |
CN111524082B (en) * | 2020-04-26 | 2023-04-25 | 上海航天电子通讯设备研究所 | Target ghost eliminating method |
CN111524158A (en) * | 2020-05-09 | 2020-08-11 | 黄河勘测规划设计研究院有限公司 | Method for detecting foreground target in complex scene of hydraulic engineering |
CN111524158B (en) * | 2020-05-09 | 2023-03-24 | 黄河勘测规划设计研究院有限公司 | Method for detecting foreground target in complex scene of hydraulic engineering |
CN111832392A (en) * | 2020-05-27 | 2020-10-27 | 湖北九感科技有限公司 | Flame smoke detection method and device |
CN112036254A (en) * | 2020-08-07 | 2020-12-04 | 东南大学 | Moving vehicle foreground detection method based on video image |
WO2022027931A1 (en) * | 2020-08-07 | 2022-02-10 | 东南大学 | Video image-based foreground detection method for vehicle in motion |
CN112036254B (en) * | 2020-08-07 | 2023-04-18 | 东南大学 | Moving vehicle foreground detection method based on video image |
CN113657319A (en) * | 2021-08-23 | 2021-11-16 | 安徽农业大学 | Method for recognizing non-interference sleep action behaviors based on image recognition technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107895379A (en) | The innovatory algorithm of foreground extraction in a kind of video monitoring | |
CN104050471B (en) | Natural scene character detection method and system | |
CN110033002B (en) | License plate detection method based on multitask cascade convolution neural network | |
CN102867188B (en) | Method for detecting seat state in meeting place based on cascade structure | |
CN110276264B (en) | Crowd density estimation method based on foreground segmentation graph | |
CN108268859A (en) | A kind of facial expression recognizing method based on deep learning | |
CN107481188A (en) | A kind of image super-resolution reconstructing method | |
CN105427626B (en) | A kind of statistical method of traffic flow based on video analysis | |
CN107507221A (en) | With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model | |
CN103258332B (en) | A kind of detection method of the moving target of resisting illumination variation | |
CN105513053B (en) | One kind is used for background modeling method in video analysis | |
CN109934224B (en) | Small target detection method based on Markov random field and visual contrast mechanism | |
CN106096602A (en) | Chinese license plate recognition method based on convolutional neural network | |
CN107657625A (en) | Merge the unsupervised methods of video segmentation that space-time multiple features represent | |
CN102214291A (en) | Method for quickly and accurately detecting and tracking human face based on video sequence | |
CN103886589A (en) | Goal-oriented automatic high-precision edge extraction method | |
CN102063727B (en) | Covariance matching-based active contour tracking method | |
CN103996018A (en) | Human-face identification method based on 4DLBP | |
CN108537143B (en) | A kind of face identification method and system based on key area aspect ratio pair | |
CN109359549A (en) | A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP | |
CN106778650A (en) | Scene adaptive pedestrian detection method and system based on polymorphic type information fusion | |
CN104599291B (en) | Infrared motion target detection method based on structural similarity and significance analysis | |
CN105825233A (en) | Pedestrian detection method based on random fern classifier of online learning | |
CN106570885A (en) | Background modeling method based on brightness and texture fusion threshold value | |
CN103824305A (en) | Improved Meanshift target tracking method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20180410 |
|
WD01 | Invention patent application deemed withdrawn after publication |