CN106780646B - Parameter-free background modeling method suitable for multiple scenes - Google Patents


Info

Publication number
CN106780646B
CN106780646B (application CN201611095522.XA)
Authority
CN
China
Prior art keywords
value
image
point
pixel
entering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611095522.XA
Other languages
Chinese (zh)
Other versions
CN106780646A (en)
Inventor
王海滨
黄志举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vistek Technology Beijing Co ltd
Original Assignee
Vistek Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vistek Technology Beijing Co ltd filed Critical Vistek Technology Beijing Co ltd
Priority to CN201611095522.XA priority Critical patent/CN106780646B/en
Publication of CN106780646A publication Critical patent/CN106780646A/en
Application granted granted Critical
Publication of CN106780646B publication Critical patent/CN106780646B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 5/92: Dynamic range modification based on global image properties
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30232: Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a parameter-free background modeling method suitable for multiple scenes, characterized by comprising the following steps: a) initializing the image and judging whether the acquired frame image is the first frame; if so, entering step h, otherwise entering step b; b) searching for the moving points of the frame image, then entering step c; c) judging whether the model needs to be re-established; if not, entering step d, otherwise entering step h; d) conditionally updating the model based on the results of the previous processing; e) finding flicker points according to the results of the previous processing and removing them; f) removing the noise points produced by the preceding processing; g) smoothing the moving foreground and static background generated by the algorithm, then entering step i; h) establishing a background model sequence, then entering step i; i) outputting the result.

Description

Parameter-free background modeling method suitable for multiple scenes
Technical Field
The invention relates to background modeling methods, and in particular to a multi-scene parameter-free background modeling method applied to video surveillance in security systems.
Background
In recent years, as society has placed growing emphasis on public security, video surveillance with intelligent analysis has become an indispensable part of security systems. Most intelligent analysis is based on background modeling, and common background modeling methods share several technical defects: they use complex parameters that must be re-tuned for each scene; they raise many false alarms on jittery images; building the background model takes a long time, often requiring dozens or even hundreds of video frames, which is costly, and any object moving during those frames is lost and cannot be detected well; Ghost areas are difficult to eliminate, being introduced when an object stays in place temporarily during background modeling and moves away later; conversely, some background modeling algorithms eliminate Ghost areas quickly, but premature elimination is not necessarily good and severely hurts accuracy in object-guarding applications; flicker points are difficult to eliminate, since ripples and leaves swinging in the wind produce flickering points that are mistaken for moving objects and are usually useless; and most methods are only suitable for fixed (bullet) cameras and cannot be used with rotatable PTZ (dome) cameras.
To overcome these technical defects, the invention provides a parameter-free background modeling method based on a variable learning rate and random neighborhood pixels.
Disclosure of Invention
The purpose of this application is to provide a parameter-free background modeling method suitable for multiple scenes, characterized by comprising the following steps: a) initializing the image and judging whether the acquired frame image is the first frame; if so, entering step h, otherwise entering step b; b) searching for the moving points of the frame image, then entering step c; c) judging whether the model needs to be re-established; if not, entering step d, otherwise entering step h; d) conditionally updating the model based on the results of the previous processing; e) finding flicker points according to the results of the previous processing and removing them; f) removing the noise points produced by the preceding processing; g) smoothing the moving foreground and static background generated by the algorithm, then entering step i; h) establishing a background model sequence, then entering step i; i) outputting the result.
Preferably, the image initialization processing in step a is as follows: a1) inputting an image sequence; a2) graying each pixel point (x, y) of the RGB image, so that each pixel value lies between 0 and 255; a3) transforming the histogram of the gray image toward a uniform distribution, increasing the dynamic range of the gray values and thereby enhancing image contrast; a4) outputting a gray image whose gray value at each pixel (x, y) lies between 0 and 255.
Preferably, the method for searching motion points in step b is as follows: b1) reading each pixel point (x, y) of the new gray image; b2) reading the background model sequence and the color moment R, where R denotes the difference between the gray value of pixel (x, y) in the gray image and that of the same-position point (x, y) in the background model sequence; b3) judging whether all pixels at position (x, y) in the background model have been read; if so, the pixel (x, y) of the gray image is a non-motion point, otherwise entering step b4; b4) judging whether the color moment R is greater than the color moment threshold; if so, entering step b5, otherwise returning to step b3; b5) judging whether the count of color moments R greater than the color moment threshold exceeds the count threshold; if so, the point is a motion point, otherwise returning to step b3.
Preferably, the model is updated in step d as follows: d1) reading a non-motion pixel point (x, y); d2) with probability 1/SUBSAMPLE_FACTOR, randomly replacing one frame of the background model sequence with the pixel value of the current pixel (x, y); d3) with probability 1/SUBSAMPLE_FACTOR, randomly replacing one frame of the background model sequence with a neighborhood pixel value of the current pixel (x, y).
Preferably, the noise removal in step f uses a water-diffusion (flood-fill) method, with the following specific steps: f1) reading a binary image; f2) performing a closing operation on the image; f3) setting the one-pixel-wide inner border of the image entirely to black, i.e. pixel value 0; f4) taking the point (x0, y0) at the top-left corner of the image as the origin and, by water diffusion, painting white (pixel value 255) the region of pixel value 0 that is connected to (x0, y0); f5) changing the pixel value of the remaining black-region pixels (x, y) in the image to 255, i.e. white; f6) outputting a black-and-white binary image.
Preferably, the background model sequence in step h is established as follows: h1) reading a gray image with gray values between 0 and 255; h2) traversing the pixels of the gray image and judging whether all of them have been traversed; if not, entering step h3, otherwise entering step h4; h3) randomly setting the pixel value at the same position in the background model sequence to the original pixel or one of its neighbors; h4) outputting the background model sequence. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
Further objects, features and advantages of the present invention will become apparent from the following description of embodiments of the invention, with reference to the accompanying drawings, in which:
FIG. 1 shows a flow chart of the multi-scene-applicable parameter-free background modeling method according to the present invention;
FIG. 2 shows a flow chart of the preliminary image processing in the multi-scene-applicable parameter-free background modeling method according to the present invention;
FIG. 3 shows a flow chart of establishing the background model sequence in the multi-scene-applicable parameter-free background modeling method according to the present invention;
FIG. 4 shows a flow chart of the motion point judging method in the multi-scene-applicable parameter-free background modeling method according to the present invention;
FIG. 5 shows a flow chart of the dynamic color threshold and learning rate updating method in the multi-scene-applicable parameter-free background modeling method according to the present invention;
FIG. 6 shows a flow chart of water-diffusion false-deletion removal in the multi-scene-applicable parameter-free background modeling method according to the present invention.
Detailed Description
The objects and functions of the present invention, and the methods for accomplishing them, will become apparent from the exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below; it can be implemented in different forms. The description is intended merely to help those skilled in the relevant art gain a comprehensive understanding of the specific details of the invention.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals denote the same or similar parts, or the same or similar steps.
To facilitate understanding and practice of the invention by those of ordinary skill in the art, the invention is described in further detail below with reference to the accompanying drawings.
FIG. 1 shows a flow chart of the multi-scene-applicable parameter-free background modeling method according to the present invention:
step 101: initializing the image and judging whether the acquired frame image is the first frame; if so, entering step 108, otherwise entering step 102;
step 102: searching for the motion points of the frame image, then entering step 103;
step 103: judging whether the model needs to be re-established; if not, entering step 104, otherwise entering step 108;
step 104: conditionally updating the model based on the results of the previous processing;
step 105: finding flicker points according to the results of the previous processing and removing them;
step 106: removing the noise points produced by the preceding processing;
step 107: smoothing the moving foreground and static background generated by the algorithm, then entering step 109;
step 108: establishing a background model sequence, then entering step 109;
step 109: outputting the result.
FIG. 2 shows a flow chart of the preliminary image processing in the multi-scene-applicable parameter-free background modeling method according to the present invention:
step 201: first performing preliminary processing on the image and inputting an image sequence;
step 202: graying each pixel point (x, y) of the RGB image, so that each pixel value lies between 0 and 255;
step 203: transforming the histogram of the gray image toward a uniform distribution, increasing the dynamic range of the gray values and thereby enhancing image contrast;
step 204: outputting a gray image whose gray value at each pixel (x, y) lies between 0 and 255.
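Steps 201-204 can be sketched as follows. The luminance weights and the exact equalization mapping are assumptions, since the patent only states that the image is grayed and its histogram is made roughly uniform:

```python
import numpy as np

def preprocess(rgb):
    """Steps 201-204 sketch: gray-scale conversion followed by histogram
    equalization, so every pixel ends up in [0, 255] with enhanced
    contrast and a roughly uniform histogram."""
    # Luminance-weighted graying (the weights are an assumption; the
    # patent only says "graying processing").
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
    # Classic histogram equalization: map each gray level through the
    # normalized cumulative histogram.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min)
                           / max(cdf[-1] - cdf_min, 1) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]
```

The lookup-table approach makes the equalization a single vectorized indexing operation per frame.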
A background model sequence is then established from the first image. FIG. 3 shows a flow chart of establishing the background model sequence in the multi-scene-applicable parameter-free background modeling method according to the present invention; the steps are shown in FIG. 3:
step 301: reading a gray scale image with the gray scale value between 0 and 255;
step 302: traversing the pixel points of the gray image from the top-left corner (start) to the bottom-right corner (end), and judging whether all pixel points of the gray image have been traversed; if not, entering step 303, otherwise entering step 304;
step 303: randomly setting the pixel value at the same position in the background model sequence to the original pixel or one of its neighbors; in detail:
The background model sequence consists of 20 images of the same size as the original image. Denoting the position being read as (x, y), the pixel value at the same (x, y) position in each of the 20 background models may be any one of its 8-neighborhood, i.e. any of (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y), (x+1, y+1); if some neighbors do not exist (at the image border), any of the remaining neighbors is used. After the traversal, a background model sequence of 20 images is formed, so that even if the image jitters slightly, the algorithm can still detect moving targets stably; this also has a large effect on the elimination of the Ghost area;
step 304: outputting the background model sequence.
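A minimal sketch of this one-frame initialization, assuming a 20-frame model and edge padding to supply "neighbors" for border pixels (the patent only says another neighbor is used when one is missing):

```python
import numpy as np

N_MODELS = 20  # the patent uses 20 frames of the same size as the input

def build_model(gray, rng=None):
    """Steps 301-304 sketch: for every position (x, y), each of the 20
    model frames stores either the pixel itself or a randomly chosen
    8-neighbor, which makes the model tolerant to small jitter."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = gray.shape
    # Edge padding so border pixels still have "neighbors" to sample.
    padded = np.pad(gray, 1, mode="edge")
    model = np.empty((N_MODELS, h, w), dtype=gray.dtype)
    ys, xs = np.mgrid[0:h, 0:w]
    for k in range(N_MODELS):
        # Random offsets in {-1, 0, 1} pick the pixel or an 8-neighbor.
        dy = rng.integers(-1, 2, (h, w))
        dx = rng.integers(-1, 2, (h, w))
        model[k] = padded[ys + 1 + dy, xs + 1 + dx]
    return model
```

Because the whole model is built from a single frame, modeling time is one frame rather than the dozens of frames other methods need.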
Motion points are then detected. FIG. 4 shows a flow chart of the motion point judging method in the multi-scene-applicable parameter-free background modeling method according to the present invention; the specific steps are shown in FIG. 4:
step 401: reading each pixel point (x, y) of the new gray image;
step 402: reading the background model sequence and the color moment R, where R denotes the difference between the gray value of pixel (x, y) in the gray image and that of the same-position point (x, y) in the background model sequence;
step 403: judging whether all pixels at position (x, y) in the background model have been read; if so, the pixel (x, y) of the gray image is a non-motion point, otherwise entering step 404;
step 404: judging whether the color moment R is greater than the color moment threshold; if so, entering step 405, otherwise returning to step 403;
step 405: judging whether the count of color moments R greater than the color moment threshold exceeds the count threshold; if so, the point is a motion point, otherwise returning to step 403.
An example follows:
When traversing to point (x, y), the pixel value at position (x, y) is differenced against the model sequence frame by frame; whenever the difference exceeds the accumulation threshold Value_thre, a counter is incremented. If the whole model has been compared and the count has not reached Frame_change_num, the point is a background point; otherwise it is a motion point. A new image m_cfg is established, in which motion points have pixel value 255 and background points 0.
Value_thre and Frame_change_num are not constant: the motion history of each position is recorded, and when motion persists too long and regions of frequent motion appear, the values of Frame_change_num and Value_thre are increased appropriately to reduce false detections. Similarly, for points that have not moved for a long time, Value_thre and Frame_change_num are reduced so that more points can be detected and the algorithm becomes more efficient.
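The per-pixel test can be sketched as below; `value_thre` and `frame_change_num` are held fixed for brevity, whereas the patent adapts them per region:

```python
import numpy as np

def detect_motion(gray, model, value_thre=20, frame_change_num=2):
    """Motion test from steps 401-405: a pixel is a motion point when
    its absolute difference to the model exceeds value_thre in at
    least frame_change_num of the model frames.  The default threshold
    values here are illustrative only."""
    # Signed difference against every model frame at once (int16 avoids
    # uint8 wrap-around).
    diff = np.abs(model.astype(np.int16) - gray.astype(np.int16))
    count = (diff > value_thre).sum(axis=0)
    # m_cfg: 255 marks a motion point, 0 a background point.
    return np.where(count >= frame_change_num, 255, 0).astype(np.uint8)
```

Vectorizing over the whole 20-frame model replaces the per-pixel early-exit loop of the flow chart but produces the same m_cfg.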
FIG. 5 shows a flow chart of the dynamic color threshold and learning rate updating method in the multi-scene-applicable parameter-free background modeling method according to the present invention. As shown in FIG. 5, the steps are: reading a non-motion pixel point (x, y); with probability 1/SUBSAMPLE_FACTOR, randomly replacing one frame of the background model sequence with the pixel value of the current pixel (x, y); and with probability 1/SUBSAMPLE_FACTOR, randomly replacing one frame of the background model sequence with a neighborhood pixel value of the current pixel (x, y).
According to one embodiment of the invention, the specific method comprises the following steps:
step 501: inputting a pixel point;
step 502: judging whether the pixel point is a motion point; if so, entering step 503a; if not, entering step 503b;
step 503a: increasing the motion point accumulated value; if the accumulated value is judged to have grown too large, entering step 504a;
step 503b: decreasing the motion point accumulated value; if the accumulated value is judged to be too small, entering step 504b;
step 504a: increasing the color moment threshold and the color moment accumulation threshold;
step 504b: decreasing the color moment threshold and the color moment accumulation threshold.
Background updating then follows this strategy: if point (x, y) is identified as a non-motion point, then with probability 1/SUBSAMPLE_FACTOR a randomly chosen frame of the background model is updated with the pixel value of the current point, and likewise with probability 1/SUBSAMPLE_FACTOR a randomly chosen frame is updated with a neighborhood pixel value of the current point, so that the background neither stays frozen nor changes too fast, and operating efficiency is improved.
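A sketch of this conservative update; the value of SUBSAMPLE_FACTOR and the edge padding for border neighbors are assumptions:

```python
import numpy as np

SUBSAMPLE_FACTOR = 16  # illustrative; the patent leaves this adjustable

def update_model(gray, model, m_cfg, rng=None):
    """Conservative update sketch: every non-motion pixel replaces, with
    probability 1/SUBSAMPLE_FACTOR, one random model frame with its own
    value, and with the same probability one random frame with a random
    8-neighbor's value (model is updated in place)."""
    rng = np.random.default_rng() if rng is None else rng
    padded = np.pad(gray, 1, mode="edge")
    for y, x in zip(*np.nonzero(m_cfg == 0)):   # non-motion points only
        if rng.integers(SUBSAMPLE_FACTOR) == 0:
            model[rng.integers(len(model)), y, x] = gray[y, x]
        if rng.integers(SUBSAMPLE_FACTOR) == 0:
            dy, dx = rng.integers(-1, 2, 2)
            model[rng.integers(len(model)), y, x] = padded[y + 1 + dy,
                                                           x + 1 + dx]
    return model
```

Updating only a random 1/SUBSAMPLE_FACTOR fraction of positions per frame is what keeps the background from drifting too quickly while still absorbing slow scene changes.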
The number of motion points is then examined: if too many motion points persist over consecutive frames, the camera position may have changed or the like, causing the background model to change substantially, and a new model sequence is established immediately; otherwise the current model sequence is kept as the background.
Elimination of the Ghost area proceeds as follows. The strategy is to update the background of the Ghost area, recording the motion state of each point with a dedicated value Ghost_value. When motion is detected, Ghost_value is increased; otherwise it is decreased, but it increases faster than it decreases. When the value reaches an adjustable threshold NUM_REFERSH, the point is judged to be a Ghost point. This alone, however, is not sufficient to update the point: the moving area must also reach a threshold; otherwise, even if the model is not re-established, such points are slowly replaced by the random background updates.
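The Ghost_value bookkeeping for a single point might look like the following sketch; the increment/decrement amounts and the threshold default are illustrative, since the patent only requires that the value rises faster than it falls and that the threshold (NUM_REFERSH) is adjustable:

```python
def update_ghost(is_motion, ghost_value, inc=2, dec=1, num_refresh=50):
    """Ghost_value bookkeeping sketch: the value rises when the point is
    in a motion state and falls otherwise, rising faster than it falls
    (inc > dec).  A point whose value reaches num_refresh is judged to
    be a Ghost point."""
    if is_motion:
        ghost_value += inc
    else:
        ghost_value = max(ghost_value - dec, 0)  # clamp at zero
    return ghost_value, ghost_value >= num_refresh
```

The asymmetric rates mean a stationary object accumulates Ghost evidence quickly but loses it slowly, so brief pauses in motion do not reset the decision.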
Flicker points, such as those produced by swaying leaves and glinting water, are then eliminated using a fluctuation-counting method: if the states of point (x, y) differ between two consecutive frames, a fluctuation counter is incremented, otherwise it is decremented. When the counter accumulates to a certain value, the point is recorded, the region is expanded by dilation, and it is removed from the moving points.
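The fluctuation counting can be sketched per frame as below; the threshold value is an assumption, and the dilation step is left out:

```python
import numpy as np

def update_flicker(prev_fg, cur_fg, counter, flicker_thre=10):
    """Fluctuation-counting sketch: the counter rises where the
    foreground state flipped between two consecutive frames and falls
    (clamped at zero) where it stayed the same; positions whose counter
    reaches the assumed threshold are flagged as flicker points, to be
    dilated and removed from the motion points."""
    changed = prev_fg != cur_fg
    counter = np.where(changed, counter + 1, np.maximum(counter - 1, 0))
    return counter, counter >= flicker_thre
```

Genuinely moving objects flip a pixel's state once or twice as they pass, while ripples and leaves flip it every few frames, which is what lets the counter separate the two.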
Removing flicker points by erosion inevitably enlarges the falsely deleted area, so some regions are filled back using a water-diffusion (flood-fill) method. FIG. 6 shows a flow chart of water-diffusion false-deletion removal in the multi-scene-applicable parameter-free background modeling method according to the present invention. The specific method is: step 601: inputting a binary image;
step 602: performing a closing operation on the image, i.e. closing the white areas, and copying the image to obtain a copy named m_flood;
step 603: setting the one-pixel-wide inner border of the m_flood image to 0, i.e. black;
step 604: taking the point (x0, y0) at the top-left corner of the image as the origin and, by water diffusion, painting white (pixel value 255) the region of pixel value 0 that is connected to (x0, y0);
step 605: comparing m_cfg with m_flood; areas that are black in both images are false deletions or missed detections, and these areas are filled white on m_cfg;
step 606: outputting a black-and-white binary image.
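Steps 601-606 can be sketched as follows; the morphological closing of step 602 is omitted and the flood fill is 4-connected, both simplifying assumptions:

```python
import numpy as np
from collections import deque

def fill_false_deletions(m_cfg):
    """Water-diffusion sketch of steps 601-606: black out the 1-pixel
    inner border of a copy (m_flood), flood-fill white from the
    top-left corner, then repaint onto m_cfg every region that stayed
    black in both images (treated as a false deletion)."""
    m_flood = m_cfg.copy()
    m_flood[0, :] = m_flood[-1, :] = 0   # step 603: black inner border
    m_flood[:, 0] = m_flood[:, -1] = 0
    h, w = m_flood.shape
    q = deque([(0, 0)])                  # step 604: "water" from corner
    m_flood[0, 0] = 255
    while q:                             # 4-connected BFS flood fill
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and m_flood[ny, nx] == 0:
                m_flood[ny, nx] = 255
                q.append((ny, nx))
    out = m_cfg.copy()                   # step 605: black in both -> white
    out[(m_cfg == 0) & (m_flood == 0)] = 255
    return out
```

After the flood, the only pixels still black in m_flood are holes enclosed by foreground, so the final comparison fills exactly those.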
Median filtering is then carried out; the above processing yields a clear black-and-white binary image, and the binary image obtained after median filtering m_cfg is clearer still.
Finally, m_cfg is merged with the original image: the white areas of m_cfg are moving areas and the black areas are static areas.
The method of this patent is applicable to different scenes without parameter adjustment; modeling time is short; image jitter and flicker points can be eliminated; the time for eliminating the Ghost area can be set freely; and it also achieves a good modeling effect for movable PTZ (dome) cameras.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (4)

1. A parameter-free background modeling method suitable for multiple scenes is characterized by comprising the following steps:
a) initializing the image, judging whether the acquired frame image is the first frame of the image, if so, entering the step h, otherwise, entering the step b;
b) searching for the moving points of the frame image, and then entering step c,
the method for searching motion points being as follows:
b1) reading each pixel point (x, y) of the new gray image;
b2) reading the background model sequence and the color moment R, where R denotes the difference between the gray value of pixel (x, y) in the gray image and that of the same-position point (x, y) in the background model sequence;
b3) judging whether all pixels at position (x, y) in the background model have been read; if so, the pixel (x, y) of the gray image is a non-motion point, otherwise entering step b4;
b4) judging whether the color moment R is greater than the color moment threshold; if so, entering step b5, otherwise returning to step b3;
b5) judging whether the count of color moments R greater than the color moment threshold exceeds the color moment accumulation threshold; if so, the point is a motion point, otherwise returning to step b3;
Establishing a new image m _ cfg, wherein the pixel value of a motion point is 255, and the pixel value of a background point is 0;
the color moment threshold value updating method comprises the following steps:
x1): inputting a pixel point;
x2): judging whether the pixel point is a motion point; if so, entering step x3); if not, entering step x4);
x3): increasing the motion point accumulated value; if the accumulated value is judged to have grown too large, entering step x5);
x4): decreasing the motion point accumulated value; if the accumulated value is judged to be too small, entering step x6);
x5): increasing the color moment threshold and the color moment accumulation threshold;
x6): decreasing the color moment threshold and the color moment accumulation threshold;
c) judging whether a model needs to be reestablished, if the model does not need to be reestablished, entering the step d, otherwise, entering the step h;
d) conditionally updating the model based on the results of the previous processing, by:
d1) reading a non-motion pixel point (x, y);
d2) with probability 1/SUBSAMPLE_FACTOR, randomly replacing one frame of the background model sequence with the pixel value of the current pixel (x, y);
d3) with probability 1/SUBSAMPLE_FACTOR, randomly replacing one frame of the background model sequence with a neighborhood pixel value of the current pixel (x, y);
e) finding flicker points according to the results of the previous processing and removing them;
recording the motion state of each point with a dedicated value Ghost_value, which starts counting after the background model sequence is re-established and is reset whenever the background model is rebuilt;
if the state is motion, Ghost_value is increased; after an adjustable threshold is reached, the point is judged to be a Ghost point, and if at that time the moving area also reaches a threshold, the background of the Ghost area is updated;
the flicker points are eliminated using a fluctuation-counting method: if the states of point (x, y) differ between two consecutive frames, a fluctuation counter is incremented, otherwise it is decremented; when the fluctuation counter accumulates to a certain value, the point is recorded, the motion area is dilated, and the point (x, y) is removed from the motion points;
f) removing the noise points produced by the preceding processing;
g) smoothing the moving foreground and the static background generated by the algorithm, and then entering the step i;
h) establishing a background model sequence, and then entering the step i;
i) outputting the result.
2. The method according to claim 1, wherein the specific method of the image initialization processing in step a is as follows:
a1) inputting a sequence of images;
a2) graying each pixel point (x, y) of the RGB image, so that each pixel value lies between 0 and 255;
a3) transforming the histogram of the gray image toward a uniform distribution, increasing the dynamic range of the gray values and thereby enhancing image contrast;
a4) outputting a gray image whose gray value at each pixel (x, y) lies between 0 and 255.
3. The method according to claim 1, wherein the denoising method in step f is a water diffusion method, and comprises the following specific steps:
f1) inputting a binary image;
f2) performing a closing operation on the image to close the white areas, and copying the image to obtain a copy named m_flood;
f3) setting the one-pixel-wide inner border of the m_flood image to 0, i.e. black;
f4) taking the point (x0, y0) at the top-left corner of the image as the origin and, by water diffusion, painting white (pixel value 255) the region of pixel value 0 that is connected to (x0, y0);
f5) comparing m_cfg with m_flood, where areas that are black in both images are false deletions or missed detections, and filling these areas white on m_cfg;
f6) outputting a black-and-white binary image.
4. The method according to claim 1, wherein the method of establishing the background model sequence in step h is:
h1) reading a gray image with gray values between 0 and 255;
h2) traversing all pixel points of the gray image and judging whether all of them have been traversed; if not, entering step h3, otherwise entering step h4;
h3) randomly setting the pixel value at the same position in the background model sequence to the original pixel or one of its neighbors;
h4) outputting the background model sequence.
CN201611095522.XA 2016-12-01 2016-12-01 Parameter-free background modeling method suitable for multiple scenes Active CN106780646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611095522.XA CN106780646B (en) 2016-12-01 2016-12-01 Parameter-free background modeling method suitable for multiple scenes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611095522.XA CN106780646B (en) 2016-12-01 2016-12-01 Parameter-free background modeling method suitable for multiple scenes

Publications (2)

Publication Number Publication Date
CN106780646A CN106780646A (en) 2017-05-31
CN106780646B true CN106780646B (en) 2020-06-16

Family

ID=58883454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611095522.XA Active CN106780646B (en) 2016-12-01 2016-12-01 Parameter-free background modeling method suitable for multiple scenes

Country Status (1)

Country Link
CN (1) CN106780646B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118535A (en) * 2018-06-15 2019-01-01 上海卫星工程研究所 A method of accurately calculating low orbit satellite windward area
TWI766218B (en) 2019-12-27 2022-06-01 財團法人工業技術研究院 Reconstruction method, reconstruction system and computing device for three-dimensional plane

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200834459A (en) * 2007-02-05 2008-08-16 Huper Lab Co Ltd Video object segmentation method applied for rainy situations
CN104952256B (en) * 2015-06-25 2017-11-07 广东工业大学 A kind of detection method of the intersection vehicle based on video information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Background Subtraction: Experiments and Improvements for ViBe; M. Van Droogenbroeck et al.; 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2012-07-16; pp. 1-6 *
An improved ViBe algorithm for fast suppression of ghosts and stationary targets; Wu Erjie et al.; Journal of Hefei University of Technology (Natural Science); January 2016; Vol. 39, No. 1; pp. 56-61 *
Research on algorithms and software implementation for airport runway foreign object detection ***; Cheng Wei; China Master's Theses Full-text Database, Engineering Science & Technology II; 2014-07-15; No. 4; pp. C031-261 *

Also Published As

Publication number Publication date
CN106780646A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
EP3332356B1 (en) Semi-automatic image segmentation
JP4668921B2 (en) Object detection in images
Van Droogenbroeck et al. ViBe: A disruptive method for background subtraction
US10896495B2 (en) Method for detecting and tracking target object, target object tracking apparatus, and computer-program product
CN111723644A (en) Method and system for detecting occlusion of surveillance video
CN105574891B (en) The method and system of moving target in detection image
CN111062974B (en) Method and system for extracting foreground target by removing ghost
Hati et al. Intensity range based background subtraction for effective object detection
CA2910965A1 (en) Tracker assisted image capture
CN110060278B (en) Method and device for detecting moving target based on background subtraction
US8995718B2 (en) System and method for low complexity change detection in a sequence of images through background estimation
CN111444854A (en) Abnormal event detection method, related device and readable storage medium
CN112561946A (en) Dynamic target detection method
CN106780646B (en) Parameter-free background modeling method suitable for multiple scenes
CN109658441B (en) Foreground detection method and device based on depth information
Gao et al. A robust technique for background subtraction in traffic video
CN107169997B (en) Background subtraction method for night environment
CN113936242B (en) Video image interference detection method, system, device and medium
Huynh-The et al. Locally statistical dual-mode background subtraction approach
CN106951831B (en) Pedestrian detection tracking method based on depth camera
JP6603123B2 (en) Animal body detection apparatus, detection method, and program
CN110580706A (en) Method and device for extracting video background model
CN109670419B (en) Pedestrian detection method based on perimeter security video monitoring system
Li et al. GMM-based efficient foreground detection with adaptive region update
Wan et al. Moving object detection based on high-speed video sequence images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant