CN104680521B - An improved background modeling and foreground detection method - Google Patents

An improved background modeling and foreground detection method

Info

Publication number
CN104680521B
CN104680521B
Authority
CN
China
Prior art keywords
background
pixel
point
dot
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510065105.XA
Other languages
Chinese (zh)
Other versions
CN104680521A (en)
Inventor
樊滨温
刘晓炯
王明江
曲中鑫
卢婷舒
刘明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN201510065105.XA priority Critical patent/CN104680521B/en
Publication of CN104680521A publication Critical patent/CN104680521A/en
Application granted granted Critical
Publication of CN104680521B publication Critical patent/CN104680521B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention proposes an improved background modeling and foreground detection method. A background model is first established; each pixel of the current frame is then judged to belong to either the background or the foreground; the background set is then updated; finally, every image frame is processed pixel by pixel with the above flow to obtain a binary image in which foreground and background are separated, so that moving foreground objects can be segmented from surveillance video. The method of the present invention overcomes several shortcomings of conventional object detection methods and improves detection adaptability, stability and real-time performance.

Description

An improved background modeling and foreground detection method
Technical field
The present invention relates to the technical field of intelligent image detection and recognition, and more particularly to a background modeling and foreground object detection method.
Background technology
In intelligent surveillance, moving object detection is of great significance to people's daily production and life; its purpose is to segment moving foreground objects from the background. In practical situations, however, the monitored environment is full of uncertain factors, and the background is easily affected by illumination changes, dynamic backgrounds, shadows, partial occlusion and similar influences. These unfavorable factors pose great challenges to subsequent moving object detection and recognition. It therefore becomes particularly important to analyze and model the various complex situations, to reduce as far as possible the adverse effects of complex environmental factors on the detection and recognition results, to reduce misjudgments, and to improve the adaptability and robustness of the algorithm.
In the field of foreground detection, the mainstream algorithms are the frame difference method, the optical flow method and the background subtraction method. The frame difference method has good real-time performance: it does not accumulate a background, updates quickly, and is simple with a small computational cost. Its shortcoming is that it is rather sensitive to environmental noise, and the choice of threshold is critical: too low a threshold cannot suppress the noise in the image, while too high a threshold ignores useful changes in the image. Holes may also appear inside the target, so the moving target cannot be extracted completely. The optical flow method can detect targets without obtaining scene information in advance, but factors such as noise, multiple light sources, shadows and occlusion affect the convergence of the optical-flow computation; moreover, the optical flow method is computationally complex and can hardly achieve real-time processing. The background subtraction method requires a background image: the reference background image is subtracted from the current image, and the background image must be updated in real time as the illumination or external environment changes. The key to the background subtraction method is therefore background modeling and its updating, and the quality of the segmentation result depends largely on the chosen background modeling method.
Because of the complex and changeable nature of real environments, many traditional background modeling methods have their limitations, and many methods for building background models that have proved to be robust have therefore been proposed. These methods not only greatly improve the detection effect, but also improve the adaptability, stability and computation speed of the algorithms.
Summary of the invention
In order to solve the problems in the prior art, the present invention provides an improved background modeling and foreground detection method which segments moving foreground objects from surveillance video. The method overcomes several shortcomings of conventional object detection methods and improves detection adaptability, stability and real-time performance.
The present invention is achieved through the following technical solutions:
An improved background modeling and foreground detection method, characterized in that the method comprises the following steps:
Step A1: Select the first N frames and compute the mean I_N(x, y) of each pixel position. Establish a respective background set for each pixel; the elements of a pixel's background set are formed by randomly selecting from the pixel values at the same position in the first N frames, and each background set contains M elements.
Step A2: Judge whether each pixel of the current frame belongs to the background or the foreground. If the number of elements in the current pixel's background set whose similarity to the pixel is less than a given threshold th1 is greater than T1, the point is judged to be a background point; otherwise it is judged to be a foreground point.
Step A3: If a pixel is detected as a foreground point, its background set is not updated; if it is detected as a background point, the background set of the pixel at that position is updated.
Further, step A3 specifically includes the following steps:
Step B1: If a point is judged to be a background point, first determine how quickly the pixel changes, that is, find the pixels at that position that were judged to be background points in the K consecutive frames before the current frame, with K < N, and compute their differences from the mean I_N(x, y). If the number of points whose absolute difference exceeds a given threshold th2 is greater than T2, the current background point is judged to belong to a fast-changing background; otherwise it belongs to a slowly changing background.
Step B2: If the pixel detected as a background point belongs to a slowly changing background, one randomly selected element of its background set is replaced with the pixel value; if it belongs to a fast-changing background, m randomly selected elements of its background set are replaced with the pixel value.
Brief description of the drawings
Fig. 1 is the flow chart of the background modeling and foreground detection method of the present invention.
Embodiment
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention and not to limit it.
Fig. 1 is the flowchart of the background modeling and foreground detection method of the present invention.
A background model is first established: select the first N frames and establish a respective background set for each pixel. The elements of a pixel's background set are formed by randomly selecting from the pixel values at the same position in the first N frames, and each background set contains M elements.
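For illustration only, a minimal sketch of this initialization step in Python with NumPy could look as follows; the function and parameter names, the default M, and the use of sampling with replacement are assumptions of this sketch, not specifications of the patent.
```python
import numpy as np

def init_background_model(first_n_frames, M=20, seed=0):
    """Sketch of step A1: build the per-pixel mean I_N(x, y) and a background
    set of M samples per pixel from the first N frames."""
    rng = np.random.default_rng(seed)
    first_n_frames = np.asarray(first_n_frames)                # (N, H, W, C)
    N, H, W, C = first_n_frames.shape
    mean_I = first_n_frames.astype(np.float32).mean(axis=0)    # I_N(x, y)
    # For every pixel position, draw M frame indices at random (with
    # replacement here) and take the pixel values at that position.
    idx = rng.integers(0, N, size=(M, H, W))
    bg_set = np.take_along_axis(first_n_frames, idx[..., None], axis=0)
    return mean_I, bg_set                                      # bg_set: (M, H, W, C)
```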
Then each pixel of the current frame is judged to belong to the background or the foreground: first the similarity between the pixel and the elements of its background set is computed, i.e. the Euclidean distance between pixels; for example, the Euclidean distance between two points (r1, g1, b1) and (r2, g2, b2) is sqrt((r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2). If the number of elements in its background set whose Euclidean distance to the pixel is less than a given threshold th1 is greater than T1, the point is judged to be a background point; otherwise it is judged to be a foreground point.
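A minimal sketch of this classification step, using the Euclidean distance above with illustrative default values for th1 and T1 (the patent does not fix these values), could look as follows:
```python
import numpy as np

def classify_pixels(frame, bg_set, th1=30.0, T1=2):
    """Sketch of step A2: a pixel is background if more than T1 elements of
    its background set lie within Euclidean distance th1 of its current value.
    Returns a boolean mask that is True at foreground pixels."""
    diff = bg_set.astype(np.float32) - frame.astype(np.float32)  # (M, H, W, C)
    dist = np.sqrt((diff ** 2).sum(axis=-1))                     # (M, H, W)
    close_count = (dist < th1).sum(axis=0)                       # (H, W)
    return ~(close_count > T1)                                   # foreground mask
```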
Then the background set is updated: if the current pixel is judged to be a background point, it is further determined whether the background at that point changes quickly or slowly. If it belongs to a slowly changing background, one randomly selected element of its background set is replaced with the pixel value; if it belongs to a fast-changing background, m randomly selected elements of its background set are replaced with the pixel value.
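The following sketch illustrates, for a single pixel position, one possible realization of the fast/slow decision of step B1 and the replacement rule of step B2; the per-sample distance measure (Euclidean norm) and the default values of th2, T2 and m are assumptions of this sketch rather than requirements of the patent.
```python
import numpy as np

def update_background_set(bg_set_pixel, pixel_value, recent_bg_values,
                          mean_value, th2=20.0, T2=5, m=3, seed=None):
    """Sketch of steps B1/B2 for one pixel already classified as background.

    bg_set_pixel:     (M, C) background samples at this position.
    pixel_value:      (C,)  current pixel value.
    recent_bg_values: (K', C) values at this position judged as background in
                      the K consecutive frames before the current frame.
    mean_value:       (C,)  the per-position mean I_N(x, y).
    """
    rng = np.random.default_rng(seed)
    # Step B1: count recent background values whose distance to the mean
    # exceeds th2; if more than T2 do, the background changes quickly.
    dev = recent_bg_values.astype(np.float32) - mean_value.astype(np.float32)
    dist_to_mean = np.linalg.norm(dev, axis=-1)
    fast_changing = (dist_to_mean > th2).sum() > T2
    # Step B2: replace m random samples if fast-changing, otherwise one.
    n_replace = m if fast_changing else 1
    idx = rng.choice(len(bg_set_pixel), size=n_replace, replace=False)
    bg_set_pixel[idx] = pixel_value
    return bg_set_pixel
```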
Every image frame is judged pixel by pixel using the background modeling and foreground detection flow of the present invention shown in Fig. 1, finally giving a binary image in which foreground and background are separated.
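Tying the pieces together, a simplified end-to-end loop reusing the helper sketches above might look like the following; for brevity every background pixel here refreshes a single random sample of its background set, so the fast/slow refinement of steps B1 and B2 is left to the per-pixel sketch shown earlier.
```python
import numpy as np

def foreground_masks(frames, N=25, M=20, th1=30.0, T1=2, seed=0):
    """Simplified per-frame loop: initialize the model from the first N frames,
    then classify each later frame and yield a binary foreground image."""
    rng = np.random.default_rng(seed)
    frames = np.asarray(frames)                       # (num_frames, H, W, C)
    # mean_I would feed the fast/slow check of step B1 in the full method.
    mean_I, bg_set = init_background_model(frames[:N], M=M, seed=seed)
    H, W = frames.shape[1:3]
    for frame in frames[N:]:
        fg = classify_pixels(frame, bg_set, th1=th1, T1=T1)   # (H, W) bool
        # Conservative update: at background pixels only, overwrite one
        # randomly chosen background-set sample with the current pixel value.
        slot = rng.integers(0, M, size=(H, W))
        rows, cols = np.nonzero(~fg)
        bg_set[slot[rows, cols], rows, cols] = frame[rows, cols]
        yield fg.astype(np.uint8) * 255               # binary foreground mask
```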
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention cannot be considered to be limited to these descriptions. For a person of ordinary skill in the technical field to which the present invention belongs, several simple deductions or substitutions may also be made without departing from the concept of the present invention, and all of these should be regarded as falling within the protection scope of the present invention.

Claims (1)

1. An improved background modeling and foreground detection method, characterized in that the method comprises the following steps:
Step A1: Select the first N frames and compute the mean I_N(x, y) of each pixel position; establish a respective background set for each pixel, the elements of a pixel's background set being formed by randomly selecting from the pixel values at the same position in the first N frames, each background set containing M elements;
Step A2: Judge whether each pixel of the current frame belongs to the background or the foreground; if the number of elements in the current pixel's background set whose similarity to the pixel is less than a given threshold th1 is greater than T1, the point is judged to be a background point, otherwise it is judged to be a foreground point;
Step A3: If a pixel is detected as a foreground point, its background set is not updated; if it is detected as a background point, the background set of the pixel at that position is updated, which specifically includes:
Step B1: If a point is judged to be a background point, first determine how quickly the pixel changes, that is, find the pixels at that position that were judged to be background points in the K consecutive frames before the current frame, with K < N, and compute their differences from the mean I_N(x, y); if the number of points whose absolute difference exceeds a given threshold th2 is greater than T2, the current background point is judged to belong to a fast-changing background, otherwise it belongs to a slowly changing background;
Step B2: If the pixel detected as a background point belongs to a slowly changing background, one randomly selected element of its background set is replaced with the pixel value; if it belongs to a fast-changing background, m randomly selected elements of its background set are replaced with the pixel value; the value of m is 3 to 5.
CN201510065105.XA 2015-02-06 2015-02-06 An improved background modeling and foreground detection method Expired - Fee Related CN104680521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510065105.XA CN104680521B (en) 2015-02-06 2015-02-06 An improved background modeling and foreground detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510065105.XA CN104680521B (en) 2015-02-06 2015-02-06 An improved background modeling and foreground detection method

Publications (2)

Publication Number Publication Date
CN104680521A CN104680521A (en) 2015-06-03
CN104680521B true CN104680521B (en) 2018-04-06

Family

ID=53315514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510065105.XA Expired - Fee Related CN104680521B (en) 2015-02-06 2015-02-06 An improved background modeling and foreground detection method

Country Status (1)

Country Link
CN (1) CN104680521B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6754610B2 (en) * 2016-05-18 2020-09-16 株式会社デンソーアイティーラボラトリ Arithmetic processing unit, arithmetic processing method, and program
CN106157332A (en) * 2016-07-07 2016-11-23 合肥工业大学 A motion detection optimization method based on the ViBe algorithm
CN106548488B (en) * 2016-10-25 2019-02-15 电子科技大学 A foreground detection method based on a background model and inter-frame difference
CN108537799B (en) * 2018-03-21 2021-03-23 广西师范大学 Color sampling method based on pixel type and weight statistics
CN111209771A (en) * 2018-11-21 2020-05-29 晶睿通讯股份有限公司 Neural network identification efficiency improving method and relevant identification efficiency improving device thereof
CN110864412B (en) * 2019-08-12 2021-02-12 珠海格力电器股份有限公司 Air conditioner control method and system
CN112669294B (en) * 2020-12-30 2024-04-02 深圳云天励飞技术股份有限公司 Camera shielding detection method and device, electronic equipment and storage medium
CN113850133A (en) * 2021-08-24 2021-12-28 中国船舶重工集团公司第七0九研究所 Ship line-crossing detection method and system for ship lock video monitoring

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
CN101777186A (en) * 2010-01-13 2010-07-14 西安理工大学 Multimodality automatic updating and replacing background modeling method
CN103971386A (en) * 2014-05-30 2014-08-06 南京大学 Method for foreground detection in dynamic background scenario

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2015252B1 (en) * 2007-07-08 2010-02-17 Université de Liège Visual background extractor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
CN101777186A (en) * 2010-01-13 2010-07-14 西安理工大学 Multimodality automatic updating and replacing background modeling method
CN103971386A (en) * 2014-05-30 2014-08-06 南京大学 Method for foreground detection in dynamic background scenario

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Moving object detection algorithm based on real-time video; 邱祯艳; China Masters' Theses Full-text Database, Information Science and Technology; 2014-02-15; pp. 14-16 and 22 *

Also Published As

Publication number Publication date
CN104680521A (en) 2015-06-03

Similar Documents

Publication Publication Date Title
CN104680521B (en) An improved background modeling and foreground detection method
CN107085714B (en) Forest fire detection method based on video
CN103971386B (en) A foreground detection method under dynamic background scenes
CN106548488B (en) A foreground detection method based on a background model and inter-frame difference
CN106780565B (en) Multi-student sitting-up detection method based on optical flow and k-means clustering
JP2013196684A (en) Object counting method and object counting device
CN104318266B (en) An intelligent image analysis and processing method for early warning
CN101738394A (en) Method and system for detecting indoor smog
CN102609724B (en) Method for prompting ambient environment information by using two cameras
CN105243356B (en) Method and device for establishing a pedestrian detection model, and pedestrian detection method
WO2015024257A8 (en) Unstructured road boundary detection
TWI382360B (en) Object detection method and device thereof
CN103310444A (en) Method of monitoring pedestrians and counting based on overhead camera
Wang et al. A new fire detection method based on flame color dispersion and similarity in consecutive frames
CN104835145A (en) Foreground detection method based on self-adaptive Codebook background model
CN111160203A (en) Loitering and lingering behavior analysis method based on head and shoulder model and IOU tracking
CN105469054B (en) Model building method for normal behaviour and detection method for abnormal behaviour
Vlaminck et al. Obstacle detection for pedestrians with a visual impairment based on 3D imaging
WO2017193679A1 (en) Method for automatically detecting whether bicycle has fallen onto the ground
CN109711256A (en) A UAV target detection method for low-altitude complex backgrounds
CN103646242B (en) Extended target tracking based on maximum stable extremal region feature
CN105184308A (en) Remote sensing image building detection and classification method based on global optimization decision
CN108737785B (en) Indoor automatic detection system that tumbles based on TOF 3D camera
CN107729811B (en) Night flame detection method based on scene modeling
Huang et al. A method for fast fall detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180406

Termination date: 20210206

CF01 Termination of patent right due to non-payment of annual fee