CN110060278B - Method and device for detecting moving target based on background subtraction - Google Patents

Method and device for detecting moving target based on background subtraction

Info

Publication number
CN110060278B
CN110060278B (application CN201910324595.9A)
Authority
CN
China
Prior art keywords
pixels
background
flicker
pixel
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910324595.9A
Other languages
Chinese (zh)
Other versions
CN110060278A (en)
Inventor
贾振红
左军辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinjiang University
Original Assignee
Xinjiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinjiang University filed Critical Xinjiang University
Priority to CN201910324595.9A priority Critical patent/CN110060278B/en
Publication of CN110060278A publication Critical patent/CN110060278A/en
Application granted granted Critical
Publication of CN110060278B publication Critical patent/CN110060278B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention relates to a method and an apparatus for detecting a moving target based on background subtraction, belongs to the technical field of image recognition, and mainly addresses the technical problem that moving targets are poorly detected. The detection method comprises the following steps: constructing a visual extraction background sample model based on a visual background extraction sub-algorithm; constructing an auxiliary background sample model of the image pixels from an image of the video; analyzing the flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, and, for flicker pixels whose flicker degree satisfies a preset condition, replacing them with the samples at the corresponding positions in the auxiliary sample model to generate a flicker-analyzed image; and classifying the pixels of the flicker-analyzed image into foreground pixels and background pixels according to the visual extraction background sample model, and generating the moving target from the foreground pixels. Compared with the prior art, the method effectively suppresses the interference of dynamic pixels with moving-target detection and improves the detection effect.

Description

Method and device for detecting moving target based on background subtraction
Technical Field
The embodiment of the invention relates to the technical field of image recognition, in particular to a method and a device for detecting a moving target based on background subtraction.
Background
With the development of digital video technology, moving objects in surveillance video can be detected, tracked, identified, and analyzed. Using these techniques, effective information such as the position, trajectory, and behavior of the moving object of interest can be obtained quickly. Moving-target detection is the basis of technologies such as moving-target tracking, behavior recognition, and scene description, and its result directly affects the accuracy of subsequent algorithms. Improving the accuracy and robustness of target detection is therefore one of the main research directions in computer vision. At present, moving-object detection methods mainly include the inter-frame difference method, the background subtraction method, and the optical flow method.
The optical flow method detects targets from the brightness information of the target image; because of its high computational complexity and weak resistance to interference, it is generally not adopted.
The inter-frame difference method performs a difference operation on consecutive video frames to extract moving targets. It adapts well to background changes, but the detected targets exhibit holes, and slowly moving targets are missed.
The background subtraction method is the most widely used. It first establishes a background model and then subtracts the background model from the current frame to extract the moving target. The visual background extraction sub-algorithm (visual background extractor, abbreviated as ViBe) in background subtraction is a background modeling algorithm based on random clustering of samples; unlike other methods, it builds the background model from a single frame, which speeds up background modeling. However, in existing moving-target detection methods, dynamic background interference remains large, so the detection effect for moving targets is poor.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for detecting a moving object based on background subtraction, mainly to solve the technical problem that moving objects are poorly detected.
In order to achieve the above purpose, the embodiment of the present invention mainly provides the following technical solutions:
in one aspect, an embodiment of the present invention provides a method for detecting a moving object based on background subtraction, including:
constructing a visual extraction background sample model based on a visual background extraction sub-algorithm;
constructing an auxiliary background sample model of image pixels according to the image of the video;
analyzing the flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, and, for flicker pixels whose flicker degree satisfies a preset condition, replacing them with the samples at the corresponding positions in the auxiliary sample model, so as to generate a flicker-analyzed image;
and classifying the pixels in the flicker-analyzed image into foreground pixels and background pixels according to the visual extraction background sample model, and generating a moving target from the foreground pixels.
The objects of the embodiments of the present invention may be further achieved, and the above technical problems further solved, by the following technical measures.
Optionally, the method for detecting a moving object based on background subtraction further includes:
and carrying out background updating on pixels corresponding to the background pixels in the visual extraction background sample model.
Optionally, in the foregoing method for detecting a moving object based on background subtraction, constructing the auxiliary background sample model of the image pixels from the image of the video specifically includes:
randomly selecting p samples from the pixels in the neighborhood of each pixel of one frame of the video, the set of p samples corresponding to each pixel forming the auxiliary sample model, where p is a positive integer greater than or equal to 2.
Optionally, in the foregoing method for detecting a moving object based on background subtraction, analyzing the flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, and replacing flicker pixels whose flicker degree satisfies a preset condition with the samples at the corresponding positions in the auxiliary sample model, includes:
accumulating the number of flickers of each pixel transitioning between foreground and background;
counting whether each pixel transitions continuously between foreground and background in at least the latest three frames of the video;
if yes, judging whether the flicker times of the pixels in continuous transition are larger than preset times;
and if so, replacing the flicker pixels with the flicker times larger than the preset times with samples corresponding to the positions of the flicker pixels in the auxiliary background sample model.
Optionally, the foregoing method for detecting a moving object based on background subtraction, wherein generating the moving object according to the foreground pixel includes:
calculating an absolute value of a difference value between a pixel value of a foreground pixel and a pixel value of a pixel of the foreground pixel neighborhood;
calculating the ghost probability of the foreground pixel according to the ratio of the matching threshold of the Euclidean space of the foreground pixel to the absolute value of the difference;
and if the ghost probability is greater than a preset probability, updating the foreground pixels whose ghost probability is greater than the preset probability into background pixels.
In another aspect, an embodiment of the present invention provides a detection apparatus for a moving object based on background subtraction, including:
the first construction unit is used for constructing a visual extraction background sample model based on a visual background extraction sub-algorithm;
a second construction unit for constructing an auxiliary background sample model of image pixels from the image of the video;
a sample replacing unit, configured to analyze the flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, and, for flicker pixels whose flicker degree satisfies a preset condition, to replace them with the samples at the corresponding positions in the auxiliary sample model, so as to generate a flicker-analyzed image;
and a moving object generating unit, configured to classify the pixels in the flicker-analyzed image into foreground pixels and background pixels according to the visual extraction background sample model, and to generate a moving object from the foreground pixels.
The objects of the embodiments of the present invention may be further achieved, and the above technical problems further solved, by the following technical measures.
Optionally, the aforementioned detection device for a moving object based on background subtraction further includes:
a background updating unit, configured to perform background updating on the pixels corresponding to the background pixels in the visual extraction background sample model.
Optionally, the foregoing detection apparatus for a moving object based on background subtraction, wherein the second construction unit specifically includes:
randomly selecting p samples from the pixels in the neighborhood of each pixel of one frame of the video, the set of p samples corresponding to each pixel forming the auxiliary sample model, where p is a positive integer greater than or equal to 2.
Optionally, the foregoing detection apparatus for a moving object based on background subtraction, wherein the sample replacing unit includes:
the accumulation module is used for accumulating the flicker times of each pixel in transition between the foreground and the background;
the statistics module is used for counting whether each pixel continuously transits between the foreground and the background in at least three latest frames of images in the video image;
if yes, judging whether the flicker times of the pixels in continuous transition are larger than preset times;
and if so, replacing the flicker pixels with the flicker times larger than the preset times with samples corresponding to the positions of the flicker pixels in the auxiliary background sample model.
Optionally, the aforementioned detection device for a moving object based on background subtraction, wherein the moving object generating unit includes:
a first calculation module, configured to calculate the absolute value of the difference between the pixel value of a foreground pixel and the pixel value of a pixel in the neighborhood of the foreground pixel;
a second calculation module, configured to calculate the ghost probability of the foreground pixel according to the ratio of the matching threshold of the Euclidean space of the foreground pixel to the absolute value of the difference;
and if the ghost probability is greater than a preset probability, the foreground pixels whose ghost probability is greater than the preset probability are updated into background pixels.
By means of the above technical solutions, the method and apparatus for detecting a moving target based on background subtraction provided by the embodiments of the present invention have the following advantages:
in the technical solution provided by the embodiments of the invention, a visual extraction background sample model is constructed based on a visual background extraction sub-algorithm, and an auxiliary background sample model of the image pixels is constructed from an image of the video. The flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, is analyzed, and flicker pixels whose flicker degree satisfies a preset condition are replaced with the samples at the corresponding positions in the auxiliary sample model to generate a flicker-analyzed image, so that the interference of dynamic pixels is effectively removed. The pixels in the flicker-analyzed image are then classified into foreground pixels and background pixels according to the visual extraction background sample model, and the moving target is generated from the foreground pixels.
The foregoing is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments can be understood more clearly, they are described in detail below with reference to the accompanying drawings and preferred embodiments.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a flowchart of a method for detecting a moving object based on background subtraction according to an embodiment of the present invention;
fig. 2 is a schematic partial flow chart of a method for detecting a moving object based on background subtraction according to an embodiment of the present invention;
FIG. 3 is a flowchart of another method for detecting a moving object based on background subtraction according to an embodiment of the present invention;
fig. 4 is a schematic diagram of unit connection of a detection device of a moving object based on background subtraction according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of a detection apparatus for a moving object based on background subtraction according to an embodiment of the present invention.
Detailed Description
To further explain the technical means and effects adopted to achieve the purpose of the embodiments of the present invention, the specific implementation, structure, features, and effects of the method and apparatus for detecting a moving object based on background subtraction are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different occurrences of "one embodiment" or "an embodiment" do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Fig. 1 and fig. 2 illustrate an embodiment of the method for detecting a moving object based on background subtraction according to the present invention. Referring to fig. 1 and fig. 2, the method includes:
101, constructing a visual extraction background sample model based on a visual background extraction sub-algorithm;
Each pixel in the background sample model is represented by n background samples; v(x) denotes the pixel value of the image at pixel x in a given Euclidean space, and v_i denotes the i-th background sample value. The visual extraction background sample model M is defined as M(x) = {v_1, v_2, …, v_{n-1}, v_n}, where n is a preset value.
During background initialization, the n sample values used to initialize the background model are selected from the m-neighborhood N_G(x) of the image pixel x: M_0(x) = {v_0(y) | y ∈ N_G(x)}, where m is a preset value, usually 8; the detailed calculation can be found in the related prior-art literature.
Specifically, a visual extraction background sample model can be constructed based on a visual background extraction sub-algorithm according to a first frame image of a video.
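As an illustration of this initialization, the following Python sketch (NumPy-based; the function name, the default sample count n = 20, and the shared-offset shortcut are assumptions, not part of the patent) draws each of the n samples of every pixel from the 8-neighborhood of the first grayscale frame:

```python
import numpy as np

def init_vibe_model(first_frame, n=20):
    """ViBe-style background sample model from a single grayscale frame.

    Each pixel x receives n samples drawn at random from its 8-neighborhood,
    i.e. M0(x) = { v0(y) | y in N_G(x) }.
    """
    h, w = first_frame.shape
    padded = np.pad(first_frame, 1, mode='edge')   # so border pixels have 8 neighbors
    model = np.empty((h, w, n), dtype=first_frame.dtype)
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    for k in range(n):
        # One random neighbor offset per sample plane (shared across pixels for
        # speed; a per-pixel random draw is equally valid).
        dy, dx = offsets[np.random.randint(len(offsets))]
        model[:, :, k] = padded[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
    return model
```

A first frame read with, for example, cv2.imread(path, cv2.IMREAD_GRAYSCALE) could be passed to this function directly.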
102, constructing an auxiliary background sample model of image pixels according to the image of the video;
wherein the auxiliary background sample model is used to update noise point pixels or ghost pixels in the image.
In some embodiments, an auxiliary background sample model may be built for each pixel, specifically as follows: p samples are randomly selected from the pixels in the neighborhood of each pixel of one frame of the video, and the set of p samples corresponding to each pixel forms the auxiliary sample model, where p is a positive integer greater than or equal to 2, for example p = 6.
In other embodiments, an auxiliary background sample model may be constructed only for pixels that may be misdetected as target pixels. Such pixels may be preset for different videos, or may be determined from the misdetection results of subsequent pixel recognition; for example, if 55 pixels are identified as misdetected target pixels, an auxiliary background sample model is constructed for those 55 pixels.
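A minimal sketch of this auxiliary model in the same style (p = 6 as suggested above; the function name, the dictionary representation, and the optional `pixels` argument covering the second embodiment are assumptions):

```python
import numpy as np

def init_auxiliary_model(frame, p=6, pixels=None):
    """Auxiliary background sample model: p random neighborhood samples per pixel.

    If `pixels` is a list of (row, col) positions suspected of misdetection,
    only those positions are modelled; otherwise every pixel of the frame is.
    """
    h, w = frame.shape
    padded = np.pad(frame, 1, mode='edge')
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    coords = pixels if pixels is not None else [(r, c) for r in range(h) for c in range(w)]
    aux = {}
    for r, c in coords:
        picks = np.random.randint(0, len(offsets), size=p)
        aux[(r, c)] = np.array([padded[1 + r + offsets[i][0], 1 + c + offsets[i][1]]
                                for i in picks])
    return aux
```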
103, analyzing the flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, and, for flicker pixels whose flicker degree satisfies a preset condition, replacing them with the samples at the corresponding positions in the auxiliary sample model, so as to generate a flicker-analyzed image;
in practice, the flicker degree may be expressed in various ways, for example as a frequency. In some specific embodiments, analyzing the flicker degree and replacing the flicker pixels whose flicker degree satisfies the preset condition with the samples at the corresponding positions in the auxiliary sample model may include:
1031, accumulating the number of flickers of each pixel transitioning between foreground and background;
in this embodiment, F_i^T denotes the flicker count of pixel i at time T (the notation here stands in for the formula images of the original text). The accumulation can, for example, be performed as F_i^T = F_i^{T-1} + β whenever pixel i transitions between foreground and background at time T, and F_i^T = F_i^{T-1} otherwise, where β is a constant.
1032 counts whether each pixel transitions continuously between foreground and background in at least the last three frames of video images;
Because the flicker degree is obtained by accumulation, a pixel with a high flicker count does not necessarily belong to a complex dynamic background. A mutation attribute is therefore set for each pixel point to characterize the degree of change of the current pixel. Taking statistics over the latest three frames as an example, if a pixel is judged to be a target pixel at time T-2, a background pixel at time T-1, and again a target pixel at time T, the pixel is judged to have mutated, that is, to be in continuous transition. A mutation indicator S_i^T (the notation stands in for the formula images of the original text) can, for example, be computed from the flicker increments of the latest frames as S_i^T = (F_i^T − F_i^{T-1})·(F_i^{T-1} − F_i^{T-2}); if S_i^T equals 0, pixel i is judged not to be in continuous transition, otherwise pixel i is judged to be in continuous transition, that is, a mutation has occurred.
1033, if so, judging whether the flicker count of the continuously transitioning pixel is greater than a preset number of times;
for example, an indicator δ can be computed as δ = 1 if F_i^T > E and δ = 0 otherwise (the original formula is given as an image), where E is a preset number of times such as 30, 50, or 100. If δ equals 1, the flicker count of the continuously transitioning pixel is judged to be greater than the preset number; if δ equals 0, it is not.
1034, if so, replacing the flicker pixels whose flicker count is greater than the preset number with the samples at the corresponding positions in the auxiliary background sample model.
The flicker pixel having the flicker number greater than the preset number may be determined as a dynamic background pixel.
It will be readily appreciated that the order of steps 1032 and 1033 may be reversed, i.e.,
it is first judged whether the flicker count of a pixel is greater than the preset number; if so, it is counted whether the pixel transitions continuously between foreground and background in at least the latest three frames of the video, and if so, the flicker pixel whose flicker count is greater than the preset number is replaced with the samples at the corresponding position in the auxiliary background sample model.
Alternatively, the two conditions can be combined into a single calculation, for example the product δ·S_i^T (the original formulas are given as images); if it equals β² (i.e., 1 when β is taken to be 1), pixel i is determined to be a dynamic background pixel at time T, and otherwise a foreground pixel.
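Putting steps 1031–1034 together, a minimal array-based sketch could look as follows (all names, the mask-history representation, and the exact accumulation rule are assumptions consistent with the description above; β and E correspond to the constants already mentioned):

```python
import numpy as np

def flicker_analysis(flicker_count, fg_history, aux_model, model, beta=1.0, E=50):
    """Steps 1031-1034: accumulate per-pixel flicker counts, detect continuous
    transitions over the last three frames, and reset the background samples
    of dynamic pixels from the auxiliary model.

    flicker_count : float array (h, w), accumulated flicker count per pixel
    fg_history    : [mask_T-2, mask_T-1, mask_T], boolean foreground masks
    aux_model     : array (h, w, p), auxiliary background samples
    model         : array (h, w, n), ViBe background samples (updated in place)
    """
    m2, m1, m0 = fg_history
    # 1031: a flicker happened wherever the last two masks disagree.
    transition = m1 != m0
    flicker_count[transition] += beta

    # 1032: "mutation" = the pixel flipped in both of the last two steps.
    mutation = (m2 != m1) & (m1 != m0)

    # 1033: flicker count above the preset number E.
    dynamic = mutation & (flicker_count > E)

    # 1034: replace the samples of dynamic pixels with the auxiliary samples.
    ys, xs = np.nonzero(dynamic)
    p = aux_model.shape[2]
    for y, x in zip(ys, xs):
        model[y, x, :p] = aux_model[y, x]
    return dynamic
```

Note that `aux_model` is assumed here to be a dense (h, w, p) array; if the dictionary form from the earlier sketch is used, the replacement loop would index it by `(y, x)` instead.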
104, classifying the pixels in the flicker-analyzed image into foreground pixels and background pixels according to the visual extraction background sample model, and generating a moving target from the foreground pixels.
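The classification itself follows the usual ViBe rule: a pixel is background if at least a minimum number of its n background samples lie within a matching radius of its current value. A grayscale sketch (the function name and the defaults r = 20 and min_matches = 2 are assumptions; the description below makes the radius pixel-adaptive as r(x)):

```python
import numpy as np

def classify_pixels(frame, model, r=20, min_matches=2):
    """Return a boolean foreground mask (True = foreground).

    A pixel is background when at least `min_matches` of its background
    samples differ from the current value by less than the radius r.
    """
    diff = np.abs(model.astype(np.int32) - frame[:, :, None].astype(np.int32))
    matches = (diff < r).sum(axis=2)
    return matches < min_matches   # foreground where too few samples match
```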
In the technical solution provided by the embodiments of the invention, a visual extraction background sample model is constructed based on a visual background extraction sub-algorithm, and an auxiliary background sample model of the image pixels is constructed from an image of the video. The flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, is analyzed, and flicker pixels whose flicker degree satisfies a preset condition are replaced with the samples at the corresponding positions in the auxiliary sample model to generate a flicker-analyzed image, so that the interference of dynamic pixels is effectively removed. The pixels in the flicker-analyzed image are then classified into foreground pixels and background pixels according to the visual extraction background sample model, and the moving target is generated from the foreground pixels.
In a specific implementation, the method further includes: and 105, performing background updating on pixels corresponding to the background pixels in the visual extraction background sample model.
In practice, if the pixel p(x) is classified as a background pixel, one of the n background samples corresponding to p(x) in M(x) is randomly selected and replaced with the value of p(x), so that the visual extraction background sample model is updated.
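A sketch of this random in-place update (the function name is an assumption, as is the time-subsampling factor φ, which common ViBe implementations add but which is not stated here):

```python
import numpy as np

def update_background(frame, model, bg_mask, phi=16):
    """For each background pixel, with probability 1/phi replace one randomly
    chosen background sample at that position by the current pixel value."""
    n = model.shape[2]
    ys, xs = np.nonzero(bg_mask)
    for y, x in zip(ys, xs):
        if np.random.randint(phi) == 0:      # time subsampling (assumed)
            k = np.random.randint(n)         # random sample slot
            model[y, x, k] = frame[y, x]
```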
The visual extraction background sample model serves as the background model that is updated, while the auxiliary background sample model is used to update pixels that may be misdetected as target pixels.
Finally, the detected moving-target result is denoised using a median filter.
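This final step maps directly onto OpenCV's median filter; for instance (kernel size 3 is an assumption):

```python
import cv2
import numpy as np

# fg_mask is the boolean foreground mask produced by the detection steps above
clean = cv2.medianBlur(fg_mask.astype(np.uint8) * 255, 3)  # 3x3 median filter
```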
Fig. 3 illustrates another embodiment of the method for detecting a moving object based on background subtraction according to the present invention. Referring to fig. 3, the method includes:
201, constructing a visual extraction background sample model based on a visual background extraction sub-algorithm;
202 constructing an auxiliary background sample model of image pixels according to the image of the video;
203, analyzing the flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, and, for flicker pixels whose flicker degree satisfies a preset condition, replacing them with the samples at the corresponding positions in the auxiliary sample model, so as to generate a flicker-analyzed image;
204, classifying the pixels in the flicker-analyzed image into foreground pixels and background pixels according to the visual extraction background sample model;
205, calculating the absolute value of the difference between the pixel value of a foreground pixel and the pixel value of a pixel in the neighborhood of the foreground pixel;
the pixel value of the foreground pixel x1 is denoted v(x1), representing the value of the image at pixel x1 in a given Euclidean space, and the pixel value of a pixel x2 in the neighborhood of x1 is denoted v(x2). The absolute value of the difference between the two is denoted dist(v(x2), v(x1)).
206, calculating the ghost probability of the foreground pixel according to the ratio of the matching threshold of the Euclidean space of the foreground pixel to the absolute value of the difference; and, if the ghost probability is greater than a preset probability, updating the foreground pixels whose ghost probability is greater than the preset probability into background pixels.
In the pixel classification of step 204, the matching threshold is the radius r(x1) of the Euclidean space constructed with the pixel value v(x1) of the foreground pixel x1 as its center. A preset ghost parameter α may be introduced into the calculation of the ghost probability; α may be set between 0 and 1, for example 0.5. The ghost probability P_{x1} can, for example, be calculated as P_{x1} = α · r(x1) / dist(v(x2), v(x1)) (the original formula is given as an image). The preset probability is, for example, 0.5; if P_{x1} is greater than 0.5, the foreground pixel x1 is considered to be a ghost pixel, and x1 is updated to a background pixel.
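Under the assumed form of the ghost probability above (ratio of the matching radius r(x1) to the distance to a neighboring pixel, scaled by α; all names are placeholders), the secondary judgment could be sketched as:

```python
import numpy as np

def ghost_probability(v_x1, v_x2, r_x1, alpha=0.5):
    """P(x1) = alpha * r(x1) / dist(v(x2), v(x1)): large when the foreground
    pixel is close in value to its neighbor, which suggests a ghost."""
    dist = abs(int(v_x2) - int(v_x1))
    if dist == 0:
        return 1.0                       # identical values: treat as a ghost
    return alpha * r_x1 / dist

def remove_ghosts(frame, fg_mask, radius_map, alpha=0.5, p_thresh=0.5):
    """Reclassify foreground pixels whose ghost probability exceeds p_thresh."""
    h, w = frame.shape
    ys, xs = np.nonzero(fg_mask)
    for y, x in zip(ys, xs):
        x2 = min(x + 1, w - 1)           # one neighboring pixel (right neighbor)
        p = ghost_probability(frame[y, x], frame[y, x2], radius_map[y, x], alpha)
        if p > p_thresh:
            fg_mask[y, x] = False        # update to background pixel
    return fg_mask
```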
In the background updating step of the original visual background extraction sub-algorithm, static foreground pixels are updated into background pixels by spatial propagation, which has a high time complexity; therefore, in order to speed up ghost removal, a secondary judgment of the ghost probability of the current target pixel is introduced during the spatial propagation of pixel points.
Based on the same inventive concept, fig. 4 is a schematic diagram of some embodiments of the apparatus for detecting a moving object based on background subtraction according to the present invention. Referring to fig. 4, the apparatus includes:
a first construction unit 10 for constructing a visual extraction background sample model based on a visual background extraction sub-algorithm;
a second construction unit 20 for constructing an auxiliary background sample model of image pixels from the image of the video;
a sample replacing unit 30, configured to analyze the flicker degree, i.e. the degree to which image pixels of the video switch between foreground and background, and, for flicker pixels whose flicker degree satisfies a preset condition, to replace them with the samples at the corresponding positions in the auxiliary sample model, so as to generate a flicker-analyzed image;
and a moving object generating unit 40, configured to classify the pixels in the flicker-analyzed image into foreground pixels and background pixels according to the visual extraction background sample model, and generate a moving object according to the foreground pixels.
In a further embodiment of the foregoing, as shown in fig. 5, the apparatus further includes:
and a background updating unit 50, configured to update a background of a pixel corresponding to the background pixel in the visual extraction background sample model.
In further embodiments, the second construction unit 20 specifically includes:
p samples are randomly selected from the pixels in the neighborhood of one frame of image pixel of the video, p sample sets corresponding to each pixel form an auxiliary sample model, and p is a positive integer greater than or equal to 2.
In a further embodiment, the sample replacing unit includes:
an accumulation module 31 for accumulating the number of flashes of each pixel transitioning between foreground and background;
a statistics module 32 for counting whether each pixel continuously transitions between foreground and background in at least the last three frames of the video image;
if yes, judging whether the flicker times of the pixels in continuous transition are larger than preset times;
and if so, replacing the flicker pixels with the flicker times larger than the preset times with samples corresponding to the positions of the flicker pixels in the auxiliary background sample model.
In a further embodiment of the foregoing, the moving object generation unit includes:
a first calculation module 41, configured to calculate an absolute value of a difference between a pixel value of a foreground pixel and a pixel value of a pixel in the neighborhood of the foreground pixel;
a second calculation module 42, configured to calculate the ghost probability of the foreground pixel according to the ratio of the matching threshold of the Euclidean space of the foreground pixel to the absolute value of the difference;
and if the ghost probability is greater than a preset probability, the foreground pixels whose ghost probability is greater than the preset probability are updated into background pixels.
The invention discloses a method and an apparatus for detecting a moving target based on background subtraction. The method effectively solves the "ghosting" problem of the original algorithm and effectively suppresses the interference of the dynamic background, thereby improving both the detection effect and the accuracy of moving-target detection. The method comprises the following steps: first, a visual extraction background sample model is constructed from the first frame of the video in which moving targets are to be detected, and an auxiliary background sample model is constructed at the same time; then, a pixel-level ghost-probability judgment is added to the moving-target detection process, and the matching threshold and update rate of the pixels are adjusted adaptively; the flicker degree of the pixels of the detected video frames is analyzed, and noise pixels falsely detected as moving-target pixels are updated using the auxiliary background sample model; finally, the detection result is denoised using a median filter, so that a better detection effect is obtained.
An embodiment of the present disclosure provides a storage medium including a stored program, wherein the program, when executed, controls an apparatus in which the storage medium is located to perform the detection method of a moving object based on background subtraction of the above embodiment.
The storage medium may include a volatile memory, a random access memory (RAM), and/or a nonvolatile memory among computer-readable media, such as a read-only memory (ROM) or a flash memory (flash RAM); the memory includes at least one memory chip.
Embodiments of the present disclosure provide a detection apparatus of a moving object based on background subtraction, the apparatus including a storage medium; and one or more processors coupled to the storage medium, the processors configured to execute the program instructions stored in the storage medium; the program instructions execute the method for detecting a moving object based on background subtraction according to the above embodiment when running.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, embodiments of the present disclosure may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, embodiments of the present disclosure may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (8)

1. A method for detecting a moving object based on background subtraction, comprising:
constructing a visual extraction background sample model based on a visual background extraction sub-algorithm;
constructing an auxiliary background sample model of image pixels according to the image of the video;
analyzing the flicker degree of the conversion degree of the image pixels of the video between the foreground and the background, and replacing the flicker pixels with samples corresponding to the positions of the flicker pixels in the auxiliary sample model for the flicker pixels with the flicker degree meeting the preset conditions to generate a flicker-analyzed image;
classifying foreground pixels and background pixels in the image after flicker analysis according to the vision extraction background sample model, and generating a moving target according to the foreground pixels;
generating a moving object from the foreground pixels, comprising:
calculating an absolute value of a difference value between a pixel value of a foreground pixel and a pixel value of a pixel of the foreground pixel neighborhood;
calculating the ghost probability of the foreground pixel according to the ratio of the matching threshold of the Euclidean space of the foreground pixel to the absolute value of the difference;
if the ghost probability is greater than a preset probability, updating the foreground pixels whose ghost probability is greater than the preset probability into background pixels;
the analyzing the flicker degree of the conversion degree of the image pixels of the video between the foreground and the background, for the flicker pixels with the flicker degree meeting the preset condition, replacing the samples corresponding to the position of the flicker pixels in the auxiliary sample model, including:
accumulating the flicker times of each pixel in transition between the foreground and the background;
counting whether each pixel continuously transits between a foreground and a background in at least three latest frames of images in the video image;
if yes, judging whether the flicker times of the pixels in continuous transition are larger than preset times;
and if so, replacing the flicker pixels with the flicker times larger than the preset times with samples corresponding to the positions of the flicker pixels in the auxiliary background sample model.
2. The method of detecting according to claim 1, further comprising:
and carrying out background updating on pixels corresponding to the background pixels in the visual extraction background sample model.
3. The method according to claim 1, wherein
constructing the auxiliary background sample model of the image pixels from the image of the video specifically comprises:
randomly selecting p samples from the pixels in the neighborhood of each pixel of one frame of the video, the set of p samples corresponding to each pixel forming the auxiliary sample model, where p is a positive integer greater than or equal to 2.
4. A detection apparatus for a moving object based on background subtraction, comprising:
the first construction unit is used for constructing a visual extraction background sample model based on a visual background extraction sub-algorithm;
a second construction unit for constructing an auxiliary background sample model of image pixels from the image of the video;
the sample replacing unit is used for analyzing the flicker degree of the conversion degree of the image pixels of the video between the foreground and the background, and for the flicker pixels with the flicker degree meeting the preset condition, adopting the sample replacement corresponding to the position of the flicker pixels in the auxiliary sample model to generate a flicker analyzed image;
the moving target generation unit is used for classifying foreground pixels and background pixels in the flicker analysis image according to the vision extraction background sample model and generating a moving target according to the foreground pixels;
the moving object generation unit includes:
a first calculation module for calculating the absolute value of the difference between the pixel value of a foreground pixel and the pixel value of a pixel in the neighborhood of the foreground pixel;
a second calculation module for calculating the ghost probability of the foreground pixel according to the ratio of the matching threshold of the Euclidean space of the foreground pixel to the absolute value of the difference;
if the ghost probability is greater than a preset probability, the foreground pixels whose ghost probability is greater than the preset probability are updated into background pixels;
the sample replacement unit includes:
the accumulation module is used for accumulating the flicker times of each pixel in transition between the foreground and the background;
the statistics module is used for counting whether each pixel continuously transits between the foreground and the background in at least three latest frames of images in the video image;
if yes, judging whether the flicker times of the pixels in continuous transition are larger than preset times;
and if so, replacing the flicker pixels with the flicker times larger than the preset times with samples corresponding to the positions of the flicker pixels in the auxiliary background sample model.
5. The detection apparatus according to claim 4, further comprising:
and the background updating unit is used for updating the background of the pixels corresponding to the background pixels in the visual extraction background sample model.
6. The detecting device according to claim 4, wherein,
the second construction unit specifically comprises: p samples are randomly selected from the pixels in the neighborhood of one frame of image pixel of the video, p sample sets corresponding to each pixel form an auxiliary sample model, and p is a positive integer greater than or equal to 2.
7. A storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the background subtraction-based moving object detection method of any one of claims 1 to 3.
8. A detection device of a moving object based on background subtraction, characterized in that the device comprises a storage medium; and one or more processors coupled to the storage medium, the processors configured to execute the program instructions stored in the storage medium; the program instructions, when executed, perform the background subtraction-based moving object detection method of any one of claims 1 to 3.
CN201910324595.9A 2019-04-22 2019-04-22 Method and device for detecting moving target based on background subtraction Active CN110060278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910324595.9A CN110060278B (en) 2019-04-22 2019-04-22 Method and device for detecting moving target based on background subtraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910324595.9A CN110060278B (en) 2019-04-22 2019-04-22 Method and device for detecting moving target based on background subtraction

Publications (2)

Publication Number Publication Date
CN110060278A CN110060278A (en) 2019-07-26
CN110060278B true CN110060278B (en) 2023-05-12

Family

ID=67320313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910324595.9A Active CN110060278B (en) 2019-04-22 2019-04-22 Method and device for detecting moving target based on background subtraction

Country Status (1)

Country Link
CN (1) CN110060278B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062974B (en) * 2019-11-27 2022-02-01 中国电力科学研究院有限公司 Method and system for extracting foreground target by removing ghost
CN113129331B (en) * 2019-12-31 2024-01-30 中移(成都)信息通信科技有限公司 Target movement track detection method, device, equipment and computer storage medium
CN111666881B (en) * 2020-06-08 2023-04-28 成都大熊猫繁育研究基地 Giant panda pacing, bamboo eating and estrus behavior tracking analysis method
CN112184755A (en) * 2020-09-29 2021-01-05 国网上海市电力公司 Inspection process monitoring method for transformer substation unmanned inspection system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392468A (en) * 2014-11-21 2015-03-04 南京理工大学 Improved visual background extraction based movement target detection method
CN105279771A (en) * 2015-10-23 2016-01-27 中国科学院自动化研究所 Method for detecting moving object on basis of online dynamic background modeling in video
CN105894534A (en) * 2016-03-25 2016-08-24 中国传媒大学 ViBe-based improved moving target detection method
CN107169991A (en) * 2017-05-11 2017-09-15 南宁市正祥科技有限公司 A kind of moving target detecting method of multilayer background model
CN107169997A (en) * 2017-05-31 2017-09-15 上海大学 Background subtraction algorithm under towards night-environment
CN108038866A (en) * 2017-12-22 2018-05-15 湖南源信光电科技股份有限公司 A kind of moving target detecting method based on Vibe and disparity map Background difference
CN108537821A (en) * 2018-04-18 2018-09-14 电子科技大学 A kind of moving target detecting method based on video
CN108961293A (en) * 2018-06-04 2018-12-07 国光电器股份有限公司 A kind of method, apparatus of background subtraction, equipment and storage medium
CN109035296A (en) * 2018-06-28 2018-12-18 西安理工大学 A kind of improved moving objects in video detection method

Also Published As

Publication number Publication date
CN110060278A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
CN110060278B (en) Method and device for detecting moving target based on background subtraction
US9767570B2 (en) Systems and methods for computer vision background estimation using foreground-aware statistical models
US9852511B2 (en) Systems and methods for tracking and detecting a target object
US10474921B2 (en) Tracker assisted image capture
EP1995691B1 (en) Method and apparatus for segmenting a motion area
CN110599523A (en) ViBe ghost suppression method fused with interframe difference method
CN111062974B (en) Method and system for extracting foreground target by removing ghost
CN111260684A (en) Foreground pixel extraction method and system based on combination of frame difference method and background difference method
CN106327488B (en) Self-adaptive foreground detection method and detection device thereof
JP2009147911A (en) Video data compression preprocessing method, video data compression method employing the same and video data compression system
JP2012073684A (en) Image recognition method, apparatus and program
CN110728700B (en) Moving target tracking method and device, computer equipment and storage medium
CN108876807B (en) Real-time satellite-borne satellite image moving object detection tracking method
CN107301655B (en) Video moving target detection method based on background modeling
JP2014110020A (en) Image processor, image processing method and image processing program
Geng et al. Real time foreground-background segmentation using two-layer codebook model
CN106780646B (en) Parameter-free background modeling method suitable for multiple scenes
CN113936242B (en) Video image interference detection method, system, device and medium
CN109949337A (en) Moving target detecting method and device based on Gaussian mixture model-universal background model
CN110580706A (en) Method and device for extracting video background model
Kushwaha et al. Automatic moving object segmentation methods under varying illumination conditions for video data: comparative study, and an improved method
CN113657218A (en) Video object detection method and device capable of reducing redundant data
CN108010054B (en) Method and system for extracting moving target of video image of segmented Gaussian mixture model
CN112149683A (en) Method and device for detecting living objects in night vision environment
CN115330834B (en) Moving object detection method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant