CN107657626B - Method and device for detecting a moving target

Publication number: CN107657626B (granted patent); application number: CN201610598717.XA; earlier publication: CN107657626A
Authority: CN (China)
Legal status: Active (granted)
Original language: Chinese (zh)
Inventors: 陈艳良, 祝中科
Assignee / applicant: Zhejiang Uniview Technologies Co., Ltd.

Classifications

    • G06T 2207/30232 — Surveillance (G: physics; G06: computing; G06T: image data processing or generation, in general; G06T 2207/00: indexing scheme for image analysis or image enhancement; G06T 2207/30: subject or context of image processing)

Abstract

The invention provides a method and a device for detecting a moving target. The method comprises the following steps: extracting a first image from a monitoring video, wherein the first image comprises a foreground image; performing an opening operation on the first image to obtain a second image, and acquiring a first target frame set of the current frame from the second image, wherein the first target frame set comprises a plurality of first target frames; performing a closing operation on the first image to obtain a third image, and acquiring a second target frame set of the current frame from the third image, wherein the second target frame set comprises a plurality of second target frames; obtaining a foreground frame set of the current frame by using the first target frame set and the second target frame set; and detecting a moving target by using the foreground frame set. With this technical scheme, whether a moving target crosses a tripwire and whether it enters or leaves a forbidden zone can be detected accurately, improving the accuracy of detecting and maintaining moving targets.

Description

Method and device for detecting a moving target
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and an apparatus for detecting a moving object.
Background
In recent years, with the rapid development of computer, network, and image processing and transmission technologies, video monitoring systems have become increasingly widespread and are gradually moving toward high definition and intelligence. They are applied in numerous fields such as intelligent transportation, smart parks, and safe cities.
In a video monitoring system, the monitoring video can be analyzed with computer vision technology to automatically detect anomalies and raise alarms in real time. This replaces manual fixed-point inspection at regular intervals and long sessions of manually watching the monitoring video, avoiding missed inspections and the fatigue caused by prolonged viewing, and it has a huge application prospect in the security protection of public and private places.
At present, technologies for real-time analysis of surveillance videos have appeared, such as setting a tripwire or a region of interest in the surveillance video to replace a manual observation point, and automatically outputting an alarm when a certain condition is satisfied, for example in area intrusion detection, pedestrian anomaly detection, local pedestrian flow statistics, and local crowd density detection.
Setting a tripwire means: a straight line or a curve is drawn in the monitoring video to take over the function of an access barrier; when a moving object crosses it from either direction, an alarm can be output automatically. Setting a region of interest means: an area is drawn in the monitoring video as a forbidden zone, or the area of the whole monitoring video is taken as the forbidden zone; when a moving object enters or leaves the forbidden zone, an alarm can be output automatically.
At present, no adequate solution exists: the existing solutions suffer from defects such as large detection error.
Disclosure of Invention
The invention provides a method for detecting a moving target, which comprises the following steps:
extracting a first image from a monitoring video, wherein the first image comprises a foreground image;
performing an opening operation on the first image to obtain a second image, and acquiring a first target frame set of the current frame from the second image, wherein the first target frame set comprises a plurality of first target frames;
performing a closing operation on the first image to obtain a third image, and obtaining a second target frame set of the current frame from the third image, wherein the second target frame set comprises a plurality of second target frames;
obtaining a foreground frame set of the current frame by using the first target frame set and the second target frame set;
and detecting a moving target by using the foreground frame set.
The process of performing an opening operation on the first image to obtain a second image specifically includes:
performing an erosion operation on the first image with a first erosion template to obtain an eroded image;
performing a dilation operation on the eroded image with a first dilation template to obtain the second image;
the process of performing a closing operation on the first image to obtain a third image specifically includes:
performing a dilation operation on the first image with a second dilation template to obtain a dilated image;
and performing an erosion operation on the dilated image with a second erosion template to obtain the third image.
The second dilation template and the second erosion template are structuring-element matrices, shown as images in the original patent publication.
The process of obtaining the foreground frame set of the current frame by using the first target frame set and the second target frame set specifically includes: obtaining the foreground frame set of the previous frame, and obtaining the foreground frame set of the current frame by using the first target frame set, the second target frame set, and the foreground frame set of the previous frame.
The process of obtaining the foreground frame set of the current frame by using the first target frame set, the second target frame set, and the foreground frame set of the previous frame specifically includes: executing the following processing for each second target frame in the second target frame set to obtain a foreground frame set of the current frame;
counting the sum of the pixel numbers of all first target frames falling in the second target frame in the first target frame set to obtain the pixel number corresponding to the second target frame; acquiring the area corresponding to the second target frame; acquiring the shortest distance between the second target frame and the foreground frame set of the previous frame;
judging whether the second target frame meets a selection condition by using the number of pixels, the area, and the shortest distance corresponding to the second target frame; if yes, adding the second target frame to the foreground frame set of the current frame; and if not, refraining from adding the second target frame to the foreground frame set of the current frame.
Judging whether the second target frame meets the selection condition by using the number of pixels, the area, and the shortest distance corresponding to the second target frame specifically includes:
if (the number of pixels − the preset first value × the area) − the preset second value × the shortest distance is greater than or equal to a preset third value, and the area is greater than a preset fourth value, determining that the second target frame meets the selection condition; otherwise, determining that the second target frame does not meet the selection condition. The preset first value, second value, third value, and fourth value are all greater than 0.
Based on the tracking frame corresponding to the moving target, the process of detecting the moving target by using the foreground frame set specifically includes: counting, for each foreground frame in the foreground frame set, the number of corner points of the tracking frame that fall within that foreground frame; if no foreground frame contains any corner point of the tracking frame, determining that the moving target is lost; and if one or more foreground frames contain corner points of the tracking frame, obtaining the foreground frame containing the largest number of corner points, determining the obtained foreground frame as the foreground frame matching the moving target, and updating the obtained foreground frame as the tracking frame corresponding to the moving target.
Based on tracking frames corresponding to multiple moving targets, the process of updating the obtained foreground frame as the tracking frame corresponding to the moving target specifically includes:
if the coordinates of the foreground frame matched by the moving target differ from the coordinates of the foreground frames matched by the other moving targets, updating the obtained foreground frame as the tracking frame corresponding to the moving target;
after determining the obtained foreground frame as the foreground frame matching the moving target, the method further comprises: if the coordinates of the foreground frame matched by the moving target are the same as the coordinates of the foreground frame matched by another moving target, obtaining the centroid of all corner points in the tracking frame currently corresponding to the moving target, and updating the tracking frame corresponding to the moving target with the centroid as its center.
After detecting a moving target by using the foreground frame set, the method further comprises:
detecting, by using the detected position of the moving target, whether the moving target crosses a tripwire; and/or detecting whether the moving target enters or leaves a forbidden zone.
The invention provides a detection device of a moving target, which specifically comprises:
an extraction module, configured to extract a first image from a monitoring video, wherein the first image comprises a foreground image;
a first obtaining module, configured to perform an open operation on the first image to obtain a second image, and obtain a first target frame set of a current frame from the second image, where the first target frame set includes a plurality of first target frames;
a second obtaining module, configured to perform a close operation on the first image to obtain a third image, and obtain a second target frame set of the current frame from the third image, where the second target frame set includes a plurality of second target frames;
a third obtaining module, configured to obtain a foreground frame set of the current frame by using the first target frame set and the second target frame set;
and the detection module is used for detecting the moving target by utilizing the foreground frame set.
The first obtaining module is specifically configured to, in the process of performing an opening operation on the first image to obtain a second image, perform an erosion operation on the first image with the first erosion template to obtain an eroded image, and perform a dilation operation on the eroded image with the first dilation template to obtain the second image;
the second obtaining module is specifically configured to, in the process of performing a closing operation on the first image to obtain a third image, perform a dilation operation on the first image with the second dilation template to obtain a dilated image, and perform an erosion operation on the dilated image with the second erosion template to obtain the third image.
The second dilation template and the second erosion template are structuring-element matrices, shown as images in the original patent publication.
the third obtaining module is specifically configured to, in a process of obtaining a foreground frame set of a current frame by using the first target frame set and the second target frame set, obtain a foreground frame set of a previous frame, and obtain the foreground frame set of the current frame by using the first target frame set, the second target frame set, and the foreground frame set of the previous frame.
The third obtaining module is specifically configured to, in a process of obtaining the foreground frame set of the current frame by using the first target frame set, the second target frame set, and the foreground frame set of the previous frame, perform the following processing for each second target frame in the second target frame set to obtain a foreground frame set of the current frame; counting the sum of the pixel numbers of all first target frames falling in the second target frame in the first target frame set to obtain the pixel number corresponding to the second target frame; acquiring the area corresponding to the second target frame; acquiring the shortest distance between the second target frame and the foreground frame set of the previous frame;
judging whether the second target frame meets the selection condition by using the number of pixels, the area, and the shortest distance corresponding to the second target frame; if yes, adding the second target frame to the foreground frame set of the current frame; and if not, refraining from adding the second target frame to the foreground frame set of the current frame.
The third obtaining module is specifically configured to, in the process of judging whether the second target frame meets the selection condition by using the number of pixels, the area, and the shortest distance corresponding to the second target frame, determine that the second target frame meets the selection condition if (the number of pixels − the preset first value × the area) − the preset second value × the shortest distance is greater than or equal to the preset third value and the area is greater than the preset fourth value, and otherwise determine that the second target frame does not meet the selection condition; the preset first value, second value, third value, and fourth value are all greater than 0.
The detection module is specifically configured to, in the process of detecting a moving target by using the foreground frame set, count, for each foreground frame in the foreground frame set, the number of corner points of the tracking frame corresponding to the moving target that fall within that foreground frame; if no foreground frame contains any corner point of the tracking frame, determine that the moving target is lost; and if one or more foreground frames contain corner points of the tracking frame, obtain the foreground frame containing the largest number of corner points, determine the obtained foreground frame as the foreground frame matching the moving target, and update the obtained foreground frame as the tracking frame corresponding to the moving target.
The detection module is specifically configured to, in the process of updating the obtained foreground frame as the tracking frame corresponding to the moving target, based on the tracking frames corresponding to multiple moving targets, update the obtained foreground frame as the tracking frame corresponding to the moving target if the coordinates of the foreground frame matched by the moving target differ from the coordinates of the foreground frames matched by the other moving targets;
and the detection module is further configured to, after the obtained foreground frame is determined as the foreground frame matching the moving target, if the coordinates of the foreground frame matched by the moving target are the same as those of the foreground frame matched by another moving target, obtain the centroid of all corner points in the tracking frame currently corresponding to the moving target, and update the tracking frame corresponding to the moving target with the centroid as its center.
The detection module is further configured to, after the moving target is detected by using the foreground frame set, detect whether the moving target crosses a tripwire by using the detected position of the moving target; and/or detect whether the moving target enters or leaves a forbidden zone.
Based on the above technical scheme, the embodiment of the invention optimizes how moving targets are detected and maintained: whether a moving target crosses a tripwire and whether it enters or leaves a forbidden zone can be detected accurately, the accuracy of tripwire and forbidden-zone behavior judgments is improved, the accuracy of detecting and maintaining moving targets is improved, the influence of complex backgrounds on moving-target extraction is greatly reduced, and the false detection rate is lowered.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below are only some of the embodiments of the present invention, and those skilled in the art can obtain other drawings from them.
FIG. 1 is a flow chart of a method of detecting a moving object in one embodiment of the invention;
FIG. 2 is a hardware block diagram of a front-end device in one embodiment of the invention;
fig. 3 is a block diagram of a moving object detection device according to an embodiment of the present invention.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly second information as first information, without departing from the scope of the present invention. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
In view of the problems in the prior art, an embodiment of the present invention provides a method for detecting a moving target, which may be applied to a front-end device (e.g., a network camera or an analog camera). Fig. 1 is a flowchart of the method, which may include the following steps:
step 101, extracting a first image from a monitoring video, wherein the first image comprises a foreground image.
Step 102, performing an opening operation on the first image to obtain a second image, and acquiring a first target frame set of the current frame from the second image, where the first target frame set includes a plurality of first target frames.
Step 103, performing a closing operation on the first image to obtain a third image, and acquiring a second target frame set of the current frame from the third image, where the second target frame set includes a plurality of second target frames.
And 104, acquiring a foreground frame set of the current frame by using the first target frame set and the second target frame set.
And 105, detecting a moving target by using the foreground frame set.
This execution sequence is only an example; the embodiment of the present invention does not limit the execution sequence. For example, "performing a closing operation on the first image to obtain the third image and acquiring the second target frame set of the current frame from the third image" may be executed first, followed by "performing an opening operation on the first image to obtain the second image and acquiring the first target frame set of the current frame from the second image". For convenience of description, the sequence above is used as the example; in practical applications the sequence may be adjusted.
In step 101, while acquiring the monitoring video, the front-end device obtains video images of consecutive frames. From each frame of video image, a foreground image can be extracted; this foreground image is the first image of the current frame, i.e., the first image of the current frame is extracted from the video image.
In one example, the ViBe algorithm may be used to extract the foreground image from the video image. ViBe is a pixel-level foreground detection algorithm whose basic idea is: a sample set is stored for each pixel, containing past values of that pixel and values of its neighboring pixels; once the sample set is available, each new pixel value is compared with the samples in the set to judge whether it is a background point, and the foreground points are obtained accordingly.
In the process of extracting the foreground image from the video image with the ViBe algorithm, random numbers must be generated and used for the extraction. Since a new random number has to be generated during every foreground extraction, the performance of the ViBe algorithm is poor and its real-time behavior suffers. For this reason, in one example of the present invention, to improve the performance of the ViBe algorithm and meet the real-time processing requirement, the traditional ViBe algorithm may be optimized as follows: a random number table containing pre-generated random numbers is configured in advance, and during foreground extraction the random numbers are looked up directly in this table instead of being generated on the fly. Experiments show that, compared with the traditional ViBe algorithm, the optimized algorithm achieves an equivalent foreground detection effect while its running time drops to about 1/3 of the traditional algorithm's, meeting the real-time requirement.
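A minimal sketch of this lookup-table optimization (the class name, table size, and sampling probability are assumptions for illustration; the patent fixes none of them):

    import numpy as np

    class RandomTable:
        """Pre-generated random numbers consumed with a rolling cursor,
        replacing per-pixel RNG calls during foreground extraction."""

        def __init__(self, size=65536, max_value=16, seed=0):
            rng = np.random.default_rng(seed)
            self.table = rng.integers(0, max_value, size=size)
            self.cursor = 0

        def next(self):
            value = int(self.table[self.cursor])
            self.cursor = (self.cursor + 1) % len(self.table)
            return value

    # Usage inside a ViBe-style update step: with probability 1/16,
    # absorb the current pixel into the background sample set.
    table = RandomTable()
    if table.next() == 0:
        pass  # update this pixel's sample set here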
In one example, the foreground image extracted from the video image is called the first image of the current frame (the name is used merely for convenience of distinction), and this foreground image is a binarized foreground image.
With respect to step 102, the process of performing an opening operation on the first image to obtain the second image includes, but is not limited to, the following: performing an erosion operation on the first image with the first erosion template to obtain an eroded image; and performing a dilation operation on the eroded image with the first dilation template to obtain the second image.
In one example, the first erosion template and the first dilation template may each be a 3×3 structuring element; the specific matrices are shown as images in the original patent publication.
the first erosion template and the first expansion template of 3x3 are adopted, so that the method has the functions of removing noise points, eliminating small objects and separating the objects at the connection of fine points.
For step 103, the process of performing a closing operation on the first image to obtain the third image includes, but is not limited to, the following: performing a dilation operation on the first image with the second dilation template to obtain a dilated image; and performing an erosion operation on the dilated image with the second erosion template to obtain the third image.
In one example, the second dilation template and the second erosion template are structuring-element matrices; the specific matrices are shown as images in the original patent publication.
The second dilation template and the second erosion template serve the following purposes: merging split objects, filling fine holes in objects, and smoothing object boundaries. Because the parts of a moving object move at inconsistent speeds, or differ in how strongly their colors contrast with the background, several foreground frames are often detected on a single moving object. For a pedestrian target, for example, two foreground frames are often detected, one on the head and one on the trunk; these two split foreground frames should be merged into one complete target. Performing the closing operation on the first image with the second dilation template and the second erosion template merges such split targets effectively.
However, although closing the first image with the second dilation and erosion templates merges split targets effectively, interference such as shaking leaves, water-surface ripples, and illumination changes often produces wrong foreground frames; in a scene with shaking leaves, for example, the closing operation can merge the shaking leaves into one large target and cause a false detection. To detect real moving targets effectively, the first image is also opened with the first erosion and dilation templates; combining the closing operation with the opening operation filters out the influence of such environmental factors and greatly improves the accuracy of moving-target detection.
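A minimal sketch of the two branches using OpenCV (the 3×3 opening kernel follows the patent text; the closing kernel size is an assumption, since the patent shows its matrices only as images):

    import cv2
    import numpy as np

    def open_and_close(foreground):
        """foreground: binarized foreground image (uint8, 0 or 255).
        Returns (second_image, third_image): opened and closed images."""
        # Opening = erosion then dilation; removes noise and small objects.
        k_open = np.ones((3, 3), np.uint8)    # 3x3 per the patent text
        second = cv2.dilate(cv2.erode(foreground, k_open), k_open)
        # Closing = dilation then erosion; merges split parts of one object.
        k_close = np.ones((5, 5), np.uint8)   # size assumed for illustration
        third = cv2.erode(cv2.dilate(foreground, k_close), k_close)
        return second, third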
In one example, for step 102, a connected-region algorithm may be used to extract targets from the second image and obtain the first target frame set, denoted RectA. It contains a number of first target frames A1, A2, …, Am, so RectA = {A1, A2, …, Am}.
In one example, for step 103, a connected-region algorithm may likewise be used to extract targets from the third image and obtain the second target frame set, denoted RectB. It contains a number of second target frames B1, B2, …, Bn, so RectB = {B1, B2, …, Bn}.
Both the first target frame set and the second target frame set are target frame sets corresponding to the current frame.
Each first target frame Ai = {xi, yi, wi, hi} (i = 1, 2, …, m) in the first target frame set is uniquely determined by its upper-left corner coordinates (xi, yi) and its width and height (wi, hi), and the number of foreground pixels contained in Ai is denoted Pi (i = 1, 2, …, m); for example, the first target frame A1 contains P1 foreground pixels. Each second target frame Bi (i = 1, 2, …, n) in the second target frame set is represented in the same way, which is not repeated here.
The connected-region algorithm is a common technique in image processing for detecting connected regions in a foreground image; it serves as the target-region detection step in many tracking and detection algorithms and is not described in detail here.
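A sketch of the target-frame extraction using OpenCV's connected-components analysis as a stand-in for the unspecified connected-region algorithm:

    import cv2

    def extract_target_frames(binary_image):
        """Returns one target frame (x, y, w, h, pixel_count) per
        connected foreground region of a binarized image."""
        count, _, stats, _ = cv2.connectedComponentsWithStats(binary_image)
        frames = []
        for i in range(1, count):              # label 0 is the background
            x, y, w, h, area = stats[i]
            frames.append((int(x), int(y), int(w), int(h), int(area)))
        return frames                          # area = foreground pixels P_i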
For step 104, the process of obtaining the foreground frame set of the current frame by using the first target frame set and the second target frame set may include, but is not limited to: obtaining the foreground frame set of the previous frame, and obtaining the foreground frame set of the current frame by using the first target frame set, the second target frame set, and the foreground frame set of the previous frame. Further, this may proceed as follows: for each second target frame in the second target frame set, perform the following processing to build the foreground frame set of the current frame. Count the sum of the pixel numbers of all first target frames in the first target frame set that fall within the second target frame, obtaining the number of pixels corresponding to the second target frame; obtain the area corresponding to the second target frame; and obtain the shortest distance between the second target frame and the foreground frame set of the previous frame. Then judge, using the number of pixels, the area, and the shortest distance corresponding to the second target frame, whether the second target frame meets the selection condition; if yes, add the second target frame to the foreground frame set of the current frame; if not, do not add it.
In one example, regarding "counting the sum of the pixel numbers of all first target frames in the first target frame set that fall within the second target frame, to obtain the number of pixels corresponding to the second target frame": for each second target frame Bi (i = 1, 2, …, n) in the second target frame set RectB, the sum of the pixel numbers (i.e., the connected-region pixel counts) of all first target frames in RectA that fall within Bi is counted as the number of pixels SPi corresponding to Bi. For example, if the first target frames A2, A5, A7 of RectA fall within Bi, then SPi = P2 + P5 + P7, where P2, P5, and P7 are the numbers of foreground pixels contained in A2, A5, and A7 respectively.
In one example, regarding "obtaining the area corresponding to the second target frame": each second target frame is uniquely determined by its upper-left corner coordinates (xi, yi) and its width and height (wi, hi), so the width and height of each second target frame Bi (i = 1, 2, …, n) in RectB are known, and the area of Bi is computed from them and denoted Areai (i = 1, 2, …, n).
In one example, regarding "obtaining the shortest distance between the second target frame and the foreground frame set of the previous frame": suppose the foreground frame set of the previous frame is RectC = {C1, C2, …, Ck}. A preset algorithm (e.g., the Euclidean distance) is used to compute the distance from the second target frame Bi to each foreground frame in the previous frame's set — the distance between Bi and C1, between Bi and C2, and so on — and the minimum of these distances is selected; this minimum is the shortest distance between Bi and the foreground frame set, denoted si. The foreground frame set of the previous frame is itself obtained from the first and second target frame sets of the previous frame, and that process is not repeated here.
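A sketch of the shortest-distance computation, assuming the distance between two frames is the Euclidean distance between their centers (the patent names only "e.g. Euclidean distance algorithm"):

    import math

    def center(frame):
        x, y, w, h = frame[:4]
        return (x + w / 2.0, y + h / 2.0)

    def shortest_distance(second_frame, prev_foreground_frames):
        """s_i: minimum Euclidean center distance from one second target
        frame B_i to the previous frame's foreground frame set RectC."""
        cx, cy = center(second_frame)
        return min(math.hypot(cx - px, cy - py)
                   for px, py in map(center, prev_foreground_frames))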
In an example, the process of judging whether the second target frame meets the selection condition by using the number of pixels, the area, and the shortest distance corresponding to the second target frame may include, but is not limited to: if (the number of pixels − the preset first value × the area) − the preset second value × the shortest distance is greater than or equal to a preset third value, and the area is greater than a preset fourth value, the second target frame is determined to meet the selection condition; otherwise it is determined not to. The preset first value, second value, third value, and fourth value are all greater than 0.
In one example, if the second target frame Bi satisfies all of the following formulas, Bi meets the selection condition and is added to the foreground frame set. If Bi fails any one or more of the following formulas, Bi does not meet the selection condition and is not added to the foreground frame set.
    Formula 1:  Bi ∈ RectB
    Formula 2:  (SPi − λ·Areai) − μ·si ≥ ε
    Formula 3:  Areai > β
    Formulas 4 and 5:  constraints comparing the width and height (wi, hi) of Bi with the width and height (wj, hj) of the nearest foreground frame of the previous frame (the exact expressions are shown as an image in the original patent publication)
Formula 1 is satisfied trivially, since the second target frame Bi belongs to the second target frame set RectB.
In formula 2, SPi is the number of pixels corresponding to the second target frame Bi, Areai (i = 1, 2, …, n) is the area of Bi, si is the shortest distance between Bi and the foreground frame set of the previous frame, λ and μ are the preset first and second values, and ε is the preset third value. λ and μ are parameter coefficients and ε is a threshold, all greater than 0; in practical applications their values are configured from experience and are not detailed here. Formula 2 consists of two parts. The first part, (SPi − λ·Areai), reflects the proportion of effective pixels in the target: the larger its value, the more effective pixels there are, and the more effective pixels there are, the smaller the difference between the first and second target frame sets obtained by the opening and closing operations, and the more accurate the target. The second part, si, reflects the distance between the current target and the best-matching target in the previous frame: the smaller its value, the better the match. A target with more effective pixels and a better match to the previous frame is more accurate; in short, the larger (SPi − λ·Areai) − μ·si is, the more accurate the detected target, and under formula 2 the interference of environments such as shaking leaves and water-surface ripples is reduced effectively.
In formula 3, Areai (i = 1, 2, …, n) is the area of the second target frame Bi, and β is the preset fourth value; in practical applications its value is configured from experience and is not detailed here. With a reasonable value of β, formula 3 filters out second target frames whose area is too small.
In formulas 4 and 5, wi is the width and hi the height of the second target frame, and wj is the width and hj the height of the foreground frame in the previous frame's foreground frame set that is closest to the second target frame. Formulas 4 and 5 compare the second target frame of the current frame with the foreground frame of the previous frame, filtering out second target frames whose shape has changed violently.
After the above processing, each second target frame Bi that meets the selection condition is added to the foreground frame set, and each Bi that does not meet it is not added, yielding the foreground frame set RectD = {D1, D2, …, Dt}. For example, if the second target frames B1, B3, B5, B7, B9 meet the selection condition while B2, B4, B6, B8 do not, then B1, B3, B5, B7, B9 are added to the foreground frame set and B2, B4, B6, B8 are not, so the foreground frame set contains B1, B3, B5, B7, B9. These second target frames are the foreground frames of the set, and the following description refers to them as foreground frames.
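A sketch assembling the selection condition (the values of λ, μ, ε, β are illustrative assumptions, and formulas 4 and 5 are omitted because the patent shows them only as an image; shortest_distance is the sketch above):

    def select_foreground_frames(first_frames, second_frames, prev_foreground,
                                 lam=0.3, mu=2.0, eps=50.0, beta=100.0):
        """first_frames: (x, y, w, h, pixels) tuples from the opened image;
        second_frames: (x, y, w, h) from the closed image;
        prev_foreground: previous frame's foreground frames (x, y, w, h)."""
        def falls_within(a, b):            # frame a lies inside frame b
            ax, ay, aw, ah = a[:4]
            bx, by, bw, bh = b
            return (ax >= bx and ay >= by and
                    ax + aw <= bx + bw and ay + ah <= by + bh)

        selected = []
        for b in second_frames:
            sp = sum(a[4] for a in first_frames if falls_within(a, b))  # SP_i
            area = b[2] * b[3]                                          # Area_i
            s = shortest_distance(b, prev_foreground)                   # s_i
            if (sp - lam * area) - mu * s >= eps and area > beta:       # formulas 2, 3
                selected.append(b)
        return selected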
For step 105, the process of detecting a moving target by using the foreground frame set may include, but is not limited to, the following. Based on the tracking frame corresponding to the moving target, count, for each foreground frame in the foreground frame set, the number of corner points of the tracking frame that fall within that foreground frame. If no foreground frame contains any corner point of the tracking frame, the moving target is determined to be lost. If one or more foreground frames contain corner points of the tracking frame, the foreground frame containing the largest number of corner points is obtained, determined as the foreground frame matching the moving target, and updated as the tracking frame corresponding to the moving target.
Further, in the process of updating the obtained foreground frame to the tracking frame corresponding to the moving target, based on a plurality of tracking frames corresponding to a plurality of moving targets (each moving target corresponds to a unique tracking frame), if the coordinates of the foreground frame matched with the moving target are different from the coordinates of the foreground frame matched with other moving targets, the obtained foreground frame is updated to the tracking frame corresponding to the moving target.
Further, after the obtained foreground frame is determined as the foreground frame matching the moving target, if its coordinates are the same as those of the foreground frame matched by another moving target, the centroid of all corner points in the tracking frame currently corresponding to the moving target can be obtained, and the tracking frame corresponding to the moving target is updated with the centroid as its center.
In one example, a pyramid-based LK (Lucas-Kanade) optical flow algorithm may be employed for target tracking. Each moving target corresponds to one tracking frame; in the initial state the tracking frame can be configured as needed, and in subsequent frames it is updated from the foreground frames.
For moving target 1, assume its tracking frame is tracking frame 1. First, the corner points of tracking frame 1 are updated with the LK algorithm; the specific updating method is not repeated here. Assume the foreground frame set contains foreground frame 1 and foreground frame 2; the number of corner points of tracking frame 1 falling in foreground frame 1 and the number falling in foreground frame 2 are counted. If neither foreground frame 1 nor foreground frame 2 contains a corner point of tracking frame 1, moving target 1 is determined to be lost and is added to the lost tracking targets. If foreground frame 1 and/or foreground frame 2 contains corner points of tracking frame 1, the foreground frame containing the largest number of corner points is obtained: if foreground frame 1 contains 15 corner points of tracking frame 1 and foreground frame 2 contains 10, the obtained foreground frame is foreground frame 1, which is determined as the foreground frame matching moving target 1 and updated as the tracking frame of moving target 1. In the subsequent process, tracking frame 1 of moving target 1 is foreground frame 1.
In one example, consider the abnormal situation in which the foreground frames matching two or more moving targets are the same: this indicates that the one foreground frame contains two or more moving targets, and in this case the foreground frame cannot be updated as the tracking frame of those moving targets.
In the above example, it has been determined that the foreground frame matching moving target 1 is foreground frame 1. Assume moving target 2 also exists and its tracking frame is tracking frame 2. Before updating foreground frame 1 as the tracking frame of moving target 1, the following processing may be performed: count the number of corner points of tracking frame 2 falling in foreground frame 1 and in foreground frame 2. If foreground frame 1 and/or foreground frame 2 contains corner points of tracking frame 2, the foreground frame containing the largest number of them is obtained. In the first case, suppose the obtained foreground frame is foreground frame 2: since the coordinates of foreground frame 1 (matched by moving target 1) differ from those of foreground frame 2 (matched by moving target 2), foreground frame 1 is updated as the tracking frame of moving target 1 and foreground frame 2 as the tracking frame of moving target 2. In the second case, suppose the obtained foreground frame is foreground frame 1: the coordinates of the foreground frame matched by moving target 1 are the same as those matched by moving target 2 — they are the same foreground frame — so foreground frame 1 cannot be updated as the tracking frame of either moving target 1 or moving target 2. Instead, the tracking frame of moving target 1 is updated with the centroid of all corner points in tracking frame 1 (its tracking frame before updating) as the center, and the tracking frame of moving target 2 is updated with the centroid of all corner points in tracking frame 2 (its tracking frame before updating) as the center.
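A sketch of the corner-based matching step, assuming corners are tracked with OpenCV's pyramidal LK optical flow (the patent names the pyramid LK algorithm but no particular library):

    import cv2
    import numpy as np

    def match_foreground_frame(prev_gray, gray, corners, foreground_frames):
        """Track corners from the previous frame, then pick the foreground
        frame containing the most tracked corners; None means target lost.
        corners: float32 array of shape (N, 1, 2)."""
        moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None)
        tracked = moved[status.ravel() == 1].reshape(-1, 2)

        def count_inside(frame):
            x, y, w, h = frame
            return int(np.sum((tracked[:, 0] >= x) & (tracked[:, 0] < x + w) &
                              (tracked[:, 1] >= y) & (tracked[:, 1] < y + h)))

        counts = [count_inside(f) for f in foreground_frames]
        if not counts or max(counts) == 0:
            return None, tracked                  # moving target is lost
        return foreground_frames[int(np.argmax(counts))], tracked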
In the process of updating the tracking frame of a moving target with the centroid of all corner points in the tracking frame as the center, the centroid O(x, y) of all corner points in the tracking frame may be calculated by the following formula, and the tracking frame of the moving target is updated to the rectangle centered at O(x, y) with the tracking frame's existing width and height. In the formula, (x1, y1), (x2, y2), … (xm, ym) are the coordinates of all corner points within the tracking frame, and m is the number of corner points.
    O(x, y) = ( (x1 + x2 + … + xm) / m , (y1 + y2 + … + ym) / m )
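A direct transcription of the centroid-centered update (the (x, y, w, h) frame layout follows the sketches above):

    def recenter_tracking_frame(frame, corners):
        """Keep the frame's width and height; recenter it on the
        centroid O(x, y) of its tracked corner points."""
        x, y, w, h = frame
        cx = sum(px for px, _ in corners) / len(corners)
        cy = sum(py for _, py in corners) / len(corners)
        return (cx - w / 2.0, cy - h / 2.0, w, h)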
Based on this implementation, the tracking target is corrected with the foreground frames, making target maintenance more accurate and stable; even when moving targets cross each other, their motion tracks can be maintained effectively.
In one example, after step 105, the detected position of the moving target may be used to detect whether the moving target crosses a tripwire; and/or to detect whether the moving target enters or leaves a forbidden zone.
Here, the tripwire means: a straight line or a curve drawn in the monitoring video to take over the function of an access barrier; when a moving object crosses it from either direction, an alarm can be output automatically. The forbidden zone means: an area drawn in the monitoring video, or the area of the whole monitoring video, treated as off-limits; when a moving object enters or leaves the forbidden zone, an alarm can be output automatically.
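The patent does not detail the forbidden-zone test itself; a common implementation, sketched here purely as an assumption, checks the tracking-frame center against the zone polygon:

    import cv2
    import numpy as np

    def in_forbidden_zone(frame, zone_polygon):
        """frame: tracking frame (x, y, w, h); zone_polygon: Nx2 vertices.
        Returns True if the frame center lies inside (or on) the zone."""
        x, y, w, h = frame
        point = (float(x + w / 2.0), float(y + h / 2.0))
        contour = np.asarray(zone_polygon, dtype=np.float32).reshape(-1, 1, 2)
        return cv2.pointPolygonTest(contour, point, False) >= 0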
Taking the detection of whether a moving target crosses a tripwire as the example: first, based on the user configuration, an initial direction vector and a tripwire line equation can be obtained; assume the direction vector is <μ1, ν1> and the line equation is y = ax + b.
Through the processing of steps 101 to 105, the tracking frames of a moving target can be obtained for any frame, and two tracking frames of the same moving target, P frames apart, are selected. For moving target 1, if P = 6, the tracking frame of frame 2 and the tracking frame of frame 8 may be selected. Suppose the center of the frame-2 tracking frame is (x1, y1) and the center of the frame-8 tracking frame is (x2, y2); then the motion direction vector of moving target 1 is <μ2, ν2>, which can be calculated by the following formula:
    <μ2, ν2> = <x2 − x1, y2 − y1>
Further, the angle θ between the direction vector <μ1, ν1> and the direction vector <μ2, ν2> can be calculated by the following formula:
    cos θ = (μ1·μ2 + ν1·ν2) / ( √(μ1² + ν1²) · √(μ2² + ν2²) )
On this basis, if θ < 90°, the moving direction of the moving target meets the tripwire-triggering condition. And if the two center points lie on opposite sides of the tripwire, i.e., if (a·x1 + b − y1)·(a·x2 + b − y2) < 0, where a and b are the parameters of the line equation y = ax + b, the moving target has touched the line. When the moving direction meets the condition and the moving target touches the line, it is detected that the moving target crosses the tripwire, and an alarm can be triggered.
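A sketch of the complete crossing test as described above (direction condition plus side-of-line condition for the tripwire y = ax + b):

    import math

    def crosses_tripwire(p1, p2, direction, a, b):
        """p1, p2: tracking-frame centers P frames apart; direction: the
        configured vector <mu1, nu1>; the tripwire is the line y = a*x + b."""
        mu2, nu2 = p2[0] - p1[0], p2[1] - p1[1]      # motion vector <mu2, nu2>
        dot = direction[0] * mu2 + direction[1] * nu2
        norm = math.hypot(*direction) * math.hypot(mu2, nu2)
        if norm == 0:
            return False                              # no motion or no direction
        theta = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        side1 = a * p1[0] + b - p1[1]                 # which side of the line?
        side2 = a * p2[0] + b - p2[1]
        return theta < 90.0 and side1 * side2 < 0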
Based on the above technical scheme, the embodiment of the present invention optimizes how moving targets are detected and maintained. Whether a moving target crosses a tripwire and whether it enters or leaves a forbidden zone can be detected accurately, improving the accuracy of tripwire and forbidden-zone behavior judgments as well as the accuracy of moving-target detection and maintenance. The influence of complex backgrounds (such as interference from shaking leaves, water-surface ripples, and illumination changes) on moving-target extraction is greatly reduced, the problem of targets crossing each other during motion is handled effectively, and the false detection rate is lowered. Moreover, the method can run on low- and mid-range chips (i.e., the front-end device can use such a chip), greatly reducing the cost of intelligent deployment and control.
Based on the same inventive concept as the method, an embodiment of the present invention further provides a moving-target detection apparatus, which can be applied to a front-end device and implemented in software, in hardware, or in a combination of both. Taking software as an example, the apparatus, as a logical device, is formed by the processor of its front-end device reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, fig. 2 shows a hardware structure diagram of the front-end device where the apparatus provided by the present invention is located; besides the processor, network interface, memory, and nonvolatile memory shown in fig. 2, the front-end device may include other hardware, such as a forwarding chip responsible for packet processing. The front-end device may also be a distributed device comprising multiple interface cards, so that message processing can be extended at the hardware level.
As shown in fig. 3, a block diagram of a moving object detection apparatus according to the present invention includes:
the system comprises an extraction module 11, a processing module and a display module, wherein the extraction module is used for extracting a first image from a monitoring video, and the first image comprises a foreground image; a first obtaining module 12, configured to perform an open operation on the first image to obtain a second image, and obtain a first target frame set of the current frame from the second image, where the first target frame set includes a plurality of first target frames; a second obtaining module 13, configured to perform a close operation on the first image to obtain a third image, and obtain a second target frame set of the current frame from the third image, where the second target frame set includes a plurality of second target frames; a third obtaining module 14, configured to obtain a foreground frame set of the current frame by using the first target frame set and the second target frame set; and the detection module 15 is configured to detect a moving object by using the foreground frame set.
The first obtaining module 12 is specifically configured to, in the process of performing an opening operation on the first image to obtain a second image, perform an erosion operation on the first image with the first erosion template to obtain an eroded image, and perform a dilation operation on the eroded image with the first dilation template to obtain the second image;
the second obtaining module 13 is specifically configured to, in the process of performing a closing operation on the first image to obtain a third image, perform a dilation operation on the first image with the second dilation template to obtain a dilated image, and perform an erosion operation on the dilated image with the second erosion template to obtain the third image.
The second dilation template and the second erosion template are structuring-element matrices, shown as images in the original patent publication.
in an example, the third obtaining module 14 is specifically configured to, in a process of obtaining a foreground frame set of a current frame by using the first target frame set and the second target frame set, obtain a foreground frame set of a previous frame, and obtain the foreground frame set of the current frame by using the first target frame set, the second target frame set, and the foreground frame set of the previous frame.
In an example, the third obtaining module 14 is specifically configured to, in a process of obtaining the foreground frame set of the current frame by using the first target frame set, the second target frame set, and the foreground frame set of the previous frame, perform the following processing on each second target frame in the second target frame set to obtain a foreground frame set of the current frame;
counting the sum of the pixel numbers of all first target frames falling in the second target frame in the first target frame set to obtain the pixel number corresponding to the second target frame; acquiring the area corresponding to the second target frame; acquiring the shortest distance between the second target frame and the foreground frame set of the previous frame;
judging whether the second target frame meets the selection condition by using the number of pixels, the area, and the shortest distance corresponding to the second target frame; if yes, adding the second target frame to the foreground frame set of the current frame; and if not, refraining from adding the second target frame to the foreground frame set of the current frame.
The third obtaining module 14 is specifically configured to, in the process of judging whether the second target frame meets the selection condition by using the number of pixels, the area, and the shortest distance corresponding to the second target frame, determine that the second target frame meets the selection condition if (the number of pixels − the preset first value × the area) − the preset second value × the shortest distance is greater than or equal to the preset third value and the area is greater than the preset fourth value, and otherwise determine that the second target frame does not meet the selection condition; the preset first value, second value, third value, and fourth value are all greater than 0.
The detection module 15 is specifically configured to, in the process of detecting a moving target by using the foreground frame set, count, based on the tracking frame corresponding to the moving target, the number of corner points of the tracking frame that fall within each foreground frame in the foreground frame set; if no foreground frame contains any corner point of the tracking frame, determine that the moving target is lost; and if one or more foreground frames contain corner points of the tracking frame, obtain the foreground frame containing the largest number of corner points, determine the obtained foreground frame as the foreground frame matching the moving target, and update the obtained foreground frame as the tracking frame corresponding to the moving target.
In an example, when there are tracking frames corresponding to multiple moving targets, the detecting module 15 is specifically configured to update the acquired foreground frame as the tracking frame corresponding to the moving target if the coordinates of the foreground frame matched to the moving target differ from the coordinates of the foreground frames matched to the other moving targets;
and the detecting module is further configured to, after the acquired foreground frame is determined as the foreground frame matching the moving target, if the coordinates of the foreground frame matched to the moving target are the same as the coordinates of a foreground frame matched to another moving target, acquire the centroid of all corner points in the tracking frame currently corresponding to the moving target, and update the tracking frame corresponding to the moving target with that centroid as its center.
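A minimal sketch of this matching step, assuming `(x, y, w, h)` tuples for rectangles and a list of `(x, y)` corner points; `match_and_update`, `other_matches`, and the membership test used to detect a coordinate collision are illustrative simplifications, not the patent's own API.

```python
import numpy as np

def point_in_rect(pt, rect):
    """True if point pt = (x, y) lies inside rect = (x, y, w, h)."""
    x, y, w, h = rect
    return x <= pt[0] < x + w and y <= pt[1] < y + h

def match_and_update(track_corners, track_rect, foreground_rects, other_matches):
    """Returns ("lost", old rect) or ("matched", new tracking rect)."""
    counts = [sum(point_in_rect(p, r) for p in track_corners)
              for r in foreground_rects]
    if not counts or max(counts) == 0:
        # No foreground frame contains any corner point: target lost.
        return "lost", track_rect
    best = foreground_rects[int(np.argmax(counts))]
    if best not in other_matches:
        # Unique match: the foreground frame becomes the new tracking frame.
        return "matched", best
    # The same foreground frame is already claimed by another target:
    # recenter the existing tracking frame on the centroid of its corners.
    cx, cy = np.mean(np.asarray(track_corners, dtype=float), axis=0)
    x, y, w, h = track_rect
    return "matched", (cx - w / 2, cy - h / 2, w, h)
```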
In an example, the detecting module 15 is further configured to, after detecting a moving target by using the foreground frame set, detect, by using the detected position of the moving target, whether the moving target crosses a tripwire; and/or detect whether the moving target enters a forbidden zone or leaves the forbidden zone.
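Both checks reduce to simple geometry on the target's position across consecutive frames. A hedged sketch, assuming the position is the tracking frame's center and the forbidden zone is a polygon; the function names and the use of OpenCV's `pointPolygonTest` are illustrative choices, not taken from the patent.

```python
import cv2
import numpy as np

def segments_intersect(p1, p2, q1, q2):
    """Strict crossing test for segments p1-p2 and q1-q2 via orientation signs."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    return (cross(q1, q2, p1) * cross(q1, q2, p2) < 0 and
            cross(p1, p2, q1) * cross(p1, p2, q2) < 0)

def crossed_tripwire(prev_pos, cur_pos, wire_a, wire_b):
    """The target crossed the tripwire if its inter-frame trajectory
    segment intersects the wire segment."""
    return segments_intersect(prev_pos, cur_pos, wire_a, wire_b)

def zone_event(prev_pos, cur_pos, zone_polygon):
    """Returns "enter", "leave", or None by comparing the inside/outside
    status of the target position in two consecutive frames."""
    poly = np.asarray(zone_polygon, dtype=np.float32)
    was_in = cv2.pointPolygonTest(poly, tuple(map(float, prev_pos)), False) >= 0
    is_in = cv2.pointPolygonTest(poly, tuple(map(float, cur_pos)), False) >= 0
    if is_in and not was_in:
        return "enter"
    if was_in and not is_in:
        return "leave"
    return None
```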
The modules of the above device may be integrated into one unit or deployed separately; they may be combined into a single module or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, though in many cases the former is the preferred implementation. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention. Those skilled in the art will appreciate that the drawings are merely schematic representations of preferred embodiments, and that the blocks or flows in the drawings are not necessarily required for practicing the present invention.
Those skilled in the art will also appreciate that the modules in the devices of the embodiments may be distributed in the devices as described, or may be relocated to one or more devices different from those of the embodiments. The modules of the above embodiments may be combined into one module or further split into multiple sub-modules. The serial numbers of the embodiments of the present invention are merely descriptive and do not indicate the relative merits of the embodiments.
The above disclosure presents only a few specific embodiments of the present invention, but the present invention is not limited thereto; any variation readily conceivable by those skilled in the art shall fall within the protection scope of the present invention.

Claims (8)

1. A method for detecting a moving target, the method comprising:
extracting a first image from a monitoring video, wherein the first image comprises a foreground image;
performing an opening operation on the first image to obtain a second image, and acquiring a first target frame set of the current frame from the second image, wherein the first target frame set comprises a plurality of first target frames;
performing a closing operation on the first image to obtain a third image, and obtaining a second target frame set of the current frame from the third image, wherein the second target frame set comprises a plurality of second target frames;
performing the following processing on each second target frame in the second target frame set to obtain a foreground frame set of the current frame: counting the total number of pixels of all first target frames in the first target frame set that fall within the second target frame, to obtain the pixel count corresponding to the second target frame; acquiring the area of the second target frame; acquiring the shortest distance between the second target frame and the foreground frame set of the previous frame; judging whether the second target frame meets a selection condition by using the pixel count, the area, and the shortest distance corresponding to the second target frame; if so, adding the second target frame to the foreground frame set of the current frame; and if not, refraining from adding the second target frame to the foreground frame set of the current frame;
and detecting a moving target by using the foreground frame set.
2. The method of claim 1,
wherein the process of performing the opening operation on the first image to obtain the second image specifically comprises:
performing an erosion operation on the first image with a first erosion template to obtain an eroded image;
performing a dilation operation on the eroded image with a first dilation template to obtain the second image;
and the process of performing the closing operation on the first image to obtain the third image specifically comprises:
performing a dilation operation on the first image with a second dilation template to obtain a dilated image;
and performing an erosion operation on the dilated image with a second erosion template to obtain the third image.
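For illustration, an OpenCV sketch of the two claim-2 operations, assuming 3×3 rectangular structuring elements; the patent's actual first/second templates are the image-form matrices referenced in claim 3 and are not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical binary foreground mask standing in for the first image.
first_image = np.zeros((240, 320), dtype=np.uint8)
first_image[100:140, 80:160] = 255

# Assumed structuring elements playing the role of the erosion/dilation templates.
erode_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
dilate_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

# Opening: erosion then dilation (removes small noise blobs) -> second image.
second_image = cv2.dilate(cv2.erode(first_image, erode_kernel), dilate_kernel)

# Closing: dilation then erosion (fills small holes, merges fragments) -> third image.
third_image = cv2.erode(cv2.dilate(first_image, dilate_kernel), erode_kernel)
```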
3. The method of claim 2,
the second dilation template is specifically the matrix shown as image FDA0002971649920000021 in the original publication;
and the second erosion template is specifically the matrix shown as image FDA0002971649920000022 in the original publication.
4. The method according to claim 1, wherein judging whether the second target frame meets the selection condition by using the pixel count, the area, and the shortest distance corresponding to the second target frame specifically comprises:
determining that the second target frame meets the selection condition if (pixel count − preset first value × area) − preset second value × shortest distance is greater than or equal to a preset third value and the area is greater than a preset fourth value, and otherwise determining that the second target frame does not meet the selection condition; wherein the preset first value, the preset second value, the preset third value, and the preset fourth value are all greater than 0.
5. The method according to claim 1, wherein the process of detecting the moving target by using the foreground frame set, based on the tracking frame corresponding to the moving target, specifically comprises:
counting, for each foreground frame in the foreground frame set, the number of corner points of the tracking frame that fall within the foreground frame;
if no foreground frame contains any corner point of the tracking frame, determining that the moving target is lost;
and if one or more foreground frames contain corner points of the tracking frame, acquiring the foreground frame containing the largest number of corner points, determining the acquired foreground frame as the foreground frame matching the moving target, and updating the acquired foreground frame as the tracking frame corresponding to the moving target.
6. The method according to claim 5, wherein, when there are tracking frames corresponding to a plurality of moving targets, the process of updating the acquired foreground frame as the tracking frame corresponding to the moving target specifically comprises:
if the coordinates of the foreground frame matched to the moving target differ from the coordinates of the foreground frames matched to the other moving targets, updating the acquired foreground frame as the tracking frame corresponding to the moving target;
and after determining the acquired foreground frame as the foreground frame matching the moving target, the method further comprises: if the coordinates of the foreground frame matched to the moving target are the same as the coordinates of a foreground frame matched to another moving target, acquiring the centroid of all corner points in the tracking frame currently corresponding to the moving target, and updating the tracking frame corresponding to the moving target with the centroid as its center.
7. The method of claim 1,
after detecting the moving target by using the foreground frame set, the method further comprises:
detecting, by using the detected position of the moving target, whether the moving target crosses a tripwire; and/or detecting whether the moving target enters a forbidden zone or leaves the forbidden zone.
8. A device for detecting a moving target, the device specifically comprising:
an extraction module, configured to extract a first image from a monitoring video, wherein the first image comprises a foreground image;
a first obtaining module, configured to perform an opening operation on the first image to obtain a second image, and obtain a first target frame set of the current frame from the second image, wherein the first target frame set comprises a plurality of first target frames;
a second obtaining module, configured to perform a closing operation on the first image to obtain a third image, and obtain a second target frame set of the current frame from the third image, wherein the second target frame set comprises a plurality of second target frames;
a third obtaining module, configured to perform the following processing on each second target frame in the second target frame set to obtain a foreground frame set of the current frame: counting the total number of pixels of all first target frames in the first target frame set that fall within the second target frame, to obtain the pixel count corresponding to the second target frame; acquiring the area of the second target frame; acquiring the shortest distance between the second target frame and the foreground frame set of the previous frame; judging whether the second target frame meets a selection condition by using the pixel count, the area, and the shortest distance corresponding to the second target frame; if so, adding the second target frame to the foreground frame set of the current frame; and if not, refraining from adding the second target frame to the foreground frame set of the current frame;
and a detection module, configured to detect a moving target by using the foreground frame set.
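Putting the claimed modules together, a hedged end-to-end sketch reusing `build_foreground_set` from the selection sketch above; `bounding_rects`, the kernel and threshold parameters, and the use of contour bounding boxes to extract target frame sets are illustrative assumptions, as the patent does not specify how the target frames are obtained from the second and third images.

```python
import cv2

def bounding_rects(mask):
    """Bounding rectangles (x, y, w, h) of connected foreground regions;
    one plausible way to obtain target frame sets from a binary image."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

def process_frame(first_image, prev_foreground, params):
    """first_image: binary foreground mask extracted from the video frame."""
    # First/second obtaining modules: opening and closing of the mask.
    opened = cv2.morphologyEx(first_image, cv2.MORPH_OPEN, params["open_kernel"])
    closed = cv2.morphologyEx(first_image, cv2.MORPH_CLOSE, params["close_kernel"])
    first_set = bounding_rects(opened)    # first target frame set
    second_set = bounding_rects(closed)   # second target frame set
    # Third obtaining module: selection against the previous foreground set.
    return build_foreground_set(second_set, first_set, prev_foreground,
                                *params["thresholds"])
```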
CN201610598717.XA 2016-07-25 2016-07-25 Method and device for detecting moving target Active CN107657626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610598717.XA CN107657626B (en) 2016-07-25 2016-07-25 Method and device for detecting moving target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610598717.XA CN107657626B (en) 2016-07-25 2016-07-25 Method and device for detecting moving target

Publications (2)

Publication Number Publication Date
CN107657626A CN107657626A (en) 2018-02-02
CN107657626B true CN107657626B (en) 2021-06-01

Family

ID=61126724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610598717.XA Active CN107657626B (en) 2016-07-25 2016-07-25 Method and device for detecting moving target

Country Status (1)

Country Link
CN (1) CN107657626B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460353B (en) * 2018-03-07 2021-08-20 深圳市恒天伟焱科技股份有限公司 Infrared image human body detection method based on time sequence tracking
CN108932496B (en) * 2018-07-03 2022-03-25 北京佳格天地科技有限公司 Method and device for counting number of target objects in area
CN111091022A (en) * 2018-10-23 2020-05-01 宏碁股份有限公司 Machine vision efficiency evaluation method and system
CN110751134B (en) * 2019-12-23 2020-05-12 长沙智能驾驶研究院有限公司 Target detection method, target detection device, storage medium and computer equipment
CN111079694A (en) * 2019-12-28 2020-04-28 神思电子技术股份有限公司 Counter assistant job function monitoring device and method
CN113496500A (en) * 2020-04-02 2021-10-12 杭州萤石软件有限公司 Out-of-range detection method and device and storage medium
CN114866692A (en) * 2022-04-19 2022-08-05 合肥富煌君达高科信息技术有限公司 Image output method and system of large-resolution monitoring camera
CN115984335B (en) * 2023-03-20 2023-06-23 华南农业大学 Method for acquiring characteristic parameters of fog drops based on image processing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331905A (en) * 2014-10-31 2015-02-04 浙江大学 Surveillance video abstraction extraction method based on moving object detection
CN105357594A (en) * 2015-11-19 2016-02-24 南京云创大数据科技股份有限公司 Massive video abstraction generation method based on cluster and H264 video concentration algorithm

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847265A (en) * 2010-04-20 2010-09-29 上海理工大学 Method for extracting moving objects and partitioning multiple objects used in bus passenger flow statistical system
CN102096925A (en) * 2010-11-26 2011-06-15 中国科学院上海技术物理研究所 Real-time closed loop predictive tracking method of maneuvering target
CN102509086B (en) * 2011-11-22 2015-02-18 西安理工大学 Pedestrian object detection method based on object posture projection and multi-features fusion
CN103425960B (en) * 2012-05-25 2017-04-05 信帧机器人技术(北京)有限公司 Fast moving objects method for detecting in a kind of video
CN103093197B (en) * 2013-01-15 2016-05-11 信帧电子技术(北京)有限公司 A kind of identification hang oneself method for supervising and the system of behavior
CN103227963A (en) * 2013-03-20 2013-07-31 西交利物浦大学 Static surveillance video abstraction method based on video moving target detection and tracing
CN103778645B (en) * 2014-01-16 2017-02-15 南京航空航天大学 Circular target real-time tracking method based on images
CN103810499B (en) * 2014-02-25 2017-04-12 南昌航空大学 Application for detecting and tracking infrared weak object under complicated background
CN104599502B (en) * 2015-02-13 2017-01-25 重庆邮电大学 Method for traffic flow statistics based on video monitoring
CN105224912B (en) * 2015-08-31 2018-10-16 电子科技大学 Video pedestrian's detect and track method based on movable information and Track association
CN105678811B (en) * 2016-02-25 2019-04-02 上海大学 A kind of human body anomaly detection method based on motion detection

Also Published As

Publication number Publication date
CN107657626A (en) 2018-02-02

Similar Documents

Publication Publication Date Title
CN107657626B (en) Method and device for detecting moving target
CN107527009B (en) Remnant detection method based on YOLO target detection
Ragland et al. A survey on object detection, classification and tracking methods
CN108052917B (en) Method for automatically identifying illegal buildings based on new and old time phase change discovery
JP4429298B2 (en) Object number detection device and object number detection method
CN109727275B (en) Object detection method, device, system and computer readable storage medium
EP2798611A1 (en) Camera calibration using feature identification
Kim et al. Detecting regions of interest in dynamic scenes with camera motions
JP6679858B2 (en) Method and apparatus for detecting occlusion of an object
Dong et al. Visual UAV detection method with online feature classification
CN108846852B (en) Monitoring video abnormal event detection method based on multiple examples and time sequence
CN103093198A (en) Crowd density monitoring method and device
WO2020008667A1 (en) System and method for video anomaly detection
Lim et al. River flow lane detection and Kalman filtering‐based B‐spline lane tracking
CN109410248B (en) Flotation froth motion characteristic extraction method based on r-K algorithm
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
Verma et al. Analysis of moving object detection and tracking in video surveillance system
US20130027550A1 (en) Method and device for video surveillance
US20200394802A1 (en) Real-time object detection method for multiple camera images using frame segmentation and intelligent detection pool
Shammi et al. An automated way of vehicle theft detection in parking facilities by identifying moving vehicles in CCTV video stream
Tsesmelis et al. Tamper detection for active surveillance systems
JP4918615B2 (en) Object number detection device and object number detection method
CN111402185B (en) Image detection method and device
Wang et al. Multi-scale target detection in SAR image based on visual attention model
Michael et al. Fast change detection for camera-based surveillance systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant