CN111739059A - Moving object detection method and track tracking method based on frame difference method


Info

Publication number
CN111739059A
Authority
CN
China
Prior art keywords
image
vehicle
moving object
tracking
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010569106.9A
Other languages
Chinese (zh)
Inventor
史彦
刘柱
罗家毅
刘娟
代忠红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MAANSHAN TECHNICAL COLLEGE
Original Assignee
MAANSHAN TECHNICAL COLLEGE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MAANSHAN TECHNICAL COLLEGE filed Critical MAANSHAN TECHNICAL COLLEGE
Priority to CN202010569106.9A
Publication of CN111739059A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20036 Morphological image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a moving object detection method and a trajectory tracking method based on the frame difference method. The theoretical basis of the frame difference method is described first, followed by each processing step and the result it produces. The results show that the method detects moving objects well, tracks their motion trajectories, processes quickly, and meets real-time requirements.

Description

Moving object detection method and track tracking method based on frame difference method
Technical field:
The invention relates to a moving object detection method and a trajectory tracking method based on the frame difference method.
Background art:
Moving object detection and extraction aims to separate changing regions from the background of a video sequence and is a basic step of the whole processing chain. Effective segmentation of moving objects is very important for subsequent steps such as object classification, tracking and behavior understanding, because all later processing builds on this segmentation. However, motion detection is difficult because the background image changes in real time with factors such as weather and lighting.
Summary of the invention:
The invention aims to provide a moving object detection method and a trajectory tracking method based on the frame difference method that process quickly and meet real-time requirements.
The purpose of the invention is achieved by the following technical scheme. A moving object detection method and trajectory tracking method based on the frame difference method comprises the following steps (an illustrative code sketch of the whole pipeline follows step S3):
A. Detect and track the moving object with the frame difference method:
a. Image graying: the frames in the video are color images, so every pixel carries three color components and much information irrelevant to recognition, which makes the computation complex; the color image is therefore preprocessed into a grayscale image to speed up processing;
b. Two-frame subtraction: two consecutive frames of the video sequence are subtracted; because the vehicle moves, the detected area covers the vehicle's positions in both the earlier and the later frame, and a black region appears where the vehicle tail of the earlier frame is differenced against the background, the tail having moved on to its next position;
c. Image binarization: to track from the frame difference, the difference result must be binarized; a program then computes the center-of-gravity point of the vehicle tail and tracks it, and this track is the trajectory of the moving target; binarizing the detection result yields the final result;
d. Negation: the binary image is inverted; because the vehicle tail is the region to be extracted, the inverted result is more accurate;
e. Mathematical morphology: morphological operations are applied to the inverted image, and the internal holes of the moving target in the binary image are filled by mathematical morphology, namely an opening operation followed by a closing operation;
B. A moving object trajectory tracking method based on the area of the connected region of the vehicle body: the area of the connected region of the vehicle tail is matched and tracked, comprising the following steps:
S1. Connected-region analysis: the areas of the three connected regions are calculated separately, and regions with an obviously small area are noise points; after this judgement the center of gravity of the vehicle tail is tracked, and connecting the center-of-gravity points gives the trajectory of the moving object;
S2. Center-of-gravity marking: the center-of-gravity point of the vehicle tail is marked on the image with a black square, completing the tracking of the moving object;
S3. Tracking the center-of-gravity tracks of multiple vehicles: a frame is captured every two frames, the center-of-gravity points are recorded, and the center-of-gravity displacements are calculated.
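As an illustration only, and not the patent's reference implementation, the following Python sketch strings steps a-e and the connected-region tracking of part B together using OpenCV; the frame names, the 3×3 structuring element and the 50-pixel area filter are assumptions made for the example.

```python
import cv2

def detect_and_track(prev_frame, curr_frame, min_area=50):
    # a. graying: convert both color frames to grayscale
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    # b. two-frame subtraction
    diff = cv2.absdiff(g1, g0)
    # c. binarization with a global (Otsu) threshold; THRESH_BINARY_INV keeps the
    #    convention of the text that the moving (vehicle-tail) region comes out black
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # d. negation, so the vehicle tail becomes the white foreground to be extracted
    inverted = cv2.bitwise_not(binary)
    # e. mathematical morphology: opening first, then closing
    s = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    cleaned = cv2.morphologyEx(inverted, cv2.MORPH_OPEN, s)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, s)
    # B/S1-S2: connected-region analysis; small regions are treated as noise and the
    # centers of gravity (centroids) of the remaining regions are returned for marking
    n, _, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
    points = [tuple(centroids[i]) for i in range(1, n)
              if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return cleaned, points
```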
The invention is further improved in that: the graying of the image in step a adopts a weighted-average method: R, G and B are given different weights according to their importance or other criteria, and g is set to the weighted average of their values, i.e.:
g = (W_R·R + W_G·G + W_B·B) / (W_R + W_G + W_B)
where W_R, W_G and W_B are the weights of R, G and B; since the human eye is most sensitive to green, less sensitive to red and least sensitive to blue, the graying is generally computed as:
g = 0.299R + 0.587G + 0.114B.
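A minimal sketch of this weighted-average graying, assuming NumPy and an H×W×3 array with channels in R, G, B order (the function name is illustrative):

```python
import numpy as np

def to_gray(rgb):
    # W_R, W_G, W_B from the formula above; they already sum to 1
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).round().astype(np.uint8)
```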
The invention is further improved in that: the binarization of the image specifically adopts a global threshold method, namely the maximum between-class variance (Otsu) algorithm. In this algorithm the variance measures how uniformly the gray levels are distributed: the larger the between-class variance, the larger the difference between the two parts that make up the image; when part of the object is mistaken for background, or part of the background for object, this difference shrinks, so the segmentation that maximizes the between-class variance minimizes the probability of wrong segmentation. The algorithm analyzes the histogram of the input grayscale image and splits it into two parts so that the distance between them, i.e. the between-class variance, is maximal; the split point is the threshold sought.
Let the gray levels of the original grayscale image run from 0 to m-1, and let the number of pixels with gray level i be n_i; the total number of pixels is then:
N = n_0 + n_1 + ... + n_(m-1)
The probability of each gray level is:
p_i = n_i / N
The gray levels are divided by a threshold T into two classes: C_0 = {0, 1, 2, ..., T-1} and C_1 = {T, T+1, ..., m-1}. The probabilities and means of the two classes are as follows.
Probability of class C_0:
ω_0 = Σ_{i=0..T-1} p_i = ω(T)
Probability of class C_1:
ω_1 = Σ_{i=T..m-1} p_i = 1 - ω(T)
Mean of class C_0:
μ_0 = ( Σ_{i=0..T-1} i·p_i ) / ω_0 = μ(T) / ω(T)
Mean of class C_1:
μ_1 = ( Σ_{i=T..m-1} i·p_i ) / ω_1 = (μ - μ(T)) / (1 - ω(T))
where:
μ = Σ_{i=0..m-1} i·p_i
is the gray-level mean of the whole image, and
μ(T) = Σ_{i=0..T-1} i·p_i
is the cumulative gray-level mean up to the threshold T, so the gray-level mean of all samples is:
μ = ω_0·μ_0 + ω_1·μ_1
The between-class variance of C_0 and C_1 is found from:
σ²(T) = ω_0·(μ_0 - μ)² + ω_1·(μ_1 - μ)² = ω_0·ω_1·(μ_0 - μ_1)²
Varying T from 1 to m-1 and taking the value at which the expression above is maximal, i.e. T* = arg max σ²(T), gives the threshold T* to use; σ²(T) is called the threshold selection function.
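Written directly from the formulas above, a sketch of the maximum between-class variance threshold search could look as follows, assuming an 8-bit image (m = 256); in practice a library routine such as OpenCV's Otsu option would normally be used instead:

```python
import numpy as np

def otsu_threshold(gray):
    """gray: 2-D uint8 image; returns the T* that maximizes the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                       # p_i = n_i / N
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities ω_0, ω_1
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0   # class mean μ_0
        mu1 = (levels[t:] * p[t:]).sum() / w1   # class mean μ_1
        var_b = w0 * w1 * (mu0 - mu1) ** 2      # σ²(T) = ω_0·ω_1·(μ_0 - μ_1)²
        if var_b > best_var:
            best_var, best_t = var_b, t
    return best_t
```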
The invention is further improved in that: the opening and closing operations in step e are defined as follows.
Opening operation:
X ∘ S = (X ⊖ S) ⊕ S
Closing operation:
X • S = (X ⊕ S) ⊖ S
In binary image processing, X is the binary image and S the structuring element; opening erodes X by S and then dilates the result by S. The result of opening is the region that the structuring element S can reach when it is translated inside the image without protruding beyond it; the result of closing is the complement of the region that the reflection of S can reach when it is translated inside the background of the image without protruding beyond the background. Opening removes protruding details smaller than the structuring element; closing removes concave parts and fills small holes and gaps, making object edges smoother. To remove the noise produced by threshold segmentation, the image is opened first and then closed, which eliminates noise and fills small holes.
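A short sketch of this "open first, then close" clean-up; the 3×3 rectangular structuring element is an assumed choice, not one specified above:

```python
import cv2

def open_then_close(binary, ksize=3):
    s = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, s)   # erosion then dilation: removes small bright noise
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, s)    # dilation then erosion: fills small holes and gaps
```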
The invention has the beneficial effects that:
(1) The invention detects the motion trajectory of a moving object quickly and effectively.
(2) The invention completes the tracking of the moving object; the center-of-gravity coordinates of the moving target vehicle's tail show that the object moves at an essentially constant speed.
(3) The processing speed is high and meets real-time requirements.
(4) The tracking of multiple vehicles can be completed.
Detailed description of the embodiments:
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below; the described embodiments are only some, not all, of the embodiments of the invention. Elements and features described in one embodiment may be combined with elements and features shown in one or more other embodiments. For clarity, the description omits components and processes that are not relevant to the invention and are known to those of ordinary skill in the art. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of these embodiments fall within the scope of the invention.
The time difference method is also called the frame difference method because the motion region of the image is extracted from the difference between adjacent frames of the image sequence. Images of the same background at different moments are differenced; background parts whose gray levels do not change cancel out, and because a moving object occupies different positions in adjacent frames, subtracting the two frames reveals it.
In a continuous image sequence, a pixel-wise, thresholded time difference is taken between adjacent frames to extract the motion region, hence the name inter-frame difference. Because the system error and thermal noise are nearly the same in adjacent frames, differencing removes their influence well; the method also suppresses slow illumination changes and the like, i.e. it adapts well to dynamic environments. The inter-frame difference method is therefore simple, fast, strongly adaptive, and effective and stable in detection.
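A minimal illustration of this pixel-and-threshold inter-frame difference; the NumPy formulation and the threshold value T = 25 are assumptions made for the example, not values from the patent:

```python
import numpy as np

def frame_difference(prev_gray, curr_gray, T=25):
    # D_k(x, y) = 1 where |f_k(x, y) - f_{k-1}(x, y)| > T, else 0
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return (diff > T).astype(np.uint8)
```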
A moving object detection method and trajectory tracking method based on the frame difference method comprises the following steps:
A. Detect and track the moving object with the frame difference method:
a. Image graying: the frames in the video are color images, so every pixel carries three color components and much information irrelevant to recognition, which makes the computation complex; the color image is therefore preprocessed into a grayscale image to speed up processing;
b. Two-frame subtraction: two consecutive frames of the video sequence are subtracted; because the vehicle moves, the detected area covers the vehicle's positions in both the earlier and the later frame, and a black region appears where the vehicle tail of the earlier frame is differenced against the background, the tail having moved on to its next position;
c. Image binarization: to track from the frame difference, the difference result must be binarized; a program then computes the center-of-gravity point of the vehicle tail and tracks it, and this track is the trajectory of the moving target; binarizing the detection result yields the final result;
d. Negation: the binary image is inverted; because the vehicle tail is the region to be extracted, the inverted result is more accurate;
e. Mathematical morphology: morphological operations are applied to the inverted image, and the internal holes of the moving target in the binary image are filled by mathematical morphology, namely an opening operation followed by a closing operation;
B. A moving object trajectory tracking method based on the area of the connected region of the vehicle body: the area of the connected region of the vehicle tail is matched and tracked; the center of gravity of the vehicle tail is tracked with the tail as the reference, and the tail pattern is compared and adjusted. The method comprises the following steps:
S1. Connected-region analysis: the areas of the three connected regions are calculated separately, and regions with an obviously small area are noise points; after this judgement the center of gravity of the vehicle tail is tracked, and connecting the center-of-gravity points gives the trajectory of the moving object;
S2. Center-of-gravity marking: the center-of-gravity point of the vehicle tail is marked on the image with a black square, completing the tracking of the moving object.
Assuming a single vehicle in a video segment is tracked, the following table gives the statistics of the tracked target's trajectory points:
[Table of trajectory-point statistics for the single tracked vehicle; reproduced only as an image in the original publication.]
S3. Tracking the center-of-gravity tracks of multiple vehicles: a frame is captured every two frames, the center-of-gravity points are recorded, and the center-of-gravity displacements are calculated. The following table gives the statistics of the tracked targets' trajectory points under the frame difference method:
Serial number    | 1          | 2          | 3
Number of frames | 200        | 202        | 204
Vehicle 1 (X, Y) | (66, 32)   | (92, 48)   | (119, 66)
Vehicle 2 (X, Y) | (78, 112)  | (116, 135) | (149, 156)
Vehicle 3 (X, Y) | (203, 127) | (235, 149) | (265, 163)
Vehicle 1 is the rearmost vehicle, vehicle 2 the middle vehicle and vehicle 3 the foremost vehicle. From the table, vehicle 1 moves about 30 and 31 pixels between successive samples two frames apart, vehicle 2 moves 44 and 39 pixels, and vehicle 3 moves 38 and 33 pixels.
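These displacements correspond to the Euclidean distance between a vehicle's center-of-gravity points two frames apart; the helper below is an illustrative sketch, not from the patent, and reproduces the tabulated values for vehicle 2 exactly and the others to within about one pixel of rounding:

```python
import math

def displacements(points):
    """points: list of (x, y) centroids sampled every two frames."""
    return [round(math.dist(points[i], points[i + 1])) for i in range(len(points) - 1)]

# Vehicle 2 from the table: (78, 112) -> (116, 135) -> (149, 156)
print(displacements([(78, 112), (116, 135), (149, 156)]))   # [44, 39]
```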
The invention detects and tracks moving objects with an algorithm based on the frame difference method. The theoretical basis of the frame difference method is described first, followed by each processing step and the result it produces. The results show that the method detects moving objects well, tracks their motion trajectories, processes quickly and meets real-time requirements.
Finally, it should be noted that: although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, devices, means, methods, or steps.

Claims (4)

1. A moving object detection method and trajectory tracking method based on the frame difference method, characterized in that the method comprises the following steps:
A. Detect and track the moving object with the frame difference method:
a. Image graying: the frames in the video are color images, so every pixel carries three color components and much information irrelevant to recognition, which makes the computation complex; the color image is therefore preprocessed into a grayscale image to speed up processing;
b. Two-frame subtraction: two consecutive frames of the video sequence are subtracted; because the vehicle moves, the detected area covers the vehicle's positions in both the earlier and the later frame, and a black region appears where the vehicle tail of the earlier frame is differenced against the background, the tail having moved on to its next position;
c. Image binarization: to track from the frame difference, the difference result must be binarized; a program then computes the center-of-gravity point of the vehicle tail and tracks it, and this track is the trajectory of the moving target; binarizing the detection result yields the final result;
d. Negation: the binary image is inverted; because the vehicle tail is the region to be extracted, the inverted result is more accurate;
e. Mathematical morphology: morphological operations are applied to the inverted image, and the internal holes of the moving target in the binary image are filled by mathematical morphology, namely an opening operation followed by a closing operation;
B. A moving object trajectory tracking method based on the area of the connected region of the vehicle body: the area of the connected region of the vehicle tail is matched and tracked, comprising the following steps:
S1. Connected-region analysis: the areas of the three connected regions are calculated separately, and regions with an obviously small area are noise points; after this judgement the center of gravity of the vehicle tail is tracked, and connecting the center-of-gravity points gives the trajectory of the moving object;
S2. Center-of-gravity marking: the center-of-gravity point of the vehicle tail is marked on the image with a black square, completing the tracking of the moving object;
S3. Tracking the center-of-gravity tracks of multiple vehicles: a frame is captured every two frames, the center-of-gravity points are recorded, and the center-of-gravity displacements are calculated.
2. The moving object detection method and trajectory tracking method based on the frame difference method as claimed in claim 1, characterized in that the graying of the image in step a adopts a weighted-average method: R, G and B are given different weights according to their importance or other criteria, and g is set to the weighted average of their values, i.e.:
g = (W_R·R + W_G·G + W_B·B) / (W_R + W_G + W_B)
where W_R, W_G and W_B are the weights of R, G and B; since the human eye is most sensitive to green, less sensitive to red and least sensitive to blue, the graying is generally computed as:
g = 0.299R + 0.587G + 0.114B.
3. The moving object detection method and trajectory tracking method based on the frame difference method as claimed in claim 1, characterized in that the binarization of the image specifically adopts a global threshold method, namely the maximum between-class variance (Otsu) algorithm. In this algorithm the variance measures how uniformly the gray levels are distributed: the larger the between-class variance, the larger the difference between the two parts that make up the image; when part of the object is mistaken for background, or part of the background for object, this difference shrinks, so the segmentation that maximizes the between-class variance minimizes the probability of wrong segmentation. The algorithm analyzes the histogram of the input grayscale image and splits it into two parts so that the distance between them, i.e. the between-class variance, is maximal; the split point is the threshold sought.
Let the gray levels of the original grayscale image run from 0 to m-1, and let the number of pixels with gray level i be n_i; the total number of pixels is then:
N = n_0 + n_1 + ... + n_(m-1)
The probability of each gray level is:
p_i = n_i / N
The gray levels are divided by a threshold T into two classes: C_0 = {0, 1, 2, ..., T-1} and C_1 = {T, T+1, ..., m-1}. The probabilities and means of the two classes are as follows.
Probability of class C_0:
ω_0 = Σ_{i=0..T-1} p_i = ω(T)
Probability of class C_1:
ω_1 = Σ_{i=T..m-1} p_i = 1 - ω(T)
Mean of class C_0:
μ_0 = ( Σ_{i=0..T-1} i·p_i ) / ω_0 = μ(T) / ω(T)
Mean of class C_1:
μ_1 = ( Σ_{i=T..m-1} i·p_i ) / ω_1 = (μ - μ(T)) / (1 - ω(T))
where:
μ = Σ_{i=0..m-1} i·p_i
is the gray-level mean of the whole image, and
μ(T) = Σ_{i=0..T-1} i·p_i
is the cumulative gray-level mean up to the threshold T, so the gray-level mean of all samples is:
μ = ω_0·μ_0 + ω_1·μ_1
The between-class variance of C_0 and C_1 is found from:
σ²(T) = ω_0·(μ_0 - μ)² + ω_1·(μ_1 - μ)² = ω_0·ω_1·(μ_0 - μ_1)²
Varying T from 1 to m-1 and taking the value at which the expression above is maximal, i.e. T* = arg max σ²(T), gives the threshold T* to use; σ²(T) is called the threshold selection function.
4. The moving object detection method and trajectory tracking method based on the frame difference method as claimed in claim 1, characterized in that the opening and closing operations in step e are defined as follows.
Opening operation:
X ∘ S = (X ⊖ S) ⊕ S
Closing operation:
X • S = (X ⊕ S) ⊖ S
In binary image processing, X is the binary image and S the structuring element; opening erodes X by S and then dilates the result by S. The result of opening is the region that the structuring element S can reach when it is translated inside the image without protruding beyond it; the result of closing is the complement of the region that the reflection of S can reach when it is translated inside the background of the image without protruding beyond the background. Opening removes protruding details smaller than the structuring element; closing removes concave parts and fills small holes and gaps, making object edges smoother. To remove the noise produced by threshold segmentation, the image is opened first and then closed, which eliminates noise and fills small holes.
CN202010569106.9A 2020-06-20 2020-06-20 Moving object detection method and track tracking method based on frame difference method Pending CN111739059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010569106.9A CN111739059A (en) 2020-06-20 2020-06-20 Moving object detection method and track tracking method based on frame difference method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010569106.9A CN111739059A (en) 2020-06-20 2020-06-20 Moving object detection method and track tracking method based on frame difference method

Publications (1)

Publication Number Publication Date
CN111739059A true CN111739059A (en) 2020-10-02

Family

ID=72651949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010569106.9A Pending CN111739059A (en) 2020-06-20 2020-06-20 Moving object detection method and track tracking method based on frame difference method

Country Status (1)

Country Link
CN (1) CN111739059A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2616632A (en) * 2022-03-15 2023-09-20 Mercedes Benz Group Ag A method for tracking an object in the surroundings by an assistance system as well as a corresponding assistance system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507221A (en) * 2017-07-28 2017-12-22 天津大学 With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model
CN109102523A (en) * 2018-07-13 2018-12-28 南京理工大学 A kind of moving object detection and tracking
CN109460764A (en) * 2018-11-08 2019-03-12 中南大学 A kind of satellite video ship monitoring method of combination brightness and improvement frame differential method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507221A (en) * 2017-07-28 2017-12-22 天津大学 With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model
CN109102523A (en) * 2018-07-13 2018-12-28 南京理工大学 A kind of moving object detection and tracking
CN109460764A (en) * 2018-11-08 2019-03-12 中南大学 A kind of satellite video ship monitoring method of combination brightness and improvement frame differential method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
程娟: "复杂背景下运动目标识别算法研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2616632A (en) * 2022-03-15 2023-09-20 Mercedes Benz Group Ag A method for tracking an object in the surroundings by an assistance system as well as a corresponding assistance system

Similar Documents

Publication Publication Date Title
CN105427626B (en) A kind of statistical method of traffic flow based on video analysis
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
CN108154118A (en) A kind of target detection system and method based on adaptive combined filter with multistage detection
CN108388885A (en) The identification in real time of more people's features towards large-scale live scene and automatic screenshot method
CN104835179B (en) Based on the adaptive improvement ViBe background modeling methods of dynamic background
CN103258332B (en) A kind of detection method of the moving target of resisting illumination variation
WO2020248515A1 (en) Vehicle and pedestrian detection and recognition method combining inter-frame difference and bayes classifier
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN104598929A (en) HOG (Histograms of Oriented Gradients) type quick feature extracting method
Soeleman et al. Adaptive threshold for background subtraction in moving object detection using Fuzzy C-Means clustering
CN105184771A (en) Adaptive moving target detection system and detection method
Iraei et al. Object tracking with occlusion handling using mean shift, Kalman filter and edge histogram
CN111739059A (en) Moving object detection method and track tracking method based on frame difference method
CN107123132A (en) A kind of moving target detecting method of Statistical background model
CN109978916A (en) Vibe moving target detecting method based on gray level image characteristic matching
CN108230334B (en) High-concentration wind-blown sand image segmentation method based on gray threshold
CN107301655B (en) Video moving target detection method based on background modeling
Najafzadeh et al. Object tracking using Kalman filter with adaptive sampled histogram
CN105139358A (en) Video raindrop removing method and system based on combination of morphology and fuzzy C clustering
CN106446832B (en) Video-based pedestrian real-time detection method
CN111724415A (en) Video image-based multi-target motion detection and tracking method in fixed scene
CN111724319A (en) Image processing method in video monitoring system
CN111724416A (en) Moving object detection method and trajectory tracking method based on background subtraction
CN115082517B (en) Horse racing scene multi-target tracking method based on data enhancement
CN111862152B (en) Moving target detection method based on inter-frame difference and super-pixel segmentation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20201002