CN111724416A - Moving object detection method and trajectory tracking method based on background subtraction - Google Patents

Moving object detection method and trajectory tracking method based on background subtraction

Info

Publication number
CN111724416A
Authority
CN
China
Prior art keywords
image
background
moving object
value
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010569101.6A
Other languages
Chinese (zh)
Inventor
史彦
缸明义
罗家毅
刘柱
宁平华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MAANSHAN TECHNICAL COLLEGE
Original Assignee
MAANSHAN TECHNICAL COLLEGE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MAANSHAN TECHNICAL COLLEGE filed Critical MAANSHAN TECHNICAL COLLEGE
Priority to CN202010569101.6A priority Critical patent/CN111724416A/en
Publication of CN111724416A publication Critical patent/CN111724416A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a moving object detection method and a trajectory tracking method based on background subtraction. The method first sets out the basic concept of background subtraction and the way it is used in an actual program, then tracks the moving target with a method based on the connected region of the vehicle body, gives the processing procedure in detail together with the result of each step, and finally obtains the detection result and the motion trajectory of the object. Experiments show that the method has good reliability.

Description

Moving object detection method and trajectory tracking method based on background subtraction
Technical field:
the invention relates to a moving object detection method and a track tracking method based on background subtraction.
Background art:
the detection and extraction of moving objects aims to extract the changing region from the background of a video sequence, and is a basic step of the whole processing pipeline. Effective segmentation of moving objects is very important for subsequent steps such as object classification, tracking and behavior understanding, because that later processing is carried out on the basis of this segmentation. However, motion detection is a difficult task because the background image changes in real time due to factors such as weather and lighting changes.
Summary of the invention:
the invention aims to provide a moving object detection method and a trajectory tracking method based on background subtraction that are fast, effective and offer good real-time performance and accuracy.
The purpose of the invention is achieved by the following technical scheme: a moving object detection method and a trajectory tracking method based on background subtraction are disclosed, which comprise the following steps:
A. detecting and tracking the moving object by adopting a background difference method:
a. graying of the image: because the frames in the video are in color, more factors have to be considered in processing; every pixel has three color components and much information irrelevant to identification, which makes computation more complex, so the color image is preprocessed into a grayscale image and processing is speeded up;
b. background reconstruction: the background extraction method applied to moving object detection is an averaging method; the background obtained in this way adapts to disturbance contained in the video and is not affected by changes in lighting, and the well-extracted background facilitates the subsequent detection work;
c. subtraction to obtain a detection result: once the background is obtained, the moving vehicle can be detected by background subtraction; only the moving target remains after the background is removed, so the result obtained is ideal;
d. binarization of the image: after the detection result is obtained, it is binarized to give the final result. Because uneven camera placement, shooting angle and illumination produce uneven brightness in the image, and all of these factors affect binary segmentation, the binarization algorithm is critical in the system and its quality has a great influence on the subsequent work; the binarization segmentation method is specifically a global threshold method, which determines a threshold from the histogram or the spatial gray-level distribution of the image and converts the grayscale image into a binary image according to that threshold;
e. application of mathematical morphology: after the binary image is obtained, because it contains noise such as small holes, morphological operations are applied to it; the holes inside the moving object in the binary image are filled using mathematical morphology, namely an opening operation followed by a closing operation;
B. a moving object trajectory tracking method based on the connected region of the vehicle body: the method tracks the moving target according to the area relation of the connected regions of the object, and comprises the following steps:
S1, connected region analysis algorithm: the area of the connected region is tracked, and only the center of gravity of that area is followed; when detecting the trajectory of a moving vehicle, connected-region analysis is first performed on the morphologically processed binary image to find the center of gravity of the vehicle; the motion trajectory of the center of gravity is the motion trajectory of the vehicle, and the displacement between two successive centers of gravity, which is the number of pixels the object moves between two frames, is calculated at the same time so that the instantaneous speed of the object can be obtained;
S2, center-of-gravity labeling: after the center of gravity of the connected region is obtained, return to the original image and mark it;
S3, tracking the center-of-gravity trajectory of a single vehicle: collect the coordinates of the center of gravity of the moving target and calculate the displacement of the center of gravity;
S4, tracking the center-of-gravity trajectories of multiple vehicles: capture an image every two frames, perform center-of-gravity statistics and calculate the center-of-gravity displacement.
The invention is further improved in that the graying of the image in step a adopts a weighted average method: R, G and B are given different weights according to importance or other criteria, and g is set equal to the weighted average of their values, i.e.:
g = W_R · R + W_G · G + W_B · B
where W_R, W_G and W_B are the weights of R, G and B respectively; since the human eye is most sensitive to green, less sensitive to red and least sensitive to blue, the following formula is generally used:
g = 0.299R + 0.587G + 0.114B.
the invention is further improved in that: the averaging method in the step b specifically comprises the following steps:
the background image B(x, y) is obtained by averaging the image sequence I_0(x, y), I_1(x, y), ..., I_{N-1}(x, y), as shown in the following formula:
B(x, y) = \frac{1}{N} \sum_{i=0}^{N-1} I_i(x, y)
the averaging method makes use of the fact that the pixel value at a given point in the image sequence follows a normal distribution: the values that change little relative to one another occur most often and belong to the background, so averaging yields the background pixel value and an ideal background can be obtained; the larger the value of N, the smaller the influence of a moving object on the brightness of that point, and the more frames are averaged, the cleaner the background.
The invention is further improved in that: in step c, the average background image of 100 frames is subtracted from the current frame to detect the moving object.
The invention is further improved in that: the binarization processing of the image specifically adopts a global threshold value method, and the global threshold value method is specifically a maximum inter-class variance algorithm; in the maximum between-class variance algorithm, variance is a measure of the uniformity of gray distribution, the larger the variance value is, the larger the difference between two parts forming an image is, and when part of objects are mistaken for backgrounds or part of the backgrounds are mistaken for objects, the difference between the two parts is reduced, so that the segmentation with the maximum between-class variance means that the probability of wrong segmentation is minimum; the algorithm analyzes the histogram of the input gray image and divides the histogram into two parts, so that the distance between the two parts reaches the maximum value, namely the inter-class variance reaches the maximum value, and the dividing point is the obtained threshold value;
let the gray levels of the original grayscale image be 0, 1, ..., m-1 and the number of pixels with gray level i be n_i; then the total number of pixels in the image is
N = n_0 + n_1 + ... + n_{m-1}
and the probability of each gray level is
p_i = n_i / N
The threshold T divides the gray levels into two groups C_0 = {0, 1, ..., T-1} and C_1 = {T, T+1, ..., m-1}; the probability and mean of each group are as follows.
Probability of class C_0:
\omega_0 = \sum_{i=0}^{T-1} p_i
Probability of class C_1:
\omega_1 = \sum_{i=T}^{m-1} p_i = 1 - \omega_0
Mean of class C_0:
\mu_0 = \sum_{i=0}^{T-1} i p_i / \omega_0 = \mu(T) / \omega_0
Mean of class C_1:
\mu_1 = \sum_{i=T}^{m-1} i p_i / \omega_1 = (\mu - \mu(T)) / (1 - \omega_0)
where
\mu = \sum_{i=0}^{m-1} i p_i
is the gray-level mean of the whole image and
\mu(T) = \sum_{i=0}^{T-1} i p_i
is the gray-level mean up to the threshold T, so the gray-level mean of all samples satisfies
\mu = \omega_0 \mu_0 + \omega_1 \mu_1
The between-class variance of C_0 and C_1 is then obtained from
\sigma^2(T) = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2
Varying T from 1 to m-1 and finding the T at which this expression reaches its maximum, i.e. T* = argmax_T \sigma^2(T), gives the threshold T*; \sigma^2(T) is called the threshold selection function.
The invention is further improved in that: and e, performing opening operation and closing operation in the step e, wherein the opening operation and the closing operation are defined as follows:
Opening operation:
X \circ S = (X \ominus S) \oplus S
Closing operation:
X \bullet S = (X \oplus S) \ominus S
in binary image processing, X is the binary image and S is the structuring element; opening X by S means eroding X by S and then dilating the result by S, and the result of the opening is the region that the structuring element S can reach when it is translated inside the image without overflowing it; the result of the closing is the complement of the region that the reflection of S can reach when it is translated inside the background of the image without overflowing the background; the opening removes protruding details smaller than the structuring element, while the closing removes concave parts and fills small holes and gaps so that the object edge becomes smoother; to remove the noise produced by threshold segmentation, the image is first opened and then closed, which eliminates noise and fills small holes.
The invention has the beneficial effects that:
(1) the invention can effectively detect moving objects, detect the complete contour of a moving target, and handle multiple moving objects.
(2) The invention effectively suppresses noise in the video background and handles backgrounds containing disturbance well, so it has good noise resistance.
(3) The invention effectively copes with illumination that changes over time.
(4) The tracking algorithm based on the area of the connected region of the vehicle body effectively tracks the motion trajectory of the vehicle by analyzing the area of the connected region; experiments demonstrate its effectiveness and good real-time performance.
Detailed description of the embodiments:
in order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are a part of the embodiments of the present invention, but not all of the embodiments. Elements and features described in one embodiment of the invention may be combined with elements and features shown in one or more other embodiments. It should be noted that the illustration omits illustration and description of components and processes not relevant to the present invention that are known to those of ordinary skill in the art for clarity purposes. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
Background subtraction is one of the most common methods for segmenting moving objects. The background difference method detects a moving target by subtracting a reference background model from the image sequence, binarizing the difference image and then performing some subsequent operations to obtain the result. It is suited to a static camera: a background model is built for the static background, and regions whose brightness changes greatly are found by comparing the current frame with the background model; these regions are taken as the foreground, i.e. the moving target.
The method is computationally fast and gives a complete, accurate description of the moving object region. In practice, an algorithm is needed to update the background model dynamically so that it adapts to changes in the environment.
The basic formula is as follows:
D_k(x, y) = f_k(x, y) - B_{k-1}(x, y) + 128
R_k(x, y) = 1 if |D_k(x, y) - 128| > T, and 0 otherwise
where B_{k-1} is the background image, f_k the current frame, D_k the difference image, R_k the resulting binary image and T the segmentation threshold.
In the actual program, 128 is added to the difference between the current frame and the background frame, so that the background is removed while the information in the difference image can still be displayed.
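As an illustration only, a minimal Python/NumPy sketch of this background-difference step is given below; the function name and the threshold value T = 30 are assumptions and not part of the original disclosure.

import numpy as np

def background_difference(frame_gray, background, T=30):
    # D_k = f_k - B_{k-1} + 128, computed in a wider type to avoid uint8 wrap-around
    diff = frame_gray.astype(np.int16) - background.astype(np.int16) + 128
    # Pixels whose difference deviates from 128 by more than T are labelled foreground (255)
    return (np.abs(diff - 128) > T).astype(np.uint8) * 255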
When background subtraction is applied, building the background model is crucial. The first frame cannot simply be used as the background, because it may already contain vehicles; many researchers are therefore studying different background models to reduce the influence of dynamic scene changes on moving object detection, achieve a better detection effect and make the algorithm more robust. The invention uses a statistical method to build the background model and updates it dynamically in real time, so a clean background is obtained and the moving object is detected better. Experiments prove that the method is fast and effective and has good real-time performance and accuracy.
A moving object detection method and a trajectory tracking method based on background subtraction comprise the following steps:
A. detecting and tracking the moving object by adopting a background difference method:
a. graying processing of an image:
because the frames in the video are in color, more factors have to be considered in processing: every pixel has three different color components and much information irrelevant to identification, which makes the computation more complex. The color image is therefore preprocessed into a grayscale image, which speeds up processing.
In the RGB model, if R = G = B, the color is a gray shade, and the common value of R, G and B is called the gray value, denoted here by g. The process of converting color into gray is called graying, and the graying method is as follows:
weighted average method: r, G, B are given different weights depending on importance or other criteria and g is made equal to the weighted average of their values, i.e.:
g = W_R · R + W_G · G + W_B · B
where W_R, W_G and W_B are the weights of R, G and B respectively. Since the human eye is most sensitive to green, second most sensitive to red and least sensitive to blue, the following formula is generally used:
g = 0.299R + 0.587G + 0.114B
the final purpose of processing the image is to highlight the moving object and remove the background, so different processing can be applied to different images; for example, when the moving object in the scene is red, only the R value needs to be considered. This not only saves time but also prepares for the next detection step.
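A minimal sketch of the weighted-average graying step, assuming BGR frames as read by OpenCV; cv2.cvtColor with COLOR_BGR2GRAY applies the same 0.299/0.587/0.114 weights and could be used instead.

import cv2
import numpy as np

def to_gray(frame_bgr):
    # g = 0.299 R + 0.587 G + 0.114 B (OpenCV stores channels in B, G, R order)
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)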
b. Background reconstruction: if the first frame is used as the background and it contains a moving object, the detection result is strongly affected, so the background is generally not taken from the first frame of the image sequence but obtained from statistics over an initial period of time. The background extraction method applied to moving object detection here is an averaging method; the background obtained in this way adapts to disturbance contained in the video and is not affected by lighting changes, and the well-extracted background facilitates the subsequent detection work;
the averaging method specifically comprises the following steps:
the background image B(x, y) is obtained by averaging the image sequence I_0(x, y), I_1(x, y), ..., I_{N-1}(x, y), as shown in the following formula:
B(x, y) = \frac{1}{N} \sum_{i=0}^{N-1} I_i(x, y)
the averaging method makes use of the fact that the pixel value at a given point in the image sequence follows a normal distribution: the values that change little (those near the mean) occur most often and belong to the background, so averaging gives the background pixel value and an ideal background. Generally, when there are many moving objects, 100 frames or more are often needed to obtain the ideal background.
After analysis and comparison, the invention extracts the background with the averaging method and updates it continuously. A large number of experiments were performed using the first 100 frames, during which 11 vehicles passed in 8 s. From the formula above, the larger the value of N, the smaller the influence of a moving object on the brightness of a point. Averages over the first 5, 10, 20, 50 and 100 frames were computed respectively, and the experimental results show that the more frames are averaged, the cleaner the background. The background obtained in this way therefore adapts to disturbance in the video and is not affected by lighting changes; the well-extracted background facilitates the subsequent detection work.
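A sketch of background reconstruction by averaging the first N frames, under the assumption that the video is read with OpenCV; the file path and N = 100 follow the experiment described above but are otherwise illustrative.

import cv2
import numpy as np

def average_background(video_path, n_frames=100):
    cap = cv2.VideoCapture(video_path)
    acc, count = None, 0
    while count < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        acc = gray if acc is None else acc + gray
        count += 1
    cap.release()
    if acc is None:
        raise ValueError("no frames could be read from " + video_path)
    # B(x, y) = (1/N) * sum of I_i(x, y) over the first N frames
    return (acc / count).astype(np.uint8)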
c. Subtraction to obtain the detection result: once the background is obtained, the moving vehicle can be detected by background subtraction; only the moving target remains after the background is removed, so the result is ideal. The invention detects the moving object by subtracting, from the current frame, first the 5-frame average background image and then the 100-frame average background image; the displayed results show that the latter detects better. Comparing the results, the cleaner the background extraction, the better the detail contour of the detection.
d. Binarization of the image: after the detection result is obtained, it is binarized to give the final result. Because uneven camera placement, shooting angle and illumination produce uneven brightness in the image, and all of these factors affect binary segmentation, the binarization algorithm is critical in the system and its quality greatly influences the subsequent work.
The image binarization segmentation method is a global threshold value method, which determines a threshold value according to the histogram or the spatial distribution of the gray level of the image, and completes the conversion from the gray level image to the binarization image according to the threshold value.
The representative algorithm of the global threshold method is the maximum between-class variance algorithm; the maximum between-class variance threshold is also known as the Otsu threshold and is derived from the least-squares principle, so it gives good results.
In the maximum between-class variance algorithm, the variance measures how uniform the gray distribution is: the larger the variance, the larger the difference between the two parts that make up the image, and when part of the object is mistaken for background or part of the background is mistaken for the object, the difference between the two parts decreases, so the segmentation that maximizes the between-class variance is the one with the smallest probability of mis-segmentation. The algorithm analyzes the histogram of the input grayscale image and splits it into two parts so that the distance between them (i.e. the between-class variance) reaches its maximum; the splitting point is the resulting threshold.
Let the gray levels of the original grayscale image be 0, 1, ..., m-1 and the number of pixels with gray level i be n_i; then the total number of pixels in the image is
N = n_0 + n_1 + ... + n_{m-1}
and the probability of each gray level is
p_i = n_i / N
The threshold T divides the gray levels into two groups C_0 = {0, 1, ..., T-1} and C_1 = {T, T+1, ..., m-1}; the probability and mean of each group are as follows.
Probability of class C_0:
\omega_0 = \sum_{i=0}^{T-1} p_i
Probability of class C_1:
\omega_1 = \sum_{i=T}^{m-1} p_i = 1 - \omega_0
Mean of class C_0:
\mu_0 = \sum_{i=0}^{T-1} i p_i / \omega_0 = \mu(T) / \omega_0
Mean of class C_1:
\mu_1 = \sum_{i=T}^{m-1} i p_i / \omega_1 = (\mu - \mu(T)) / (1 - \omega_0)
where
\mu = \sum_{i=0}^{m-1} i p_i
is the gray-level mean of the whole image and
\mu(T) = \sum_{i=0}^{T-1} i p_i
is the gray-level mean up to the threshold T, so the gray-level mean of all samples satisfies
\mu = \omega_0 \mu_0 + \omega_1 \mu_1
The between-class variance of C_0 and C_1 is then obtained from
\sigma^2(T) = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2
Varying T from 1 to m-1 and finding the T at which this expression reaches its maximum, i.e. T* = argmax_T \sigma^2(T), gives the threshold T*; \sigma^2(T) is called the threshold selection function.
Because the method adapts well, the invention uses the maximum between-class variance method for binarization; the binary segmentation is carried out with the background shown in black, so the moving target stands out more and subsequent processing is made easier.
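For illustration, the sketch below computes the maximum between-class variance (Otsu) threshold directly from the histogram, mirroring the formulas above; in practice cv2.threshold with the THRESH_OTSU flag gives the same threshold.

import numpy as np

def otsu_threshold(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                      # p_i = n_i / N
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities omega_0, omega_1
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # sigma^2(T) = omega_0*omega_1*(mu_0 - mu_1)^2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t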
e. Application of mathematical morphology: after the processing above, holes appear where points inside the target are close to the background image, and a complete result can only be obtained by further removing noise points. The invention fills the holes inside the moving object in the binary image using mathematical morphology, and tests show that this gives a good result.
Mathematical morphology is made up of a set of morphological algebraic operators with four basic operations: dilation, erosion, opening and closing, each of which has its own characteristics in binary and grayscale images. From these basic operations, various practical mathematical-morphology algorithms can be derived and combined and used to analyze and process image shape and structure, including image segmentation, feature extraction and boundary detection. The mathematical-morphology method collects information about the image with a probe called a structuring element; as the probe moves through the image, the relationships between the parts of the image can be examined and its structural characteristics understood.
The invention applies an opening operation followed by a closing operation, defined as follows:
Opening operation:
X \circ S = (X \ominus S) \oplus S
Closing operation:
X \bullet S = (X \oplus S) \ominus S
opening X by S means that X is eroded by S and then dilated by S, and the result of the opening is the region that the structuring element S can reach when it is translated inside the image without overflowing it. The result of the closing is the complement of the region that the reflection of S can reach when it is translated inside the background of the image without overflowing the background. Obviously, the opening removes protruding details smaller than the structuring element, while the closing removes concave parts and fills small holes and gaps so that the object edge becomes smoother. To remove the noise produced by threshold segmentation, the image can first be opened and then closed, which eliminates noise and fills small holes.
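A brief sketch of the "open, then close" clean-up with OpenCV; the 3x3 rectangular structuring element is an assumption, since the original text does not specify a kernel size.

import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

def clean_mask(binary_mask):
    opened = cv2.morphologyEx(binary_mask, cv2.MORPH_OPEN, kernel)   # erode then dilate: removes small noise
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)         # dilate then erode: fills small holes and gaps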
Connected region analysis:
B. The detection of the moving object has been completed above; the following deals with trajectory tracking of the moving object. The method adopted by the invention tracks the moving target according to the area relation of the connected regions of the object, and comprises the following steps:
S1. For background subtraction, the connected region is the whole vehicle, and the essence of the algorithm is to track the area of the connected region.
For trajectory tracking of the moving object, the center of gravity of the connected region is tracked. The experimental results here take a vehicle as an example: when detecting the trajectory of a moving vehicle, connected-region analysis is first performed on the morphologically processed binary image to find the center of gravity of the vehicle; the motion trajectory of this center of gravity is the motion trajectory of the vehicle, and the displacement between two successive centers of gravity, which is the number of pixels the object moves between two frames, can also be calculated so as to obtain the instantaneous speed of the object.
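As a sketch of this connected-region analysis, under the same illustrative assumptions as the snippets above, the following finds each region's centroid in the cleaned binary mask and the pixel displacement of a centroid between two frames; the minimum-area filter of 200 pixels is an assumption used only to discard noise blobs.

import cv2
import numpy as np

def centroids(mask, min_area=200):
    # Label 8-connected regions; label 0 is the background, tiny regions are treated as noise
    n, labels, stats, cents = cv2.connectedComponentsWithStats(mask, connectivity=8)
    return [tuple(cents[i]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

def displacement(c_prev, c_curr):
    # Pixel distance a centroid moves between two frames; dividing by the frame interval gives the instantaneous speed
    return float(np.hypot(c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]))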
S2. Center-of-gravity labeling: after the center of gravity of the connected region is obtained, return to the original image and mark it; once the center of gravity is marked, computing the centers of gravity over a series of consecutive video frames gives the motion trajectory of the target.
S3. Tracking the center-of-gravity trajectory of a single vehicle: collect the coordinates of the center of gravity of the moving target and compute the displacement of the center of gravity. Taking the target center-of-gravity tracking of one video sequence as an example, the table below gives the coordinates of the center of gravity of the moving target and the displacement computed between successive centers of gravity:
(Table: center-of-gravity coordinates of the moving target and displacement between successive centers of gravity; reproduced as images in the original filing.)
S4. Tracking the center-of-gravity trajectories of multiple vehicles: assuming that the image contains several vehicles, an image is captured every two frames, namely frames 200, 202 and 204, and center-of-gravity statistics are computed in the same way as for a single vehicle; the table below lists the tracked trajectory points.
Serial number        1           2           3
Frame number         200         202         204
Vehicle 1 (X, Y)     (76, 37)    (104, 54)   (126, 74)
Vehicle 2 (X, Y)     (86, 120)   (122, 145)  (161, 169)
Vehicle 3 (X, Y)     (213, 136)  (246, 157)  (279, 176)
Vehicle 1 is the rearmost vehicle, vehicle 2 the middle vehicle and vehicle 3 the foremost vehicle. From the table, the displacements of vehicle 1 between successive frames are 32 and 30 respectively; those of vehicle 2 are 41 and 45; those of vehicle 3 are 39 and 38.
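As a check on these figures, the inter-frame displacement is simply the Euclidean distance between successive centers of gravity; for vehicle 1 between frame 200 and frame 202, for example,

d = \sqrt{(104 - 76)^2 + (54 - 37)^2} = \sqrt{28^2 + 17^2} = \sqrt{1073} \approx 32.8

which is consistent, to rounding, with the value of 32 reported above.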
The invention completes the detection and trajectory tracking of the moving object on the basis of background subtraction theory. It first sets out the basic concept of background subtraction and the way it is used in an actual program, then tracks the moving target with a method based on the connected region of the vehicle body, gives the processing procedure in detail together with the result of each step, and finally obtains the detection result and the motion trajectory of the object. Experiments show that the method has good reliability.
Finally, it should be noted that: although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, devices, means, methods, or steps.

Claims (6)

1. A moving object detection method and a trajectory tracking method based on background subtraction, characterized in that the method comprises the following steps:
A. detecting and tracking the moving object by adopting a background difference method:
a. graying of the image: because the frames in the video are in color, more factors have to be considered in processing; every pixel has three color components and much information irrelevant to identification, which makes computation more complex, so the color image is preprocessed into a grayscale image and processing is speeded up;
b. background reconstruction: the background extraction method applied to moving object detection is an averaging method; the background obtained in this way adapts to disturbance contained in the video and is not affected by changes in lighting, and the well-extracted background facilitates the subsequent detection work;
c. subtraction to obtain a detection result: once the background is obtained, the moving vehicle can be detected by background subtraction; only the moving target remains after the background is removed, so the result obtained is ideal;
d. binarization of the image: after the detection result is obtained, it is binarized to give the final result; because uneven camera placement, shooting angle and illumination produce uneven brightness in the image, and all of these factors affect binary segmentation, the binarization algorithm is critical in the system and its quality has a great influence on the subsequent work; the binarization segmentation method is specifically a global threshold method, which determines a threshold from the histogram or the spatial gray-level distribution of the image and converts the grayscale image into a binary image according to that threshold;
e. application of mathematical morphology: after the binary image is obtained, because it contains noise such as small holes, morphological operations are applied to it; the holes inside the moving object in the binary image are filled using mathematical morphology, namely an opening operation followed by a closing operation;
B. a moving object trajectory tracking method based on the connected region of the vehicle body: the method tracks the moving target according to the area relation of the connected regions of the object, and comprises the following steps:
S1, connected region analysis algorithm: the area of the connected region is tracked, and only the center of gravity of that area is followed; when detecting the trajectory of a moving vehicle, connected-region analysis is first performed on the morphologically processed binary image to find the center of gravity of the vehicle; the motion trajectory of the center of gravity is the motion trajectory of the vehicle, and the displacement between two successive centers of gravity, which is the number of pixels the object moves between two frames, is calculated at the same time so that the instantaneous speed of the object can be obtained;
S2, center-of-gravity labeling: after the center of gravity of the connected region is obtained, return to the original image and mark it;
S3, tracking the center-of-gravity trajectory of a single vehicle: collect the coordinates of the center of gravity of the moving target and calculate the displacement of the center of gravity;
S4, tracking the center-of-gravity trajectories of multiple vehicles: capture an image every two frames, perform center-of-gravity statistics and calculate the center-of-gravity displacement.
2. The moving object detection method and trajectory tracking method based on background subtraction according to claim 1, characterized in that: the graying processing of the image in the step a adopts a weighted average value method: r, G, B are given different weights depending on importance or other criteria and g is made equal to the weighted average of their values, i.e.:
g = W_R · R + W_G · G + W_B · B
where W_R, W_G and W_B are the weights of R, G and B respectively; since the human eye is most sensitive to green, less sensitive to red and least sensitive to blue, the following formula is generally used:
g = 0.299R + 0.587G + 0.114B.
3. the moving object detection method and trajectory tracking method based on background subtraction according to claim 1, characterized in that: the averaging method in the step b specifically comprises the following steps:
the background image B(x, y) is obtained by averaging the image sequence I_0(x, y), I_1(x, y), ..., I_{N-1}(x, y), as shown in the following formula:
B(x, y) = \frac{1}{N} \sum_{i=0}^{N-1} I_i(x, y)
the averaging method makes use of the fact that the pixel value at a given point in the image sequence follows a normal distribution: the values that change little relative to one another occur most often and belong to the background, so averaging yields the background pixel value and an ideal background can be obtained; the larger the value of N, the smaller the influence of a moving object on the brightness of that point, and the more frames are averaged, the cleaner the background.
4. The moving object detection method and trajectory tracking method based on background subtraction according to claim 1, characterized in that: in the step c, the average background image of 100 frames is subtracted from the current frame to detect the moving object.
5. The moving object detection method and trajectory tracking method based on background subtraction according to claim 1, characterized in that: the binarization processing of the image specifically adopts a global threshold value method, and the global threshold value method is specifically a maximum inter-class variance algorithm; in the maximum between-class variance algorithm, variance is a measure of the uniformity of gray distribution, the larger the variance value is, the larger the difference between two parts forming an image is, and when part of objects are mistaken for backgrounds or part of the backgrounds are mistaken for objects, the difference between the two parts is reduced, so that the segmentation with the maximum between-class variance means that the probability of wrong segmentation is minimum; the algorithm analyzes the histogram of the input gray image and divides the histogram into two parts, so that the distance between the two parts reaches the maximum value, namely the inter-class variance reaches the maximum value, and the dividing point is the obtained threshold value;
let the gray levels of the original grayscale image be 0, 1, ..., m-1 and the number of pixels with gray level i be n_i; then the total number of pixels in the image is
N = n_0 + n_1 + ... + n_{m-1}
and the probability of each gray level is
p_i = n_i / N
The threshold T divides the gray levels into two groups C_0 = {0, 1, ..., T-1} and C_1 = {T, T+1, ..., m-1}; the probability and mean of each group are as follows.
Probability of class C_0:
\omega_0 = \sum_{i=0}^{T-1} p_i
Probability of class C_1:
\omega_1 = \sum_{i=T}^{m-1} p_i = 1 - \omega_0
Mean of class C_0:
\mu_0 = \sum_{i=0}^{T-1} i p_i / \omega_0 = \mu(T) / \omega_0
Mean of class C_1:
\mu_1 = \sum_{i=T}^{m-1} i p_i / \omega_1 = (\mu - \mu(T)) / (1 - \omega_0)
where
\mu = \sum_{i=0}^{m-1} i p_i
is the gray-level mean of the whole image and
\mu(T) = \sum_{i=0}^{T-1} i p_i
is the gray-level mean up to the threshold T, so the gray-level mean of all samples satisfies
\mu = \omega_0 \mu_0 + \omega_1 \mu_1
The between-class variance of C_0 and C_1 is then obtained from
\sigma^2(T) = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2
Varying T from 1 to m-1 and finding the T at which this expression reaches its maximum, i.e. T* = argmax_T \sigma^2(T), gives the threshold T*; \sigma^2(T) is called the threshold selection function.
6. The moving object detection method and trajectory tracking method based on background subtraction according to claim 1, characterized in that: the opening operation and the closing operation in the step e are defined as follows:
Opening operation:
X \circ S = (X \ominus S) \oplus S
Closing operation:
X \bullet S = (X \oplus S) \ominus S
in binary image processing, X is the binary image and S is the structuring element; opening X by S means eroding X by S and then dilating the result by S, and the result of the opening is the region that the structuring element S can reach when it is translated inside the image without overflowing it; the result of the closing is the complement of the region that the reflection of S can reach when it is translated inside the background of the image without overflowing the background; the opening removes protruding details smaller than the structuring element, while the closing removes concave parts and fills small holes and gaps so that the object edge becomes smoother; to remove the noise produced by threshold segmentation, the image is first opened and then closed, which eliminates noise and fills small holes.
CN202010569101.6A 2020-06-20 2020-06-20 Moving object detection method and trajectory tracking method based on background subtraction Pending CN111724416A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010569101.6A CN111724416A (en) 2020-06-20 2020-06-20 Moving object detection method and trajectory tracking method based on background subtraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010569101.6A CN111724416A (en) 2020-06-20 2020-06-20 Moving object detection method and trajectory tracking method based on background subtraction

Publications (1)

Publication Number Publication Date
CN111724416A true CN111724416A (en) 2020-09-29

Family

ID=72568675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010569101.6A Pending CN111724416A (en) 2020-06-20 2020-06-20 Moving object detection method and trajectory tracking method based on background subtraction

Country Status (1)

Country Link
CN (1) CN111724416A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903278A (en) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 Moving target detection and tracking system
CN103794050A (en) * 2014-01-21 2014-05-14 华东交通大学 Real-time transport vehicle detecting and tracking method
CN104282020A (en) * 2014-09-22 2015-01-14 中海网络科技股份有限公司 Vehicle speed detection method based on target motion track
CN106204594A (en) * 2016-07-12 2016-12-07 天津大学 A kind of direction detection method of dispersivity moving object based on video image
CN108346160A (en) * 2017-12-22 2018-07-31 湖南源信光电科技股份有限公司 The multiple mobile object tracking combined based on disparity map Background difference and Meanshift

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
双锴: "Computer Vision" (《计算机视觉》), 31 January 2020, Beijing: Beijing University of Posts and Telecommunications Press *
彭洁茹: "Research on detection and tracking of vehicles in foggy weather based on video image analysis", China Master's Theses Full-text Database, Engineering Science & Technology II *
程娟: "Research on moving target recognition algorithms against complex backgrounds", China Master's Theses Full-text Database, Information Science & Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037266A (en) * 2020-11-05 2020-12-04 北京软通智慧城市科技有限公司 Falling object identification method and device, terminal equipment and storage medium
CN112037266B (en) * 2020-11-05 2021-02-05 北京软通智慧城市科技有限公司 Falling object identification method and device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104392468B (en) Based on the moving target detecting method for improving visual background extraction
EP1683105B1 (en) Object detection in images
CN106408594B (en) Video multi-target tracking based on more Bernoulli Jacob's Eigen Covariances
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
CN105427626B (en) A kind of statistical method of traffic flow based on video analysis
CN108154118A (en) A kind of target detection system and method based on adaptive combined filter with multistage detection
CN105513053B (en) One kind is used for background modeling method in video analysis
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN110555868A (en) method for detecting small moving target under complex ground background
CN108010047A (en) A kind of moving target detecting method of combination unanimity of samples and local binary patterns
Ravanfar et al. Low contrast sperm detection and tracking by watershed algorithm and particle filter
CN111369570A (en) Multi-target detection tracking method for video image
Gurrala et al. A new segmentation method for plant disease diagnosis
Fung et al. Effective moving cast shadow detection for monocular color image sequences
CN111724319A (en) Image processing method in video monitoring system
CN111724416A (en) Moving object detection method and trajectory tracking method based on background subtraction
CN109978916A (en) Vibe moving target detecting method based on gray level image characteristic matching
Najafzadeh et al. Object tracking using Kalman filter with adaptive sampled histogram
ELHarrouss et al. Moving objects detection based on thresholding operations for video surveillance systems
CN111724415A (en) Video image-based multi-target motion detection and tracking method in fixed scene
He et al. Multi-moving target detection based on the combination of three frame difference algorithm and background difference algorithm
CN111739059A (en) Moving object detection method and track tracking method based on frame difference method
CN103049738B (en) Many Method of Vehicle Segmentations that in video, shade connects
Lv et al. Method to acquire regions of fruit, branch and leaf from image of red apple in orchard
CN114820718A (en) Visual dynamic positioning and tracking algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200929