CN110852228B - Method and system for dynamic background extraction and foreground object detection in surveillance video

Info

Publication number: CN110852228B (grant); earlier publication CN110852228A
Application number: CN201911065308.3A
Authority: CN (China)
Inventors: 王连涛, 邓东
Assignee (original and current): Changzhou Campus of Hohai University
Filing / priority date: 2019-11-04
Publication dates: CN110852228A, 2020-02-28; CN110852228B (grant), 2022-09-13
Legal status: Active (granted)

Classifications

    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06T 5/30: Erosion or dilation, e.g. thinning
    • G06T 5/70: Denoising; smoothing
    • G06T 7/194: Segmentation; edge detection involving foreground-background segmentation
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/10016: Video; image sequence
    • G06V 2201/08: Detecting or categorising vehicles

Abstract

The invention discloses a method and a system for dynamic background extraction and foreground object detection in a surveillance video. The method acquires an original background for the first frame and acquires a frame to be detected; subtracts the background frame from the frame to be detected to obtain a processed frame; distinguishes the foreground and background parts in the processed frame; updates the background part into the background frame and extracts the region of interest from the foreground part; shifts out the oldest stored value of each background pixel and shifts in the value of the pixel newly determined to be background; and obtains the background by jointly extracting the background frame from background points accumulated over multiple frames. Advantages: the invention can extract a complete background from a surveillance picture covered by foreground objects, recovering markings that may be occluded, and can also detect foreground objects under continuously changing conditions such as lighting.

Description

Method and system for dynamic background extraction and foreground object detection in surveillance video
Technical Field
The invention relates to a method and a system for dynamic background extraction and foreground object detection in a surveillance video, and belongs to the technical field of digital image processing.
Background
Digital image processing is widely used in every aspect of life. However, to obtain an image of an object of interest, the object must first be separated from the background, i.e., the background must be subtracted. Background subtraction relies on background modeling. The background modeling techniques in common use mainly rely on adaptive background updating or on a simpler averaging method to obtain the background. Both adapt poorly to complex outdoor environments and are unsuitable for certain application scenarios. For example, when modeling a road background from a traffic video stream, the adaptive background updating method cannot obtain a good background if the intersection is congested for several hours at a time. When outdoor lighting changes rapidly, background removal in foreground detection succeeds only if each pixel of the background frame is taken from the most recently stored background values for that pixel. Outdoor background acquisition therefore requires not only a complete background free of foreground objects, but also one whose brightness tracks the continuously changing outdoor light.
Disclosure of Invention
The technical problem to be solved by the invention is that existing background modeling techniques can provide a background free of foreground objects but perform poorly under variable outdoor lighting. The invention provides a method and a system for dynamic background extraction and foreground object detection in a surveillance video, adapted to acquiring the foreground by digital image processing under outdoor conditions.
To solve this technical problem, the invention provides a method for dynamic background extraction and foreground object detection in a surveillance video, comprising: acquiring an original background for the first frame and acquiring a frame to be detected; subtracting the background frame from the frame to be detected to obtain a processed frame; distinguishing the foreground and background parts in the processed frame; updating the background part into the background frame and extracting the region of interest from the foreground part; shifting out the oldest stored value of each background pixel and shifting in the value of the pixel newly determined to be background; and obtaining the background by jointly extracting the background frame from background points accumulated over multiple frames.
Further, the background frame is obtained by an averaging method, and the first background frame so obtained either contains no foreground objects or contains only faint, blurred traces of them.
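As a minimal sketch of this averaging initialization (OpenCV is assumed for video I/O; the frame count n_init and the function name are illustrative, not fixed by the patent):

```python
import cv2
import numpy as np

def mean_background(video_path: str, n_init: int = 200) -> np.ndarray:
    """Average the first n_init frames (grayscale) to build an initial background."""
    cap = cv2.VideoCapture(video_path)
    acc, count = None, 0
    while count < n_init:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        acc = gray if acc is None else acc + gray
        count += 1
    cap.release()
    if acc is None:
        raise ValueError("no frames could be read from the video")
    return (acc / count).astype(np.uint8)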
Further, the acquired frame to be detected is converted from an RGB image into a grayscale image and filtered with a Gaussian filter, after which the background frame is subtracted to obtain the processed frame.
Further, matrix subtraction is used: the background frame is subtracted from the frame to be detected, and a pixel is determined to be foreground if the absolute value of the difference exceeds a preset threshold, and background otherwise.
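A sketch of this per-pixel decision rule; the threshold value T = 30 is an assumption, since the patent leaves the threshold to the application (the embodiment below refines this rule with a neighborhood test):

```python
import numpy as np

def classify(frame_gray: np.ndarray, background: np.ndarray, T: float = 30.0) -> np.ndarray:
    """Foreground where |frame - background| exceeds the preset threshold T."""
    diff = np.abs(frame_gray.astype(np.int16) - background.astype(np.int16))
    return diff > T  # True = foreground part, False = background part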
Further, a sum array and a count array are maintained. The sum array stores, for each pixel, the most recent five to one hundred values at which that pixel was determined to be a background point, and the count array records how many background values have been stored for each pixel. The background value of each pixel is obtained as back(x, y) = sum(x, y)/count(x, y), where x and y are the horizontal and vertical coordinates of the pixel, back(x, y) is the current background value at (x, y), count(x, y) is how many background values are accumulated in sum(x, y), and sum(x, y) is the sum of the pixel values of the most recent count(x, y) background points stored at (x, y).
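A minimal sketch of this running-average background store, under the assumption of a per-pixel ring buffer of depth K (five to one hundred in the text; 20 in the embodiment below). The class and attribute names are illustrative; buf, sum, count and oldest play the roles of the patent's background, Sum, count and H2 arrays:

```python
import numpy as np

class BackgroundModel:
    """Per-pixel ring buffer of the K most recent background values."""

    def __init__(self, height: int, width: int, K: int = 20):
        self.K = K
        self.buf = np.zeros((K, height, width))                   # stored background values
        self.sum = np.zeros((height, width))                      # sum array
        self.count = np.zeros((height, width), dtype=np.int32)    # count array
        self.oldest = np.zeros((height, width), dtype=np.int32)   # slot of oldest value (H2)

    def update(self, frame: np.ndarray, bg_mask: np.ndarray) -> None:
        """At pixels judged background, shift out the oldest value and shift in the new one."""
        ys, xs = np.nonzero(bg_mask)
        slot = self.oldest[ys, xs]
        full = self.count[ys, xs] == self.K
        self.sum[ys, xs] -= np.where(full, self.buf[slot, ys, xs], 0.0)  # remove oldest
        self.buf[slot, ys, xs] = frame[ys, xs]                           # store newest
        self.sum[ys, xs] += frame[ys, xs]
        self.count[ys, xs] = np.minimum(self.count[ys, xs] + 1, self.K)
        self.oldest[ys, xs] = (slot + 1) % self.K                        # advance the pointer

    def background(self) -> np.ndarray:
        """back(x, y) = sum(x, y) / count(x, y)."""
        return self.sum / np.maximum(self.count, 1)
```

The modulo increment of oldest makes each pixel's buffer circular, so the value shifted out is always the one stored longest ago, as the claim requires.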
A system for dynamic background extraction and foreground object detection in a surveillance video comprises a data acquisition module, a decision processing module and a background acquisition module;
the data acquisition module is used for acquiring an original background for the first frame and acquiring a frame to be detected;
the decision processing module is used for subtracting the background frame from the frame to be detected to obtain a processed frame, distinguishing the foreground and background parts in the processed frame, updating the background part into the background frame, and extracting the region of interest from the foreground part;
the background acquisition module is used for shifting out the oldest stored value of each background pixel, shifting in the value of the pixel newly determined to be background, and obtaining the background by jointly extracting the background frame from background points accumulated over multiple frames.
Further, the data acquisition module comprises a background frame acquisition module configured to obtain the background frame by an averaging method, the first background frame so obtained either containing no foreground objects or containing only faint, blurred traces of them.
Further, the decision processing module comprises a preprocessing module for the frame to be detected, configured to convert the acquired frame to be detected from an RGB image into a grayscale image and filter it with a Gaussian filter.
Further, the decision processing module comprises a background and foreground decision module configured to subtract the background frame from the frame to be detected by matrix subtraction, a pixel being determined to be foreground if the absolute value of the difference exceeds a preset threshold, and background otherwise.
Further, the background acquisition module comprises a background point pixel value acquisition module configured to maintain a sum array and a count array: the sum array stores, for each pixel, the most recent five to one hundred values at which that pixel was determined to be a background point; the count array records how many background values have been stored for each pixel; and the background value of each pixel is obtained as back(x, y) = sum(x, y)/count(x, y), where x and y are the horizontal and vertical coordinates of the pixel, back(x, y) is the current background value at (x, y), count(x, y) is how many background values are accumulated in sum(x, y), and sum(x, y) is the sum of the pixel values of the most recent count(x, y) background points stored at (x, y).
The invention has the following beneficial effects:
the invention can extract the complete background from the monitoring picture covered by the foreground object to obtain the possibly shielded mark and can also cope with the detection of the foreground object under the condition that the factors such as light rays and the like are continuously changed.
Drawings
FIG. 1 is a background image obtained by the averaging method;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a background image obtained by conventional background modeling;
FIG. 4 is a background image obtained after updating by the present invention;
FIG. 5 is a detected target frame;
FIG. 6 is the target frame after background subtraction;
FIG. 7 shows the objects of interest determined by the computer from the target frame after binarization and noise reduction.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples serve only to illustrate the technical solutions of the invention more clearly; they do not limit its scope of protection.
The specific implementation principle is explained by applying the method of dynamic background extraction and foreground object detection in a surveillance video to concrete lane background modeling and vehicle processing.
As shown in FIG. 1, the background image obtained by the averaging method shows the complete background of the lane, but very faint vehicles remain in the image, because vehicles appear in only a small fraction of all frames; across all frames, the lane itself appears most often. The more frames are averaged, the better the result reflects the background, yet foreground traces still appear in the background frame. As background modeling this is unsuccessful on its own, but it is already adequate as the initial background frame to be updated by the present invention.
The method of the present invention is illustrated in FIG. 2. Operation starts from step 1, which reads in the initialized background frame. The background frame only needs to allow foreground objects to be distinguished; it need not reflect the complete background. The background generated by the averaging method achieves this effect, so FIG. 1 is selected as the initialization background frame for background modeling. Step 2 reads in the next frame of the video, converts the picture from an RGB image into a grayscale image, and filters it with a Gaussian filter. The grayscale conversion follows

Gray(i, j) = (r(i, j) + g(i, j) + b(i, j)) / 3,

and the Gaussian filter kernel follows the standard form

G(i, j) = A · exp(-(i² + j²) / (2σ²)),   A = 1 / (2πσ²),

where (i, j) are the pixel coordinates; r(i, j), g(i, j) and b(i, j) are the red, green and blue intensities of the pixel in the RGB representation of the picture; Gray(i, j) is the grayscale intensity of the pixel; σ is the standard deviation; and A is the normalizing constant.
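A minimal sketch of this preprocessing step, assuming OpenCV; the 5x5 kernel size and σ = 1.0 are assumptions, since the patent fixes neither:

```python
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Convert to grayscale by channel averaging, then Gaussian-filter."""
    gray = frame_bgr.astype(np.float64).mean(axis=2)      # Gray = (r + g + b) / 3
    gray = cv2.GaussianBlur(gray, (5, 5), sigmaX=sigma)   # G = A*exp(-(i^2+j^2)/(2*sigma^2))
    return gray.astype(np.uint8)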
Step 3 subtracts the updated background from the current frame once the previous frame's processing of the background is complete; as the number of processed frames advances, the background is always in a constantly changing state. In step 4, the Gengxin ("update") array records which pixels of the difference frame H have been determined to be foreground. At initialization the Gengxin array is set entirely to zero, indicating that every position is a background point; setting a position of the Gengxin array to 1 marks the corresponding pixel as a foreground point. In steps 5 and 6, a true background point should have a corresponding value of 0 in H, but not every point with H greater than 0 is a foreground point: outdoor light changes continuously, so most background points are nonzero in H. A threshold is therefore needed when deciding whether a point of the H array is a background point. In the lane-background application of the invention, a pixel with coordinates (x, y) is considered a foreground point only when

(H(x, y) + H(x, y+1) + H(x, y+2) + H(x, y+3) + H(x, y+4) + H(x+1, y) + H(x+1, y+1) + H(x+1, y+2) + H(x+1, y+3) + H(x+1, y+4)) / 8 > 120.

In steps 7 and 8, the Gengxin array is set to 1 for the points of H deemed foreground by this threshold; in later processing, a Gengxin value of 1 identifies a foreground point. Step 10 tests whether Gengxin(i, j) is 1: a value of 0 means a background point, a value of 1 a foreground point. Step 11 resets the Gengxin position of each foreground point to 0, so that the corresponding position of the next picture to be processed is not directly judged to be a foreground point.
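A sketch of this neighborhood test applied over the whole difference frame H; the loop over shifted slices is an illustrative vectorization, while the 2x5 window and the /8 > 120 rule are taken directly from the text:

```python
import numpy as np

def foreground_mask(H: np.ndarray, thresh: float = 120.0) -> np.ndarray:
    """Mark (x, y) foreground when the sum of its 2x5 neighborhood of H, divided by 8, exceeds thresh."""
    Hf = H.astype(np.float64)
    h, w = Hf.shape
    acc = np.zeros((h - 1, w - 4))
    for dx in (0, 1):          # two rows of the window
        for dy in range(5):    # five columns of the window
            acc += Hf[dx:h - 1 + dx, dy:w - 4 + dy]
    mask = np.zeros(H.shape, dtype=bool)
    mask[:-1, :-4] = acc / 8.0 > thresh
    return mask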
In step 12, the Sum array (the background-pixel summation matrix) stores, for each point, the sum of the latest twenty values at which that point was determined to be a background pixel. The background array (the pixel background-point matrix) is a three-dimensional array storing the latest twenty specific values at which each pixel was determined to be a background point, ordered by time; the H2 array records, for each pixel, the position of its oldest stored background value. Step 12 subtracts from Sum the oldest background value recorded by H2, now an invalid pixel value, eliminating the influence of the oldest value on the background. In step 13, the background points of H are added to Sum, updating the newest values determined to be background into the background. In step 14, the oldest point in the background array is replaced by the newest point from H; this new value will in turn be eliminated when it later becomes the oldest background point. In step 15, H2 advances to point at the now-oldest stored value of each updated pixel: when the update of one slot is complete, H2 must point at the next-oldest slot. In steps 16 and 17, if the pointer to the next-oldest background point would reach 21, the oldest background value is at position 1, so the corresponding value of H2 is reset to 1. The count array (the pixel-count matrix) of step 18 records how many values are included in each pixel's Sum. Early in the update, before twenty frames of background have accumulated for a pixel, the background value is obtained as Sum/count; using Sum/20 from the start would introduce an error. Step 19 updates the recorded number of values at the corresponding position of Sum. Step 20 completes the background update, providing an accurate, up-to-date background for the next frame. Step 21 tests whether the video has a next frame: if so, processing returns to step 2 and the background continues to be refined; if not, step 22 ends the background updating.
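Putting the sketches together, a hedged outline of the per-frame loop of FIG. 2; mean_background, preprocess, foreground_mask and BackgroundModel are the illustrative helpers introduced above, and traffic.avi is an assumed input path:

```python
import cv2
import numpy as np

video = "traffic.avi"                        # assumed input path
bg = mean_background(video, n_init=200)      # step 1: initialized background frame (FIG. 1)
model = BackgroundModel(*bg.shape, K=20)

cap = cv2.VideoCapture(video)
while True:
    ok, frame = cap.read()                   # step 21: is there a next frame?
    if not ok:
        break                                # step 22: end of background updating
    gray = preprocess(frame)                 # step 2: grayscale conversion + Gaussian filter
    H = np.abs(gray.astype(np.int16) - bg.astype(np.int16))  # step 3: background subtraction
    fg = foreground_mask(H)                  # steps 4-8: mark foreground (Gengxin array)
    model.update(gray, ~fg)                  # steps 12-19: shift out oldest, shift in newest
    bg = np.where(model.count > 0,           # step 20: back = Sum / count, keeping the
                  model.background(), bg).astype(np.uint8)  # initial value where nothing is stored yet
cap.release()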
The processing method for the foreground depends on the specific application scenario; presented here is a method of foreground detection and processing for vehicle counting. Detection and processing of the foreground begins at step 23. Step 23: binarize the image after background subtraction, further eliminating the influence of the background on foreground detection. Binarization uses the maximum between-class variance method (Otsu), whose principle is as follows:
Suppose the image has L gray levels, in the range 0 to L-1. A threshold T in this range divides the pixels into two groups G0 and G1: G0 contains the gray levels 0 to T, and G1 the gray levels T+1 to L-1. Each gray value i occurs with probability p_i = n_i / N, where n_i is the number of pixels with gray value i and N is the total number of pixels. The fractions of the image belonging to G0 and G1 are

ω0 = Σ_{i=0..T} p_i,   ω1 = Σ_{i=T+1..L-1} p_i = 1 - ω0.

The average gray values of the two groups are

μ0 = (1/ω0) Σ_{i=0..T} i·p_i,   μ1 = (1/ω1) Σ_{i=T+1..L-1} i·p_i,

and the total mean gray value of the image is

μ = ω0·μ0 + ω1·μ1.

The between-class variance is

g(T) = ω0·(μ0 - μ)² + ω1·(μ1 - μ)².

Optimal threshold: T* = argmax g(T), i.e. the value of T at which the between-class variance is maximal.
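A compact sketch of this exhaustive search over a 256-level histogram, using only NumPy:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray, L: int = 256) -> int:
    """Return T* maximizing the between-class variance g(T)."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(np.float64)
    p = hist / hist.sum()                  # p_i = n_i / N
    omega0 = np.cumsum(p)                  # w0(T) for every T = 0..L-1
    mu_t = np.cumsum(np.arange(L) * p)     # cumulative sum of i * p_i
    mu = mu_t[-1]                          # total mean gray value
    omega1 = 1.0 - omega0
    valid = (omega0 > 0) & (omega1 > 0)
    mu0 = np.where(valid, mu_t / np.where(omega0 > 0, omega0, 1), 0)
    mu1 = np.where(valid, (mu - mu_t) / np.where(omega1 > 0, omega1, 1), 0)
    g = np.where(valid, omega0 * (mu0 - mu) ** 2 + omega1 * (mu1 - mu) ** 2, 0)
    return int(np.argmax(g))               # T* = argmax g(T)
```

Maximizing g(T) is equivalent to minimizing the within-class variance, which is why a single pass over the histogram suffices.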
Step 24: erode the binarized image to eliminate the influence of isolated noise points on the count. A 3 x 3 template is selected for erosion, on the principle

H(i, j) = H(i-1, j-1) ∧ H(i-1, j) ∧ H(i-1, j+1) ∧ H(i, j-1) ∧ H(i, j) ∧ H(i, j+1) ∧ H(i+1, j-1) ∧ H(i+1, j) ∧ H(i+1, j+1).

Step 25: dilate the eroded image to reduce the influence of erosion on the counting result. A 3 x 3 template is likewise selected for dilation, on the principle

H(i, j) = H(i-1, j-1) ∨ H(i-1, j) ∨ H(i-1, j+1) ∨ H(i, j-1) ∨ H(i, j) ∨ H(i, j+1) ∨ H(i+1, j-1) ∨ H(i+1, j) ∨ H(i+1, j+1),

where H(i, j) denotes the value of the H matrix at coordinate (i, j). Step 26: count the vehicles in the frames to be detected and output the counting result.
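A sketch of steps 24 and 25 using OpenCV's morphological operators, which for binary images implement exactly the ∧ (erosion) and ∨ (dilation) neighborhood formulas above:

```python
import cv2
import numpy as np

kernel = np.ones((3, 3), dtype=np.uint8)      # the 3x3 template

def denoise(binary: np.ndarray) -> np.ndarray:
    """Erode to remove isolated noise points, then dilate to restore object size."""
    eroded = cv2.erode(binary, kernel)        # H(i,j) = AND of the 3x3 neighborhood
    return cv2.dilate(eroded, kernel)         # H(i,j) = OR  of the 3x3 neighborhood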
FIG. 3 is a background image obtained by conventional background modeling; it fails to show the environment once tree shadows appear. FIG. 4 is the background image obtained after updating by the present invention, which shows the tree shadows of the latest environment. FIG. 5 is a detected target frame; FIG. 6 is the target frame after background subtraction; FIG. 7 shows the objects of interest determined by the computer from the target frame after binarization and noise reduction.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (8)

1. A method for dynamic background extraction and foreground object detection in a surveillance video, characterized by: acquiring an original background for the first frame and acquiring a frame to be detected; subtracting the background frame from the frame to be detected to obtain a processed frame; distinguishing the foreground and background parts in the processed frame; updating the background part into the background frame and extracting the region of interest from the foreground part; shifting out the oldest stored value of each background pixel and shifting in the value of the pixel newly determined to be background; and obtaining the background by jointly extracting the background frame from background points accumulated over multiple frames;

wherein the value of a pixel in the background is obtained as follows: a sum array and a count array are maintained; the sum array stores, for each pixel, the most recent five to one hundred values at which that pixel was determined to be a background point; the count array records how many background values have been stored for each pixel; and the background value of each pixel is obtained as back(x, y) = sum(x, y)/count(x, y), where x and y are the horizontal and vertical coordinates of the pixel, back(x, y) is the current background value at (x, y), count(x, y) is how many background values are accumulated in sum(x, y), and sum(x, y) is the sum of the pixel values of the most recent count(x, y) background points stored at (x, y).
2. The method according to claim 1, characterized in that the background frame is obtained by an averaging method, and the first background frame so obtained either contains no foreground objects or contains only faint, blurred traces of them.
3. The method according to claim 1, characterized in that the frame to be detected is converted from an RGB image into a grayscale image and filtered with a Gaussian filter, after which the background frame is subtracted to obtain the processed frame.
4. The method according to claim 1, characterized in that the background frame is subtracted from the frame to be detected by matrix subtraction, and a pixel is determined to be foreground if the absolute value of the difference exceeds a preset threshold, and background otherwise.
5. A system for dynamic background extraction and foreground object detection in a surveillance video, characterized by comprising a data acquisition module, a decision processing module and a background acquisition module;

the data acquisition module is used for acquiring an original background for the first frame and acquiring a frame to be detected;

the decision processing module is used for subtracting the background frame from the frame to be detected to obtain a processed frame, distinguishing the foreground and background parts in the processed frame, updating the background part into the background frame, and extracting the region of interest from the foreground part;

the background acquisition module is used for shifting out the oldest stored value of each background pixel, shifting in the value of the pixel newly determined to be background, and obtaining the background by jointly extracting the background frame from background points accumulated over multiple frames;

the background acquisition module comprises a background point pixel value acquisition module configured to maintain a sum array and a count array: the sum array stores, for each pixel, the most recent five to one hundred values at which that pixel was determined to be a background point; the count array records how many background values have been stored for each pixel; and the background value of each pixel is obtained as back(x, y) = sum(x, y)/count(x, y), where x and y are the horizontal and vertical coordinates of the pixel, back(x, y) is the current background value at (x, y), count(x, y) is how many background values are accumulated in sum(x, y), and sum(x, y) is the sum of the pixel values of the most recent count(x, y) background points stored at (x, y).
6. The system according to claim 5, characterized in that the data acquisition module comprises a background frame acquisition module configured to obtain the background frame by an averaging method, the first background frame so obtained either containing no foreground objects or containing only faint, blurred traces of them.
7. The system according to claim 5, characterized in that the decision processing module comprises a preprocessing module for the frame to be detected, configured to convert the acquired frame to be detected from an RGB image into a grayscale image and filter it with a Gaussian filter.
8. The system according to claim 5, characterized in that the decision processing module comprises a background and foreground decision module configured to subtract the background frame from the frame to be detected by matrix subtraction, a pixel being determined to be foreground if the absolute value of the difference exceeds a preset threshold, and background otherwise.
CN201911065308.3A 2019-11-04 Method and system for dynamic background extraction and foreground object detection in surveillance video, Active, CN110852228B (en)

Priority Application

Application number CN201911065308.3A, filed 2019-11-04 (priority date 2019-11-04), Changzhou Campus of Hohai University.

Publications

CN110852228A, published 2020-02-28
CN110852228B, granted 2022-09-13
Family ID: 69599330

Citations

CN104616290A, 2015-05-13, 合肥工业大学 (Hefei University of Technology), Target detection algorithm in combination of statistical matrix model and adaptive threshold (cited by examiner)
US9158985B2, 2015-10-13, Xerox Corporation, Method and apparatus for processing image of scene of interest (family cites)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant