CN113361449A - Method for extracting key frame of monitoring video of front-end intelligent device - Google Patents
- Publication number: CN113361449A (application number CN202110698213.6A)
- Authority: CN (China)
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/254 — Image analysis; analysis of motion involving subtraction of images
- G06T2207/10016 — Indexing scheme for image analysis or image enhancement; image acquisition modality: video; image sequence
Abstract
The invention discloses a method for extracting key frames from the surveillance video of a front-end intelligent device. The method adopts an improved three-frame difference method, performing target detection based on background difference and inter-frame difference to extract key frames. This overcomes the shortcoming of the traditional shot-based key frame extraction technique, which extracts too few frames when applied in dynamically changing environments, and avoids losing a large amount of information about moving targets in the surveillance video. Moreover, because the three-frame difference method depends only on the difference relations between adjacent frame pairs within three consecutive frames, it places few demands on inter-frame relations, is computationally simple, and requires little of the hardware, making it well suited for use in front-end intelligent devices.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for extracting key frames from the surveillance video of a front-end intelligent device.
Background
Due to limited bandwidth, the front-end intelligent controller cannot transmit the front-end surveillance video to the back end in a timely manner, so in engineering practice the key information in the surveillance video is generally transmitted to the back end for analysis in the form of key frames. Currently, common frame extraction methods include shot-based extraction, extraction based on motion analysis, and even simply extracting one frame every few frames.
Shot-based key frame extraction algorithm: this is the earliest-developed and most mature general method in the field of video retrieval. Its general implementation process is: first, segment the source video file at shot changes by some technical means; then select the first and last frame of each shot as key frames.
Motion-analysis-based method: this is a key frame extraction algorithm proposed by some scholars based on the motion characteristics of objects. Its general implementation process is: analyze the optical flow of object motion within a video shot and, each time, select the video frame with the minimum amount of optical-flow motion in the shot as the extracted key frame.
However, these methods have disadvantages for a front-end intelligent controller facing complex scenes with limited computing power.
The shot-based key frame extraction technique is simple to implement and computationally cheap, but it has serious limitations: when the video content changes violently and the scene is very complex, the first and last frames of a shot cannot represent all the changes in the video content, so this method falls far short of current standards and requirements for key frame extraction. The motion-analysis-based method can extract an appropriate number of key frames from most video shots, and the extracted key frames effectively express the characteristics of the video motion; however, its robustness is poor, because the algorithm depends on local characteristics of the object motion, its calculation process is complex, and its time overhead is high, so it is unsuitable for deployment on front-end devices.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a method for extracting key frames from the surveillance video of a front-end intelligent device that overcomes the insufficient frame extraction of the traditional shot-based key frame extraction technique in dynamically changing environments and avoids losing a large amount of information about moving targets in the surveillance video. At the same time, the method places few demands on inter-frame relations, is computationally simple, requires little of the hardware, and is well suited for use in front-end intelligent devices.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A method for extracting key frames from the surveillance video of a front-end intelligent device, comprising the following steps:
S1, extract the moving target from the current frame image with the background difference method to obtain the foreground binary image DB_n(x, y);
S2, extract a difference image using symmetric differencing: for three adjacent frame images, first compute the difference images of the two adjacent frame pairs, then compute the cumulative difference image DM_n(x, y), and finally threshold DM_n(x, y) to obtain a difference binary image;
S3, perform an OR operation on the foreground binary image obtained in step S1 and the difference binary image obtained in step S2 to obtain the moving target.
Further, the specific process of step S1 is:
Let B_n(x, y) denote the extracted background image and I_n(x, y) the current frame image; the foreground binary image is then:
DB_n(x, y) = 1 if |I_n(x, y) − B_n(x, y)| > threshold, and DB_n(x, y) = 0 otherwise (1);
where threshold denotes a threshold value.
Further, the specific process of step S2 is:
Three adjacent frame images are denoted I_{n-1}(x, y), I_n(x, y), and I_{n+1}(x, y); from these frame images the difference images of the two adjacent frame pairs are computed:
DF_n(x, y) = I_n(x, y) − I_{n-1}(x, y) (2);
DF_{n+1}(x, y) = I_{n+1}(x, y) − I_n(x, y) (3);
The cumulative difference image DM_n(x, y) = |DF_n(x, y)| + |DF_{n+1}(x, y)| is then thresholded to obtain the difference binary image:
D_n(x, y) = 1 if DM_n(x, y) > Th, and D_n(x, y) = 0 otherwise (4);
where Th denotes a threshold value.
Further, in step S3, the OR operation is performed as follows:
M_n(x, y) = DB_n(x, y) ∨ D_n(x, y) (5);
where DB_n(x, y) denotes the foreground binary image extracted by the background subtraction method, D_n(x, y) is the difference binary image obtained by thresholding the cumulative difference image, and M_n(x, y) is the moving target image obtained by the OR operation of the two.
The invention has the following beneficial effects: by adopting an improved three-frame difference method, the invention overcomes the insufficient frame extraction of the traditional shot-based key frame extraction technique in dynamically changing environments and avoids losing a large amount of information about moving targets in the surveillance video. Moreover, because the three-frame difference method depends only on the difference relations between adjacent frame pairs within three consecutive frames, it places few demands on inter-frame relations, is computationally simple, requires little of the hardware, and is well suited for use in front-end intelligent devices.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention.
Detailed Description
The present invention will be further described below with reference to the accompanying drawings. It should be noted that this embodiment, based on the technical solution above, provides a detailed implementation and a specific operation process, but the protection scope of the present invention is not limited to this embodiment.
This embodiment provides a method for extracting key frames from the surveillance video of a front-end intelligent device, namely an improved three-frame difference method. In this method, when the scene changes and the pixel change value exceeds the set threshold, an abrupt shot change is declared; a key frame is then extracted so as to retain the effective information to the maximum extent. Meanwhile, adjusting the threshold controls the number of frames used to generate key frames, and hence the level of detail with which the surveillance video is described. The method is highly general and resistant to interference.
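The thresholded key-frame decision described above can be sketched as follows. The patent does not fix the exact change statistic, so counting changed pixels in the binary motion mask, and the function name `is_key_frame`, are illustrative assumptions:

```python
import numpy as np

def is_key_frame(motion_mask: np.ndarray, change_threshold: int) -> bool:
    # motion_mask: binary moving-target image M_n(x, y) produced by the
    # improved three-frame difference method (1 = changed pixel).
    # change_threshold: tuning knob from the description -- raising it
    # yields fewer key frames and a coarser video summary.
    changed_pixels = int(motion_mask.sum())
    return changed_pixels > change_threshold
```

A larger `change_threshold` suppresses minor scene changes, which is how the embodiment trades summary detail against bandwidth.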
The improved three-frame difference method adopted in this embodiment is a target detection algorithm based on background difference and inter-frame difference. The background subtraction method is sensitive to illumination but can detect a moving target completely. This is exactly the opposite of the inter-frame difference method, which is insensitive to illumination conditions but cannot detect a complete target. Integrating the advantages of the two algorithms therefore meets the practical requirements: insensitivity to illumination conditions together with detection of a complete target. Accordingly, the method combines the two algorithms so that their advantages complement each other, forming the improved three-frame difference method.
The method of the embodiment specifically comprises the following steps:
S1, extract the moving target with the background difference method:
Let B_n(x, y) denote the extracted background image and I_n(x, y) the current frame image; the foreground binary image is then:
DB_n(x, y) = 1 if |I_n(x, y) − B_n(x, y)| > threshold, and DB_n(x, y) = 0 otherwise (1);
The threshold in equation (1) should be chosen appropriately so that the residual background is filtered out well. The choice of threshold depends mainly on the camera device and can be determined experimentally; in this embodiment the threshold also controls the number of frames used to generate key frames, and hence the level of detail of the surveillance-video description. Extracting moving targets by this step alone is still unreliable, because the background extracted by equation (1) is not identical to the background of the current frame, so non-moving objects may remain after the background frame is subtracted from the current frame. The algorithm therefore needs to be improved by adding difference information to achieve higher extraction accuracy.
S2, extract the difference image using symmetric differencing:
The background obtained with the symmetric difference method easily loses difference information, because the symmetric difference is computed from the differences between the current frame and its preceding and following frames, which inevitably introduces noise. If, however, the two frame-difference images are first accumulated and then thresholded, the difference noise can be suppressed well.
The three adjacent frame images are denoted I_{n-1}(x, y), I_n(x, y), and I_{n+1}(x, y); from these frame images the difference images of the two adjacent frame pairs are computed:
DF_n(x, y) = I_n(x, y) − I_{n-1}(x, y) (2);
DF_{n+1}(x, y) = I_{n+1}(x, y) − I_n(x, y) (3);
The cumulative difference image DM_n(x, y) = |DF_n(x, y)| + |DF_{n+1}(x, y)| is then thresholded to obtain the difference binary image:
D_n(x, y) = 1 if DM_n(x, y) > Th, and D_n(x, y) = 0 otherwise (4);
The threshold Th in equation (4) can be obtained by a global adaptive thresholding method on the image.
S3, obtain the moving target by performing an OR operation on the foreground binary image obtained in step S1 and the difference binary image obtained in step S2, according to the formula:
M_n(x, y) = DB_n(x, y) ∨ D_n(x, y) (5);
where DB_n(x, y) denotes the foreground binary image extracted by the background subtraction method, D_n(x, y) is the difference binary image obtained by thresholding the cumulative difference image, and M_n(x, y) is the moving target image obtained by the OR operation of the two. Experimental results show that this method can quickly and effectively extract moving targets in complex scenes and thereby determine the key frames.
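The OR combination of equation (5) can be sketched in a few lines (the function name is illustrative):

```python
import numpy as np

def moving_target_mask(db_n: np.ndarray, d_n: np.ndarray) -> np.ndarray:
    # Equation (5): M_n(x, y) = DB_n(x, y) OR D_n(x, y) -- the union of
    # the complete-but-illumination-sensitive background-difference mask
    # and the illumination-robust-but-incomplete inter-frame mask.
    return np.logical_or(db_n, d_n).astype(np.uint8)
```

Taking the union is the "advantage complementation" of the two detectors: a pixel counts as moving if either method flags it.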
Various corresponding changes and modifications can be made by those skilled in the art based on the above technical solutions and concepts, and all such changes and modifications should be included in the protection scope of the present invention.
Claims (4)
1. A method for extracting key frames from the surveillance video of a front-end intelligent device, characterized by comprising the following steps:
S1, extract the moving target from the current frame image with the background difference method to obtain the foreground binary image DB_n(x, y);
S2, extract a difference image using symmetric differencing: for three adjacent frame images, first compute the difference images of the two adjacent frame pairs, then compute the cumulative difference image DM_n(x, y), and finally threshold DM_n(x, y) to obtain a difference binary image;
S3, perform an OR operation on the foreground binary image obtained in step S1 and the difference binary image obtained in step S2 to obtain the moving target.
2. The method according to claim 1, wherein the specific process of step S1 is:
Let B_n(x, y) denote the extracted background image and I_n(x, y) the current frame image; the foreground binary image is then:
DB_n(x, y) = 1 if |I_n(x, y) − B_n(x, y)| > threshold, and DB_n(x, y) = 0 otherwise (1);
where threshold denotes a threshold value.
3. The method according to claim 1, wherein the specific process of step S2 is:
Three adjacent frame images are denoted I_{n-1}(x, y), I_n(x, y), and I_{n+1}(x, y); from these frame images the difference images of the two adjacent frame pairs are computed:
DF_n(x, y) = I_n(x, y) − I_{n-1}(x, y) (2);
DF_{n+1}(x, y) = I_{n+1}(x, y) − I_n(x, y) (3);
The cumulative difference image DM_n(x, y) = |DF_n(x, y)| + |DF_{n+1}(x, y)| is then thresholded to obtain the difference binary image:
D_n(x, y) = 1 if DM_n(x, y) > Th, and D_n(x, y) = 0 otherwise (4);
where Th denotes a threshold value.
4. The method according to claim 1, wherein in step S3 the OR operation is performed according to the following formula:
M_n(x, y) = DB_n(x, y) ∨ D_n(x, y) (5);
where DB_n(x, y) denotes the foreground binary image extracted by the background subtraction method, D_n(x, y) is the difference binary image obtained by thresholding the cumulative difference image, and M_n(x, y) is the moving target image obtained by the OR operation of the two.
Priority Applications (1)
- CN202110698213.6A — priority date 2021-06-23, filing date 2021-06-23 — Method for extracting key frame of monitoring video of front-end intelligent device
Publications (1)
- CN113361449A (application) — publication date 2021-09-07
Family
- Family ID: 77535953
- CN202110698213.6A (filed 2021-06-23) — published as CN113361449A — status: withdrawn
Citations (2)
- US 2016/0371827 A1 (Shenzhen Huabao Electronic Technology Co., Ltd.) — priority date 2014-02-24, publication date 2016-12-22 — Method and apparatus for recognizing moving target
- CN 112270247 A — priority date 2020-10-23, publication date 2021-01-26 — Key frame extraction method based on inter-frame difference and color histogram difference
Non-Patent Citations (1)
- Zhao Guanhua et al., "Target detection method combining the symmetric difference method and background subtraction", pp. 1-3
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- WW01 — Invention patent application withdrawn after publication (application publication date: 2021-09-07)