CN105678243A - On-line extraction method of monitoring video feature frames - Google Patents

On-line extraction method of monitoring video feature frames

Info

Publication number
CN105678243A
Authority
CN
China
Prior art keywords
video
frame
change point
sliding window
timing variations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511025385.8A
Other languages
Chinese (zh)
Other versions
CN105678243B (en)
Inventor
卢国梁
刘阳
闫鹏
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201511025385.8A priority Critical patent/CN105678243B/en
Publication of CN105678243A publication Critical patent/CN105678243A/en
Application granted granted Critical
Publication of CN105678243B publication Critical patent/CN105678243B/en
Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/47 Detecting features for summarising video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an on-line extraction method of monitoring video feature frames. Under an incremental sliding-window framework, the method detects the temporal change points of video sub-sequences and uses the detected change points to segment the video into clips containing different content, after which a clustering method extracts the key frames from the resulting clips. The method requires no manually preset parameters and achieves fully unsupervised key-frame extraction from the monitoring video.

Description

On-line extraction method of monitoring video feature frames
Technical field
The present invention relates to an on-line extraction method of monitoring video feature frames and belongs to the technical field of intelligent video surveillance.
Background art
Video summarization technology allows a user to grasp the events of an observation period by browsing the video feature frames within a limited time. However, existing video frame-extraction methods based on scene-change or shot-transition detection are not applicable to monitoring video.
Summary of the invention
To address the deficiencies of the prior art, the present invention provides an on-line extraction method of monitoring video feature frames. Under an incremental sliding-window framework, the method first detects the temporal change points of the video sequence, uses the detected change points to segment the video into clips containing different content, and then applies a clustering method to extract the key frames from the resulting clips. The method requires no manually preset parameters and achieves fully unsupervised key-frame extraction from the monitoring video.
The technical solution is as follows:
An on-line extraction method of monitoring video feature frames comprises the following steps: temporal change-point detection is first performed on the video sequence; the detected change points are then used to segment the video into video clips containing different content, and a clustering method is used to extract the key frames from the resulting clips. The video consists of N frames, F = {f_1, f_2, ..., f_N}, where f_i denotes the video frame at time i; the key frames {f_{r1}, f_{r2}, ..., f_{rk}} are extracted from the video and arranged in chronological order.
In a preferred embodiment, the temporal change-point detection of the video sequence comprises the following steps (an illustrative code sketch of this loop is given after step (1-4)):
Step (1-1): establish the sliding-window model
Initialize the start frame of change-point detection, n_1 = 1, and the corresponding sliding-window length, L_1 = L_0;
Step (1-2)
Perform change-point detection within the video sliding window established in step (1-1);
Step (1-3)
If a temporal change point η is detected within the current window, take η as the start frame of the next round of detection and reinitialize the sliding-window length to L_0, i.e. n_{i+1} = η and L_{i+1} = L_0, and perform the next round of change-point detection on the subsequent video. If no temporal change point is detected within the window, keep the initial start frame, i.e. n_{i+1} = n_i, and enlarge the window to L_{i+1} = L_i + ΔL, where ΔL is the window-length increment step, then continue the change-point detection;
Step (1-4)
The whole change-point detection process terminates when all video frames have been examined or the pre-specified deadline T_0 has been reached, i.e. when L > N or i > T_0, where N is the total number of frames of the given monitoring video and T_0 is the pre-specified deadline; otherwise, set i = i + 1 and return to step (1-2).
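For illustration only, and not as part of the patent text, the incremental sliding-window loop of steps (1-1) through (1-4) can be sketched in Python as follows. Here `detect_change_point` is a placeholder for the per-window test of steps (2-1) through (2-3); `L0`, `delta_L` and `T0` stand for the initial window length, the increment step ΔL and the deadline, for which the patent gives no concrete values; and the loop condition is one plausible reading of the termination rule "L > N or i > T0".

```python
def sliding_window_change_points(features, detect_change_point, L0, delta_L, T0):
    """Incremental sliding-window change-point scan, steps (1-1) to (1-4).

    features            : sequence of per-frame feature vectors (length N)
    detect_change_point : callable(window) -> local change-point index, or None
    """
    N = len(features)
    change_points = []
    n, L, i = 0, L0, 0                      # step (1-1): start frame and window length
    while n + L <= N and i <= T0:           # step (1-4): stop when video or deadline is exhausted
        window = features[n:n + L]          # step (1-2): test the current window
        eta = detect_change_point(window)
        if eta is not None:                 # step (1-3): change point found ...
            eta_abs = n + eta
            change_points.append(eta_abs)
            n, L = eta_abs, L0              # ... restart at eta with window length L0
        else:
            L += delta_L                    # ... otherwise grow the window by delta_L
        i += 1
    return change_points
```

Resetting the window to L0 after each detection keeps the procedure on-line: only the frames received since the last detected change point need to be examined in any round.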
In a preferred embodiment, the method of using the detected change points to segment the video into clips containing different content is as follows: the temporal segmentation of the video is achieved by detecting the temporal change points of the video sequence within each window, where the change-point detection of the video sequence comprises:
Step (2-1): video feature extraction
Colour-histogram features are extracted from each video frame in the HSV colour space; the hue, saturation and value components are quantised to 16, 8 and 8 bins respectively, yielding a 32-dimensional temporal feature per frame. The video feature sequence is again denoted F = {f_1, f_2, ..., f_N} ∈ R^{32×N};
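As a minimal sketch of step (2-1), assuming OpenCV and NumPy (the patent does not name an implementation library) and BGR input frames; the per-frame normalisation at the end is an added convenience rather than something the patent states.

```python
import cv2
import numpy as np

def hsv_histogram_feature(frame_bgr):
    """32-D colour descriptor: 16 hue bins + 8 saturation bins + 8 value bins."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0], None, [16], [0, 180]).flatten()   # OpenCV hue range is [0, 180)
    s = cv2.calcHist([hsv], [1], None, [8], [0, 256]).flatten()
    v = cv2.calcHist([hsv], [2], None, [8], [0, 256]).flatten()
    feat = np.concatenate([h, s, v])
    return feat / (feat.sum() + 1e-12)      # normalise so frames of any resolution are comparable

def video_features(frames_bgr):
    """Stack per-frame descriptors column-wise, matching F = {f_1, ..., f_N} in R^(32 x N)."""
    return np.stack([hsv_histogram_feature(f) for f in frames_bgr], axis=1)
```

Stacking the descriptors column-wise reproduces the 32 x N feature matrix used in step (2-2).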
Step (2-2): dissimilarity detection
Let Y_i be a video sub-sequence of length L in the given video F, starting at time i and ending at time i + L − 1. For each candidate change point η' ∈ Y_i, the dissimilarity measure is:
L\{Y_i\} = \sup_{\eta' \in Y_i} L\{Y_i \mid \eta'\} = \sup_{\eta' \in Y_i}\left(\frac{1}{m^2}\sum_{p,q=1}^{m} k(x_p,x_q) \;-\; \frac{2}{mn}\sum_{p=1}^{m}\sum_{q=1}^{n} k(x_p,y_q) \;+\; \frac{1}{n^2}\sum_{p,q=1}^{n} k(y_p,y_q)\right)
where x_p (p = 1, ..., m) and y_q (q = 1, ..., n) denote the frames of Y_i before and after the candidate change point η', and k(·,·) is a Gaussian kernel that maps each sample pair into a kernel-induced feature space; through this mapping, samples belonging to different classes become better separated in the new space, and the mapped samples describe the original data more faithfully (a code sketch of this measure, together with the test of step (2-3), is given after that step);
Step (2-3): hypothesis testing
Temporal change-point detection is performed with the following hypothesis test:
H_0: L{Y_i | η'} < λ_i
H_A: L{Y_i | η'} ≥ λ_i
where λ_i is a threshold that is obtained adaptively while the algorithm runs. If H_0 holds, there is no change point; otherwise, when H_A is true, a temporal change point exists at η' and Y_i is split at η'.
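The following sketch covers steps (2-2) and (2-3) under stated assumptions: `window` is an (L, 32) array of per-frame features, the Gaussian-kernel bandwidth `sigma` is a free parameter because the patent does not specify it, and since the patent only says that λ_i is obtained adaptively during execution, the mean-plus-three-standard-deviations rule used below is a hypothetical placeholder, not the patent's rule.

```python
import numpy as np

def gaussian_gram(A, B, sigma):
    """Gram matrix k(a_p, b_q) = exp(-||a_p - b_q||^2 / (2 * sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def split_statistic(window, eta, sigma):
    """L{Y_i | eta'}: kernel dissimilarity between the frames before and after eta."""
    X, Y = window[:eta], window[eta:]                    # m frames before, n frames after
    m, n = len(X), len(Y)
    return (gaussian_gram(X, X, sigma).sum() / m ** 2
            - 2.0 * gaussian_gram(X, Y, sigma).sum() / (m * n)
            + gaussian_gram(Y, Y, sigma).sum() / n ** 2)

def detect_change_point(window, sigma, min_seg=2):
    """Return the split index maximising L{Y_i | eta'}, or None when H0 holds."""
    window = np.asarray(window)
    candidates = list(range(min_seg, len(window) - min_seg))
    if not candidates:
        return None
    stats = np.array([split_statistic(window, eta, sigma) for eta in candidates])
    lam = stats.mean() + 3.0 * stats.std()               # placeholder for the adaptive threshold
    best = int(np.argmax(stats))
    if stats[best] >= lam:                               # H_A: change point at eta'
        return candidates[best]
    return None                                          # H_0: no change in this window
```

Plugged into the earlier sliding-window loop, this detector would be wrapped with a fixed bandwidth, for example `lambda w: detect_change_point(w, sigma=0.1)`.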
In a preferred embodiment, extracting the key frames from the resulting video clips by means of a clustering method comprises the following:
After the change point η' is obtained and Y_i is split at it, the k-means clustering algorithm is applied to the first sub-segment, and the video frame closest to each cluster centre is extracted as a key frame; change-point detection then continues on the second sub-segment. When the whole detection process ends, all extracted key frames are collected into the key-frame set and arranged in chronological order, yielding the final video summary.
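A sketch of the key-frame step, assuming scikit-learn's KMeans. The patent does not state how the number of clusters k is chosen, so it is left as an explicit argument here; the frame whose feature vector lies closest to each cluster centre is taken as a key frame, matching the rule above.

```python
import numpy as np
from sklearn.cluster import KMeans

def keyframes_from_segment(segment_features, k, frame_offset=0):
    """Pick one key frame per k-means cluster of an (L, 32) feature segment.

    Returns the absolute frame indices, sorted chronologically.
    """
    X = np.asarray(segment_features)
    k = min(k, len(X))                                 # cannot have more clusters than frames
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    key_indices = []
    for centre in km.cluster_centers_:
        dists = np.linalg.norm(X - centre, axis=1)     # frame closest to this centre
        key_indices.append(frame_offset + int(np.argmin(dists)))
    return sorted(set(key_indices))
```

Applying this to the sub-segment preceding each detected change point and concatenating the results in time order yields the final summary described above.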
The present invention has the following advantages:
Under the incremental sliding-window framework, the extraction method of the present invention first detects the temporal change points of the video sequence, uses the detected change points to segment the video into clips containing different content, and then applies a clustering method to extract the key frames from the resulting clips. The method requires no manually preset parameters and achieves fully unsupervised key-frame extraction from the monitoring video.
Brief description of the drawings
Fig. 1 is a flow chart of the extraction method of the present invention.
Detailed description of the invention
The present invention is described in detail below with reference to the embodiment and the accompanying drawing, but is not limited thereto.
As shown in Fig. 1.
Embodiment
An on-line extraction method of monitoring video feature frames comprises the following steps: temporal change-point detection is first performed on the video sequence; the detected change points are then used to segment the video into video clips containing different content, and a clustering method is used to extract the key frames from the resulting clips. The video consists of N frames, F = {f_1, f_2, ..., f_N}, where f_i denotes the video frame at time i; the key frames {f_{r1}, f_{r2}, ..., f_{rk}} are extracted from the video and arranged in chronological order.
The temporal change-point detection of the video sequence comprises the following steps:
Step (1-1): establish the sliding-window model
Initialize the start frame of change-point detection, n_1 = 1, and the corresponding sliding-window length, L_1 = L_0;
Step (1-2)
Perform change-point detection within the video sliding window established in step (1-1);
Step (1-3)
If a temporal change point η is detected within the current window, take η as the start frame of the next round of detection and reinitialize the sliding-window length to L_0, i.e. n_{i+1} = η and L_{i+1} = L_0, and perform the next round of change-point detection on the subsequent video. If no temporal change point is detected within the window, keep the initial start frame, i.e. n_{i+1} = n_i, and enlarge the window to L_{i+1} = L_i + ΔL, where ΔL is the window-length increment step, then continue the change-point detection;
Step (1-4)
The whole change-point detection process terminates when all video frames have been examined or the pre-specified deadline T_0 has been reached, i.e. when L > N or i > T_0, where N is the total number of frames of the given monitoring video and T_0 is the pre-specified deadline; otherwise, set i = i + 1 and return to step (1-2).
The method of using the detected change points to segment the video into clips containing different content is as follows: the temporal segmentation of the video is achieved by detecting the temporal change points of the video sequence within each window, where the change-point detection of the video sequence comprises:
Step (2-1): video feature extraction
Colour-histogram features are extracted from each video frame in the HSV colour space; the hue, saturation and value components are quantised to 16, 8 and 8 bins respectively, yielding a 32-dimensional temporal feature per frame. The video feature sequence is again denoted F = {f_1, f_2, ..., f_N} ∈ R^{32×N};
Step (2-2): dissimilarity detection
Let Y_i be a video sub-sequence of length L in the given video F, starting at time i and ending at time i + L − 1. For each candidate change point η' ∈ Y_i, the dissimilarity measure is:
L\{Y_i\} = \sup_{\eta' \in Y_i} L\{Y_i \mid \eta'\} = \sup_{\eta' \in Y_i}\left(\frac{1}{m^2}\sum_{p,q=1}^{m} k(x_p,x_q) \;-\; \frac{2}{mn}\sum_{p=1}^{m}\sum_{q=1}^{n} k(x_p,y_q) \;+\; \frac{1}{n^2}\sum_{p,q=1}^{n} k(y_p,y_q)\right)
where x_p (p = 1, ..., m) and y_q (q = 1, ..., n) denote the frames of Y_i before and after the candidate change point η', and k(·,·) is a Gaussian kernel that maps each sample pair into a kernel-induced feature space; through this mapping, samples belonging to different classes become better separated in the new space, and the mapped samples describe the original data more faithfully;
Step (2-3): hypothesis testing
Temporal change-point detection is performed with the following hypothesis test:
H_0: L{Y_i | η'} < λ_i
H_A: L{Y_i | η'} ≥ λ_i
where λ_i is a threshold that is obtained adaptively while the algorithm runs. If H_0 holds, there is no change point; otherwise, when H_A is true, a temporal change point exists at η' and Y_i is split at η'.
Extracting the key frames from the resulting video clips by means of a clustering method comprises the following:
After the change point η' is obtained and Y_i is split at it, the k-means clustering algorithm is applied to the first sub-segment, and the video frame closest to each cluster centre is extracted as a key frame; change-point detection then continues on the second sub-segment. When the whole detection process ends, all extracted key frames are collected into the key-frame set and arranged in chronological order, yielding the final video summary.

Claims (4)

1. An on-line extraction method of monitoring video feature frames, characterised in that the extraction method comprises the following steps: temporal change-point detection is first performed on the video sequence; the detected change points are then used to segment the video into video clips containing different content, and a clustering method is used to extract the key frames from the resulting video clips.
2. The on-line extraction method of monitoring video feature frames according to claim 1, characterised in that the temporal change-point detection of the video sequence comprises the following steps:
Step (1-1): establish the sliding-window model
Initialize the start frame of change-point detection, n_1 = 1, and the corresponding sliding-window length, L_1 = L_0;
Step (1-2)
Perform change-point detection within the video sliding window established in step (1-1);
Step (1-3)
If a temporal change point η is detected within the current window, take η as the start frame of the next round of detection and reinitialize the sliding-window length to L_0, i.e. n_{i+1} = η and L_{i+1} = L_0, and perform the next round of change-point detection on the subsequent video. If no temporal change point is detected within the window, keep the initial start frame, i.e. n_{i+1} = n_i, and enlarge the window to L_{i+1} = L_i + ΔL, where ΔL is the window-length increment step, then continue the change-point detection;
Step (1-4)
The whole change-point detection process terminates when all video frames have been examined or the pre-specified deadline T_0 has been reached, i.e. when L > N or i > T_0, where N is the total number of frames of the given monitoring video and T_0 is the pre-specified deadline; otherwise, set i = i + 1 and return to step (1-2).
3. The on-line extraction method of monitoring video feature frames according to claim 1, characterised in that the method of using the detected change points to segment the video into video clips containing different content is: the temporal segmentation of the video is achieved by detecting the temporal change points of the video sequence within each window, where the change-point detection of the video sequence comprises:
Step (2-1): video feature extraction
Colour-histogram features are extracted from each video frame in the HSV colour space; the hue, saturation and value components are quantised to 16, 8 and 8 bins respectively, yielding a 32-dimensional temporal feature per frame. The video feature sequence is again denoted F = {f_1, f_2, ..., f_N} ∈ R^{32×N};
Step (2-2): dissimilarity detection
Let Y_i be a video sub-sequence of length L in the given video F, starting at time i and ending at time i + L − 1. For each candidate change point η' ∈ Y_i, the dissimilarity measure is:
L\{Y_i\} = \sup_{\eta' \in Y_i} L\{Y_i \mid \eta'\} = \sup_{\eta' \in Y_i}\left(\frac{1}{m^2}\sum_{p,q=1}^{m} k(x_p,x_q) \;-\; \frac{2}{mn}\sum_{p=1}^{m}\sum_{q=1}^{n} k(x_p,y_q) \;+\; \frac{1}{n^2}\sum_{p,q=1}^{n} k(y_p,y_q)\right)
where x_p (p = 1, ..., m) and y_q (q = 1, ..., n) denote the frames of Y_i before and after the candidate change point η', and k(·,·) is a Gaussian kernel that maps each sample pair into a kernel-induced feature space;
Step (2-3): hypothesis testing
Temporal change-point detection is performed with the following hypothesis test:
H_0: L{Y_i | η'} < λ_i
H_A: L{Y_i | η'} ≥ λ_i
where λ_i is a set threshold; if H_0 holds, there is no change point; otherwise, when H_A is true, a temporal change point exists at η' and Y_i is split at η'.
4. The on-line extraction method of monitoring video feature frames according to claim 1, characterised in that extracting the key frames from the resulting video clips by means of a clustering method comprises the following:
After the change point η' is obtained and Y_i is split at it, the k-means clustering algorithm is applied to the first sub-segment, and the video frame closest to each cluster centre is extracted as a key frame; change-point detection then continues on the second sub-segment. When the whole detection process ends, all extracted key frames are collected into the key-frame set and arranged in chronological order, yielding the final video summary.
CN201511025385.8A 2015-12-30 2015-12-30 On-line extraction method of monitoring video feature frames Expired - Fee Related CN105678243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511025385.8A CN105678243B (en) 2015-12-30 2015-12-30 On-line extraction method of monitoring video feature frames

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511025385.8A CN105678243B (en) 2015-12-30 2015-12-30 On-line extraction method of monitoring video feature frames

Publications (2)

Publication Number Publication Date
CN105678243A true CN105678243A (en) 2016-06-15
CN105678243B CN105678243B (en) 2019-02-12

Family

ID=56298250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511025385.8A Expired - Fee Related CN105678243B (en) 2015-12-30 2015-12-30 On-line extraction method of monitoring video feature frames

Country Status (1)

Country Link
CN (1) CN105678243B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308501A (en) * 2008-06-30 2008-11-19 腾讯科技(深圳)有限公司 Method, system and device for generating video frequency abstract
CN101720006A (en) * 2009-11-20 2010-06-02 张立军 Positioning method suitable for representative frame extracted by video keyframe
CN103065301A (en) * 2012-12-25 2013-04-24 浙江大学 Method of bidirectional comparison video shot segmentation
CN105139421A (en) * 2015-08-14 2015-12-09 西安西拓电气股份有限公司 Video key frame extracting method of electric power system based on amount of mutual information

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203277A (en) * 2016-06-28 2016-12-07 华南理工大学 Fixed lens real-time monitor video feature extracting method based on SIFT feature cluster
CN106203277B (en) * 2016-06-28 2019-08-20 华南理工大学 Fixed lens based on SIFT feature cluster monitor video feature extraction method in real time
CN109344743A (en) * 2018-09-14 2019-02-15 广州市浪搏科技有限公司 A kind of monitor video data processing implementation method
CN109344743B (en) * 2018-09-14 2023-07-25 广州市浪搏科技有限公司 Method for realizing monitoring video data processing

Also Published As

Publication number Publication date
CN105678243B (en) 2019-02-12

Similar Documents

Publication Publication Date Title
CN110263845B (en) SAR image change detection method based on semi-supervised countermeasure depth network
CN103700092B (en) A kind of forest brulee extraction method based on sequential remote sensing image
CN104239856B (en) Face identification method based on Gabor characteristic and self adaptable linear regression
CN106325485B (en) A kind of gestures detection recognition methods and system
CN107798351B (en) Deep learning neural network-based identity recognition method and system
CN103310230A (en) Method for classifying hyperspectral images on basis of combination of unmixing and adaptive end member extraction
CN107679461A (en) Pedestrian&#39;s recognition methods again based on antithesis integration analysis dictionary learning
CN102567738B (en) Rapid detection method for pornographic videos based on Gaussian distribution
CN104850859A (en) Multi-scale analysis based image feature bag constructing method
CN102262736A (en) Method for classifying and identifying spatial target images
CN115131580B (en) Space target small sample identification method based on attention mechanism
CN105678243A (en) On-line extraction method of monitoring video feature frames
CN103246877B (en) Based on the recognition of face novel method of image outline
CN102547477B (en) Video fingerprint method based on contourlet transformation model
CN113688887A (en) Training and image recognition method and device of image recognition model
CN113033665A (en) Sample expansion method, training method and system, and sample learning system
CN106971377A (en) A kind of removing rain based on single image method decomposed based on sparse and low-rank matrix
CN116721458A (en) Cross-modal time sequence contrast learning-based self-supervision action recognition method
Siméoni et al. Unsupervised object discovery for instance recognition
CN110738129B (en) End-to-end video time sequence behavior detection method based on R-C3D network
Shilpa et al. Approach for shadow detection and removal using machine learning techniques
CN111079811A (en) Sampling method for multi-label classified data imbalance problem
Xu et al. A novel shot detection algorithm based on clustering
CN115512272A (en) Time sequence event detection method for multi-event instance video
Pei et al. Pedestrian detection based on hog and lbp

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190212

Termination date: 20191230