CN109120932B - Video saliency prediction method based on an HEVC compressed-domain dual-SVM model - Google Patents
- Publication number: CN109120932B (application CN201810766665.1A)
- Authority: CN (China)
- Prior art keywords
- video sequence
- HEVC
- saliency prediction
- saliency
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
  - H04N19/10—using adaptive coding
    - H04N19/134—characterised by the element, parameter or criterion affecting or controlling the adaptive coding
      - H04N19/136—Incoming video signal characteristics or properties
        - H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
    - H04N19/169—characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
      - H04N19/182—the unit being a pixel
  - H04N19/50—using predictive coding
    - H04N19/503—involving temporal prediction
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition
  - G06F18/20—Analysing
    - G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
      - G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    - G06F18/24—Classification techniques
      - G06F18/241—relating to the classification model, e.g. parametric or non-parametric approaches
        - G06F18/2411—based on the proximity to a decision surface, e.g. support vector machines
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention provides a video saliency prediction method based on an HEVC compressed-domain dual-SVM model. The method classifies all training video sequences selected from a video data set and uses the resulting class-A and class-B training sequences to train the HEVC compressed-domain dual-SVM saliency prediction model, obtaining two different compressed-domain saliency prediction models. A test video sequence selected from the data set is first pre-classified, and the trained HEVC dual-SVM saliency prediction model matching its class is then used to predict its saliency.
Description
Technical Field
The invention relates to a method for predicting video saliency in the compressed domain and belongs to the field of video saliency detection.
Background
With video resolutions steadily increasing and parallel processing widely deployed, the HEVC (High Efficiency Video Coding) standard was released in 2013. Compared with the earlier H.264/AVC standard, HEVC defines a more flexible partitioning structure, optimizes and improves the individual coding modules, and adds a large number of new coding tools. Under the same application conditions and at the same video quality, HEVC roughly doubles the compression ratio of H.264/AVC and allows the characteristic information of a video to be extracted more effectively, so HEVC compression has increasingly become a common basis for video analysis.
When observing a scene, humans quickly lock onto salient regions that differ from the background and their surroundings, so that the most useful information is acquired in a short time. A computational visual saliency model can therefore help solve many challenging computer vision and image processing problems. For example, by detecting the locations of salient objects and ignoring most of the irrelevant background, object recognition becomes more efficient and reliable; by detecting spatio-temporal saliency points, a visual saliency model facilitates target tracking.
Motion saliency is the key feature that distinguishes video saliency prediction from image saliency prediction, and it helps machines better identify the important content in a video. Pixel-domain video saliency prediction algorithms must fully decode the compressed video before prediction, which increases the computational load on the terminal device. Performing saliency prediction directly in the compressed domain avoids the extra decoding steps and improves data-processing efficiency.
Disclosure of Invention
The purpose of the invention is to obtain a video saliency prediction result consistent with the human visual fixation mechanism by exploiting the motion information in the compressed bitstream.
To achieve this, the invention provides a video saliency prediction method based on an HEVC compressed-domain dual-SVM model, characterized by comprising the following steps:
Step 1: acquire a video data set and divide its video sequences into training sequences and test sequences.
Step 2: classify all training video sequences selected from the data set, as follows:
Step 201: predict the saliency of a given training video sequence with an HEVC compressed-domain video saliency prediction method.
Step 202: predict the saliency of the same training video sequence with a pixel-domain saliency prediction method.
Step 203: evaluate the prediction results of steps 201 and 202 with saliency prediction evaluation indices.
Step 204: classify the current training video sequence according to the evaluation of step 203: if the result of step 202 is better than the result of step 201, the sequence is a class-A training sequence; otherwise it is a class-B training sequence.
Step 3: train the HEVC compressed-domain dual-SVM saliency prediction model with the class-A and class-B training sequences respectively, obtaining two different compressed-domain saliency prediction models.
Step 4: predict the saliency of a given test video sequence with the two trained compressed-domain models, as follows:
Step 401: select any test video sequence and pre-classify it to obtain its class.
Step 402: obtain the HEVC compressed bitstream of the test sequence and extract HEVC features from it.
Step 403: feed the extracted HEVC features into the compressed-domain saliency prediction model corresponding to the class of the current test sequence.
Step 404: apply Kalman filtering to obtain the final video saliency map.
Preferably, step 201 comprises the following steps:
Step 2011: select any training video sequence and obtain its HEVC compressed bitstream.
Step 2012: extract HEVC features from the compressed bitstream.
Step 2013: feed the extracted HEVC features into the HEVC saliency prediction model.
Step 2014: apply forward smoothing filtering.
Step 2015: obtain the final video saliency map.
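The forward smoothing of step 2014 can be sketched as a causal (forward-only) exponential filter over consecutive saliency maps. The patent does not specify the filter coefficients, so the weight `alpha` below is an illustrative assumption:

```python
import numpy as np

def forward_smooth(frames, alpha=0.7):
    """Causally smooth a sequence of saliency maps.

    frames: iterable of 2-D numpy arrays (one saliency map per frame).
    alpha:  weight given to the current frame; an illustrative value,
            since the patent does not specify the coefficients.
    """
    smoothed = []
    prev = None
    for f in frames:
        # First frame passes through; later frames blend with the running estimate.
        cur = f if prev is None else alpha * f + (1.0 - alpha) * prev
        smoothed.append(cur)
        prev = cur
    return smoothed
```

Because the filter only looks backward in time, it can run online while the bitstream is being decoded.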
Preferably, the saliency prediction evaluation indices used in step 203 are AUC, CC and NSS.
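For reference, the three indices can be computed as follows. These are standard formulations rather than code from the patent; the binary fixation map `fix` (nonzero at fixated pixels) is an assumed input format:

```python
import numpy as np

def cc(sal, gt):
    """Linear correlation coefficient between a saliency map and a fixation density map."""
    s = (sal - sal.mean()) / (sal.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((s * g).mean())

def nss(sal, fix):
    """Normalized Scanpath Saliency: mean normalized saliency at fixated pixels."""
    s = (sal - sal.mean()) / (sal.std() + 1e-12)
    return float(s[fix > 0].mean())

def auc(sal, fix):
    """Simple AUC estimate: probability that a fixated pixel outscores a non-fixated one."""
    pos = sal[fix > 0].ravel()
    neg = sal[fix == 0].ravel()
    # Rank-based estimate of P(pos > neg); ties count as one half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))
```

Higher values are better for all three indices, which is what the class-A/class-B comparison in step 204 relies on.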
Preferably, step 3 comprises the following steps:
Step 301: obtain the HEVC compressed bitstream of a class-A or class-B training video sequence.
Step 302: extract the relevant HEVC features.
Step 303: feed the extracted HEVC features and the human visual attention maps into the HEVC compressed-domain dual-SVM saliency prediction model.
Step 304: obtain the two class-trained compressed-domain saliency prediction models.
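A minimal sketch of the per-class training in step 3, using scikit-learn's support vector regression. The per-block feature encoding, the kernel choice and the hyperparameters below are assumptions for illustration; the patent only states that an SVM model is trained per class on HEVC features and fixation data:

```python
import numpy as np
from sklearn.svm import SVR

def train_class_model(features, attention):
    """Train one compressed-domain saliency model (one SVM per video class).

    features:  (n_blocks, 3) array, e.g. split-depth, motion magnitude and
               bit allocation per coding block (assumed encoding).
    attention: (n_blocks,) ground-truth fixation density per block.
    Returns a fitted SVR; the RBF kernel and C/epsilon values are
    illustrative choices, not taken from the patent.
    """
    model = SVR(kernel="rbf", C=1.0, epsilon=0.01)
    model.fit(features, attention)
    return model

# One model per class, mirroring step 3:
#   model_A = train_class_model(feats_A, att_A)
#   model_B = train_class_model(feats_B, att_B)
```

At test time, only the model matching the sequence's pre-classified class is queried.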
The invention provides a new solution for the video saliency prediction task. By combining the high real-time performance of compressed-domain prediction and its effective use of video information with the higher accuracy of pixel-domain prediction in certain scenes, video sequences can be classified effectively, so that an HEVC compressed-domain SVM saliency model can be trained efficiently and accurately for each type of sequence. The method predicts video saliency with high accuracy and provides a solid basis for subsequent saliency-based applications.
Drawings
FIG. 1 is a flow chart of the main process of the invention;
FIG. 2 is the video classification flow chart of the invention;
FIG. 3 is the training flow chart of the video-classification-based dual-SVM model of the invention;
FIG. 4 is the HEVC compressed-domain video saliency prediction flow chart;
FIG. 5 is the flow chart of pixel-domain FES video saliency prediction.
Detailed Description
In order to make the invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
The invention provides a video saliency prediction method based on an HEVC compressed-domain dual-SVM model, comprising the following steps:
Step 1: acquire a video data set and divide its video sequences into training sequences and test sequences.
Step 2: classify all training video sequences selected from the data set, as follows:
Step 201: predict the saliency of a given training video sequence with an HEVC compressed-domain video saliency prediction method, comprising:
Step 2011: select any training video sequence and obtain its HEVC compressed bitstream;
Step 2012: extract HEVC features from the compressed bitstream;
Step 2013: feed the extracted HEVC features into the HEVC saliency prediction model;
Step 2014: apply forward smoothing filtering;
Step 2015: obtain the final video saliency map.
Step 202: predict the saliency of the same training video sequence with a pixel-domain saliency prediction method.
Step 203: evaluate the prediction results of steps 201 and 202 with the saliency prediction evaluation indices AUC, CC and NSS.
Step 204: classify the current training video sequence according to the evaluation of step 203: if the result of step 202 is better than the result of step 201, the sequence is a class-A training sequence; otherwise it is a class-B training sequence.
Step 3: train the HEVC compressed-domain dual-SVM saliency prediction model with the class-A and class-B training sequences respectively, obtaining two different compressed-domain saliency prediction models, as follows:
Step 301: obtain the HEVC compressed bitstream of a class-A or class-B training video sequence;
Step 302: extract the relevant HEVC features;
Step 303: feed the extracted HEVC features and the human visual attention maps into the HEVC compressed-domain dual-SVM saliency prediction model;
Step 304: obtain the two class-trained compressed-domain saliency prediction models.
Step 4: predict the saliency of a given test video sequence with the two trained compressed-domain models, as follows:
Step 401: select any test video sequence and pre-classify it to obtain its class;
Step 402: obtain the HEVC compressed bitstream of the test sequence and extract HEVC features from it;
Step 403: feed the extracted HEVC features into the compressed-domain saliency prediction model corresponding to the class of the current test sequence;
Step 404: apply Kalman filtering to obtain the final video saliency map.
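The Kalman filtering of step 404 can be sketched as an independent scalar Kalman filter per pixel over the frame sequence. The noise variances `q` and `r` are illustrative assumptions; the patent does not give concrete values:

```python
import numpy as np

def kalman_smooth(frames, q=1e-3, r=1e-2):
    """Per-pixel scalar Kalman filter over a saliency-map sequence.

    frames: iterable of 2-D numpy arrays (raw per-frame saliency maps).
    q, r:   process and measurement noise variances (assumed values).
    """
    x = None  # state estimate (filtered saliency map)
    p = None  # estimate variance per pixel
    out = []
    for z in frames:
        if x is None:
            # Initialize the state with the first observed map.
            x, p = z.astype(float), np.full(z.shape, 1.0)
        else:
            p = p + q            # predict: variance grows by process noise
            k = p / (p + r)      # Kalman gain
            x = x + k * (z - x)  # update toward the new measurement
            p = (1.0 - k) * p
        out.append(x.copy())
    return out
```

Unlike the plain exponential filter of step 2014, the gain here adapts over time as the per-pixel uncertainty shrinks.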
With reference to FIG. 1, this embodiment illustrates the video saliency prediction method based on the HEVC compressed-domain dual-SVM model. A video data set is acquired and its sequences are divided into training and test sets. In this embodiment, the training sequences are Tennis, Kimono, ParkScene, Cactus, BQTerrace, BasketballDrive, Yan, Simo, Male, Female, Lee, Couple, RaceHorsesC, BQMall, PartyScene, BasketballDrill, Keiba, RaceHorsesD, BQSquare and BlowingBubbles; the test sequences are BasketballPass, FourPeople, Johnny, KristenAndSara, Vidyo1, Vidyo3, Vidyo4, BasketballDrillText, ChinaSpeed, SlideEditing, SlideShow, Traffic and PeopleOnStreet.
FIG. 2 shows the classification of all training video sequences selected from the data set. In this embodiment the pixel-domain FES (Fast and Efficient Saliency) prediction method is chosen as the pixel-domain reference, and Tennis is taken as the example:
according to the method, the video sequence Tennis is subjected to significance prediction by using an HEVC compressed domain significance prediction method, as shown in FIG. 3, an FFMPEG tool is used for obtaining an HEVC compressed code stream of the video sequence Tennis; extracting related HEVC features of split-depth, mv and bit-allocation from a compressed code stream; inputting the obtained HEVC features into an HEVC significance prediction model; in order to better predict moving or emerging targets, forward smoothing filtering is carried out; and obtaining a final video saliency map.
Next, the saliency of Tennis is predicted with the FES method. As shown in FIG. 5, the image frames of Tennis are obtained; a CIELab color vector is extracted for each pixel of each frame; the saliency of each pixel is computed with a Bayesian center-surround approach using a trained Gaussian kernel density function; the average saliency of each pixel is computed across multiple scales; and, after processing the 240 frames of Tennis, the final video saliency map is obtained.
The two saliency predictions of the video sequence Tennis are evaluated with the AUC, CC and NSS indices, and the video is classified according to the result: if the FES result is better than the HEVC result on at least two of the three indices, the video is assigned to class A; otherwise it is assigned to class B. In this embodiment, class A comprises Kimono, ParkScene, BQTerrace, BasketballDrive, Tennis, RaceHorsesC, BQMall, PartyScene, BasketballDrill, Keiba, RaceHorsesD and BQSquare; class B comprises Cactus, Yan, Simo, Male, Female, Lee, Couple and BlowingBubbles.
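The two-of-three majority rule described above can be written directly; the dictionary keys are an assumed representation of the per-sequence scores:

```python
def classify_sequence(scores_fes, scores_hevc):
    """Assign a training sequence to class A or B (step 204).

    scores_fes, scores_hevc: dicts with keys 'AUC', 'CC', 'NSS' holding the
    evaluation of the pixel-domain (FES) and compressed-domain (HEVC)
    predictions. Class A when FES wins on at least two of the three indices.
    """
    wins = sum(scores_fes[k] > scores_hevc[k] for k in ("AUC", "CC", "NSS"))
    return "A" if wins >= 2 else "B"
```

The same rule is applied to every training sequence to build the two training pools.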
The classified class-A and class-B training sequences are then used to train the HEVC compressed-domain dual-SVM saliency model, producing two different compressed-domain saliency prediction models. As shown in FIG. 3, the HEVC compressed bitstream of each classified video is first obtained with FFmpeg; the split-depth, mv and bit-allocation HEVC features are extracted from the bitstream; the extracted features and the human visual attention (fixation) maps are fed into the SVM learning model, which is then trained; and two class-trained HEVC dual-SVM saliency prediction models are obtained.
As shown in FIG. 1, this embodiment then predicts the saliency of the test video Traffic with the trained HEVC dual-SVM saliency prediction model. Traffic is first pre-classified; its HEVC compressed bitstream is obtained with FFmpeg; the three HEVC features split-depth, mv and bit-allocation are extracted from the bitstream; the features are fed into the class-trained HEVC saliency prediction model corresponding to Traffic's class; Kalman filtering is applied to better predict moving or newly appearing targets; and the final video saliency map is obtained. Applying the same operations to the remaining test sequences likewise yields good video saliency maps.
Claims (3)
1. A video saliency prediction method based on an HEVC compressed-domain dual-SVM model, characterized by comprising the following steps:
Step 1: acquire a video data set and divide its video sequences into training sequences and test sequences.
Step 2: classify all training video sequences selected from the data set, as follows:
Step 201: predict the saliency of a given training video sequence with an HEVC compressed-domain video saliency prediction method, comprising:
Step 2011: select any training video sequence and obtain its HEVC compressed bitstream;
Step 2012: extract HEVC features from the compressed bitstream;
Step 2013: feed the extracted HEVC features into the HEVC saliency prediction model;
Step 2014: apply forward smoothing filtering;
Step 2015: obtain the final video saliency map.
Step 202: predict the saliency of the same training video sequence with a pixel-domain saliency prediction method.
Step 203: evaluate the prediction results of steps 201 and 202 with saliency prediction evaluation indices.
Step 204: classify the current training video sequence according to the evaluation of step 203: if the result of step 202 is better than the result of step 201, the sequence is a class-A training sequence; otherwise it is a class-B training sequence.
Step 3: train the HEVC compressed-domain dual-SVM saliency prediction model with the class-A and class-B training sequences respectively, obtaining two different compressed-domain saliency prediction models.
Step 4: predict the saliency of a given test video sequence with the two trained compressed-domain models, as follows:
Step 401: select any test video sequence and pre-classify it to obtain its class;
Step 402: obtain the HEVC compressed bitstream of the test sequence and extract HEVC features from it;
Step 403: feed the extracted HEVC features into the compressed-domain saliency prediction model corresponding to the class of the current test sequence;
Step 404: apply Kalman filtering to obtain the final video saliency map.
2. The method of claim 1, wherein the saliency prediction evaluation indices used in step 203 are AUC, CC and NSS.
3. The method of claim 1, wherein step 3 comprises the following steps:
Step 301: obtain the HEVC compressed bitstream of a class-A or class-B training video sequence;
Step 302: extract the relevant HEVC features;
Step 303: feed the extracted HEVC features and the human visual attention maps into the HEVC compressed-domain dual-SVM saliency prediction model;
Step 304: obtain the two class-trained compressed-domain saliency prediction models.
Priority Applications (1)
- CN201810766665.1A (CN109120932B) | Priority date 2018-07-12 | Filing date 2018-07-12 | Video significance prediction method of HEVC compressed domain double SVM model
Publications (2)
- CN109120932A | Published 2019-01-01
- CN109120932B | Granted 2021-10-26
Family
- ID=64862724
Family Applications (1)
- CN201810766665.1A | Filed 2018-07-12 | CN109120932B | Active
Country Status (1)
- CN | CN109120932B
Citations (7)
- US8965115B1 | 2013-03-14 | 2015-02-24 | HRL Laboratories, LLC | Adaptive multi-modal detection and fusion in videos via classification-based learning
- CN105138991A | 2015-08-27 | 2015-12-09 | Video emotion identification method based on emotion-salient feature integration
- CN105472380A | 2015-11-19 | 2016-04-06 | Compressed-domain saliency detection algorithm based on the ant colony algorithm
- CN105893957A | 2016-03-30 | 2016-08-24 | Vision-based method for recognizing and tracking ships on a lake surface
- CN106993188A | 2017-03-07 | 2017-07-28 | HEVC compression coding method based on multi-face saliency
- WO2018019126A1 | 2016-07-29 | 2018-02-01 | Video category identification method and device, data processing device and electronic apparatus
- CN108134937A | 2017-12-21 | 2018-06-08 | Compressed-domain saliency detection method based on HEVC
Non-Patent Citations (3)
- Mai Xu et al. | Learning to Detect Video Saliency With HEVC Features | IEEE Transactions on Image Processing, vol. 26, no. 1 | 2016-11-14
- Ran Gao et al. | Visual Saliency Detection Based on Mutual Information in Compressed Domain | 2015 Visual Communications and Image Processing (VCIP) | 2015-12-16
- Shen Xinyu | Region-of-interest coding based on saliency detection | China Masters' Theses Full-text Database, Information Science and Technology, no. 6 | 2018-06-15
Also Published As
- CN109120932A | 2019-01-01
Similar Documents
- Singh et al. | MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods
- KR101942808B1 | Apparatus for CCTV video analytics based on object-image recognition DCNN
- Ye et al. | Unsupervised feature learning framework for no-reference image quality assessment
- CN109241985B | Image identification method and device
- Wong et al. | Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition
- CN110929593B | Real-time saliency pedestrian detection method based on detail discrimination
- CN108564066B | Character recognition model training method and character recognition method
- CN112016500A | Group abnormal behavior identification method and system based on multi-scale temporal information fusion
- CN110210335B | Training method, system and device for a pedestrian re-identification learning model
- CN112464807A | Video motion recognition method and device, electronic equipment and storage medium
- CN105184818A | Video monitoring abnormal behavior detection method and detection system thereof
- CN110795595A | Video structured storage method, device, equipment and medium based on edge computing
- CN111401308B | Fish behavior video identification method based on optical flow effect
- CN112488071B | Method, device, electronic equipment and storage medium for extracting pedestrian features
- Giraldo et al. | Graph CNN for moving object detection in complex environments from unseen videos
- CN114049581B | Weakly supervised behavior localization method and device based on action segment ordering
- CN113313037A | Method for detecting video anomalies with a self-attention-based generative adversarial network
- CN112329656B | Feature extraction method for human action key frames in a video stream
- Wang et al. | Real-time smoke detection using texture and color features
- CN111738218A | Human abnormal behavior recognition system and method
- Soon et al. | Malaysian car number plate detection and recognition system
- CN113688804B | Multi-angle video-based action identification method and related equipment
- CN111027482B | Behavior analysis method and device based on motion vector segmentation analysis
- Javed et al. | Human movement recognition using Euclidean distance: a tricky approach
- CN109120932B | Video saliency prediction method of HEVC compressed domain double SVM model
Legal Events
- PB01 | Publication
- SE01 | Entry into force of request for substantive examination
- GR01 | Patent grant