CN108133020A - Video classification method, apparatus, storage medium and electronic device - Google Patents


Info

Publication number
CN108133020A
CN108133020A (application CN201711424730.4A)
Authority
CN
China
Prior art keywords
weighted value
feature
feature vector
intermediate data
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711424730.4A
Other languages
Chinese (zh)
Inventor
包怡欣
彭垚
绍杰
赵之健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI QINIU INFORMATION TECHNOLOGIES Co Ltd
Original Assignee
SHANGHAI QINIU INFORMATION TECHNOLOGIES Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI QINIU INFORMATION TECHNOLOGIES Co Ltd filed Critical SHANGHAI QINIU INFORMATION TECHNOLOGIES Co Ltd
Priority to CN201711424730.4A priority Critical patent/CN108133020A/en
Publication of CN108133020A publication Critical patent/CN108133020A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a video classification method, apparatus, storage medium and electronic device. The method includes: obtaining a first feature vector of a video file; inputting the first feature vector into an algorithm model for learning to obtain a weight-value vector corresponding to each feature; multiplying the weight-value vector element-wise with the first feature vector to obtain a second feature vector; and classifying the video file according to the second feature vector. This can solve the problem of excessive noise in video classification.

Description

Video classification method, apparatus, storage medium and electronic device
Technical field
The present invention relates to the field of video, and more specifically to a video classification method, apparatus, storage medium and electronic device.
Background art
Compared with pictures, video files contain more noise. Specifically, in a video whose label is, for example, dog, the content matching the label may occupy only one segment of the video, while the other parts are information unrelated to the label. Moreover, even in the relevant segment, each frame also contains many objects that do not match the label, and only some objects match the label content. Existing video classification methods handle such noise poorly.
Summary of the invention
The technical problem to be solved by the present invention is to provide a video classification method, apparatus, storage medium and electronic device that can reduce the influence of noise.
The purpose of the present invention is achieved through the following technical solutions:
In a first aspect, an embodiment of the present application provides a video classification method, including:
obtaining a first feature vector of a video file;
inputting the first feature vector into an algorithm model for learning to obtain a weight-value vector corresponding to each feature;
multiplying the weight-value vector element-wise with the first feature vector to obtain a second feature vector; and
classifying the video file according to the second feature vector.
In a second aspect, an embodiment of the present application provides a video classification apparatus, including:
a first acquisition unit for obtaining a first feature vector of a video file;
a weight-value vector acquisition unit for inputting the first feature vector into an algorithm model for learning to obtain a weight-value vector corresponding to each feature;
a second acquisition unit for multiplying the weight-value vector element-wise with the first feature vector to obtain a second feature vector; and
a classification unit for classifying the video file according to the second feature vector.
In a third aspect, an embodiment of the present application provides a storage medium on which a computer program is stored; when the computer program runs on a computer, the computer is caused to perform the above video classification method.
In a fourth aspect, an embodiment of the present application provides an electronic device including a processor and a memory. The memory stores a computer program, and the processor performs the above video classification method by calling the computer program.
In the video classification method, apparatus, storage medium and electronic device provided by the embodiments of the present application, a first feature vector of a video file is obtained; the first feature vector is input into an algorithm model for learning to obtain a weight-value vector corresponding to each feature; the weight-value vector is multiplied element-wise with the first feature vector to obtain a second feature vector; and the video file is classified according to the second feature vector. This can solve the problem of excessive noise in video classification. When the first feature vector of a video is obtained, not every feature is equally important. Through learning and training with the algorithm model, the weight of each feature is obtained; features useful for classification are emphasized, while noisy and less important features are suppressed, yielding a better classification result.
Description of the drawings
The accompanying drawings needed for the description are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a first schematic flowchart of the video classification method provided by an embodiment of the present application;
Fig. 2 is a second schematic flowchart of the video classification method provided by an embodiment of the present application;
Fig. 3 is a third schematic flowchart of the video classification method provided by an embodiment of the present application;
Fig. 4 is a schematic block diagram of the video classification method provided by an embodiment of the present application;
Fig. 5 is a fourth schematic flowchart of the video classification method provided by an embodiment of the present application;
Fig. 6 is a fifth schematic flowchart of the video classification method provided by an embodiment of the present application;
Fig. 7 is another schematic block diagram of the video classification method provided by an embodiment of the present application;
Fig. 8 is a first schematic structural diagram of the video classification apparatus provided by an embodiment of the present application;
Fig. 9 is a second schematic structural diagram of the video classification apparatus provided by an embodiment of the present application;
Fig. 10 is a third schematic structural diagram of the video classification apparatus provided by an embodiment of the present application.
Detailed description of embodiments
Please refer to the drawings, in which identical reference numbers represent identical components. The principles of the present application are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the present application and should not be regarded as limiting other specific embodiments not detailed herein.
In the following description, specific embodiments of the present application are described with reference to steps and symbols performed by one or more computers, unless otherwise stated. Accordingly, these steps and operations are at times referred to as computer-executed; computer execution as referred to herein includes operations by a computer processing unit on electronic signals representing data in a structured form. These operations transform the data or maintain it at locations in the memory system of the computer, which reconfigures or otherwise alters the operation of the computer in a manner well known to those skilled in the art. The data structures in which the data is maintained are physical locations of the memory that have particular properties defined by the data format. However, while the principles of the application are described in the text above, this is not meant as a limitation; those skilled in the art will appreciate that the various steps and operations described below may also be implemented in hardware.
The term "unit" as used herein may be regarded as a software object executed on a computing system. The different components, units, engines and services described herein may be regarded as implementation objects on the computing system. The apparatus and methods described herein may be implemented in software, and may certainly also be implemented in hardware, both within the protection scope of the present application.
The terms "comprising" and "having" and any variations thereof in the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or modules is not limited to the listed steps or modules; some embodiments further include steps or modules not listed, or further include other steps or modules inherent to the process, method, product or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
An embodiment of the present application provides a video classification method. The execution subject of the method may be the video classification apparatus provided by the embodiment of the present application, or an electronic device integrating the video classification apparatus; the video classification apparatus may be implemented in hardware or software.
The embodiments of the present application are described from the perspective of the video classification apparatus, which may specifically be integrated in an electronic device. The video classification includes: obtaining a first feature vector of a video file; inputting the first feature vector into an algorithm model for learning to obtain a weight-value vector corresponding to each feature; multiplying the weight-value vector element-wise with the first feature vector to obtain a second feature vector; and classifying the video file according to the second feature vector.
The electronic device includes devices such as a smartphone, tablet computer, palmtop computer, personal computer, server, and cloud server.
Referring to Fig. 1, Fig. 1 is a first schematic flowchart of the video classification method provided by an embodiment of the present application. The specific flow of the method may be as follows:
Step 101: obtain the first feature vector of a video file.
The video file may be in a format such as mjpeg, avi, rmvb, or 3gp; the format of the video file is not limited here.
Multiple features are extracted from the video file in advance, and these features form the first feature vector. Specifically, feature extraction may include extraction of image features and extraction of audio features.
Step 102: input the first feature vector into the algorithm model for learning to obtain the weight-value vector corresponding to each feature.
The first feature vector is input into the algorithm model for learning and training; the algorithm model may be a convolutional neural network model. The weight value obtained for each feature forms the weight-value vector.
Step 103: multiply the weight-value vector element-wise with the first feature vector to obtain the second feature vector.
Multiplying the weight-value vector element-wise with the first feature vector means multiplying each feature in the first feature vector by its corresponding weight value, finally obtaining the second feature vector.
Step 104: classify the video file according to the second feature vector.
After the second feature vector is obtained, the video file is classified according to it. Specifically, the second feature vector may be input into an algorithm model, which classifies the video according to the second feature vector.
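The four steps above can be sketched end to end. This is a minimal illustrative sketch, not the patented implementation: the feature extractor, the weight-learning model, and the nearest-prototype classifier are all stand-ins (random values and a plain sigmoid), chosen only to show how the weight-value vector recalibrates the first feature vector before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_first_feature_vector(num_features=8):
    # Stand-in for real image/audio feature extraction from a video file;
    # random values are purely illustrative.
    return rng.standard_normal(num_features)

def learn_weight_vector(features):
    # Stand-in for the learned model that scores feature importance.
    # A sigmoid keeps every weight between 0 and 1, as in the embodiments.
    return 1.0 / (1.0 + np.exp(-features))

def classify(second_features, class_prototypes):
    # Toy classifier: pick the class whose prototype scores highest.
    return int(np.argmax(class_prototypes @ second_features))

first = extract_first_feature_vector()      # step 101
weights = learn_weight_vector(first)        # step 102
second = weights * first                    # step 103: element-wise product
prototypes = rng.standard_normal((3, first.shape[0]))
label = classify(second, prototypes)        # step 104
```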
Referring to Fig. 2, Fig. 2 is a second schematic flowchart of the video classification method provided by an embodiment of the present application. In this embodiment, the step of inputting the first feature vector into the algorithm model for learning to obtain the weight-value vector corresponding to each feature may specifically proceed as follows:
Step 1021: input the first feature vector into the algorithm model for learning, and divide the features in the first feature vector into two or more classes according to feature importance.
The features in the first feature vector are divided into two or more classes according to importance, for example into two classes, important and unimportant, or into three classes, important, general and unimportant; naturally, more classes are also possible.
Step 1022: set different weights for the features in the two or more classes, obtaining the weight-value vector.
Different weight values are set for the features in the corresponding classes. For example, with two classes, important and unimportant, the weight value of features in the important class is set to 0.9 and that of unimportant features to 0.5. With three classes, important, general and unimportant, the weight values may be set to 0.9, 0.6 and 0.3 respectively. The weight values are then arranged according to the order of the first feature vector to form the weight-value vector.
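This class-to-weight assignment can be sketched in a few lines. The class names and the helper `build_weight_vector` are hypothetical labels chosen for illustration; only the example weight values (0.9 / 0.6 / 0.3) come from the text.

```python
# Hypothetical mapping from importance class to the example weight values
# given in the text.
CLASS_WEIGHTS = {"important": 0.9, "general": 0.6, "unimportant": 0.3}

def build_weight_vector(feature_classes):
    # feature_classes: one importance label per feature, in the order of
    # the first feature vector.
    return [CLASS_WEIGHTS[c] for c in feature_classes]

weights = build_weight_vector(["important", "unimportant", "general"])
```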
Please refer to Fig. 3 and Fig. 4. Fig. 3 is a third schematic flowchart of the video classification method provided by an embodiment of the present application, and Fig. 4 is a schematic block diagram of the method. In this embodiment, the step of inputting the first feature vector into the algorithm model for learning to obtain the weight-value vector corresponding to each feature may proceed as follows:
Step 1023: input the first feature vector into a first fully connected layer for compression to obtain first intermediate data.
Step 1024: input the first intermediate data into a second fully connected layer for expansion to obtain second intermediate data with the same length as the first feature vector.
Step 1025: convert the second intermediate data into the weight-value vector according to a preset function, the weight values in the weight-value vector lying between 0 and 1.
For example, a one-dimensional first feature vector of length 2048*256 is input, and by adjusting the weight of each feature, a second feature vector of the same length 2048*256 is output. The first feature vector can first be fed into a fully connected layer with 512 nodes for compression, then expanded with a fully connected layer of 1024 nodes to obtain a vector of length 2048*256. Each value is compressed to between 0 and 1 with a sigmoid function, finally yielding the weight-value vector of length 2048*256. Multiplying the weight-value vector element-wise with the original first feature vector finally gives the recalibrated second feature vector.
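Under the assumption that the two fully connected layers are plain linear maps with a ReLU between them (the nonlinearity is not specified in the text), the compress-expand-sigmoid gating can be sketched with toy dimensions as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weight_vector(first, w_reduce, w_expand):
    # Compress with the first FC layer, expand back to the input length with
    # the second, then squash each value into (0, 1) with a sigmoid.
    hidden = np.maximum(w_reduce @ first, 0.0)  # ReLU here is an assumption
    return sigmoid(w_expand @ hidden)

n, k = 8, 4                     # toy sizes standing in for 2048*256 and 512
w_reduce = 0.1 * rng.standard_normal((k, n))
w_expand = 0.1 * rng.standard_normal((n, k))
first = rng.standard_normal(n)
weights = weight_vector(first, w_reduce, w_expand)
second = weights * first        # recalibrated second feature vector
```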
Referring to Fig. 5, Fig. 5 is a fourth schematic flowchart of the video classification method provided by an embodiment of the present application. In this embodiment, the step of inputting the first feature vector into the algorithm model for learning to obtain the weight-value vector corresponding to each feature may proceed as follows:
Step 1026: input the first feature vector into a first fully connected layer for compression to obtain first intermediate data;
Step 1027: input the first intermediate data into a second fully connected layer for expansion to obtain second intermediate data;
Step 1028: input the second intermediate data into a third fully connected layer to obtain third intermediate data with the same length as the feature vector;
Step 1029: convert the third intermediate data into the weight-value vector according to a preset function, the weight values in the weight-value vector lying between 0 and 1.
For example, a one-dimensional first feature vector of length 2048*256 is input, and by adjusting the weight of each feature, a second feature vector of the same length 2048*256 is output. The first feature vector can first be fed into a fully connected layer with 512 nodes for compression, then expanded with a fully connected layer of 1024 nodes, and finally fed into another fully connected layer to obtain a vector of length 2048*256. Each value is compressed to between 0 and 1 with a sigmoid function, yielding the weight-value vector of length 2048*256. Multiplying the weight-value vector element-wise with the original first feature vector finally gives the recalibrated second feature vector.
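The three-layer variant differs only in the extra fully connected layer projecting back to the input length before the sigmoid. A toy sketch under the same assumptions (plain linear layers, ReLU nonlinearities assumed, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weight_vector3(first, w1, w2, w3):
    # Compress, expand, then a third FC layer projecting to the input length,
    # followed by a sigmoid so every weight lies in (0, 1).
    h1 = np.maximum(w1 @ first, 0.0)
    h2 = np.maximum(w2 @ h1, 0.0)
    return sigmoid(w3 @ h2)

n, k, m = 8, 4, 12              # toy stand-ins for 2048*256, 512 and 1024
first = rng.standard_normal(n)
weights = weight_vector3(
    first,
    0.1 * rng.standard_normal((k, n)),   # first FC: compression
    0.1 * rng.standard_normal((m, k)),   # second FC: expansion
    0.1 * rng.standard_normal((n, m)),   # third FC: back to input length
)
second = weights * first
```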
Please refer to Fig. 6 and Fig. 7. Fig. 6 is a fifth schematic flowchart of the video classification method provided by an embodiment of the present application, and Fig. 7 is another schematic block diagram of the method. In this embodiment, the step of inputting the first feature vector into the algorithm model for learning to obtain the weight-value vector corresponding to each feature may proceed as follows:
Step 201: obtain the first feature vector of a video file;
Step 202: input the first feature vector into a first fully connected layer for compression to obtain first intermediate data;
Step 203: input the first intermediate data into a second fully connected layer for expansion to obtain second intermediate data;
Step 204: input the second intermediate data into a fourth fully connected layer for compression to obtain fourth intermediate data;
Step 205: input the fourth intermediate data into a fifth fully connected layer for expansion to obtain fifth intermediate data with the same length as the first feature vector;
Step 206: convert the fifth intermediate data into the weight-value vector according to a preset function, the weight values in the weight-value vector lying between 0 and 1;
Step 207: multiply the weight-value vector element-wise with the first feature vector to obtain an intermediate feature vector;
Step 208: add each feature in the intermediate feature vector to the corresponding second intermediate data to obtain the second feature vector;
Step 209: classify the video file according to the second feature vector.
For example, a one-dimensional first feature vector of length 2048*256 is input, and by adjusting the weight of each feature, a second feature vector of the same length 2048*256 is output. The second intermediate data b obtained from the first group of reduce-expand fully connected layers is added as a residual, while another group of reduce-expand fully connected layers ending in a sigmoid produces the value a used as the weight. Each feature x of the first feature vector is finally recalibrated with a*x + b, obtaining the second feature vector.
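A sketch of this residual recalibration with toy dimensions. The ReLU inside each reduce-expand pair and the random layer weights are assumptions made only so the example runs; the structure (first group produces the residual b, second group plus sigmoid produces the scale a, each feature recalibrated as a*x + b) follows the steps above.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reduce_expand(x, w_reduce, w_expand):
    # One reduce-expand pair of fully connected layers (ReLU assumed).
    return w_expand @ np.maximum(w_reduce @ x, 0.0)

n, k = 8, 4                     # toy stand-ins for 2048*256 and the FC widths
first = rng.standard_normal(n)
# First reduce-expand group (FC1/FC2): its output b, the second intermediate
# data, is later added back as a residual.
b = reduce_expand(first, 0.1 * rng.standard_normal((k, n)),
                  0.1 * rng.standard_normal((n, k)))
# Second group (FC4/FC5) applied to b, followed by a sigmoid: the per-feature
# scale a in (0, 1).
a = sigmoid(reduce_expand(b, 0.1 * rng.standard_normal((k, n)),
                          0.1 * rng.standard_normal((n, k))))
intermediate = a * first        # step 207
second = intermediate + b       # step 208: each feature recalibrated as a*x + b
```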
In some embodiments, multiple consecutive frames can be extracted from the video file, each frame is classified in an algorithm model, and a first group of features representing object categories and a second group of features representing scene categories are formed. The first group and second group of features are fused into a one-dimensional vector, which serves as the first feature vector in the above embodiments; training and learning then proceed on this first feature vector. That is, the first feature vector obtained in step 101 is a one-dimensional feature vector.
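One way such a fusion could look, assuming (the text does not say) that per-frame features are averaged over frames before concatenation; the array shapes are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-frame features for the two groups: object categories and scene categories.
object_feats = rng.standard_normal((5, 6))   # 5 frames, 6 object-feature dims
scene_feats = rng.standard_normal((5, 4))    # 5 frames, 4 scene-feature dims

# Pool over frames (averaging is an assumption; the text only says "fused")
# and concatenate into one one-dimensional first feature vector.
first_feature_vector = np.concatenate(
    [object_feats.mean(axis=0), scene_feats.mean(axis=0)]
)
```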
As can be seen from the above, the video classification method provided by the embodiments of the present invention can solve the problem of excessive noise in video classification. When the first feature vector of a video is obtained, not every feature is equally important. Through learning and training with the algorithm model, the weight of each feature is obtained; features useful for classification are emphasized, while noisy and less important features are suppressed, yielding a better classification result. By learning the importance of features, features important for the current task are emphasized and comparatively irrelevant ones suppressed, thereby improving the performance of the network.
Referring to Fig. 8, Fig. 8 is a first schematic structural diagram of the video classification apparatus provided by an embodiment of the present application. The video classification apparatus 500 includes a first acquisition unit 501, a weight-value vector acquisition unit 502, a second acquisition unit 503 and a classification unit 504. Specifically:
The first acquisition unit 501 is used to obtain the first feature vector of a video file.
The video file may be in a format such as mjpeg, avi, rmvb, or 3gp; the format of the video file is not limited here.
Multiple features are extracted from the video file in advance, and these features form the first feature vector. Specifically, feature extraction may include extraction of image features and extraction of audio features.
The weight-value vector acquisition unit 502 is used to input the first feature vector into an algorithm model for learning to obtain the weight-value vector corresponding to each feature.
The first feature vector is input into the algorithm model for learning and training; the algorithm model may be a convolutional neural network model. The weight value obtained for each feature forms the weight-value vector.
The second acquisition unit 503 is used to multiply the weight-value vector element-wise with the first feature vector to obtain the second feature vector.
Multiplying the weight-value vector element-wise with the first feature vector means multiplying each feature in the first feature vector by its corresponding weight value, finally obtaining the second feature vector.
The classification unit 504 is used to classify the video file according to the second feature vector.
After the second feature vector is obtained, the video file is classified according to it. Specifically, the second feature vector may be input into an algorithm model, which classifies the video according to the second feature vector.
Referring to Fig. 9, Fig. 9 is a second schematic structural diagram of the video classification apparatus provided by an embodiment of the present application. The weight-value vector acquisition unit 502 includes a classification subunit 5021 and a weight-value acquisition subunit 5022. Specifically:
The classification subunit 5021 is used to input the first feature vector into the algorithm model for learning and divide the features in the first feature vector into two or more classes according to feature importance.
The features in the first feature vector are divided into two or more classes according to importance, for example into two classes, important and unimportant, or into three classes, important, general and unimportant; naturally, more classes are also possible.
The weight-value acquisition subunit 5022 is used to set different weights for the features in the two or more classes, obtaining the weight-value vector.
Different weight values are set for the features in the corresponding classes. For example, with two classes, important and unimportant, the weight value of features in the important class is set to 0.9 and that of unimportant features to 0.5. With three classes, important, general and unimportant, the weight values may be set to 0.9, 0.6 and 0.3 respectively. The weight values are then arranged according to the order of the first feature vector to form the weight-value vector.
Referring to Fig. 10, Figure 10 is a third schematic structural diagram of the video classification apparatus provided by an embodiment of the present application. The weight-value vector acquisition unit includes a first intermediate value acquisition subunit 5023, a second intermediate value acquisition subunit 5024 and a weight-value acquisition subunit 5022. Specifically:
The first intermediate value acquisition subunit 5023 is used to input the first feature vector into the first fully connected layer for compression to obtain the first intermediate data;
the second intermediate value acquisition subunit 5024 is used to input the first intermediate data into the second fully connected layer for expansion to obtain second intermediate data with the same length as the first feature vector;
the weight-value acquisition subunit 5022 is used to convert the second intermediate data into the weight-value vector according to a preset function, the weight values in the weight-value vector lying between 0 and 1.
For example, a one-dimensional first feature vector of length 2048*256 is input, and by adjusting the weight of each feature, a second feature vector of the same length 2048*256 is output. The first feature vector can first be fed into a fully connected layer with 512 nodes for compression, then expanded with a fully connected layer of 1024 nodes to obtain a vector of length 2048*256. Each value is compressed to between 0 and 1 with a sigmoid function, finally yielding the weight-value vector of length 2048*256. Multiplying the weight-value vector element-wise with the original first feature vector finally gives the recalibrated second feature vector.
In some embodiments, the weight-value vector acquisition unit includes a first intermediate value acquisition subunit, a second intermediate value acquisition subunit, a third intermediate value acquisition subunit and a weight-value acquisition subunit. Specifically:
The first intermediate value acquisition subunit is used to input the first feature vector into the first fully connected layer for compression to obtain the first intermediate data;
the second intermediate value acquisition subunit is used to input the first intermediate data into the second fully connected layer for expansion to obtain the second intermediate data;
the third intermediate value acquisition subunit is used to input the second intermediate data into the third fully connected layer to obtain third intermediate data with the same length as the feature vector;
the weight-value acquisition subunit is used to convert the third intermediate data into the weight-value vector according to a preset function, the weight values lying between 0 and 1.
For example, a one-dimensional first feature vector of length 2048*256 is input, and by adjusting the weight of each feature, a second feature vector of the same length 2048*256 is output. The first feature vector can first be fed into a fully connected layer with 512 nodes for compression, then expanded with a fully connected layer of 1024 nodes, and finally fed into another fully connected layer to obtain a vector of length 2048*256. Each value is compressed to between 0 and 1 with a sigmoid function, yielding the weight-value vector of length 2048*256. Multiplying the weight-value vector element-wise with the original first feature vector finally gives the recalibrated second feature vector.
In some embodiments, weighted value vector acquiring unit includes the first median and obtains subelement, among second Value obtains subelement, the 4th median obtains subelement, the 5th median obtains subelement and weighted value obtains subelement.Its In:
First median obtains subelement, is compressed to obtain for first eigenvector to be inputted the first full articulamentum One intermediate data;
Second median obtains subelement, is expanded to obtain for the first intermediate data to be inputted the second full articulamentum Two intermediate data;
4th median obtains subelement, is compressed to obtain for the second intermediate data to be inputted the 4th full articulamentum Four intermediate data;
5th median obtain subelement, for by third intermediate data input the 5th full articulamentum expanded to obtain with The 5th identical intermediate data of first eigenvector length;
Weighted value obtains subelement, for the 5th intermediate data to be converted to weighted value vector, power according to preset function Weighted value in weight values vector is between 0-1;
The second obtaining unit 503 includes an intermediate feature vector obtaining subunit and a second feature vector obtaining subunit, where:
the intermediate feature vector obtaining subunit is configured to multiply the weight value vector element-wise with the first feature vector to obtain an intermediate feature vector;
the second feature vector obtaining subunit is configured to add each feature in the intermediate feature vector to the corresponding second intermediate data to obtain the second feature vector.
For example, a one-dimensional first feature vector of length 2048*256 is input, and by adjusting the weight of each feature, a second feature vector of the same 2048*256 length is output. The second intermediate data b obtained from the first reduce-expand group of fully connected layers is used as a residual term, and the value a obtained from the second reduce-expand group of fully connected layers, which ends with a sigmoid, is used as the weight. Each feature x of the first feature vector is then recalibrated as ax+b, yielding the second feature vector.
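The residual ax+b recalibration above can be sketched in NumPy as follows. As before, this is only an illustration: the sizes are hypothetical (length-8 features, bottleneck of 4) and the random matrices stand in for the learned fully connected layers; the two reduce-expand groups are chained, with the second group operating on the output of the first, as in the subunit description.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

d, h = 8, 4  # hypothetical sizes: feature length and bottleneck width

def reduce_expand(x, w_in, w_out):
    # One reduce-expand group: compress to h units, then expand back to length d.
    return (x @ w_in) @ w_out

# Random stand-ins for the learned weights of the two groups.
w1, w2 = rng.normal(scale=0.1, size=(d, h)), rng.normal(scale=0.1, size=(h, d))
w3, w4 = rng.normal(scale=0.1, size=(d, h)), rng.normal(scale=0.1, size=(h, d))

x = rng.normal(size=d)                    # first feature vector
b = reduce_expand(x, w1, w2)              # second intermediate data: residual term
a = sigmoid(reduce_expand(b, w3, w4))     # weight value vector in [0, 1]
second_feature = a * x + b                # recalibration: ax + b
```

Compared with the plain `a * x` variant, the residual term b lets the recalibrated vector keep information even where the learned weight a is driven close to zero.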
It can be seen from the above that the video classification apparatus provided in the embodiments of the present invention can solve the problem of excessive noise in video classification. For example, when the first feature vector of a video is obtained, not every feature is equally important. The weight of each feature is learned through algorithm model training, so that features useful for classification are emphasized while noisy or less important features are suppressed, achieving a better classification result. By learning feature importance, features important to the current task are extracted and less relevant ones are suppressed, thereby improving the performance of the network.
In specific implementation, each of the above modules may be implemented as an independent entity, or combined arbitrarily and implemented as one or several entities. For the specific implementation of each module, reference may be made to the foregoing method embodiments, and details are not repeated here.
In the embodiments of the present application, the video classification apparatus belongs to the same concept as the video classification method in the foregoing embodiments, and any method provided in the video classification method embodiments may run on the video classification apparatus. For the specific implementation process, refer to the video classification method embodiments, and details are not repeated here.
The embodiments of the present application further provide an electronic device. The electronic device includes a processor and a memory, where the processor is electrically connected to the memory.
The processor is the control center of the electronic device. It connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or loading a computer program stored in the memory and invoking data stored in the memory, so as to monitor the electronic device as a whole.
The memory may be configured to store software programs and units, and the processor performs various functional applications and data processing by running the computer programs and units stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and a computer program required for at least one function (for example, a sound playing function, an image playing function, and the like), and the data storage area may store data created according to use of the electronic device. In addition, the memory may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage component. Accordingly, the memory may further include a memory controller to provide the processor with access to the memory.
In the embodiments of the present application, the processor in the electronic device loads instructions corresponding to the processes of one or more computer programs into the memory according to the following steps, and runs the computer programs stored in the memory, thereby implementing various functions as follows:
obtaining a first feature vector of a video file;
inputting the first feature vector into an algorithm model for learning to obtain a weight value vector corresponding to each feature;
multiplying the weight value vector element-wise with the first feature vector to obtain a second feature vector;
classifying the video file according to the second feature vector.
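The four steps above can be sketched end to end as follows. Everything here is a placeholder: `extract_features` is a hypothetical stand-in for a real video feature extractor, the weight network and the linear classifier use random untrained parameters, and the sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

D, H, NUM_CLASSES = 16, 4, 3  # hypothetical feature length, bottleneck, class count

def extract_features(video_file):
    # Stand-in for a real video feature extractor (hypothetical).
    return rng.normal(size=D)

def learn_weights(features):
    # Compress-expand followed by sigmoid, with random untrained parameters.
    w_in, w_out = rng.normal(scale=0.1, size=(D, H)), rng.normal(scale=0.1, size=(H, D))
    return sigmoid((features @ w_in) @ w_out)

def classify(features):
    # Untrained linear classifier as a placeholder; returns a class index.
    w_cls = rng.normal(size=(D, NUM_CLASSES))
    return int(np.argmax(features @ w_cls))

x1 = extract_features("example.mp4")  # step 1: first feature vector
w = learn_weights(x1)                 # step 2: per-feature weight value vector
x2 = w * x1                           # step 3: second feature vector
label = classify(x2)                  # step 4: classify the video file
```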
The embodiments of the present application further provide a storage medium storing a computer program. When the computer program runs on a computer, the computer is caused to perform the video classification method in any of the above embodiments, for example: obtaining a first feature vector of a video file; inputting the first feature vector into an algorithm model for learning to obtain a weight value vector corresponding to each feature; multiplying the weight value vector element-wise with the first feature vector to obtain a second feature vector; and classifying the video file according to the second feature vector.
In the embodiments of the present application, the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
In the above embodiments, the description of each embodiment has its own emphasis. For a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
It should be noted that, for the video classification method of the embodiments of the present application, a person of ordinary skill in the art can understand that all or part of the process of implementing the video classification method of the embodiments of the present application may be completed by a computer program controlling related hardware. The computer program may be stored in a computer-readable storage medium, for example in the memory of an electronic device, and executed by at least one processor in the electronic device; during execution, the process may include the flow of the embodiments of the video classification method. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
The above is a further detailed description of the present invention with reference to specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A video classification method, characterized by comprising:
obtaining a first feature vector of a video file;
inputting the first feature vector into an algorithm model for learning to obtain a weight value vector corresponding to each feature;
multiplying the weight value vector element-wise with the first feature vector to obtain a second feature vector;
classifying the video file according to the second feature vector.
2. The video classification method according to claim 1, characterized in that the step of inputting the first feature vector into an algorithm model for learning to obtain a weight value vector corresponding to each feature comprises:
inputting the first feature vector into the algorithm model for learning, and dividing the features in the first feature vector into two or more classes according to feature importance;
setting each feature in the two or more classes of features to a different weight to obtain the weight value vector.
3. The video classification method according to claim 1, characterized in that the step of inputting the first feature vector into an algorithm model for learning to obtain a weight value vector corresponding to each feature comprises:
inputting the first feature vector into a first fully connected layer for compression to obtain first intermediate data;
inputting the first intermediate data into a second fully connected layer for expansion to obtain second intermediate data with the same length as the first feature vector;
converting the second intermediate data into the weight value vector according to a preset function, each weight value in the weight value vector lying between 0 and 1.
4. The video classification method according to claim 1, characterized in that the step of inputting the first feature vector into an algorithm model for learning to obtain a weight value vector corresponding to each feature comprises:
inputting the first feature vector into a first fully connected layer for compression to obtain first intermediate data;
inputting the first intermediate data into a second fully connected layer for expansion to obtain second intermediate data;
inputting the second intermediate data into a third fully connected layer to obtain third intermediate data with the same length as the first feature vector;
converting the third intermediate data into the weight value vector according to a preset function, each weight value in the weight value vector lying between 0 and 1.
5. The video classification method according to claim 1, characterized in that the step of inputting the first feature vector into an algorithm model for learning to obtain a weight value vector corresponding to each feature comprises:
inputting the first feature vector into a first fully connected layer for compression to obtain first intermediate data;
inputting the first intermediate data into a second fully connected layer for expansion to obtain second intermediate data;
inputting the second intermediate data into a fourth fully connected layer for compression to obtain fourth intermediate data;
inputting the fourth intermediate data into a fifth fully connected layer for expansion to obtain fifth intermediate data with the same length as the first feature vector;
converting the fifth intermediate data into the weight value vector according to a preset function, each weight value in the weight value vector lying between 0 and 1;
and the step of multiplying the weight value vector element-wise with the first feature vector to obtain a second feature vector comprises:
multiplying the weight value vector element-wise with the first feature vector to obtain an intermediate feature vector;
adding each feature in the intermediate feature vector to the corresponding second intermediate data to obtain the second feature vector.
6. A video classification apparatus, characterized by comprising:
a first obtaining unit, configured to obtain a first feature vector of a video file;
a weight value vector obtaining unit, configured to input the first feature vector into an algorithm model for learning to obtain a weight value vector corresponding to each feature;
a second obtaining unit, configured to multiply the weight value vector element-wise with the first feature vector to obtain a second feature vector;
a classification unit, configured to classify the video file according to the second feature vector.
7. The video classification apparatus according to claim 6, characterized in that the weight value vector obtaining unit comprises:
a classification subunit, configured to input the first feature vector into the algorithm model for learning and divide the features in the first feature vector into two or more classes according to feature importance;
a weight value obtaining subunit, configured to set each feature in the two or more classes of features to a different weight to obtain the weight value vector.
8. The video classification apparatus according to claim 6, characterized in that the weight value vector obtaining unit comprises:
a first intermediate value obtaining subunit, configured to input the first feature vector into a first fully connected layer for compression to obtain first intermediate data;
a second intermediate value obtaining subunit, configured to input the first intermediate data into a second fully connected layer for expansion to obtain second intermediate data with the same length as the first feature vector;
a weight value obtaining subunit, configured to convert the second intermediate data into the weight value vector according to a preset function, each weight value in the weight value vector lying between 0 and 1.
9. A storage medium on which a computer program is stored, characterized in that, when the computer program runs on a computer, the computer is caused to perform the video classification method according to any one of claims 1 to 5.
10. An electronic device, comprising a processor and a memory, the memory storing a computer program, characterized in that the processor, by invoking the computer program, is configured to perform the video classification method according to any one of claims 1 to 5.
CN201711424730.4A 2017-12-25 2017-12-25 Video classification methods, device, storage medium and electronic equipment Pending CN108133020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711424730.4A CN108133020A (en) 2017-12-25 2017-12-25 Video classification methods, device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN108133020A true CN108133020A (en) 2018-06-08

Family

ID=62392613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711424730.4A Pending CN108133020A (en) 2017-12-25 2017-12-25 Video classification methods, device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108133020A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095964A (en) * 2015-08-17 2015-11-25 杭州朗和科技有限公司 Data processing method and device
EP3035246A2 (en) * 2014-12-15 2016-06-22 Samsung Electronics Co., Ltd Image recognition method and apparatus, image verification method and apparatus, learning method and apparatus to recognize image, and learning method and apparatus to verify image
CN106874857A (en) * 2017-01-19 2017-06-20 腾讯科技(上海)有限公司 A kind of living body determination method and system based on video analysis


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830330A (en) * 2018-06-22 2018-11-16 西安电子科技大学 Classification of Multispectral Images method based on self-adaptive features fusion residual error net
CN108830330B (en) * 2018-06-22 2021-11-02 西安电子科技大学 Multispectral image classification method based on self-adaptive feature fusion residual error network
CN109255392A (en) * 2018-09-30 2019-01-22 百度在线网络技术(北京)有限公司 Video classification methods, device and equipment based on non local neural network
CN109255392B (en) * 2018-09-30 2020-11-24 百度在线网络技术(北京)有限公司 Video classification method, device and equipment based on non-local neural network
CN110166828A (en) * 2019-02-19 2019-08-23 腾讯科技(深圳)有限公司 A kind of method for processing video frequency and device
CN109902634A (en) * 2019-03-04 2019-06-18 上海七牛信息技术有限公司 A kind of video classification methods neural network based and system

Similar Documents

Publication Publication Date Title
Ma et al. Pyramidal feature shrinking for salient object detection
Picard et al. Improving image similarity with vectors of locally aggregated tensors
CN108090203A (en) Video classification methods, device, storage medium and electronic equipment
CN108133020A (en) Video classification methods, device, storage medium and electronic equipment
CN107545889A (en) Suitable for the optimization method, device and terminal device of the model of pattern-recognition
CN110990631A (en) Video screening method and device, electronic equipment and storage medium
CN108228844A (en) A kind of picture screening technique and device, storage medium, computer equipment
Hii et al. Multigap: Multi-pooled inception network with text augmentation for aesthetic prediction of photographs
CN109086697A (en) A kind of human face data processing method, device and storage medium
CN111898675B (en) Credit wind control model generation method and device, scoring card generation method, machine readable medium and equipment
CN108154120A (en) video classification model training method, device, storage medium and electronic equipment
CN110096617B (en) Video classification method and device, electronic equipment and computer-readable storage medium
CN111539290A (en) Video motion recognition method and device, electronic equipment and storage medium
CN113761359B (en) Data packet recommendation method, device, electronic equipment and storage medium
CN107864405A (en) A kind of Forecasting Methodology, device and the computer-readable medium of viewing behavior type
Hazrati et al. Addressing the New Item problem in video recommender systems by incorporation of visual features with restricted Boltzmann machines
US11347816B2 (en) Adaptive clustering of media content from multiple different domains
CN110389932B (en) Automatic classification method and device for power files
Argaw et al. The anatomy of video editing: A dataset and benchmark suite for ai-assisted video editing
CN115100717A (en) Training method of feature extraction model, and cartoon object recognition method and device
Yang et al. Xception-based general forensic method on small-size images
Varghese et al. A novel video genre classification algorithm by keyframe relevance
CN109213833A (en) Two disaggregated model training methods, data classification method and corresponding intrument
Kalakoti Key-Frame Detection and Video Retrieval Based on DC Coefficient-Based Cosine Orthogonality and Multivariate Statistical Tests.
Mallick et al. Video retrieval using salient foreground region of motion vector based extracted keyframes and spatial pyramid matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180608