Specific Embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, and do not limit the invention. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for generating a model or the apparatus for generating a model of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104, to receive or send messages, etc. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as video recording applications, video playback applications, voice interaction applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
When the terminal devices 101, 102, 103 are hardware, an image capture device may also be mounted thereon. The image capture device may be any device capable of capturing images, such as a camera or a sensor. The user may use the image capture device on the terminal devices 101, 102, 103 to capture video.
The server 105 may be a server providing various services, for example, a video processing server that stores, manages, or analyzes videos uploaded by the terminal devices 101, 102, 103. The video processing server may obtain a sample set. The sample set may include a large number of samples. Each sample in the sample set may include a sample video, first annotation information indicating the source region of the sample video, and second annotation information indicating whether the sample video is a hot video. In addition, the video processing server may use the samples in the sample set to train a convolutional neural network, and may store the training result (e.g., the generated video popularity prediction model). In this way, after a user uploads a video using the terminal devices 101, 102, 103, the server 105 may determine the popularity of the uploaded video and, in turn, perform operations such as pushing the video.
It should be noted that the server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for generating a model provided by the embodiments of the present application is generally executed by the server 105; accordingly, the apparatus for generating a model is generally disposed in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating a model according to the present application is shown. The method for generating a model includes the following steps:
Step 201: obtaining a sample set.
In this embodiment, the execution body of the method for generating a model (e.g., the server 105 shown in Fig. 1) may obtain the sample set in several ways. For example, the execution body may obtain, through a wired or wireless connection, an existing sample set stored in another server that stores samples (e.g., a database server). As another example, a user may collect samples through terminal devices (e.g., the terminal devices 101, 102, 103 shown in Fig. 1). In this case, the execution body may receive the samples collected by the terminals and store them locally, thereby generating the sample set. It should be pointed out that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (ultra wideband) connection, and other wireless connections now known or developed in the future.
Here, the sample set may include a large number of samples. Each sample may include a sample video, first annotation information indicating the source region of the sample video, and second annotation information indicating whether the sample video is a hot video.
Here, whether a video is a hot video may be determined in advance according to certain metrics. For example, if the daily recommendation count of the video is greater than a specified value (e.g., 5000), the video may be considered a hot video. Alternatively, if the click count of the video within a specified time period is greater than a specified value, the video may be considered a hot video. Similarly, if the daily recommendation count of the video is less than a specified value (e.g., 500), the video may be considered not to be a hot video. Alternatively, if the click count of the video within a specified time period is less than a specified value, the video may be considered not to be a hot video.
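The threshold-based labeling described above can be sketched as follows. This is a minimal illustration only: the function name and the choice of daily recommendation count as the metric are hypothetical, and the example thresholds (5000 and 500) are taken from the text.

```python
# Hypothetical sketch of producing the second annotation information
# from the daily recommendation count, per the example thresholds above.

def label_hot_video(daily_recommendations, hot_threshold=5000, cold_threshold=500):
    """Return 1 (hot video), 0 (not a hot video), or None (undetermined)."""
    if daily_recommendations > hot_threshold:
        return 1   # second annotation information: hot video
    if daily_recommendations < cold_threshold:
        return 0   # second annotation information: not a hot video
    return None    # between thresholds: decide by another metric, e.g. clicks

print(label_hot_video(8000))  # 1
print(label_hot_video(100))   # 0
```

A video falling between the two thresholds would be labeled by one of the other metrics mentioned, such as the click count within a specified time period.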
Step 202: extracting a sample from the sample set.
In this embodiment, the execution body may extract samples from the sample set obtained in step 201, and perform the training steps of steps 203 to 206. The manner of extracting samples and the number of samples extracted are not limited in the present application. For example, at least one sample may be extracted at random, or samples whose sample videos have better clarity (i.e., higher-resolution frames) may be extracted.
Step 203: inputting frames of the sample video in the extracted sample into a convolutional neural network including a fully connected layer.
In this embodiment, the execution body may input frames of the sample video in the sample extracted in step 202 into a convolutional neural network including a fully connected layer. The convolutional neural network may perform feature extraction, analysis, and other processing on the frames of the video, and then output information. It should be noted that the input frames of the sample video may be one or more randomly selected frames, or may be multiple frames extracted from the sample video at a specified time interval (e.g., 1 s or 2 s). No limitation is made here.
Here, the convolutional neural network may use any of various existing structures (e.g., DenseBox, VGGNet, ResNet, SegNet, etc.). In practice, a convolutional neural network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within part of their receptive field, and it performs outstandingly in image processing. Therefore, a convolutional neural network may be used to extract the features of frames in the sample video.
In this embodiment, the convolutional neural network may include a fully connected layer. Other layers may also be provided as needed, such as convolutional layers, pooling layers, and a feature fusion layer. The convolutional layers may be used to extract image features. The pooling layers may be used to downsample the input information. The feature fusion layer may be used to fuse the image features obtained for each frame (which may be, for example, in the form of feature matrices or feature vectors). For example, the feature values at the same position in the feature matrices corresponding to different frames may be averaged, thereby performing feature fusion and generating a single fused feature matrix. The fully connected layer may be used to classify the obtained features.
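The element-wise averaging fusion just described can be sketched as follows. The 2x2 matrices are illustrative placeholders for real per-frame feature matrices, and the function name is hypothetical.

```python
# Minimal sketch of feature fusion by averaging: the feature matrices of
# different frames are averaged position by position, yielding one fused
# feature matrix of the same shape.

def fuse_features(frame_matrices):
    n = len(frame_matrices)
    rows, cols = len(frame_matrices[0]), len(frame_matrices[0][0])
    return [[sum(m[i][j] for m in frame_matrices) / n for j in range(cols)]
            for i in range(rows)]

frame_a = [[1.0, 2.0], [3.0, 4.0]]  # feature matrix of frame 1
frame_b = [[3.0, 4.0], [5.0, 6.0]]  # feature matrix of frame 2
print(fuse_features([frame_a, frame_b]))  # [[2.0, 3.0], [4.0, 5.0]]
```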
In this embodiment, the fully connected layer of the convolutional neural network may include multiple branches, each branch corresponding to one source region. The branches of the fully connected layer are independent of each other. The information output by the layer preceding the fully connected layer may be input to each branch separately, and each branch independently processes the input information to produce its own output. In practice, each branch may be regarded as an independent fully connected layer. Here, the source regions may be divided by country; for example, they may be divided into China, the United States, Thailand, etc. They may also be divided by area (e.g., the south or the north of China), by continent (e.g., Asia, Europe), by city (e.g., Beijing, Shanghai), and so on. No limitation is made here.
In this embodiment, after the frames of the sample video are input into the convolutional neural network, the shallow layers of the convolutional neural network (understood here as the layers before the fully connected layer, such as the convolutional layers, pooling layers, and feature fusion layer) may successively perform feature extraction, analysis, and other processing on the input frames, and then input the processed information into each branch of the fully connected layer. Each branch may perform further computation on the received information and finally output information. Here, for each branch, the information finally output by the branch may be the probability that the sample video is a hot video. The probability output by each branch may be taken as the popularity, predicted by the convolutional neural network, of the sample video in the corresponding source region. In practice, each branch of the fully connected layer may use a function such as the sigmoid function or the softmax function to compute the popularity of the sample video. The popularity computed by each branch may lie in the interval [0, 1]. It should be noted that the popularity of a video may be used to characterize the degree of attention the video receives. In general, the higher the popularity, the larger the click count or forwarding count of the video. When the popularity of a video is greater than a preset threshold, the video may be considered a hot video.
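The multi-branch fully connected layer described above can be sketched as follows. This is a hypothetical illustration: each source region gets its own independent linear transform followed by a sigmoid, mapping the shared fused features to a popularity in [0, 1]. The weights are random placeholders, not trained values, and the region names are examples from the text.

```python
import math
import random

# Hypothetical sketch: one independent (weights, bias) pair per branch,
# each followed by a sigmoid, so every branch outputs a value in [0, 1].

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
feature_dim = 8
regions = ["China", "US", "Thailand"]
branches = {r: ([random.gauss(0, 1) for _ in range(feature_dim)], random.gauss(0, 1))
            for r in regions}

fused = [random.gauss(0, 1) for _ in range(feature_dim)]  # shallow-layer output
popularity = {r: sigmoid(sum(w_i * x_i for w_i, x_i in zip(w, fused)) + b)
              for r, (w, b) in branches.items()}

assert all(0.0 <= p <= 1.0 for p in popularity.values())
```

Because the branches share no parameters, each one may be regarded as an independent fully connected layer, as noted above.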
Step 204: determining a loss value of the sample based on the information output by each branch, the annotation information in the extracted sample, and the preset loss function corresponding to each branch.
In this embodiment, the execution body may determine the loss value of the sample based on the information output by each branch, the annotation information in the extracted sample (including the first annotation information and the second annotation information), and the preset loss function corresponding to each branch. Here, for a given branch, the loss function corresponding to the branch may be used to estimate the degree of inconsistency between the information output by the branch (e.g., the popularity of the sample video in the source region corresponding to the branch) and the true value (e.g., 1 or 0, respectively characterizing whether the input sample video is a hot video). It is a non-negative real-valued function. In general, the smaller the value of the loss function (the loss value), the better the robustness of the model. The loss function may be set according to actual needs.
Specifically, the loss value of the sample may be determined according to the following steps. In a first step, for each branch, the execution body may substitute the information output by the branch and the second annotation information in the sample into the loss function corresponding to the branch, obtaining the loss value corresponding to the branch. In a second step, the source region indicated by the first annotation information in the extracted sample may be determined, so as to determine the branch of the fully connected layer corresponding to that source region. In a third step, only the loss value corresponding to the branch determined in the second step may be extracted and used as the loss value of the sample.
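The three-step selection just described can be sketched as follows. The choice of binary cross-entropy is only an example of a loss function (one is named later in the text as an option), and the region names and output values are illustrative placeholders.

```python
import math

# Sketch of the per-branch loss followed by selection of the branch that
# matches the first annotation information (the source region).

def bce(prediction, label, eps=1e-7):
    prediction = min(max(prediction, eps), 1 - eps)  # avoid log(0)
    return -(label * math.log(prediction) + (1 - label) * math.log(1 - prediction))

branch_outputs = {"China": 0.9, "US": 0.2, "Thailand": 0.4}
first_annotation = "US"   # source region of the sample video
second_annotation = 0     # not a hot video

# step 1: loss per branch; steps 2-3: keep only the matching branch's loss
branch_losses = {r: bce(p, second_annotation) for r, p in branch_outputs.items()}
sample_loss = branch_losses[first_annotation]
print(round(sample_loss, 4))  # 0.2231
```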
In some optional implementations of this embodiment, the execution body may determine the loss value of the sample according to the following steps:
In a first step, the information output by each branch and the second annotation information in the extracted sample may be input into the preset loss function corresponding to the respective branch, to determine the loss value of each branch.
In a second step, the weight of the loss value of each branch may be determined based on the first annotation information in the extracted sample.
Optionally, the weight of the loss value of the branch corresponding to the source region indicated by the first annotation information may be set to a specified value (e.g., 1), and the weights of the loss values of the branches corresponding to the remaining source regions may be set to another specified value (e.g., 0). It should be noted that the above specified values may be values pre-established by technicians based on statistics and analysis of a large amount of data.
Optionally, the source regions may first be grouped. The weights of the loss values of the branches corresponding to the source regions in the group containing the source region indicated by the first annotation information may be set to a specified value (e.g., 1), and the weights of the loss values of the branches corresponding to the source regions in the remaining groups may be set to another specified value (e.g., 0). Here, the source regions may be grouped according to the larger region to which each source region belongs. For example, if the source regions include China, South Korea, Germany, etc., the source regions may be divided into an Asia group, a Europe group, etc., according to the continent to which each belongs.
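The grouped variant can be sketched as follows, assuming a hypothetical continent mapping for the example countries named above: branches whose source region shares a group with the annotated region receive weight 1, and all others receive weight 0.

```python
# Illustrative sketch of group-based weighting; the continent table and
# weight values (1 and 0) follow the examples in the text.

continent = {"China": "Asia", "South Korea": "Asia", "Germany": "Europe"}

def group_weights(annotated_region, regions):
    group = continent[annotated_region]
    return {r: 1.0 if continent[r] == group else 0.0 for r in regions}

weights = group_weights("China", ["China", "South Korea", "Germany"])
print(weights)  # {'China': 1.0, 'South Korea': 1.0, 'Germany': 0.0}
```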
Optionally, the weight of the loss value of each branch may be determined according to the following steps: for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to the branch, the weight of the loss value of the branch may be determined as a first preset value (e.g., 1); in response to determining that the source region indicated by the first annotation information in the extracted sample is different from the source region corresponding to the branch, the weight of the loss value of the branch may be determined as a second preset value (e.g., 0). Here, the first preset value and the second preset value may be values pre-established by technicians based on statistics and analysis of a large amount of data.
In a third step, the loss values of the branches may be weighted to determine the loss value of the sample.
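The weighted combination in the third step can be sketched as follows. The loss values are illustrative placeholders; the weights follow the example preset values of 1 for the branch matching the first annotation information and 0 for the rest.

```python
# Sketch of the third step: each branch loss is multiplied by its weight
# and the results are summed to give the loss value of the sample.

branch_losses = {"China": 0.11, "US": 0.52, "Thailand": 0.30}
weights = {"China": 0.0, "US": 1.0, "Thailand": 0.0}  # first annotation: US

sample_loss = sum(weights[r] * branch_losses[r] for r in branch_losses)
print(sample_loss)  # 0.52
```

With 0/1 weights, this weighted sum reduces to selecting the matching branch's loss, the same result as the selection procedure described earlier.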
Step 205: determining, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete.
In this embodiment, the execution body may compare the determined loss value with a target value, and determine, according to the comparison result, whether training of the convolutional neural network is complete. It should be noted that if multiple (at least two) samples were extracted in step 202, the execution body may compare the loss value of each sample with the target value separately, thereby determining whether the loss value of each sample is less than or equal to the target value. As an example, if multiple samples were extracted in step 202 and the loss value of each sample is less than or equal to the target value, the execution body may determine that training of the convolutional neural network is complete. As another example, the execution body may count the proportion of samples whose loss values are less than or equal to the target value among the extracted samples, and when this proportion reaches a preset sample proportion (e.g., 95%), determine that training of the convolutional neural network is complete. It should be noted that the target value may generally be used to represent an ideal degree of inconsistency between the predicted value and the true value. That is, when the loss value is less than or equal to the target value, the predicted value may be considered close or approximately equal to the true value. The target value may be set according to actual needs.
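The proportion-based completion check can be sketched as follows. The loss values are illustrative, and the 95% sample proportion follows the example in the text.

```python
# Sketch of the training-completion check: training is deemed complete
# when at least the preset proportion of extracted samples have a loss
# value no greater than the target value.

def training_complete(losses, target_value, sample_proportion=0.95):
    within = sum(1 for loss in losses if loss <= target_value)
    return within / len(losses) >= sample_proportion

print(training_complete([0.01, 0.02, 0.04], target_value=0.05))  # True
print(training_complete([0.01, 0.90, 0.95], target_value=0.05))  # False
```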
It should be noted that, in response to determining that training of the convolutional neural network is complete, step 206 may continue to be executed. In response to determining that training of the convolutional neural network is not complete, the parameters of the convolutional neural network may be updated based on the determined loss values, samples may again be extracted from the sample set, and the above training steps may continue to be executed using the convolutional neural network with updated parameters. Here, the back-propagation algorithm may be used to obtain the gradient of the loss value with respect to the model parameters, and the gradient descent algorithm may then be used to update the model parameters based on the gradient. It should be noted that the back-propagation algorithm, the gradient descent algorithm, and machine learning methods are well-known techniques that are currently widely studied and applied, and are not described in detail here. It should also be pointed out that the manner of extraction here is likewise not limited in this application. For example, in the case where the sample set contains a large number of samples, the execution body may extract samples that have not been extracted before.
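A single gradient-descent parameter update, the step taken when training is not yet complete, can be sketched as follows. A toy quadratic loss is used so its gradient has a closed form; a real network would obtain the gradient via back-propagation instead, as noted above.

```python
# Minimal sketch of one gradient-descent update on a toy quadratic loss.

def loss(params):
    return sum(p * p for p in params)

def gradient(params):
    return [2 * p for p in params]  # derivative of sum(p^2)

params = [1.0, -2.0]
learning_rate = 0.1
params = [p - learning_rate * g for p, g in zip(params, gradient(params))]

assert loss(params) < 5.0  # the loss decreased from its initial value 5.0
```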
Step 206: in response to determining that training of the convolutional neural network is complete, determining the trained convolutional neural network as the video popularity prediction model.
In this embodiment, in response to determining that training of the convolutional neural network is complete, the execution body may determine the trained convolutional neural network as the video popularity prediction model.
In some optional implementations of this embodiment, after the video popularity prediction model is obtained by training, the execution body may, in response to receiving a target video, input frames of the target video into the video popularity prediction model. The target video carries an annotation indicating its source region, and may be any video uploaded by a terminal device. The execution body may then take the source region of the target video as the target source region, take the branch of the fully connected layer of the video popularity prediction model corresponding to the target source region as the target branch, and determine the information output by the target branch as the popularity of the target video. In this way, prediction of the popularity of videos uploaded by users can be achieved.
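The inference step just described can be sketched as follows. The branch scores are illustrative placeholders: only the output of the branch matching the target video's annotated source region is read out as the predicted popularity.

```python
# Sketch of inference: select the target branch by the target video's
# source region annotation and return its output as the popularity.

def predict_popularity(branch_outputs, target_source_region):
    return branch_outputs[target_source_region]

branch_outputs = {"China": 0.81, "US": 0.35, "Thailand": 0.12}
print(predict_popularity(branch_outputs, "China"))  # 0.81
```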
In some optional implementations of this embodiment, in response to determining that the popularity of the target video is greater than a preset threshold, the execution body may determine that the target video is a hot video, and push the target video to a target user. Here, the target user may be a randomly selected user, or a user determined according to preset rules. As an example, other users who follow the user that uploaded the target video may be determined as target users.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to this embodiment. In the application scenario of Fig. 3, a model training application may be installed on the terminal device 301 used by a user. After the user opens the application and uploads a sample set or the storage path of a sample set, the server 302 providing back-end support for the application may run the method for generating a model, including:
First, a sample set may be obtained. Each sample in the sample set may include a sample video 303, first annotation information 304 indicating the source region of the sample video, and second annotation information 305 indicating whether the sample video is a hot video. Then, a sample may be extracted from the sample set, and the following training steps may be performed: inputting frames of the sample video in the extracted sample into a convolutional neural network 306 including a fully connected layer, wherein the fully connected layer includes multiple branches, each branch corresponding to one source region; determining a loss value 307 of the sample based on the information output by each branch, the annotation information in the extracted sample (the first annotation information 304 and the second annotation information 305), and the preset loss function corresponding to each branch; determining, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training of the convolutional neural network is complete, determining the trained convolutional neural network as a video popularity prediction model 308.
The method provided by the above embodiment of the present application may obtain a sample set, from which samples may be extracted to train a convolutional neural network. Each sample in the sample set includes a sample video, first annotation information indicating the source region of the sample video, and second annotation information indicating whether the sample video is a hot video. In this way, by inputting the frames of the sample video in an extracted sample into the convolutional neural network, the information output by each branch of the fully connected layer of the convolutional neural network can be obtained. Then, the loss value of the sample can be determined based on the information output by each branch of the fully connected layer, the annotation information in the extracted sample, and the preset loss function corresponding to each branch. Finally, whether training of the convolutional neural network is complete can be determined based on a comparison of the loss value with the target value. If training of the convolutional neural network is complete, the trained convolutional neural network can be determined as the video popularity prediction model. A model usable for video popularity prediction can thus be obtained, and the model is suitable for predicting the popularity of videos from different source regions, which improves the applicability of the model.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating a model is illustrated. The flow 400 of the method for generating a model includes the following steps:
Step 401: obtaining a sample set.
In this embodiment, the execution body of the method for generating a model (e.g., the server 105 shown in Fig. 1) may obtain a sample set. Here, the sample set may include a large number of samples. Each sample may include a sample video, first annotation information indicating the source region of the sample video, and second annotation information indicating whether the sample video is a hot video.
Step 402: extracting a sample from the sample set.
In this embodiment, the execution body may extract samples from the sample set obtained in step 401, and perform the training steps of steps 403 to 408. The manner of extracting samples and the number of samples extracted are not limited in the present application. For example, at least one sample may be extracted at random, or samples whose sample videos have better clarity (i.e., higher-resolution frames) may be extracted.
Step 403: inputting frames of the sample video in the extracted sample into a convolutional neural network including a fully connected layer.
In this embodiment, the execution body may input frames of the sample video in the sample extracted in step 402 into a convolutional neural network including a fully connected layer. Here, the convolutional neural network may include a fully connected layer, and other layers may be provided as needed, such as convolutional layers, pooling layers, and a feature fusion layer. The fully connected layer may include multiple branches, each branch corresponding to one source region. The branches of the fully connected layer are independent of each other. The source regions may be divided by country.
In this embodiment, after the frames of the sample video are input into the convolutional neural network, the shallow layers of the convolutional neural network (understood here as the layers before the fully connected layer, such as the convolutional layers, pooling layers, and feature fusion layer) may successively perform feature extraction, analysis, and other processing on the input frames, and then input the processed information into each branch of the fully connected layer. Each branch may perform further computation on the received information and finally output information. Here, for each branch, the information finally output by the branch may be the probability that the sample video is a hot video. The probability output by each branch may be taken as the popularity, predicted by the convolutional neural network, of the sample video in the corresponding source region.
Step 404: inputting the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to the respective branch, to determine the loss value of each branch.
In this embodiment, the execution body may input the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to the respective branch, to determine the loss value of each branch. For a given branch, the loss function corresponding to the branch may be used to estimate the degree of inconsistency between the information output by the branch (e.g., the popularity of the sample video in the source region corresponding to the branch) and the true value (e.g., 1 or 0, respectively characterizing whether the input sample video is a hot video). It is a non-negative real-valued function. In general, the smaller the value of the loss function (the loss value), the better the robustness of the model. The loss function may be set according to actual needs. As an example, an existing loss function such as the cross-entropy loss function may be used.
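Step 404 under the cross-entropy example named above can be sketched as follows: each branch's output probability is scored against the second annotation information with binary cross-entropy. The branch outputs are illustrative placeholders.

```python
import math

# Sketch of the per-branch loss of step 404 using binary cross-entropy.

def binary_cross_entropy(p, label, eps=1e-7):
    p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

branch_outputs = {"China": 0.9, "US": 0.2, "Thailand": 0.4}
second_annotation = 1  # the sample video is a hot video

branch_losses = {r: binary_cross_entropy(p, second_annotation)
                 for r, p in branch_outputs.items()}
assert branch_losses["China"] < branch_losses["US"]  # 0.9 is closer to label 1
```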
Step 405: for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to the branch, determining the weight of the loss value of the branch as a first preset value; and in response to determining that the source region indicated by the first annotation information in the extracted sample is different from the source region corresponding to the branch, determining the weight of the loss value of the branch as a second preset value.
In this embodiment, for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to the branch, the execution body may determine the weight of the loss value of the branch as a first preset value (e.g., 1). In response to determining that the source region indicated by the first annotation information in the extracted sample is different from the source region corresponding to the branch, the weight of the loss value of the branch is determined as a second preset value (e.g., 0).
Step 406: weighting the loss values of the branches to determine the loss value of the sample.
In this embodiment, the execution body may weight the loss values of the branches to determine the loss value of the sample.
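Steps 405 and 406 taken together can be sketched as follows: weights are assigned by comparing each branch's source region with the first annotation information, and the weighted branch losses are summed. The loss values are illustrative; the preset values 1 and 0 follow the examples above.

```python
# Sketch combining steps 405 (weight assignment) and 406 (weighted sum).

def weighted_sample_loss(branch_losses, first_annotation,
                         first_preset=1.0, second_preset=0.0):
    weights = {r: first_preset if r == first_annotation else second_preset
               for r in branch_losses}
    return sum(weights[r] * branch_losses[r] for r in branch_losses)

losses = {"China": 0.11, "US": 0.52, "Thailand": 0.30}
print(weighted_sample_loss(losses, "Thailand"))  # 0.3
```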
Step 407: determining, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete.
In this embodiment, the execution body may compare the determined loss value with a target value, and determine, according to the comparison result, whether training of the convolutional neural network is complete. It should be noted that if multiple (at least two) samples were extracted in step 402, the execution body may compare the loss value of each sample with the target value separately, thereby determining whether the loss value of each sample is less than or equal to the target value. As an example, if multiple samples were extracted in step 402 and the loss value of each sample is less than or equal to the target value, the execution body may determine that training of the convolutional neural network is complete.
It should be noted that, in response to determining that training of the convolutional neural network is complete, step 408 may then be executed. In response to determining that training of the convolutional neural network is not complete, the parameters of the convolutional neural network may be updated based on the determined loss value, a sample may be extracted from the above-mentioned sample set again, and the above-mentioned training step may be continued using the convolutional neural network with updated parameters as the convolutional neural network. Here, the gradient of the loss value with respect to the model parameters may be obtained using a back-propagation algorithm, and the model parameters may then be updated based on the gradient using a gradient descent algorithm. It should be noted that the above-mentioned back-propagation algorithm, gradient descent algorithm, and machine learning methods are well-known techniques that are currently widely studied and applied, and are not described in detail here. It should also be noted that the manner of extraction is not limited in this application. For example, when the sample set contains a large number of samples, the execution subject may extract samples that have not yet been extracted.
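The parameter update mentioned above can be illustrated with a minimal gradient-descent step. The quadratic toy loss and learning rate below are assumptions made for the sketch; they stand in for the gradients that back-propagation would produce for the actual network.

```python
def gradient_descent_step(params, grads, lr=0.1):
    """One gradient-descent update: each parameter moves against
    its gradient, scaled by the learning rate."""
    return [p - lr * g for p, g in zip(params, grads)]

# toy example: loss = sum(p**2), so the gradient of each parameter is 2 * p
params = [1.0, -2.0]
grads = [2 * p for p in params]
params = gradient_descent_step(params, grads, lr=0.1)
# the parameters move toward the loss minimum at zero
```

In practice a framework's automatic differentiation would supply `grads`, and the step would be repeated with newly extracted samples until the completion criterion of step 407 is met.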
Step 408: in response to determining that training of the convolutional neural network is complete, determine the trained convolutional neural network as the video popularity prediction model.
In the present embodiment, in response to determining that training of the convolutional neural network is complete, the above-mentioned execution subject may determine the trained convolutional neural network as the video popularity prediction model.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating a model in the present embodiment embodies a way of determining the loss value of an extracted sample. The scheme described in the present embodiment can thus, based on sample videos from different sources, train a model capable of performing popularity prediction for videos from different sources, improving the applicability of the model.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, this application provides an embodiment of an apparatus for generating a model. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating a model described in the present embodiment includes: an acquiring unit 501, configured to acquire a sample set, where a sample in the above-mentioned sample set includes a sample video, first annotation information used to indicate the source of the sample video, and second annotation information used to indicate whether the sample video is a hot video; and a training unit 502, configured to extract a sample from the above-mentioned sample set and execute the following training step: inputting the frames of the sample video in the extracted sample into a convolutional neural network containing a fully connected layer, where the above-mentioned fully connected layer includes multiple branches and each branch corresponds to a source; determining the loss value of the sample based on the information output by each branch, the annotation information in the extracted sample, and a preset loss function corresponding to each branch; determining, based on a comparison of the above-mentioned loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training of the convolutional neural network is complete, determining the trained convolutional neural network as the video popularity prediction model.
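As a hypothetical sketch of the multi-branch fully connected layer described above: each branch corresponds to one source and produces its own output from the same shared features. The feature dimension, source names, and simple linear branches are all assumptions made for illustration, not details taken from the application.

```python
import random

random.seed(0)

FEATURE_DIM = 4
SOURCES = ["source_a", "source_b", "source_c"]  # assumed source names

# one weight vector per branch of the fully connected layer;
# each branch corresponds to one source
branches = {s: [random.uniform(-1.0, 1.0) for _ in range(FEATURE_DIM)]
            for s in SOURCES}

def fully_connected_forward(features):
    """Every branch produces an output for the same shared feature
    vector; during training, each output is fed to that branch's
    own preset loss function."""
    return {source: sum(w * x for w, x in zip(weights, features))
            for source, weights in branches.items()}

outputs = fully_connected_forward([0.5, 0.1, -0.2, 0.7])
```

In a real network the shared features would come from the convolutional layers preceding the fully connected layer; here they are given directly to keep the sketch self-contained.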
In some optional implementations of the present embodiment, the above-mentioned training unit 502 may be further configured to: input the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to that branch, to determine the loss value of each branch; determine the weight of the loss value of each branch based on the first annotation information in the extracted sample; and compute a weighted sum of the loss values of the branches to determine the loss value of the sample.
In some optional implementations of the present embodiment, the above-mentioned training unit 502 may be further configured to: for each branch, in response to determining that the source indicated by the first annotation information in the extracted sample is the same as the source corresponding to the branch, determine the weight of the loss value of the branch as a first default value; and, in response to determining that the source indicated by the first annotation information in the extracted sample is different from the source corresponding to the branch, determine the weight of the loss value of the branch as a second default value.
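Under the assumption (not stated in the application) that the first default value is 1 and the second is 0, the weighting scheme above reduces to counting only the loss of the branch whose source matches the sample's first annotation information. A sketch:

```python
FIRST_DEFAULT = 1.0   # assumed weight when the branch source matches the sample
SECOND_DEFAULT = 0.0  # assumed weight when it does not

def branch_weights(sample_source, branch_sources):
    """Weight of each branch's loss, derived from the sample's
    first annotation information (its source)."""
    return {b: FIRST_DEFAULT if b == sample_source else SECOND_DEFAULT
            for b in branch_sources}

def sample_loss(branch_losses, weights):
    """The sample's loss value is the weighted sum of the branch losses."""
    return sum(weights[b] * loss for b, loss in branch_losses.items())

losses = {"source_a": 0.9, "source_b": 0.4}
w = branch_weights("source_b", losses)
total = sample_loss(losses, w)  # only source_b's loss contributes
```

With these assumed default values, each sample effectively trains only its own source's branch while the shared convolutional layers are trained by samples from every source; other choices of the two default values would let non-matching branches contribute partially.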
In some optional implementations of the present embodiment, the apparatus may also include an updating unit (not shown in the figure). The above-mentioned updating unit may be configured to, in response to determining that training of the convolutional neural network is not complete, update the parameters of the convolutional neural network based on the above-mentioned loss value, extract a sample from the above-mentioned sample set again, and continue to execute the above-mentioned training step using the convolutional neural network with updated parameters as the convolutional neural network.
The apparatus provided by the above embodiment of this application can, by acquiring a sample set, extract samples from it for training a convolutional neural network. Here, a sample in the above-mentioned sample set includes a sample video, first annotation information used to indicate the source of the sample video, and second annotation information used to indicate whether the sample video is a hot video. In this way, by inputting the frames of the sample video in an extracted sample into the convolutional neural network, the information output by each branch of the fully connected layer of the convolutional neural network can be obtained. Then, the loss value of the sample can be determined based on the information output by each branch of the fully connected layer of the convolutional neural network, the annotation information in the extracted sample, and the preset loss function corresponding to each branch. Finally, whether training of the convolutional neural network is complete can be determined based on a comparison of the above-mentioned loss value with a target value. If training of the convolutional neural network is complete, the trained convolutional neural network can be determined as the video popularity prediction model. A model usable for video popularity prediction can thus be obtained, and the model is suitable for predicting the popularity of videos from different sources, improving the applicability of the model.
Referring to Fig. 6, it illustrates a flow 600 of an embodiment of a method for generating information provided by this application. The method for generating information may include the following steps:
Step 601: in response to receiving a target video, input the frames of the target video into the video popularity prediction model.
In the present embodiment, the execution subject of the method for generating information (for example, the server 105 shown in Fig. 1, or another server storing the video popularity prediction model) may, in response to receiving a target video, input the frames of the above-mentioned target video into the video popularity prediction model. Here, the above-mentioned target video has a label used to indicate the source of the above-mentioned target video.
In the present embodiment, the video popularity prediction model may be generated using the method described in the embodiment of Fig. 2 above. For the specific generation process, refer to the related description of the embodiment of Fig. 2, which is not repeated here.
Step 602: taking the source of the target video as the target source, take the branch in the fully connected layer of the video popularity prediction model corresponding to the target source as the target branch, and determine the information output by the target branch as the popularity of the target video.
In the present embodiment, the above-mentioned execution subject may take the source of the above-mentioned target video as the target source, take the branch in the fully connected layer of the above-mentioned video popularity prediction model corresponding to the above-mentioned target source as the target branch, and determine the information output by the above-mentioned target branch as the popularity of the above-mentioned target video. Prediction of the popularity of a video uploaded by a user can thereby be realized.
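The branch selection at prediction time can be sketched as follows; the source label and per-branch output values are illustrative assumptions.

```python
def predict_popularity(branch_outputs, target_source):
    """At inference, only the branch matching the target video's
    source label is read; its output is taken as the predicted
    popularity of the target video."""
    return branch_outputs[target_source]

# hypothetical outputs of every branch for one target video
branch_outputs = {"source_a": 0.31, "source_b": 0.78}
popularity = predict_popularity(branch_outputs, "source_b")  # -> 0.78
```

The outputs of the non-matching branches are simply ignored for this video, which is what makes a single trained model applicable to videos from each source.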
In some optional implementations of the present embodiment, in response to determining that the popularity of the above-mentioned target video is greater than a preset threshold, the above-mentioned execution subject may determine that the above-mentioned target video is a hot video and push the above-mentioned target video to target users. Here, a target user may be a randomly selected user, or may be a user determined according to preset rules. As an example, other users who follow the user who uploaded the target video may be determined as target users.
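The optional push step reduces to a threshold check on the predicted popularity. In the sketch below, the threshold value and the use of the uploader's followers as target users are assumptions for illustration (the application leaves both to preset rules).

```python
def push_targets(popularity, threshold, followers):
    """If the predicted popularity exceeds the preset threshold, the
    video is treated as a hot video and pushed to the target users
    (assumed here to be followers of the uploading user)."""
    if popularity > threshold:
        return list(followers)
    return []

pushed = push_targets(0.78, 0.5, ["user_1", "user_2"])   # hot video: push
skipped = push_targets(0.31, 0.5, ["user_1", "user_2"])  # below threshold
```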
It should be noted that the method for generating information in the present embodiment can be used to test the video popularity prediction model generated by the above embodiments, and the video popularity prediction model can then be continuously optimized according to the test results. This method may also be a practical application method of the video popularity prediction model generated by the above embodiments. Using the video popularity prediction model generated by the above embodiments, the popularity of videos from each source can be predicted, improving the applicability of the model and reducing model maintenance costs. At the same time, whether a video will become a hot video can be effectively predicted, which facilitates targeted video pushing.
With continued reference to Fig. 7, as an implementation of the method shown in Fig. 6 above, this application provides an embodiment of an apparatus for generating information. The apparatus embodiment corresponds to the method embodiment shown in Fig. 6, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 7, the apparatus 700 for generating information described in the present embodiment includes: an input unit 701, configured to, in response to receiving a target video, input the frames of the above-mentioned target video into a video popularity prediction model generated using the method described in the embodiment of Fig. 2 above, where the above-mentioned target video has a label used to indicate the source of the above-mentioned target video; and an acquiring unit 702, configured to take the source of the above-mentioned target video as the target source, take the branch in the fully connected layer of the above-mentioned video popularity prediction model corresponding to the above-mentioned target source as the target branch, and determine the information output by the above-mentioned target branch as the popularity of the above-mentioned target video.
In some optional implementations of the present embodiment, the apparatus may also include a push unit (not shown in the figure). The push unit may be configured to, in response to determining that the popularity of the target video is greater than a preset threshold, determine that the target video is a hot video and push the target video to target users.
It can be understood that all the units recorded in the apparatus 700 correspond to the steps in the method described with reference to Fig. 6. The operations, features, and beneficial effects described above for the method are therefore equally applicable to the apparatus 700 and the units included in it, and are not described in detail here.
Referring now to Fig. 8, it illustrates a structural schematic diagram of a computer system 800 of an electronic device suitable for implementing the embodiments of this application. The electronic device shown in Fig. 8 is only an example and should not impose any restrictions on the functions and scope of use of the embodiments of this application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to one another through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The I/O interface 805 is connected to the following components: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, and the like. The communication section 809 performs communication processing via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read from it can be installed into the storage section 808 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above-mentioned functions defined in the methods of this application are executed. It should be noted that the computer-readable medium described in this application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in conjunction with an instruction execution system, apparatus, or device. In this application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of this application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in a block diagram and/or flowchart, and combinations of boxes in a block diagram and/or flowchart, may be realized with a dedicated hardware-based system that executes the specified functions or operations, or may be realized with a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of this application may be realized by means of software, or may be realized by means of hardware. The described units may also be set in a processor; for example, a processor may be described as including an acquiring unit and a training unit. The names of these units do not, under certain conditions, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit that acquires a sample set".
As another aspect, this application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The above-mentioned computer-readable medium carries one or more programs. When the above one or more programs are executed by the apparatus, the apparatus is caused to: extract a sample from the sample set and execute the following training step: inputting the frames of the sample video in the extracted sample into a convolutional neural network; determining the loss value of the sample based on the information output by each branch of the fully connected layer of the convolutional neural network, the annotation information in the extracted sample, and the preset loss function corresponding to each branch; determining, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training of the convolutional neural network is complete, determining the trained convolutional neural network as the video popularity prediction model.
The above description is only the preferred embodiments of this application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in this application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in this application.