CN109447246A - Method and apparatus for generating model - Google Patents

Method and apparatus for generating model

Info

Publication number
CN109447246A
CN109447246A
Authority
CN
China
Prior art keywords
sample, branch, video, convolutional neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811273479.0A
Other languages
Chinese (zh)
Other versions
CN109447246B (en)
Inventor
袁泽寰 (Yuan Zehuan)
王长虎 (Wang Changhu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201811273479.0A
Publication of CN109447246A
Application granted
Publication of CN109447246B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application discloses a method and apparatus for generating a model. One specific embodiment of the method includes: acquiring a sample set; extracting a sample from the sample set and performing the following training step: inputting frames of the sample video in the extracted sample into a convolutional neural network; determining a loss value for the sample based on the information output by each branch of the fully connected layer of the convolutional neural network, the annotation information in the extracted sample, and a preset loss function corresponding to each branch; determining, by comparing the loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training is complete, using the trained convolutional neural network as a video popularity prediction model. This embodiment yields a model that can be used for video popularity prediction and that is applicable to predicting the popularity of videos from different places of origin.

Description

Method and apparatus for generating model
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating a model.
Background technique
With the development of computer technology, short-video applications have emerged. Users can use a short-video application to upload and publish videos. After receiving a video, a server can predict its popularity and then recommend videos according to that popularity.
In a related approach, hot videos (e.g., videos with a large click volume) and non-hot videos (e.g., videos with a small click volume) are usually extracted from existing data. These videos are divided by place of origin (e.g., country), and a separate model is trained for each place of origin to predict video popularity.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating a model.
In a first aspect, an embodiment of the present application provides a method for generating a model. The method includes: acquiring a sample set, where a sample in the sample set includes a sample video, first annotation information indicating the place of origin of the sample video, and second annotation information indicating whether the sample video is a hot video; extracting a sample from the sample set and performing the following training step: inputting frames of the sample video in the extracted sample into a convolutional neural network that contains a fully connected layer, where the fully connected layer includes multiple branches, each branch corresponding to one place of origin; determining a loss value for the sample based on the information output by each branch, the annotation information in the extracted sample, and a preset loss function corresponding to each branch; determining, by comparing the loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training is complete, using the trained convolutional neural network as a video popularity prediction model.
In some embodiments, determining the loss value of the sample based on the information output by each branch, the annotation information in the extracted sample, and the preset loss function corresponding to each branch includes: inputting the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to that branch to determine a loss value for each branch; determining a weight for the loss value of each branch based on the first annotation information in the extracted sample; and weighting the loss values of the branches to determine the loss value of the sample.
In some embodiments, determining the weight of the loss value of each branch based on the first annotation information in the extracted sample includes: for each branch, in response to determining that the place of origin indicated by the first annotation information in the extracted sample is the same as the place of origin corresponding to that branch, setting the weight of the branch's loss value to a first preset value; and in response to determining that the place of origin indicated by the first annotation information differs from the place of origin corresponding to that branch, setting the weight of the branch's loss value to a second preset value.
In some embodiments, the method further includes: in response to determining that training of the convolutional neural network is not complete, updating the parameters of the convolutional neural network based on the loss value, extracting a sample from the sample set again, and continuing the training step using the convolutional neural network with the updated parameters.
In a second aspect, an embodiment of the present application provides an apparatus for generating a model. The apparatus includes: an acquiring unit configured to acquire a sample set, where a sample in the sample set includes a sample video, first annotation information indicating the place of origin of the sample video, and second annotation information indicating whether the sample video is a hot video; and a training unit configured to extract a sample from the sample set and perform the following training step: inputting frames of the sample video in the extracted sample into a convolutional neural network that contains a fully connected layer, where the fully connected layer includes multiple branches, each branch corresponding to one place of origin; determining a loss value for the sample based on the information output by each branch, the annotation information in the extracted sample, and a preset loss function corresponding to each branch; determining, by comparing the loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training is complete, using the trained convolutional neural network as a video popularity prediction model.
In some embodiments, the training unit is further configured to: input the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to that branch to determine a loss value for each branch; determine a weight for the loss value of each branch based on the first annotation information in the extracted sample; and weight the loss values of the branches to determine the loss value of the sample.
In some embodiments, the training unit is further configured to: for each branch, in response to determining that the place of origin indicated by the first annotation information in the extracted sample is the same as the place of origin corresponding to that branch, set the weight of the branch's loss value to a first preset value; and in response to determining that the place of origin indicated by the first annotation information differs from the place of origin corresponding to that branch, set the weight of the branch's loss value to a second preset value.
In some embodiments, the apparatus further includes: an updating unit configured to, in response to determining that training of the convolutional neural network is not complete, update the parameters of the convolutional neural network based on the loss value, extract a sample from the sample set again, and continue the training step using the convolutional neural network with the updated parameters.
In a third aspect, an embodiment of the present application provides a method for generating information, including: in response to receiving a target video, inputting frames of the target video into a video popularity prediction model generated by the method described in any embodiment of the first aspect, where the target video carries an annotation indicating its place of origin; and, taking the place of origin of the target video as a target place of origin, taking the branch of the fully connected layer of the video popularity prediction model that corresponds to the target place of origin as a target branch, and using the information output by the target branch as the popularity of the target video.
In some embodiments, the method further includes: in response to determining that the popularity of the target video is greater than a preset threshold, determining that the target video is a hot video and pushing the target video to a target user.
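The inference path of the third aspect can be sketched as follows. This is a minimal illustration, assuming the per-branch outputs of the trained model's fully connected layer are available as a dict keyed by place of origin; the 0.8 threshold is an assumed value, since the patent only says "a preset threshold".

```python
def recommend(branch_outputs, origin, threshold=0.8):
    """Pick the branch matching the target video's annotated place of
    origin, read its output as the popularity, and flag the video as hot
    (to be pushed to a target user) when the popularity exceeds the
    threshold. The 0.8 threshold is an assumption for illustration."""
    popularity = branch_outputs[origin]
    return popularity, popularity > threshold

# A video annotated with origin "CN": only the CN branch's output is used.
popularity, is_hot = recommend({"CN": 0.9, "US": 0.3}, origin="CN")
```

With these inputs the CN branch's 0.9 exceeds the threshold, so the video would be treated as hot and pushed.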
In a fourth aspect, an embodiment of the present application provides an apparatus for generating information, including: an input unit configured to, in response to receiving a target video, input frames of the target video into a video popularity prediction model generated by the method described in any embodiment of the first aspect, where the target video carries an annotation indicating its place of origin; and an acquiring unit configured to take the place of origin of the target video as a target place of origin, take the branch of the fully connected layer of the video popularity prediction model that corresponds to the target place of origin as a target branch, and use the information output by the target branch as the popularity of the target video.
In some embodiments, the apparatus further includes: a pushing unit configured to, in response to determining that the popularity of the target video is greater than a preset threshold, determine that the target video is a hot video and push the target video to a target user.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage apparatus on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the first and third aspects.
In a sixth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method of any embodiment of the first and third aspects.
The method and apparatus for generating a model provided by the embodiments of the present application acquire a sample set from which samples can be extracted to train a convolutional neural network. A sample in the sample set includes a sample video, first annotation information indicating the place of origin of the sample video, and second annotation information indicating whether the sample video is a hot video. Inputting frames of the sample video in an extracted sample into the convolutional neural network yields the information output by each branch of the network's fully connected layer. A loss value for the sample can then be determined based on the information output by each branch, the annotation information in the extracted sample, and a preset loss function corresponding to each branch. Finally, whether training of the convolutional neural network is complete can be determined by comparing the loss value with a target value; if training is complete, the trained convolutional neural network can be used as a video popularity prediction model. In this way, a model usable for video popularity prediction is obtained that is applicable to predicting the popularity of videos from different places of origin, improving the applicability of the model.
Detailed description of the invention
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of an embodiment of the method for generating a model according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating a model according to the present application;
Fig. 5 is a structural schematic diagram of an embodiment of the apparatus for generating a model according to the present application;
Fig. 6 is a flowchart of an embodiment of the method for generating information according to the present application;
Fig. 7 is a structural schematic diagram of an embodiment of the apparatus for generating information according to the present application;
Fig. 8 is a structural schematic diagram of a computer system adapted to implement the electronic device of the embodiments of the present application.
Specific embodiment
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which the method for generating a model or the apparatus for generating a model of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as video-recording applications, video-playback applications, voice-interaction applications, search applications, instant-messaging tools, email clients, and social-platform software.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers. When they are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services) or as a single piece of software or software module. This is not specifically limited here.
When the terminal devices 101, 102, 103 are hardware, they may also be equipped with image-acquisition devices, which may be any device capable of acquiring images, such as a camera or a sensor. A user may use the image-acquisition device on the terminal devices 101, 102, 103 to capture video.
The server 105 may be a server providing various services, for example a video-processing server that stores, manages, or analyzes videos uploaded by the terminal devices 101, 102, 103. The video-processing server may acquire a sample set containing a large number of samples, where a sample may include a sample video, first annotation information indicating the place of origin of the sample video, and second annotation information indicating whether the sample video is a hot video. In addition, the video-processing server may use the samples in the sample set to train a convolutional neural network and may store the training result (e.g., a generated video popularity prediction model). In this way, after a user uploads a video through the terminal devices 101, 102, 103, the server 105 can determine the popularity of the uploaded video and, in turn, perform operations such as pushing the video.
It should be noted that the server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services) or as a single piece of software or software module. This is not specifically limited here.
It should be noted that the method for generating a model provided by the embodiments of the present application is generally executed by the server 105; correspondingly, the apparatus for generating a model is generally provided in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; there may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for generating a model according to the present application is shown. The method for generating a model includes the following steps:
Step 201: acquire a sample set.
In the present embodiment, the executing body of the method for generating a model (e.g., the server 105 shown in Fig. 1) may acquire the sample set in several ways. For example, the executing body may obtain an existing sample set stored in another server (e.g., a database server) through a wired or wireless connection. As another example, users may collect samples through terminal devices (e.g., the terminal devices 101, 102, 103 shown in Fig. 1); the executing body may then receive the samples collected by the terminals and store them locally to generate the sample set. It should be pointed out that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (ultra wideband) connection, and other wireless connections now known or developed in the future.
The sample set may contain a large number of samples. A sample may include a sample video, first annotation information indicating the place of origin of the sample video, and second annotation information indicating whether the sample video is a hot video.
Whether a video is a hot video may be determined in advance according to certain metrics. For example, if the daily recommendation count of the video is greater than a specified value (e.g., 5000), the video may be considered a hot video; or, if the click volume of the video within a specified period is greater than a specified value, it may be considered a hot video. Similarly, if the daily recommendation count of the video is smaller than a specified value (e.g., 500), or its click volume within a specified period is smaller than a specified value, the video may be considered not to be a hot video.
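The labeling rule above can be sketched as a simple predicate. This is only an illustration: the patent says "a specified value" for each threshold, so the concrete numbers below are assumptions.

```python
def is_hot_video(daily_recommendations: int, clicks_in_period: int,
                 rec_threshold: int = 5000, click_threshold: int = 5000) -> bool:
    """Label a video as hot when either engagement metric exceeds its
    threshold; the threshold values are assumed, not from the patent."""
    return daily_recommendations > rec_threshold or clicks_in_period > click_threshold

# The second annotation information for a sample could then be derived as:
second_annotation = 1 if is_hot_video(6200, 1200) else 0
```

A real system would pick thresholds from the statistics of its own recommendation and click data.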
Step 202: extract a sample from the sample set.
In the present embodiment, the executing body may extract a sample from the sample set obtained in step 201 and perform the training steps of steps 203 to 206. The manner of extraction and the number of samples extracted are not limited in this application. For example, at least one sample may be extracted at random, or a sample whose sample video has better clarity (i.e., higher-resolution frames) may be extracted.
Step 203: input frames of the sample video in the extracted sample into a convolutional neural network that contains a fully connected layer.
In the present embodiment, the executing body may input frames of the sample video in the sample extracted in step 202 into a convolutional neural network that contains a fully connected layer. The convolutional neural network may perform processing such as feature extraction and analysis on the frames and then output information. It should be noted that the input frames may be one or more randomly selected frames, or multiple frames extracted from the sample video at a specified time interval (e.g., 1 s or 2 s); this is not limited here.
The convolutional neural network may use various existing structures (e.g., DenseBox, VGGNet, ResNet, SegNet). In practice, a convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within part of the coverage area; it performs well on image processing, and therefore can be used to extract features from the frames of the sample video.
In the present embodiment, the convolutional neural network contains a fully connected layer, and other layers may be provided as needed, such as convolutional layers, pooling layers, and a feature-fusion layer. A convolutional layer may be used to extract image features. A pooling layer may be used to downsample the input information. The feature-fusion layer may be used to fuse the image features obtained for each frame (which may take the form of a feature matrix or feature vector); for example, the feature values at the same position in the feature matrices of different frames may be averaged to produce a single fused feature matrix. The fully connected layer may be used to classify the resulting features.
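Under the assumption that the per-frame features are NumPy matrices, the averaging-based feature fusion described above can be sketched as:

```python
import numpy as np

def fuse_frame_features(frame_features):
    """Average the per-frame feature matrices position-by-position,
    producing one fused feature matrix for the whole clip."""
    return np.mean(np.stack(frame_features), axis=0)

# Two 1x2 frame feature matrices fuse into their elementwise mean.
fused = fuse_frame_features([np.array([[1.0, 2.0]]), np.array([[3.0, 4.0]])])
```

Averaging is only one possible fusion; max-pooling over frames or a learned fusion layer would slot into the same place.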
In the present embodiment, the fully connected layer of the convolutional neural network includes multiple branches, each corresponding to one place of origin. The branches are independent of one another: the information output by the layer preceding the fully connected layer is input to each branch separately, and each branch processes the input independently, yielding one output per branch. In practice, each branch can be regarded as an independent fully connected layer. Places of origin may be divided by country (e.g., China, the United States, Thailand), or by region (e.g., southern or northern China), continent (e.g., Asia, Europe), or city (e.g., Beijing, Shanghai); this is not limited here.
After the frames of the sample video are input to the convolutional neural network, its shallow layers (understood here as the layers before the fully connected layer, such as the convolutional, pooling, and feature-fusion layers) perform feature extraction, analysis, and other processing on the frames in sequence, and the processed information is then input to each branch of the fully connected layer. Each branch performs further computation on the received information and outputs a final result. For each branch, this final output may be the probability that the sample video is a hot video; the probability output by a branch can be treated as the popularity, predicted by the convolutional neural network, of the sample video in the corresponding place of origin. In practice, each branch of the fully connected layer may use a function such as sigmoid or softmax to compute the popularity of the sample video, and the computed popularity lies in the interval [0, 1]. It should be noted that the popularity of a video characterizes the degree of attention the video receives; in general, the higher the popularity, the larger the click or share volume. When the popularity of a video exceeds a preset threshold, the video may be considered a hot video.
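A minimal sketch of the multi-branch fully connected layer described above, using NumPy rather than a deep-learning framework; the feature dimension, the branch weights, and the set of places of origin are all illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultiBranchHead:
    """One independent fully connected branch per place of origin. Each
    branch maps the shared fused feature vector to a single logit and
    squashes it with a sigmoid, giving a popularity score in [0, 1].
    Weights are randomly initialized here; a trained model learns them."""

    def __init__(self, feature_dim, origins, seed=0):
        rng = np.random.default_rng(seed)
        self.origins = list(origins)
        self.weights = {o: rng.normal(0.0, 0.1, feature_dim) for o in self.origins}
        self.biases = {o: 0.0 for o in self.origins}

    def forward(self, features):
        # Every branch receives the same input and computes its output
        # independently of the other branches.
        return {o: float(sigmoid(features @ self.weights[o] + self.biases[o]))
                for o in self.origins}

head = MultiBranchHead(feature_dim=8, origins=["CN", "US", "TH"])
scores = head.forward(np.ones(8))  # one popularity score per place of origin
```

The key design point is that the branches share everything before the fully connected layer, so the feature extractor is trained on videos from all places of origin at once.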
Step 204: determine a loss value for the sample based on the information output by each branch, the annotation information in the extracted sample, and a preset loss function corresponding to each branch.
In the present embodiment, the executing body may determine the loss value of the sample based on the information output by each branch, the annotation information in the extracted sample (including the first annotation information and the second annotation information), and the preset loss function corresponding to each branch. For a given branch, the corresponding loss function can be used to estimate the degree of inconsistency between the information output by that branch (e.g., the popularity of the sample video in the place of origin corresponding to the branch) and the ground-truth value (e.g., 1 or 0, characterizing whether the input sample video is a hot video). The loss function is a non-negative real-valued function; in general, the smaller its value (the loss value), the more robust the model. The loss function may be set according to actual needs.
Specifically, the loss value of the sample may be determined as follows. First, for each branch, the executing body may substitute the information output by that branch and the second annotation information in the sample into the loss function corresponding to that branch, obtaining the branch's loss value. Second, the place of origin indicated by the first annotation information in the extracted sample may be determined, identifying the branch of the fully connected layer corresponding to that place of origin. Third, only the loss value of the branch identified in the second step may be taken, and that loss value used as the loss value of the sample.
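The three-step procedure above can be sketched as follows, with binary cross-entropy standing in for the "preset loss function" (the patent leaves the concrete loss unspecified, so this choice is an assumption):

```python
import math

def binary_cross_entropy(p, y):
    """Log loss between a predicted popularity p in (0, 1) and a binary
    hot-video label y; an assumed concrete choice for each branch's loss."""
    eps = 1e-7
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

def sample_loss(branch_outputs, origin_label, hot_label):
    """The three steps above: compute every branch's loss against the
    second annotation, locate the branch whose place of origin matches
    the first annotation, and keep only that branch's loss."""
    per_branch = {origin: binary_cross_entropy(p, hot_label)
                  for origin, p in branch_outputs.items()}
    return per_branch[origin_label]

# A hot CN video: only the CN branch's loss counts toward the sample loss.
loss = sample_loss({"CN": 0.9, "US": 0.2}, origin_label="CN", hot_label=1)
```

Only the matching branch's loss flows back, so each branch is effectively trained on videos from its own place of origin while the shared layers see all of them.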
In some optional implementations of the present embodiment, above-mentioned executing subject can determine the sample in accordance with the following steps This penalty values:
The second markup information in information and extracted sample that each branch is exported can be input to pre- by the first step If loss function corresponding with corresponding branch, determine the penalty values of each branch.
Second step can determine the weight of the penalty values of each branch based on the first markup information in extracted sample.
Optionally, the weight of the loss value of the branch corresponding to the source region indicated by the first annotation information may be set to a designated value (e.g., 1), while the weights of the loss values of the branches corresponding to the remaining source regions are set to another designated value (e.g., 0). It should be noted that each designated value may be pre-established by a technician based on statistics and analysis of a large amount of data.
Optionally, the source regions may first be grouped. The loss-value weights of the branches corresponding to the source regions in the group to which the source region indicated by the first annotation information belongs are set to a designated value (e.g., 1), and the loss-value weights of the branches corresponding to the source regions in the remaining groups are set to another designated value (e.g., 0). Here, the grouping of source regions may be performed according to the geographic area to which each source region belongs. For example, if the source regions include China, South Korea, Germany, and so on, the source regions may be divided by continent into an Asia group, a Europe group, etc.
Optionally, the weight of the loss value of each branch may be determined as follows: for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to that branch, the weight of the branch's loss value may be determined as a first preset value (e.g., 1); in response to determining that the source region indicated by the first annotation information differs from the source region corresponding to that branch, the weight of the branch's loss value is determined as a second preset value (e.g., 0). Here, the first and second preset values may be pre-established by a technician based on statistics and analysis of a large amount of data.
In the third step, a weighted summation of the loss values of the branches may be performed to determine the loss value of the sample.
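The branch-weighting and weighted-summation steps above can be sketched in plain Python. This is a minimal illustration, not the patent's implementation; the function and argument names are assumptions, and the weights 1 and 0 correspond to the example preset values mentioned above.

```python
def sample_loss(branch_losses, branch_sources, sample_source,
                match_weight=1.0, other_weight=0.0):
    """Weighted sum of per-branch loss values: a branch whose source
    region matches the sample's first annotation gets match_weight,
    every other branch gets other_weight."""
    total = 0.0
    for loss, source in zip(branch_losses, branch_sources):
        weight = match_weight if source == sample_source else other_weight
        total += weight * loss
    return total

# With weights 1 and 0, a sample annotated "CN" contributes loss only
# through the "CN" branch.
print(sample_loss([0.3, 0.7, 0.2], ["CN", "KR", "DE"], "CN"))  # → 0.3
```

With non-zero `other_weight`, the same function covers variants in which mismatched branches still contribute a discounted share of the loss.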
Step 205: determine whether training of the convolutional neural network is complete based on a comparison of the loss value with a target value.
In this embodiment, the execution body may compare the determined loss value with a target value and, based on the comparison result, determine whether training of the convolutional neural network is complete. It should be noted that if multiple (at least two) samples were extracted in step 202, the execution body may compare the loss value of each sample with the target value separately, thereby determining whether the loss value of each sample is less than or equal to the target value. As an example, if multiple samples were extracted in step 202 and the loss value of every sample is less than or equal to the target value, the execution body may determine that training of the convolutional neural network is complete. As another example, the execution body may count the proportion of extracted samples whose loss values are less than or equal to the target value, and determine that training is complete when this proportion reaches a preset sample ratio (e.g., 95%). It should be noted that the target value may generally be used to represent an ideal degree of inconsistency between the predicted value and the true value. That is, when the loss value is less than or equal to the target value, the predicted value may be considered close or approximately equal to the true value. The target value may be set according to actual needs.
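The proportion-based stopping criterion described above can be sketched as follows. This is an illustrative helper under assumed names; the 95% ratio is the example value from the text.

```python
def training_complete(sample_losses, target_value, min_ratio=0.95):
    """True when the share of samples whose loss value is at most the
    target value reaches min_ratio (min_ratio=1.0 requires every
    extracted sample to meet the target)."""
    hits = sum(1 for loss in sample_losses if loss <= target_value)
    return hits / len(sample_losses) >= min_ratio

# Two of three samples meet the target, so a 0.6 ratio suffices.
print(training_complete([0.01, 0.02, 0.5], 0.05, min_ratio=0.6))  # → True
```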
It should be noted that in response to determining that training of the convolutional neural network is complete, step 206 may then be executed. In response to determining that training is not complete, the parameters of the convolutional neural network may be updated based on the determined loss value, a sample may be extracted from the sample set again, and the training step may be continued using the convolutional neural network with updated parameters. Here, the back-propagation algorithm may be used to compute the gradient of the loss value with respect to the model parameters, and a gradient descent algorithm may then be used to update the model parameters based on the gradient. It should be noted that the back-propagation algorithm, the gradient descent algorithm, and machine learning methods are well-known techniques that are currently widely studied and applied, and are not described in detail here. It should also be noted that the extraction method here is not limited in this application. For example, when the sample set contains a large number of samples, the execution body may extract samples that have not yet been extracted.
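The control flow of the loop just described — compute the loss, stop once it reaches the target value, otherwise take a gradient-descent step and repeat — can be sketched with a toy one-parameter stand-in for the network. The names and the quadratic toy loss are assumptions for illustration only; a real implementation would obtain gradients via back-propagation through the network.

```python
def train(loss_fn, grad_fn, param, target_value, lr=0.1, max_steps=10_000):
    """Repeat the training step until the loss value reaches the
    target value, updating the parameter by gradient descent."""
    for _ in range(max_steps):
        if loss_fn(param) <= target_value:   # training complete
            break
        param -= lr * grad_fn(param)         # gradient-descent update
    return param

# Toy stand-in: one parameter, loss (p - 3)^2, analytic gradient 2(p - 3).
p = train(lambda p: (p - 3) ** 2, lambda p: 2 * (p - 3), 0.0, 1e-6)
```

After convergence `p` is close to the minimizer 3, mirroring how the network's parameters settle once the sample loss falls below the target value.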
Step 206: in response to determining that training of the convolutional neural network is complete, determine the trained convolutional neural network as the video popularity prediction model.
In this embodiment, in response to determining that training of the convolutional neural network is complete, the execution body may determine the trained convolutional neural network as the video popularity prediction model.
In some optional implementations of this embodiment, after the video popularity prediction model is obtained through training, the execution body may, in response to receiving a target video, input the frames of the target video into the video popularity prediction model. Here, the target video carries an identifier indicating its source region, and may be any video uploaded by a terminal device. The execution body may then take the source region of the target video as the target source region, take the branch of the fully connected layer of the video popularity prediction model corresponding to the target source region as the target branch, and determine the information output by the target branch as the popularity of the target video. This makes it possible to predict the popularity of videos uploaded by users.
In some optional implementations of this embodiment, in response to determining that the popularity of the target video is greater than a preset threshold, the execution body may determine that the target video is a hot video and push the target video to a target user. Here, the target user may be a randomly selected user, or a user determined according to preset rules. As an example, other users who follow the user who uploaded the target video may be determined as target users.
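The branch-selection and threshold-push logic above can be sketched as follows. The function names, the dictionary of per-region outputs, and the 0.8 threshold are illustrative assumptions, not values from the patent.

```python
def predict_popularity(branch_outputs, target_source):
    """Select the fully-connected branch matching the target video's
    source-region identifier; its output is the predicted popularity."""
    return branch_outputs[target_source]

def is_hot(popularity, threshold=0.8):
    """A video is treated as a hot video (and pushed to target users)
    when its predicted popularity exceeds the preset threshold."""
    return popularity > threshold

outputs = {"CN": 0.91, "KR": 0.40, "DE": 0.12}  # one output per branch
print(is_hot(predict_popularity(outputs, "CN")))  # → True
```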
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to this embodiment. In the application scenario of Fig. 3, a model-training application may be installed on a terminal device 301 used by a user. When the user opens the application and uploads a sample set or the storage path of a sample set, a server 302 providing background support for the application may run the method for generating a model, comprising:
First, a sample set may be obtained, where a sample in the sample set may include a sample video 303, first annotation information 304 indicating the source region of the sample video, and second annotation information 305 indicating whether the sample video is a hot video. Then, a sample may be extracted from the sample set and the following training step executed: inputting the frames of the sample video in the extracted sample into a convolutional neural network 306 containing a fully connected layer, where the fully connected layer includes multiple branches and each branch corresponds to a source region; determining the loss value 307 of the sample based on the information output by each branch, the annotation information in the extracted sample (the first annotation information 304 and the second annotation information 305), and the preset loss function corresponding to each branch; determining, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete; and in response to determining that training is complete, determining the trained convolutional neural network as a video popularity prediction model 308.
The method provided by the above embodiment of this application obtains a sample set from which samples can be extracted for training a convolutional neural network. A sample in the sample set includes a sample video, first annotation information indicating the source region of the sample video, and second annotation information indicating whether the sample video is a hot video. By inputting the frames of the sample video of an extracted sample into the convolutional neural network, the information output by each branch of the network's fully connected layer can be obtained. The loss value of the sample can then be determined based on the information output by each branch, the annotation information in the extracted sample, and the preset loss function corresponding to each branch. Finally, whether training of the convolutional neural network is complete can be determined based on a comparison of the loss value with a target value. If training is complete, the trained convolutional neural network can be determined as the video popularity prediction model. A model usable for video popularity prediction is thereby obtained; since the model is suitable for predicting the popularity of videos from different source regions, the applicability of the model is improved.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating a model is illustrated. The flow 400 of the method for generating a model comprises the following steps:
Step 401: obtain a sample set.
In this embodiment, the execution body of the method for generating a model (e.g., the server 105 shown in Fig. 1) may obtain a sample set. Here, the sample set may contain a large number of samples, where a sample may include a sample video, first annotation information indicating the source region of the sample video, and second annotation information indicating whether the sample video is a hot video.
Step 402: extract a sample from the sample set.
In this embodiment, the execution body may extract a sample from the sample set obtained in step 401 and execute the training steps of steps 403 to 408. The extraction method and the number of extracted samples are not limited in this application. For example, at least one sample may be extracted at random, or samples whose sample videos have better clarity (i.e., higher-resolution frames) may be extracted.
Step 403: input the frames of the sample video in the extracted sample into a convolutional neural network containing a fully connected layer.
In this embodiment, the execution body may input the frames of the sample video in the sample extracted in step 402 into a convolutional neural network containing a fully connected layer. In addition to the fully connected layer, other layers may be provided as needed, such as convolutional layers, pooling layers, and feature fusion layers. The fully connected layer may contain multiple branches, each corresponding to one source region, and the branches of the fully connected layer are independent of one another. Source regions may be divided by country.
In this embodiment, after the frames of the sample video are input into the convolutional neural network, the shallow layers of the network (which may be understood here as the layers before the fully connected layer, such as convolutional layers, pooling layers, and feature fusion layers) may successively perform feature extraction, analysis, and other processing on the input frames, and then feed the processed information into each branch of the fully connected layer. Each branch may further compute on the received information and output a final result. Here, for each branch, the final output may be the probability that the sample video is a hot video; the probabilities output by the branches may serve as the convolutional neural network's predictions of the popularity of the sample video in the different source regions.
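The forward pass just described — shared features from the convolutional trunk feeding independent per-region branches — can be sketched with per-branch linear heads followed by a sigmoid. This is a simplified illustration under assumed names; the patent does not specify the internal form of each branch, only that each outputs a hot-video probability for its source region.

```python
import math

def branch_forward(features, branch_params):
    """Apply each branch's own linear head + sigmoid to the shared
    feature vector, yielding one hot-video probability per region."""
    probs = {}
    for source, (weights, bias) in branch_params.items():
        z = sum(w * x for w, x in zip(weights, features)) + bias
        probs[source] = 1.0 / (1.0 + math.exp(-z))  # sigmoid
    return probs

# Hypothetical parameters for two branches over a 2-dim feature vector.
params = {"CN": ([0.5, -0.2], 0.1), "KR": ([-0.3, 0.4], 0.0)}
probs = branch_forward([1.0, 2.0], params)  # one probability per region
```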
Step 404: input the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to that branch, and determine the loss value of each branch.
In this embodiment, the execution body may input the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to that branch to determine the loss value of each branch. For a given branch, the corresponding loss function may be used to estimate the degree of inconsistency between the information output by the branch (e.g., the popularity of the sample video in the source region corresponding to the branch) and the true value (e.g., 1 or 0, respectively indicating whether the input sample video is a hot video). The loss function is a non-negative real-valued function; in general, the smaller its value (the loss value), the better the robustness of the model. The loss function may be set according to actual needs. As an example, an existing loss function such as the cross-entropy loss function may be used.
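The cross-entropy example named above, applied to one branch's predicted probability and the 1/0 true value from the second annotation information, can be sketched as:

```python
import math

def binary_cross_entropy(prob, label, eps=1e-12):
    """Cross-entropy between a branch's predicted hot-video probability
    and the 1/0 true value; clamping to [eps, 1-eps] avoids log(0)."""
    prob = min(max(prob, eps), 1.0 - eps)
    return -(label * math.log(prob) + (1 - label) * math.log(1.0 - prob))

# A confident correct prediction incurs a much smaller loss value than
# a confident wrong one.
print(binary_cross_entropy(0.9, 1) < binary_cross_entropy(0.1, 1))  # → True
```

It is non-negative and shrinks toward zero as the prediction approaches the true value, matching the properties described in the text.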
Step 405: for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to the branch, determine the weight of the branch's loss value as a first preset value; in response to determining that the source region indicated by the first annotation information differs from the source region corresponding to the branch, determine the weight of the branch's loss value as a second preset value.
In this embodiment, for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to the branch, the execution body may determine the weight of the branch's loss value as a first preset value (e.g., 1). In response to determining that the source region indicated by the first annotation information differs from the source region corresponding to the branch, the weight of the branch's loss value is determined as a second preset value (e.g., 0).
Step 406: perform a weighted summation of the loss values of the branches to determine the loss value of the sample.
In this embodiment, the execution body may perform a weighted summation of the loss values of the branches to determine the loss value of the sample.
Step 407: determine whether training of the convolutional neural network is complete based on a comparison of the loss value with a target value.
In this embodiment, the execution body may compare the determined loss value with a target value and, based on the comparison result, determine whether training of the convolutional neural network is complete. It should be noted that if multiple (at least two) samples were extracted in step 402, the execution body may compare the loss value of each sample with the target value separately, thereby determining whether the loss value of each sample is less than or equal to the target value. As an example, if multiple samples were extracted in step 402 and the loss value of every sample is less than or equal to the target value, the execution body may determine that training of the convolutional neural network is complete.
It should be noted that in response to determining that training of the convolutional neural network is complete, step 408 may then be executed. In response to determining that training is not complete, the parameters of the convolutional neural network may be updated based on the determined loss value, a sample may be extracted from the sample set again, and the training step may be continued using the convolutional neural network with updated parameters. Here, the back-propagation algorithm may be used to compute the gradient of the loss value with respect to the model parameters, and a gradient descent algorithm may then be used to update the model parameters based on the gradient. It should be noted that the back-propagation algorithm, the gradient descent algorithm, and machine learning methods are well-known techniques that are currently widely studied and applied, and are not described in detail here. It should also be noted that the extraction method here is not limited in this application. For example, when the sample set contains a large number of samples, the execution body may extract samples that have not yet been extracted.
Step 408: in response to determining that training of the convolutional neural network is complete, determine the trained convolutional neural network as the video popularity prediction model.
In this embodiment, in response to determining that training of the convolutional neural network is complete, the execution body may determine the trained convolutional neural network as the video popularity prediction model.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating a model in this embodiment embodies a particular way of determining the loss value of the extracted sample. The scheme described in this embodiment can therefore train, on sample videos from different source regions, a model capable of predicting the popularity of videos from different source regions, improving the applicability of the model.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, this application provides an embodiment of an apparatus for generating a model. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating a model described in this embodiment includes: an acquisition unit 501 configured to obtain a sample set, where a sample in the sample set includes a sample video, first annotation information indicating the source region of the sample video, and second annotation information indicating whether the sample video is a hot video; and a training unit 502 configured to extract a sample from the sample set and execute the following training step: inputting the frames of the sample video in the extracted sample into a convolutional neural network containing a fully connected layer, where the fully connected layer includes multiple branches and each branch corresponds to one source region; determining the loss value of the sample based on the information output by each branch, the annotation information in the extracted sample, and the preset loss function corresponding to each branch; determining, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete; and in response to determining that training is complete, determining the trained convolutional neural network as a video popularity prediction model.
In some optional implementations of this embodiment, the training unit 502 may be further configured to: input the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to that branch to determine the loss value of each branch; determine the weight of each branch's loss value based on the first annotation information in the extracted sample; and perform a weighted summation of the branches' loss values to determine the loss value of the sample.
In some optional implementations of this embodiment, the training unit 502 may be further configured to: for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to the branch, determine the weight of the branch's loss value as a first preset value; and in response to determining that the source region indicated by the first annotation information differs from the source region corresponding to the branch, determine the weight of the branch's loss value as a second preset value.
In some optional implementations of this embodiment, the apparatus may further include an updating unit (not shown in the figure). The updating unit may be configured to, in response to determining that training of the convolutional neural network is not complete, update the parameters of the convolutional neural network based on the loss value, extract a sample from the sample set again, and continue executing the training step using the convolutional neural network with updated parameters.
The apparatus provided by the above embodiment of this application obtains a sample set from which samples can be extracted for training a convolutional neural network. A sample in the sample set includes a sample video, first annotation information indicating the source region of the sample video, and second annotation information indicating whether the sample video is a hot video. By inputting the frames of the sample video of an extracted sample into the convolutional neural network, the information output by each branch of the network's fully connected layer can be obtained. The loss value of the sample can then be determined based on the information output by each branch, the annotation information in the extracted sample, and the preset loss function corresponding to each branch. Finally, whether training of the convolutional neural network is complete can be determined based on a comparison of the loss value with a target value. If training is complete, the trained convolutional neural network can be determined as the video popularity prediction model. A model usable for video popularity prediction is thereby obtained; since the model is suitable for predicting the popularity of videos from different source regions, the applicability of the model is improved.
Referring to Fig. 6, a flow 600 of an embodiment of a method for generating information provided by this application is illustrated. The method for generating information may comprise the following steps:
Step 601: in response to receiving a target video, input the frames of the target video into a video popularity prediction model.
In this embodiment, the execution body of the method for generating information (e.g., the server 105 shown in Fig. 1, or another server storing a video popularity prediction model) may, in response to receiving a target video, input the frames of the target video into the video popularity prediction model. Here, the target video carries an identifier indicating its source region.
In this embodiment, the video popularity prediction model may be generated using the method described in the embodiment of Fig. 2 above. For the specific generation process, refer to the related description of the Fig. 2 embodiment, which is not repeated here.
Step 602: take the source region of the target video as the target source region, take the branch of the fully connected layer of the video popularity prediction model corresponding to the target source region as the target branch, and determine the information output by the target branch as the popularity of the target video.
In this embodiment, the execution body may take the source region of the target video as the target source region, take the branch of the fully connected layer of the video popularity prediction model corresponding to the target source region as the target branch, and determine the information output by the target branch as the popularity of the target video. This makes it possible to predict the popularity of videos uploaded by users.
In some optional implementations of this embodiment, in response to determining that the popularity of the target video is greater than a preset threshold, the execution body may determine that the target video is a hot video and push the target video to a target user. Here, the target user may be a randomly selected user, or a user determined according to preset rules. As an example, other users who follow the user who uploaded the target video may be determined as target users.
It should be noted that the method of this embodiment for generating information may be used to test the video popularity prediction models generated by the above embodiments, and the video popularity prediction models may then be continuously optimized according to the test results. The method may also be a practical application of the video popularity prediction models generated by the above embodiments. Using the video popularity prediction models generated by the above embodiments, the popularity of videos from each source region can be predicted, which improves the applicability of the model and reduces model maintenance costs. At the same time, whether a video will become a hot video can be effectively predicted, facilitating targeted video pushing.
With continued reference to Fig. 7, as an implementation of the method shown in Fig. 6 above, this application provides an embodiment of an apparatus for generating information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 6, and the apparatus may be applied to various electronic devices.
As shown in Fig. 7, the apparatus 700 for generating information described in this embodiment includes: an input unit 701 configured to, in response to receiving a target video, input the frames of the target video into a video popularity prediction model generated using the method described in the embodiment of Fig. 2 above, where the target video carries an identifier indicating its source region; and an acquisition unit 702 configured to take the source region of the target video as the target source region, take the branch of the fully connected layer of the video popularity prediction model corresponding to the target source region as the target branch, and determine the information output by the target branch as the popularity of the target video.
In some optional implementations of this embodiment, the apparatus may further include a push unit (not shown in the figure). The push unit may be configured to, in response to determining that the popularity of the target video is greater than a preset threshold, determine that the target video is a hot video and push the target video to a target user.
It can be understood that the units recorded in the apparatus 700 correspond to the steps of the method described with reference to Fig. 6. Therefore, the operations, features, and beneficial effects described above for the method are equally applicable to the apparatus 700 and the units contained therein, and are not repeated here.
Referring now to Fig. 8, a schematic structural diagram of a computer system 800 of an electronic device suitable for implementing embodiments of this application is shown. The electronic device shown in Fig. 8 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of this application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, etc.; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage section 808 including a hard disk, etc.; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read from it can be installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above-described functions defined in the method of this application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, a computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in connection with an instruction execution system, apparatus, or device. In this application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted via any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Flow chart and block diagram in attached drawing are illustrated according to the system of the various embodiments of the application, method and computer journey The architecture, function and operation in the cards of sequence product.In this regard, each box in flowchart or block diagram can generation A part of one module, program segment or code of table, a part of the module, program segment or code include one or more use The executable instruction of the logic function as defined in realizing.It should also be noted that in some implementations as replacements, being marked in box The function of note can also occur in a different order than that indicated in the drawings.For example, two boxes succeedingly indicated are actually It can be basically executed in parallel, they can also be executed in the opposite order sometimes, and this depends on the function involved.Also it to infuse Meaning, the combination of each box in block diagram and or flow chart and the box in block diagram and or flow chart can be with holding The dedicated hardware based system of functions or operations as defined in row is realized, or can use specialized hardware and computer instruction Combination realize.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising an acquisition unit and a training unit. The names of these units do not, in certain circumstances, limit the units themselves; for example, the acquisition unit may also be described as "a unit for obtaining a sample set".
In another aspect, the present application further provides a computer readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: extract a sample from the sample set and perform the following training steps: input frames of the sample video in the extracted sample into a convolutional neural network; determine a loss value of the sample based on the information output by each branch of the fully connected layer of the convolutional neural network, the annotation information in the extracted sample, and preset loss functions corresponding to the respective branches; determine, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training is complete, take the trained convolutional neural network as a video popularity prediction model.
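The training steps carried by the program above can be sketched as a simple control loop. The following is an illustrative Python sketch only, not the patented implementation: `forward`, `compute_loss`, and `update_params` are hypothetical callables standing in for the convolutional neural network, the per-branch loss combination, and the parameter update, respectively.

```python
# Illustrative sketch of the training loop described above (hypothetical
# names; a real implementation would use a deep-learning framework).
import random

def train_popularity_model(sample_set, forward, compute_loss, update_params,
                           target_value, max_steps=1000):
    """Repeat the training step until the sample loss reaches target_value.

    forward(frames)               -> information output by the network
    compute_loss(output, sample)  -> scalar loss value of the sample
    update_params(loss)           -> adjusts network parameters (e.g. a
                                     gradient step), used when not converged
    """
    for _ in range(max_steps):
        sample = random.choice(sample_set)      # extract a sample from the set
        output = forward(sample["frames"])      # input frames into the network
        loss = compute_loss(output, sample)     # loss value of the sample
        if loss <= target_value:                # compare loss with target value
            return "trained"                    # trained CNN is the model
        update_params(loss)                     # otherwise continue training
    return "not_converged"
```

The loop mirrors the claimed flow: extract, forward, compute loss, compare with the target value, and either stop or update parameters and repeat.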
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions in which the above features are replaced by technical features with similar functions disclosed in (but not limited to) the present application.

Claims (14)

1. A method for generating a model, comprising:
obtaining a sample set, wherein a sample in the sample set comprises a sample video, first annotation information indicating a source region of the sample video, and second annotation information indicating whether the sample video is a popular video;
extracting a sample from the sample set and performing the following training steps: inputting frames of the sample video in the extracted sample into a convolutional neural network comprising a fully connected layer, wherein the fully connected layer comprises a plurality of branches, each branch corresponding to one source region; determining a loss value of the sample based on the information output by each branch, the annotation information in the extracted sample, and preset loss functions corresponding to the respective branches; determining, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training is complete, taking the trained convolutional neural network as a video popularity prediction model.
2. The method for generating a model according to claim 1, wherein determining the loss value of the sample based on the information output by each branch, the annotation information in the extracted sample, and the preset loss functions corresponding to the respective branches comprises:
inputting the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to that branch, to determine a loss value of each branch;
determining a weight for the loss value of each branch based on the first annotation information in the extracted sample;
performing a weighted summation of the loss values of the branches to determine the loss value of the sample.
3. The method for generating a model according to claim 2, wherein determining the weight of the loss value of each branch based on the first annotation information in the extracted sample comprises:
for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to the branch, setting the weight of the loss value of the branch to a first preset value; and, in response to determining that the source region indicated by the first annotation information in the extracted sample is different from the source region corresponding to the branch, setting the weight of the loss value of the branch to a second preset value.
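The per-branch weighting of claims 2 and 3 can be sketched as follows. This is an illustrative Python sketch only, not the claimed implementation: the branch outputs, loss functions, and the two preset weight values (`w_match`, `w_other`) are hypothetical placeholders.

```python
# Sketch of the weighted per-branch loss of claims 2-3 (illustrative only).

def sample_loss(branch_outputs, loss_fns, sample, w_match=1.0, w_other=0.1):
    """branch_outputs: dict mapping source region -> branch output
    loss_fns:       dict mapping source region -> loss(prediction, label)
    sample:         {"source": <source region>, "is_hot": 0 or 1}
    """
    total = 0.0
    for source, output in branch_outputs.items():
        # loss value of this branch from its preset loss function (claim 2)
        branch_loss = loss_fns[source](output, sample["is_hot"])
        # first preset value when the sample's source region matches the
        # branch, second preset value otherwise (claim 3)
        weight = w_match if source == sample["source"] else w_other
        total += weight * branch_loss   # weighted summation (claim 2)
    return total
```

Choosing `w_match` larger than `w_other` makes the branch for the sample's own source region dominate the sample loss, while the other branches still receive a small training signal.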
4. The method for generating a model according to claim 1, wherein the method further comprises:
in response to determining that training of the convolutional neural network is not complete, updating the parameters of the convolutional neural network based on the loss value, extracting a sample from the sample set again, and continuing to perform the training steps using the convolutional neural network with updated parameters.
5. An apparatus for generating a model, comprising:
an acquisition unit configured to obtain a sample set, wherein a sample in the sample set comprises a sample video, first annotation information indicating a source region of the sample video, and second annotation information indicating whether the sample video is a popular video;
a training unit configured to extract a sample from the sample set and perform the following training steps: inputting frames of the sample video in the extracted sample into a convolutional neural network comprising a fully connected layer, wherein the fully connected layer comprises a plurality of branches, each branch corresponding to one source region; determining a loss value of the sample based on the information output by each branch, the annotation information in the extracted sample, and preset loss functions corresponding to the respective branches; determining, based on a comparison of the loss value with a target value, whether training of the convolutional neural network is complete; and, in response to determining that training is complete, taking the trained convolutional neural network as a video popularity prediction model.
6. The apparatus for generating a model according to claim 5, wherein the training unit is further configured to:
input the information output by each branch and the second annotation information in the extracted sample into the preset loss function corresponding to that branch, to determine a loss value of each branch;
determine a weight for the loss value of each branch based on the first annotation information in the extracted sample;
perform a weighted summation of the loss values of the branches to determine the loss value of the sample.
7. The apparatus for generating a model according to claim 6, wherein the training unit is further configured to:
for each branch, in response to determining that the source region indicated by the first annotation information in the extracted sample is the same as the source region corresponding to the branch, set the weight of the loss value of the branch to a first preset value; and, in response to determining that the source region indicated by the first annotation information in the extracted sample is different from the source region corresponding to the branch, set the weight of the loss value of the branch to a second preset value.
8. The apparatus for generating a model according to claim 5, wherein the apparatus further comprises:
an updating unit configured to, in response to determining that training of the convolutional neural network is not complete, update the parameters of the convolutional neural network based on the loss value, extract a sample from the sample set again, and continue to perform the training steps using the convolutional neural network with updated parameters.
9. A method for generating information, comprising:
in response to receiving a target video, inputting frames of the target video into a video popularity prediction model generated by the method according to any one of claims 1-4, wherein the target video has an annotation indicating a source region of the target video;
taking the source region of the target video as a target source region, taking the branch corresponding to the target source region in the fully connected layer of the video popularity prediction model as a target branch, and determining the information output by the target branch as the popularity of the target video.
10. The method for generating information according to claim 9, wherein the method further comprises:
in response to determining that the popularity of the target video is greater than a preset threshold, determining that the target video is a popular video, and pushing the target video to a target user.
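The inference path of claims 9 and 10 reduces to selecting the branch that matches the target video's source region and thresholding its output. The following Python sketch is illustrative only; the function names and the dictionary representation of branch outputs are hypothetical, standing in for the trained multi-branch network.

```python
# Sketch of inference per claims 9-10: select the branch corresponding to
# the target source region and treat its output as the popularity.

def predict_popularity(branch_outputs, target_source):
    """branch_outputs: dict mapping source region -> output of that branch
    of the fully connected layer; target_source is the target source region."""
    return branch_outputs[target_source]    # information output by the target branch

def maybe_push(branch_outputs, target_source, threshold):
    """Claim 10: a video whose popularity exceeds the preset threshold is
    determined to be a popular video and pushed to the target user."""
    popularity = predict_popularity(branch_outputs, target_source)
    return "push_to_user" if popularity > threshold else "skip"
```

Because only the branch matching the annotated source region is read at inference time, the other branches act purely as auxiliary training signals.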
11. An apparatus for generating information, comprising:
an input unit configured to, in response to receiving a target video, input frames of the target video into a video popularity prediction model generated by the method according to any one of claims 1-4, wherein the target video has an annotation indicating a source region of the target video;
an acquisition unit configured to take the source region of the target video as a target source region, take the branch corresponding to the target source region in the fully connected layer of the video popularity prediction model as a target branch, and determine the information output by the target branch as the popularity of the target video.
12. The apparatus for generating information according to claim 11, wherein the apparatus further comprises:
a push unit configured to, in response to determining that the popularity of the target video is greater than a preset threshold, determine that the target video is a popular video and push the target video to a target user.
13. An electronic device, comprising:
one or more processors;
a storage device on which one or more programs are stored,
which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-4 and 9-10.
14. A computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-4 and 9-10.
CN201811273479.0A 2018-10-30 2018-10-30 Method and apparatus for generating a model Active CN109447246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811273479.0A CN109447246B (en) 2018-10-30 2018-10-30 Method and apparatus for generating a model


Publications (2)

Publication Number Publication Date
CN109447246A true CN109447246A (en) 2019-03-08
CN109447246B CN109447246B (en) 2021-01-15

Family

ID=65548710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811273479.0A Active CN109447246B (en) 2018-10-30 2018-10-30 Method and apparatus for generating a model

Country Status (1)

Country Link
CN (1) CN109447246B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635762A (en) * 2016-01-15 2016-06-01 深圳大学 Video heat prediction method based on deep belief networks and system thereof
CN106778472A (en) * 2016-11-17 2017-05-31 成都通甲优博科技有限责任公司 The common invader object detection and recognition method in transmission of electricity corridor based on deep learning
CN107222787A (en) * 2017-06-02 2017-09-29 中国科学技术大学 Video resource popularity prediction method
US20170289409A1 (en) * 2016-03-30 2017-10-05 Nec Laboratories America, Inc. Large margin high-order deep learning with auxiliary tasks for video-based anomaly detection
CN108229480A (en) * 2017-12-25 2018-06-29 新智数字科技有限公司 A kind of recognition methods, device and the equipment of number meter reading


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020087974A1 (en) * 2018-10-30 2020-05-07 北京字节跳动网络技术有限公司 Model generation method and device
CN110278447A (en) * 2019-06-26 2019-09-24 北京字节跳动网络技术有限公司 Video pushing method, device and electronic equipment based on continuous feature
CN110278447B (en) * 2019-06-26 2021-07-20 北京字节跳动网络技术有限公司 Video pushing method and device based on continuous features and electronic equipment
CN111026849A (en) * 2019-12-17 2020-04-17 北京百度网讯科技有限公司 Data processing method and device
CN111026849B (en) * 2019-12-17 2023-09-19 北京百度网讯科技有限公司 Data processing method and device
CN111368204A (en) * 2020-03-09 2020-07-03 北京字节跳动网络技术有限公司 Content pushing method and device, electronic equipment and computer readable medium
CN112016685B (en) * 2020-08-07 2024-06-07 广州小鹏自动驾驶科技有限公司 Data processing method and device
CN112288447A (en) * 2020-10-30 2021-01-29 北京每日优鲜电子商务有限公司 Article information display method and device, electronic equipment and computer readable medium
CN112330711A (en) * 2020-11-26 2021-02-05 北京奇艺世纪科技有限公司 Model generation method, information extraction method and device and electronic equipment
CN112330711B (en) * 2020-11-26 2023-12-05 北京奇艺世纪科技有限公司 Model generation method, information extraction device and electronic equipment
CN113641896A (en) * 2021-07-23 2021-11-12 北京三快在线科技有限公司 Model training and recommendation probability prediction method and device

Also Published As

Publication number Publication date
CN109447246B (en) 2021-01-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

CP03 Change of name, title or address

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 room b-0035, 2nd floor, building 3, yard 30, Shixing street, Fengtai District, Beijing

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
