CN109242592A - Application recommendation method and device - Google Patents

Application recommendation method and device

Info

Publication number
CN109242592A
CN109242592A (application CN201810798909.4A)
Authority
CN
China
Prior art keywords
data
application
training
model
recommended models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810798909.4A
Other languages
Chinese (zh)
Inventor
潘岸腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Guangzhou Youshi Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Youshi Network Technology Co Ltd
Priority to CN201810798909.4A
Publication of CN109242592A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0601: Electronic shopping [e-shopping]
    • G06Q30/0631: Item recommendations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241: Advertisements
    • G06Q30/0251: Targeted advertisements
    • G06Q30/0255: Targeted advertisements based on user history

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the present application provides an application recommendation method and device. The method comprises: obtaining a training sample, the training sample comprising training application data and true application preference data; extracting training application feature data from the training application data; adjusting an application recommendation model according to the training application feature data and the true application preference data; and performing application recommendation according to the adjusted application recommendation model. By extracting valid feature data from the sample data before training the application recommendation model, the embodiment improves the performance of the model and thus the accuracy of the application recommendations made for users, serving users better.

Description

Application recommendation method and device
Technical field
The present application relates to the field of Internet technology, and in particular to an application recommendation method and an application recommendation device.
Background art
With the development of the Internet, it has become an important channel through which people obtain information. A drawback of the conventional Internet is that when users look for things they are interested in, they must perform a large number of search and browse operations, and then manually screen and filter the search and browse results before finally obtaining results that meet their needs. The whole acquisition process is cumbersome and costs considerable time and effort. This applies, for example, to obtaining applications of types such as online shopping, system tools, audio-visual playback, and entertainment.
Many software companies provide application stores or application markets in which users can quickly search for and download the various application programs they need. To keep improving the experience of users of these stores and markets, software companies have developed many convenience features. One of them is the recommendation function, which recommends applications to the user to help the user discover more applications of interest.
At present, applications are recommended mainly on the basis of public ratings: for example, recommending the most downloaded applications, or the applications on a popularity ranking list. However, since different users have different interests, the applications recommended in this way are not necessarily of interest to a given user. Such recommendations therefore cannot satisfy the individual needs of different users, and the user experience suffers.
Summary of the invention
In view of the above problems, embodiments of the present application are proposed in order to provide an application recommendation method, and a corresponding application recommendation device, that overcome the above problems or at least partly solve them.
To solve the above problems, the present application discloses an application recommendation method, comprising:
obtaining a training sample, the training sample comprising training application data and true application preference data;
extracting training application feature data from the training application data;
adjusting an application recommendation model according to the training application feature data and the true application preference data;
performing application recommendation according to the adjusted application recommendation model.
Preferably, the step of adjusting the application recommendation model according to the training application feature data and the true application preference data comprises:
inputting the training application feature data into the application recommendation model to obtain predicted application preference data;
calculating a loss rate using the predicted application preference data and the true application preference data;
calculating a gradient using the loss rate;
judging whether the gradient satisfies a preset iteration condition;
if so, ending the adjustment of the application recommendation model;
if not, performing gradient descent on the model parameters of the application recommendation model using the gradient, and returning to the step of inputting the training application feature data into the application recommendation model to obtain predicted application preference data.
Preferably, the application recommendation model is a fusion model of a Wide model and a Deep model.
Preferably, the step of extracting the training application feature data from the training application data comprises:
obtaining training application data, the training application data comprising historical user search records;
obtaining Wide model feature data according to the historical user search records;
obtaining Deep model feature data according to the historical user search records;
using the Wide model feature data and the Deep model feature data as the training application feature data.
Preferably, a historical user search record comprises a search term and association data of a displayed application; the step of obtaining the Wide model feature data according to the historical user search records comprises:
segmenting the association data into words to obtain first word-segmentation features;
segmenting the search term into words to obtain second word-segmentation features;
merging the first word-segmentation features and the second word-segmentation features to obtain the Wide model feature data.
Preferably, the step of obtaining the Deep model feature data according to the historical user search records comprises:
calculating a correlation coefficient between the search term and the displayed application;
extracting a popularity feature according to the association data;
extracting a user average rating feature according to the association data;
using the correlation coefficient, the popularity feature, and the user average rating feature as the Deep model feature data.
Preferably, the step of performing application recommendation according to the adjusted application recommendation model comprises:
obtaining a target search term of a user;
calculating target Wide model feature data and target Deep model feature data between the target search term and each application in a preset application database;
inputting the target Wide model feature data and the target Deep model feature data into the adjusted application recommendation model to obtain target predicted preference data for each application;
sorting the applications according to the target predicted preference data, and recommending applications to the user according to the sorting result.
An embodiment of the present application also discloses an application recommendation device, comprising:
a training sample obtaining module, configured to obtain a training sample, the training sample comprising training application data and true application preference data;
a training application feature data extraction module, configured to extract training application feature data from the training application data;
an application recommendation model adjustment module, configured to adjust an application recommendation model according to the training application feature data and the true application preference data;
an application recommendation module, configured to perform application recommendation according to the adjusted application recommendation model.
Preferably, the application recommendation model adjustment module comprises:
a predicted application preference data obtaining submodule, configured to input the training application feature data into the application recommendation model to obtain predicted application preference data;
a loss rate calculation submodule, configured to calculate a loss rate using the predicted application preference data and the true application preference data;
a gradient calculation submodule, configured to calculate a gradient using the loss rate;
an iteration condition judgment submodule, configured to judge whether the gradient satisfies a preset iteration condition; if so, to call the adjustment-ending submodule, and if not, to call the gradient descent submodule;
an adjustment-ending submodule, configured to end the adjustment of the application recommendation model;
a gradient descent submodule, configured to perform gradient descent on the model parameters of the application recommendation model using the gradient, and to return to the step of inputting the training application feature data into the application recommendation model to obtain predicted application preference data.
Preferably, the application recommendation model is a fusion model of a Wide model and a Deep model.
Preferably, the training application feature data extraction module comprises:
a training application data obtaining submodule, configured to obtain training application data, the training application data comprising historical user search records;
a Wide model feature data obtaining submodule, configured to obtain Wide model feature data according to the historical user search records;
a Deep model feature data obtaining submodule, configured to obtain Deep model feature data according to the historical user search records;
a training application feature data obtaining submodule, configured to use the Wide model feature data and the Deep model feature data as the training application feature data.
Preferably, a historical user search record comprises a search term and association data of a displayed application; the Wide model feature data obtaining submodule comprises:
a first word-segmentation unit, configured to segment the association data into words to obtain first word-segmentation features;
a second word-segmentation unit, configured to segment the search term into words to obtain second word-segmentation features;
a merging unit, configured to merge the first word-segmentation features and the second word-segmentation features to obtain the Wide model feature data.
Preferably, the Deep model feature data obtaining submodule comprises:
a correlation coefficient calculation unit, configured to calculate a correlation coefficient between the search term and the displayed application;
a popularity feature extraction unit, configured to extract a popularity feature according to the association data;
an average rating feature extraction unit, configured to extract a user average rating feature according to the association data;
a Deep model feature data obtaining unit, configured to use the correlation coefficient, the popularity feature, and the user average rating feature as the Deep model feature data.
Preferably, the application recommendation module comprises:
a target search term obtaining submodule, configured to obtain a target search term of a user;
a model feature data calculation submodule, configured to calculate target Wide model feature data and target Deep model feature data between the target search term and each application in a preset application database;
a target predicted preference data obtaining submodule, configured to input the target Wide model feature data and the target Deep model feature data into the adjusted application recommendation model to obtain target predicted preference data for each application;
an application recommendation submodule, configured to sort the applications according to the target predicted preference data and recommend applications to the user according to the sorting result.
Embodiments of the present application have the following advantages:
An embodiment of the present application obtains a training sample comprising training application data and true application preference data, extracts training application feature data from the training application data, adjusts an application recommendation model according to the training application feature data and the true application preference data, and finally performs application recommendation according to the adjusted application recommendation model. By extracting valid feature data from the sample data before training the application recommendation model, the embodiment improves the performance of the model and thus the accuracy of the application recommendations made for users, serving users better.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of an embodiment of an application recommendation method of the present application;
Fig. 2 is a processing flow diagram of a DNN model of the present application;
Fig. 3 is a structural block diagram of an embodiment of an application recommendation device of the present application.
Detailed description of the embodiments
To make the above objects, features, and advantages of the present application more apparent and easier to understand, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1, a flow chart of the steps of an embodiment of an application recommendation method of the present application is shown. The method may specifically comprise the following steps:
Step 101: obtain a training sample.
In a concrete implementation, the training sample comprises training application data and true application preference data.
The training application data are historical user search records. A historical user search record may include data such as the search term used by the user, the applications displayed for that search term, whether an application was clicked, and whether an application was installed. The true application preference data are the user's degree of preference for an application, computed statistically from the historical user search data; the higher the preference, the more likely the user is to install the application.
Step 102: extract training application feature data from the training application data.
In practical applications, valid data need to be extracted from the training application data to serve as the training application feature data, which are then input into the application recommendation model to train it.
In the embodiment of the present application, the application recommendation model is composed of two parts, the Wide-part model features and the Deep-part model features; specifically, the application recommendation model is a fusion model of a Wide model and a Deep model.
The fusion model combines the memorization capability of the Wide model with the generalization capability of the Deep model, optimizing the parameters of both models simultaneously during training so that the predictive capability of the overall model is optimal. To train the application recommendation model, the training application feature data must first be extracted from the training application data.
In a preferred embodiment of the present application, step 102 may comprise the following sub-steps:
Sub-step S11: obtain training application data.
Source data must first be collected. In the embodiment of the present application, historical user search records are collected as the training application data.
Sub-step S12: obtain Wide model feature data according to the historical user search records.
In the embodiment of the present application, the application recommendation model comprises a Wide model and a Deep model, so Wide model feature data and Deep model feature data can be extracted from the historical user search records for the Wide model and the Deep model respectively.
In a concrete implementation, the historical user search records are collected, and the data may be stored as association data such as: the user's search term, the displayed application, the associated description of the displayed application, and whether the application was clicked.
Suppose a sample has search term k and displayed application i, and whether the application was clicked is indicated by a variable c taking the value 0 or 1. The following describes how the Wide model feature data and Deep model feature data of this sample are generated.
In a preferred embodiment of the present application, a historical user search record may comprise a search term and association data of a displayed application, and sub-step S12 may comprise the following sub-steps:
segmenting the association data into words to obtain first word-segmentation features;
segmenting the search term into words to obtain second word-segmentation features;
merging the first word-segmentation features and the second word-segmentation features to obtain the Wide model feature data.
The Wide-part model features are generated as follows:
Step 1: generate the word-segmentation features of application i.
The description of application i is segmented into words, giving the word-segmentation features of application i, denoted A_i:
A_i = {w1, w2, ...}
For example, the word-segmentation result of the sentence "the input method with the most accurate typing and the most personalized interface" is:
A_i = {typing, accurate, interface, personalized, input method}
Step 2: generate the word-segmentation features of search term k.
Search term k is segmented into words, giving the word-segmentation features of search term k, denoted Q_k:
Q_k = {q1, q2, ...}
For example, the word-segmentation result of the search term "Chinese input method" is:
Q_k = {Chinese, input method}
Step 3: generate the Wide model features.
The elements of A_i and Q_k are crossed pairwise and merged to obtain the Wide model feature data, denoted WF_{i,k}.
For example, with:
A_i = {typing, accurate, interface, personalized, input method}
Q_k = {Chinese, input method}
then:
WF_{i,k} = {typing & Chinese, accurate & Chinese, interface & Chinese, personalized & Chinese, input method & Chinese, typing & input method, accurate & input method, interface & input method, personalized & input method, input method & input method}
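The pairwise crossing in Step 3 can be sketched in a few lines. This is an illustrative sketch, not code from the patent; the function name cross_features and the "&" separator simply mirror the example above.

```python
def cross_features(app_tokens, query_tokens):
    """Cross every application word token with every search-term
    word token to form the Wide features WF_{i,k}."""
    return [f"{a}&{q}" for q in query_tokens for a in app_tokens]

A_i = ["typing", "accurate", "interface", "personalized", "input method"]
Q_k = ["Chinese", "input method"]

wf = cross_features(A_i, Q_k)
# 5 app tokens x 2 query tokens = 10 crossed features,
# starting with "typing&Chinese"
```

Crossing produces one indicator feature per (application token, query token) pair, which is what gives the Wide part its memorization of specific co-occurrences.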
In practical applications, after the Wide model feature data are obtained, they must be processed further before they can be input into the application recommendation model for model training.
Specifically, after the above WF_{i,k} is obtained, the following conversion is performed on WF_{i,k}:
Step 1: one-hot encode the Wide model feature data.
All the features in WF_{i,k} are expanded; each feature is a variable, encoded and recorded with one-hot encoding. For example, with:
WF_{i,k} = {typing & Chinese, accurate & Chinese, interface & Chinese}
the one-hot encoding is:
"typing & Chinese" is one-hot encoded as (1, 0, 0)
"accurate & Chinese" is one-hot encoded as (0, 1, 0)
"interface & Chinese" is one-hot encoded as (0, 0, 1)
This is denoted:
X_{i,k} = {x1, x2, ...}
In the embodiment of the present application, the Wide model may be an LR model, defined by the formula Y_W = f(X·m + n).
Here m and n are the model parameters: m is a high-dimensional weight parameter vector whose dimension equals the number of features, and n is a constant term. f is the logistic function, f(x) = 1/(1 + e^(-x)). Y_W is a numerical value representing the predicted preference of the user for the application, and X is the Wide model feature data. For example, if X is the 3-dimensional vector (0, 1, 0), then m is also a 3-dimensional vector (m1, m2, m3), and Y_W = f((0, 1, 0)·(m1, m2, m3) + n).
Sub-step S13: obtain Deep model feature data according to the historical user search records.
In a preferred embodiment of the present application, sub-step S13 may comprise the following sub-steps:
calculating a correlation coefficient between the search term and the displayed application;
extracting a popularity feature according to the association data;
extracting a user average rating feature according to the association data;
using the correlation coefficient, the popularity feature, and the user average rating feature as the Deep model feature data.
The Deep-part model feature data are generated as follows:
Step 1: calculate the correlation coefficient between application i and search term k.
The correlation coefficient is denoted r_{i,k} and is scored by the following rules:
1. if k is the title of application i, the score is 4;
2. if k is a substring of the title of application i, the score is 3;
3. if the intersection of A_i and Q_k is not empty, the score is 2;
4. in all other cases, the score is 1.
It should be noted that once a higher score has been assigned by a rule, the calculation of the correlation coefficient does not continue. For example, if the correlation coefficient is found to be 4, the check for score 3 is not performed, and so on.
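The first-match-wins scoring rules can be sketched directly as a cascade of checks. This is an illustrative sketch under the rules above; the function name and example titles are not from the patent.

```python
def relevance_score(query, app_title, app_tokens, query_tokens):
    """Rule-based correlation coefficient r_{i,k}: rules are checked
    from the highest score down, and the first match wins."""
    if query == app_title:                      # rule 1
        return 4
    if query in app_title:                      # rule 2: substring of title
        return 3
    if set(app_tokens) & set(query_tokens):     # rule 3: A_i and Q_k overlap
        return 2
    return 1                                    # rule 4

r = relevance_score("Chinese input method", "Sogou input method",
                    ["Sogou", "input method"], ["Chinese", "input method"])
# shared token "input method" -> score 2
```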
Step 2: extract the popularity feature of application i.
The average number of daily downloads of application i over the most recent month is calculated as the popularity feature of application i, denoted d_i.
Step 3: extract the user average rating feature of application i.
The average user rating of application i over the most recent month is calculated as the user rating feature of application i, denoted c_i.
After the correlation coefficient, the popularity feature, and the user average rating feature are obtained, these data are organized as the Deep model feature data. The features of the Deep model can be represented by a vector Z.
The Deep model may be a DNN model, defined as follows: 1 input layer, 5 hidden layers, and 1 output layer, with the structure shown in Fig. 2. The input layer is a 3-dimensional vector, each hidden layer is a 50-dimensional vector, and the output layer is a 1-dimensional vector.
The model parameters of the Deep model are defined as follows:
w_i: the weight coefficients connecting each hidden layer to the previous layer (or the input layer). The model has 5 hidden layers in total, so for i = 1, w_1 is a 3*50 matrix, and for i = 2, 3, 4, 5, w_i is a 50*50 matrix.
b_i: a 1*50 matrix, the bias coefficients of each hidden layer.
w: a 50*1 matrix, the parameters from the last hidden layer to the output layer.
b: a 1*1 matrix, the bias coefficient of the output layer.
logistic(l): the logistic function, logistic(l) = 1/(1 + e^(-l)).
f(L): the output function of a hidden layer, where L is a multi-dimensional vector. The function applies the logistic function to each element of L and returns the transformed multi-dimensional vector.
For example, if L = (1, 2, ..., 50), then f(L) = (logistic(1), logistic(2), ..., logistic(50)).
Y_D is a numerical value representing the predicted preference of the user for the application.
The relationship between the layers of the DNN model is as follows:
L_1 = f(Z·w_1 + b_1)
L_2 = f(L_1·w_2 + b_2)
L_3 = f(L_2·w_3 + b_3)
L_4 = f(L_3·w_4 + b_4)
L_5 = f(L_4·w_5 + b_5)
Y_D = f(L_5·w + b)
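The layer equations above can be sketched as a plain-Python forward pass with the stated dimensions (3-dim input, five 50-dim hidden layers, 1-dim output). The random weight initialization and the sample input vector are illustrative assumptions; the patent does not specify initial values.

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def f(vec):
    """Element-wise logistic: the hidden-layer output function f(L)."""
    return [logistic(v) for v in vec]

def matvec(v, W):
    """Row vector v (length n) times matrix W (n x m) -> length-m vector."""
    return [sum(v[i] * W[i][j] for i in range(len(v)))
            for j in range(len(W[0]))]

def dnn_forward(z, weights, biases, w_out, b_out):
    """L_i = f(L_{i-1} . w_i + b_i) through 5 hidden layers,
    then Y_D = logistic(L_5 . w + b)."""
    layer = z
    for W, b in zip(weights, biases):
        layer = f([a + bi for a, bi in zip(matvec(layer, W), b)])
    return logistic(matvec(layer, w_out)[0] + b_out)

random.seed(0)
dims = [3, 50, 50, 50, 50, 50]
weights = [[[random.uniform(-0.1, 0.1) for _ in range(dims[i + 1])]
            for _ in range(dims[i])] for i in range(5)]
biases = [[0.0] * 50 for _ in range(5)]
w_out = [[random.uniform(-0.1, 0.1)] for _ in range(50)]
z = [2.0, 120.5, 4.3]   # (r_{i,k}, popularity d_i, average rating c_i)
y_d = dnn_forward(z, weights, biases, w_out, 0.0)
# y_d lies in (0, 1) because the output unit is logistic
```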
Based on the above Wide model and Deep model, the embodiment of the present application fuses the Wide model and the Deep model to obtain a fusion model, which serves as the application recommendation model.
Specifically, the fusion model joins the Wide model and the Deep model together. In the original scheme, Wide and Deep each have their own prediction model and predict separately:
Wide model: Y_W = f(X·m + n)
Deep model: Y_D = f(L_5·w + b)
Each of the two models has its own strengths: the Wide part is an LR algorithm over large-scale discrete features and is good at memorization, while the Deep part is a DNN algorithm and is good at generalization. To combine the advantages of the two, the fusion scheme of the embodiment of the present application joins the output layers of Deep and Wide in a combined regression, giving the fused expression:
Wide-Deep fusion model: Y_WD = f([L_5, X]·[w, m] + b)
Among the parameters of the Wide-Deep fusion model, L_5 is the output node of the Deep part, a 50-dimensional vector; X is the input vector of the Wide part (the Wide model feature data); w is the weight vector corresponding to the output node of the Deep part, a 50-dimensional vector; m is the weight parameter vector of the Wide features; and b is the constant term of the Wide-Deep model.
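The fused expression Y_WD = f([L_5, X]·[w, m] + b) is just one logistic unit over the concatenation of the Deep hidden output and the Wide one-hot features, which can be sketched as follows. The toy dimensions and weights are illustrative (the patent uses a 50-dim L_5 and a high-dimensional X).

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def wide_deep_predict(l5, x, w, m, b):
    """Y_WD = f([L5, X] . [w, m] + b): concatenate the Deep hidden
    output with the Wide features, then apply a single logistic unit."""
    score = sum(a * c for a, c in zip(l5 + x, w + m)) + b
    return logistic(score)

l5 = [0.6, 0.1]          # toy 2-dim Deep output
x = [0.0, 1.0, 0.0]      # toy Wide one-hot vector
y = wide_deep_predict(l5, x, w=[0.5, -0.2], m=[0.1, 0.9, 0.3], b=0.0)
# score = 0.6*0.5 + 0.1*(-0.2) + 1.0*0.9 = 1.18
```

Because both sub-models feed the same output unit, training the fused score adjusts the Wide weights m and the Deep weights w (and, through backpropagation, the hidden-layer parameters) jointly.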
Sub-step S14: use the Wide model feature data and the Deep model feature data as the training application feature data.
In the embodiment of the present application, after the Wide model feature data and the Deep model feature data are obtained, both are used as the training application feature data for training the application recommendation model.
Step 103: adjust the application recommendation model according to the training application feature data and the true application preference data.
In practical applications, the model parameters to be solved are obtained according to the training application feature data and the true application preference data, so that the model parameters in the application recommendation model can be adjusted.
Joint training of the Wide model and the Deep model ensures a balance between memorization and generalization. The Wide model is a logistic regression (LR) model, to which feature transformations are added to guarantee the model's correlation capability. The Deep model is a DNN model, which applies low-dimensional embeddings to sparse and unseen feature combinations to guarantee the model's generalization capability. During training, joint training is used: the corresponding feature data are input into the Wide model and the Deep model, the two models are trained simultaneously, and the optimized model parameters include the parameters of both models.
In a preferred embodiment of the present application, step 103 may comprise the following sub-steps:
Sub-step S21: input the training application feature data into the application recommendation model to obtain predicted application preference data;
Sub-step S22: calculate a loss rate using the predicted application preference data and the true application preference data;
Sub-step S23: calculate a gradient using the loss rate;
Sub-step S24: judge whether the gradient satisfies a preset iteration condition; if so, execute sub-step S25; if not, execute sub-step S26;
Sub-step S25: end the adjustment of the application recommendation model;
Sub-step S26: perform gradient descent on the model parameters of the application recommendation model using the gradient, and return to sub-step S21.
Training the application recommendation model in the embodiment of the present application in fact trains the model parameters of the above Wide-Deep fusion model formula: the fusion model Y_WD is an expression in the model parameters w_1, w_2, w_3, w_4, w_5, b_1, b_2, b_3, b_4, b_5, w, b, m.
In a concrete implementation, the model parameters of the application recommendation model can be trained by stochastic gradient descent. The loss function used by the embodiment of the present application is a squared-error loss:
los = (1/n) * Σ_{i=1..n} (y_i - Y_i)^2
where n is the number of samples, a constant term; y_i is the user's true application preference data; and Y_i is the predicted application preference data, that is, the fusion model output Y_WD.
Expanding every Y_i according to the above expression and minimizing the loss function los by gradient descent solves for w_1, w_2, w_3, w_4, w_5, b_1, b_2, b_3, b_4, b_5, w, b, m, which are exactly the model parameters to be solved. Gradient descent proceeds as follows:
Step 1: collect all parameters of the fusion model into one set, denoted θ = {θ_i}; randomly initialize them between 0 and 1 as θ^(0); initialize the iteration step k = 0.
Step 2: iteratively update the parameters using the gradient of the loss:
θ^(k+1) = θ^(k) - ρ·∇los(θ^(k))
where ρ controls the convergence speed and is taken as 0.01.
Step 3: judge whether the gradient has converged.
If the gradient satisfies ||∇los(θ^(k+1))|| ≤ α, the preset iteration condition is considered met and training of the model can end, returning θ^(k+1); otherwise return to Step 2 and continue, where α is a very small value, for example α = 0.01·ρ.
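Steps 1 to 3 above can be sketched as a plain gradient-descent loop. The toy one-parameter loss and the stopping tolerance are illustrative; in the patent the gradient is taken over all fusion-model parameters at once.

```python
def gradient_descent(grad, theta0, rho=0.01, alpha=1e-4, max_iter=10000):
    """theta^(k+1) = theta^(k) - rho * grad(theta^(k)), stopping
    when the gradient norm falls below alpha (the iteration condition)."""
    theta = list(theta0)
    for _ in range(max_iter):
        g = grad(theta)
        if sum(gi * gi for gi in g) ** 0.5 <= alpha:
            break
        theta = [t - rho * gi for t, gi in zip(theta, g)]
    return theta

# Toy loss los(theta) = (theta_0 - 3)^2, whose gradient is 2*(theta_0 - 3):
theta = gradient_descent(lambda th: [2.0 * (th[0] - 3.0)], [0.5])
# converges to theta_0 close to 3, the minimizer of the toy loss
```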
Step 104, it is carried out according to the application recommended models adjusted using recommendation.
After completing solving model parameter, so that it may apply recommended models based on model parameter adjustment, and use adjustment mould Application recommended models after shape parameter are that user recommends application.
In a preferred embodiment of the present application, step 104 may include the following sub-steps:
sub-step S31, obtaining a target search term of the user;
sub-step S32, computing target Wide-model feature data and target Deep-model feature data between the target search term and each application in a preset application database;
sub-step S33, inputting the target Wide-model feature data and the target Deep-model feature data into the adjusted application recommendation model to obtain target predicted preference data of each application;
sub-step S34, ranking according to the target predicted preference data, and recommending applications to the user according to the ranking result.
With the embodiments of the present application, applications can be displayed and recommended to the user according to the search term the user enters. Suppose user u enters search term k. For each application A in the full library, the Wide-Deep fusion model described above computes a score Y(A) for that application, i.e. its target predicted preference data; the full library of applications is then sorted in descending order of score, and the top 100 applications are selected and displayed to the user.
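The flow just described (score every application in the full library with the fusion model, sort in descending order, keep the head 100) can be sketched as follows; `score` is a hypothetical stand-in for the fusion model's Y(A):

```python
def recommend(query, apps, score, top_n=100):
    """Score every application A in the full library against the query,
    sort by score in descending order, and return the top N applications."""
    scored = [(app, score(query, app)) for app in apps]
    scored.sort(key=lambda pair: pair[1], reverse=True)   # descending by score
    return [app for app, _ in scored[:top_n]]
```

For example, with a toy `score` function, `recommend("board games", apps, score, top_n=2)` returns the two highest-scoring applications.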
In the embodiments of the present application, a training sample including training application data and true application preference data is obtained, training application feature data is extracted from the training application data, the application recommendation model is adjusted according to the training application feature data and the true application preference data, and application recommendation is then performed according to the adjusted application recommendation model. Because valid data is extracted from the sample data before the application recommendation model is trained, the performance of the model is improved, which in turn improves the accuracy of the application recommendations made for the user and provides better service.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations; however, those skilled in the art should understand that the embodiments of the present application are not limited by the described order of actions, because according to the embodiments of the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
Referring to Fig. 3, a structural block diagram of an embodiment of an application recommendation apparatus of the present application is shown, which may specifically include the following modules:
a training sample acquisition module 201, configured to obtain a training sample, the training sample including training application data and true application preference data;
a training application feature data extraction module 202, configured to extract training application feature data from the training application data;
an application recommendation model adjustment module 203, configured to adjust the application recommendation model according to the training application feature data and the true application preference data;
an application recommendation module 204, configured to perform application recommendation according to the adjusted application recommendation model.
In a preferred embodiment of the present application, the application recommendation model adjustment module 203 includes:
a predicted application preference data obtaining submodule, configured to input the training application feature data into the application recommendation model to obtain predicted application preference data;
a loss rate computation submodule, configured to compute a loss rate using the predicted application preference data and the true application preference data;
a gradient computation submodule, configured to compute a gradient using the loss rate;
an iteration condition judgment submodule, configured to judge whether the gradient meets a preset iteration condition, and to invoke an adjustment termination submodule if so, or a gradient descent submodule if not;
the adjustment termination submodule, configured to terminate adjustment of the application recommendation model;
the gradient descent submodule, configured to apply gradient descent to the model parameters of the application recommendation model using the gradient, and to return to the step of inputting the training application feature data into the application recommendation model to obtain predicted application preference data.
In a preferred embodiment of the present application, the application recommendation model is a fusion model of a Wide model and a Deep model.
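A fusion of a Wide part (a linear model over sparse cross features) and a Deep part (a small neural network over dense features) is conventionally combined through a shared sigmoid output. The sketch below follows that convention only; the patent's own parameterization (w1…w5, b1…b5, w, b, m) is not reproduced here, and every name in the sketch is illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fusion_score(x_wide, x_deep, params):
    """Y_WD = sigmoid(wide part + deep part + bias): a conventional
    Wide & Deep combination used as a stand-in for the patent's fusion model."""
    wide = x_wide @ params["w_wide"]                   # Wide part: linear in sparse features
    h = np.tanh(x_deep @ params["W1"] + params["b1"])  # Deep part: one hidden layer as a stand-in
    deep = h @ params["w2"] + params["b2"]
    return sigmoid(wide + deep + params["b"])          # fused output Y_WD
```

With all parameters initialized to zero, the fused score is sigmoid(0) = 0.5, which is a quick sanity check on the wiring.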
In a preferred embodiment of the present application, the training application feature data extraction module 202 includes:
a training application data acquisition submodule, configured to obtain training application data, the training application data including a historical user search record;
a Wide-model feature data obtaining submodule, configured to obtain Wide-model feature data according to the historical user search record;
a Deep-model feature data obtaining submodule, configured to obtain Deep-model feature data according to the historical user search record;
a training application feature data obtaining submodule, configured to use the Wide-model feature data and the Deep-model feature data as the training application feature data.
In a preferred embodiment of the present application, the historical user search record includes a search term and association data of a displayed application; the Wide-model feature data obtaining submodule includes:
a first word segmentation unit, configured to segment the association data to obtain a first word segmentation feature;
a second word segmentation unit, configured to segment the search term to obtain a second word segmentation feature;
a merging unit, configured to merge the first word segmentation feature and the second word segmentation feature to obtain the Wide-model feature data.
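The two word segmentation (word-cut) units and the merging unit can be sketched as follows; `str.split` stands in for a real Chinese word segmenter, which is an assumption of this sketch:

```python
def wide_features(search_term, association_text, segment=str.split):
    """Segment the association data and the search term, then merge the two
    word-cut feature sets into the Wide-model feature data. `segment` is a
    placeholder for a real word segmenter."""
    first = set(segment(association_text))   # first word-cut feature
    second = set(segment(search_term))       # second word-cut feature
    return sorted(first | second)            # merged Wide-model feature data
```

For instance, segmenting the search term "chess game" and the association text "free chess app" and merging yields the feature set {app, chess, free, game}.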
In a preferred embodiment of the present application, the Deep-model feature data obtaining submodule includes:
a correlation coefficient computation unit, configured to compute a correlation coefficient between the search term and the displayed application;
a popularity feature extraction unit, configured to extract a popularity feature according to the association data;
an average score feature extraction unit, configured to extract a user average score feature according to the association data;
a Deep-model feature data acquisition unit, configured to use the correlation coefficient, the popularity feature and the user average score feature as the Deep-model feature data.
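A minimal sketch of assembling the three Deep-model features (correlation coefficient, popularity/"temperature" feature, user average score feature). The popularity (click-through) and average-score formulas below are illustrative stand-ins, since the exact definitions are not fixed here:

```python
def deep_features(correlation, shows, clicks, scores):
    """Deep-model feature data = [correlation coefficient, popularity feature,
    user average score feature]. Popularity is approximated as click-through
    rate over the association data; both formulas are assumptions."""
    popularity = clicks / shows if shows else 0.0            # popularity feature
    avg_score = sum(scores) / len(scores) if scores else 0.0 # user average score
    return [correlation, popularity, avg_score]
```

For example, an application shown 10 times, clicked 5 times, and rated [4, 5] yields the feature vector [correlation, 0.5, 4.5].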
In a preferred embodiment of the present application, the application recommendation module 204 includes:
a target search term acquisition submodule, configured to obtain a target search term of the user;
a model feature data computation submodule, configured to compute target Wide-model feature data and target Deep-model feature data between the target search term and each application in a preset application database;
a target predicted preference data obtaining submodule, configured to input the target Wide-model feature data and the target Deep-model feature data into the adjusted application recommendation model to obtain target predicted preference data of each application;
an application recommendation submodule, configured to rank according to the target predicted preference data and recommend applications to the user according to the ranking result.
Since the apparatus embodiments are basically similar to the method embodiments, they are described relatively briefly; for relevant details, refer to the corresponding parts of the method embodiments.
The embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
In a typical configuration, the computer device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory. The memory may include non-permanent memory in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the method, terminal device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, such that a series of operational steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
Finally, it should also be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or terminal device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device including that element.
The application recommendation method and the application recommendation apparatus provided in the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is merely intended to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (14)

1. An application recommendation method, characterized by comprising:
obtaining a training sample, the training sample including training application data and true application preference data;
extracting training application feature data from the training application data;
adjusting an application recommendation model according to the training application feature data and the true application preference data;
performing application recommendation according to the adjusted application recommendation model.
2. The method according to claim 1, wherein the step of adjusting the application recommendation model according to the training application feature data and the true application preference data comprises:
inputting the training application feature data into the application recommendation model to obtain predicted application preference data;
computing a loss rate using the predicted application preference data and the true application preference data;
computing a gradient using the loss rate;
judging whether the gradient meets a preset iteration condition;
if so, terminating adjustment of the application recommendation model;
if not, applying gradient descent to the model parameters of the application recommendation model using the gradient, and returning to the step of inputting the training application feature data into the application recommendation model to obtain predicted application preference data.
3. The method according to claim 1, wherein the application recommendation model is a fusion model of a Wide model and a Deep model.
4. The method according to claim 1 or 3, wherein the step of extracting training application feature data from the training application data comprises:
obtaining the training application data, the training application data including a historical user search record;
obtaining Wide-model feature data according to the historical user search record;
obtaining Deep-model feature data according to the historical user search record;
using the Wide-model feature data and the Deep-model feature data as the training application feature data.
5. The method according to claim 4, wherein the historical user search record includes a search term and association data of a displayed application; and the step of obtaining Wide-model feature data according to the historical user search record comprises:
segmenting the association data to obtain a first word segmentation feature;
segmenting the search term to obtain a second word segmentation feature;
merging the first word segmentation feature and the second word segmentation feature to obtain the Wide-model feature data.
6. The method according to claim 5, wherein the step of obtaining Deep-model feature data according to the historical user search record comprises:
computing a correlation coefficient between the search term and the displayed application;
extracting a popularity feature according to the association data;
extracting a user average score feature according to the association data;
using the correlation coefficient, the popularity feature and the user average score feature as the Deep-model feature data.
7. The method according to claim 1, wherein the step of performing application recommendation according to the adjusted application recommendation model comprises:
obtaining a target search term of a user;
computing target Wide-model feature data and target Deep-model feature data between the target search term and each application in a preset application database;
inputting the target Wide-model feature data and the target Deep-model feature data into the adjusted application recommendation model to obtain target predicted preference data of each application;
ranking according to the target predicted preference data, and recommending applications to the user according to the ranking result.
8. An application recommendation apparatus, characterized by comprising:
a training sample acquisition module, configured to obtain a training sample, the training sample including training application data and true application preference data;
a training application feature data extraction module, configured to extract training application feature data from the training application data;
an application recommendation model adjustment module, configured to adjust an application recommendation model according to the training application feature data and the true application preference data;
an application recommendation module, configured to perform application recommendation according to the adjusted application recommendation model.
9. The apparatus according to claim 8, wherein the application recommendation model adjustment module comprises:
a predicted application preference data obtaining submodule, configured to input the training application feature data into the application recommendation model to obtain predicted application preference data;
a loss rate computation submodule, configured to compute a loss rate using the predicted application preference data and the true application preference data;
a gradient computation submodule, configured to compute a gradient using the loss rate;
an iteration condition judgment submodule, configured to judge whether the gradient meets a preset iteration condition, and to invoke an adjustment termination submodule if so, or a gradient descent submodule if not;
the adjustment termination submodule, configured to terminate adjustment of the application recommendation model;
the gradient descent submodule, configured to apply gradient descent to the model parameters of the application recommendation model using the gradient, and to return to the step of inputting the training application feature data into the application recommendation model to obtain predicted application preference data.
10. The apparatus according to claim 8, wherein the application recommendation model is a fusion model of a Wide model and a Deep model.
11. The apparatus according to claim 8 or 10, wherein the training application feature data extraction module comprises:
a training application data acquisition submodule, configured to obtain training application data, the training application data including a historical user search record;
a Wide-model feature data obtaining submodule, configured to obtain Wide-model feature data according to the historical user search record;
a Deep-model feature data obtaining submodule, configured to obtain Deep-model feature data according to the historical user search record;
a training application feature data obtaining submodule, configured to use the Wide-model feature data and the Deep-model feature data as the training application feature data.
12. The apparatus according to claim 11, wherein the historical user search record includes a search term and association data of a displayed application; and the Wide-model feature data obtaining submodule comprises:
a first word segmentation unit, configured to segment the association data to obtain a first word segmentation feature;
a second word segmentation unit, configured to segment the search term to obtain a second word segmentation feature;
a merging unit, configured to merge the first word segmentation feature and the second word segmentation feature to obtain the Wide-model feature data.
13. The apparatus according to claim 12, wherein the Deep-model feature data obtaining submodule comprises:
a correlation coefficient computation unit, configured to compute a correlation coefficient between the search term and the displayed application;
a popularity feature extraction unit, configured to extract a popularity feature according to the association data;
an average score feature extraction unit, configured to extract a user average score feature according to the association data;
a Deep-model feature data acquisition unit, configured to use the correlation coefficient, the popularity feature and the user average score feature as the Deep-model feature data.
14. The apparatus according to claim 8, wherein the application recommendation module comprises:
a target search term acquisition submodule, configured to obtain a target search term of a user;
a model feature data computation submodule, configured to compute target Wide-model feature data and target Deep-model feature data between the target search term and each application in a preset application database;
a target predicted preference data obtaining submodule, configured to input the target Wide-model feature data and the target Deep-model feature data into the adjusted application recommendation model to obtain target predicted preference data of each application;
an application recommendation submodule, configured to rank according to the target predicted preference data and recommend applications to the user according to the ranking result.
CN201810798909.4A 2018-07-19 2018-07-19 Application recommendation method and device Pending CN109242592A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810798909.4A CN109242592A (en) Application recommendation method and device


Publications (1)

Publication Number Publication Date
CN109242592A true CN109242592A (en) 2019-01-18

Family

ID=65072132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810798909.4A Pending CN109242592A (en) Application recommendation method and device

Country Status (1)

Country Link
CN (1) CN109242592A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982107A (en) * 2012-11-08 2013-03-20 北京航空航天大学 Recommendation system optimization method with information of user and item and context attribute integrated
US20160306798A1 (en) * 2015-04-16 2016-10-20 Microsoft Corporation Context-sensitive content recommendation using enterprise search and public search
CN106250532A (en) * 2016-08-04 2016-12-21 广州优视网络科技有限公司 Application recommendation method, device and server
CN106295832A (en) * 2015-05-12 2017-01-04 阿里巴巴集团控股有限公司 Product information method for pushing and device
CN107103036A (en) * 2017-03-22 2017-08-29 广州优视网络科技有限公司 Using acquisition methods, equipment and the programmable device for downloading probability
CN107220386A (en) * 2017-06-29 2017-09-29 北京百度网讯科技有限公司 Information-pushing method and device
CN108228824A (en) * 2017-12-29 2018-06-29 暴风集团股份有限公司 Recommendation method, apparatus, electronic equipment, medium and the program of a kind of video


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147851A (en) * 2019-05-29 2019-08-20 北京达佳互联信息技术有限公司 Method for screening images, device, computer equipment and storage medium
CN110147851B (en) * 2019-05-29 2022-04-01 北京达佳互联信息技术有限公司 Image screening method and device, computer equipment and storage medium
CN110297848A (en) * 2019-07-09 2019-10-01 深圳前海微众银行股份有限公司 Recommended models training method, terminal and storage medium based on federation's study
CN110297848B (en) * 2019-07-09 2024-02-23 深圳前海微众银行股份有限公司 Recommendation model training method, terminal and storage medium based on federal learning
CN110727785A (en) * 2019-09-11 2020-01-24 北京奇艺世纪科技有限公司 Recommendation method, device and storage medium for training recommendation model and recommending search text
CN111079001A (en) * 2019-11-26 2020-04-28 贝壳技术有限公司 Decoration recommendation information generation method and device, storage medium and electronic equipment
CN114861071A (en) * 2022-07-01 2022-08-05 北京百度网讯科技有限公司 Object recommendation method and device
CN114861071B (en) * 2022-07-01 2022-10-18 北京百度网讯科技有限公司 Object recommendation method and device
CN114969486A (en) * 2022-08-02 2022-08-30 平安科技(深圳)有限公司 Corpus recommendation method, apparatus, device and storage medium
CN114969486B (en) * 2022-08-02 2022-11-04 平安科技(深圳)有限公司 Corpus recommendation method, apparatus, device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200512

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 510630 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping square B radio tower 15 layer self unit 02

Applicant before: GUANGZHOU UC NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190118