CN109447244A - Advertisement recommendation method combining a gated recurrent unit neural network - Google Patents

Advertisement recommendation method combining a gated recurrent unit neural network

Info

Publication number
CN109447244A
CN109447244A (application CN201811183803.XA)
Authority
CN
China
Prior art keywords
neural network
text
model
advertisement
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811183803.XA
Other languages
Chinese (zh)
Inventor
陶久成
潘炎
印鉴
潘文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd
National Sun Yat Sen University
Original Assignee
Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd, National Sun Yat Sen University filed Critical Guangzhou Zhongda Nansha Technology Innovation Industrial Park Co Ltd
Priority to CN201811183803.XA
Publication of CN109447244A
Legal status: Pending (current)

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/044 Recurrent networks, e.g. Hopfield networks
                • G06N3/045 Combinations of networks
              • G06N3/08 Learning methods
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q30/00 Commerce
            • G06Q30/02 Marketing; Price estimation or determination; Fundraising
              • G06Q30/0241 Advertisements
                • G06Q30/0251 Targeted advertisements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Character Discrimination (AREA)

Abstract

The present invention provides an advertisement recommendation method combining a gated recurrent unit (GRU) neural network. The method feeds the text features of an advertisement, extracted by a GRU neural network, into the recommendation network as part of its input, so that the information in advertisement slogans and descriptive text can be extracted and text features receive dedicated handling. The GRU network is obtained by pre-training, and the trained network serves as one layer of the advertisement recommendation model, namely the text feature extraction layer, realizing end-to-end training. Compared with a model without a text feature extraction layer, the new method extracts the features in advertisement text better, discovering the semantic structure between words and between sentences, while avoiding the vanishing-gradient problem of traditional recurrent neural networks during training; and since the GRU network acts directly as a network layer of the whole training model, a single input yields the output, completing end-to-end training and reducing intermediate steps.

Description

Advertisement recommendation method combining a gated recurrent unit neural network
Technical field
The invention belongs to the field of deep learning, and relates to an end-to-end advertisement recommendation method that combines a gated recurrent unit (GRU) neural network with a Wide & Deep model.
Background art
In recent years, machine learning has developed continuously and has long been applied to many areas of daily life. Scenarios closely bound up with us, such as web page ranking and restaurant recommendation, have all become more efficient and accurate thanks to machine learning. As the demands placed on machine learning models grow higher and higher, the traditional workflow of manually extracting and combining features before training becomes increasingly expensive in both labor and computation, and often fails to achieve the effect we want. Tree models such as GBDT can, to some extent, replace the manual work of feature engineering, but the computational cost remains very large.
In this context, deep neural networks have developed rapidly. Through the connections between layers, a neural network realizes high-order feature combinations automatically, saving the cost of constructing a large number of features by hand. For this reason, deep neural networks are now used in many industrial scenarios.
However, a plain deep neural network still has problems. Features entering a deep neural network are combined at high order, but such combinations are hard to interpret and the way they are combined cannot be controlled. In particular, even when we know that certain feature combinations are highly effective, we cannot guarantee that the deep neural network will combine those features in the way we intend. Google therefore proposed a model that combines the memorization ability of a linear model with the generalization ability of a deep neural network: the Wide & Deep model.
On the other hand, advertisement recommendation often involves textual information about the ads. The traditional way of handling such text is to segment it into words and compute TF-IDF values to extract text features, or to obtain word vectors for the text via word2vec. These methods handle the semantic structure of the text poorly, and the text information must be preprocessed separately each time, so the pipeline cannot be trained end to end and intermediate steps are added.
The gated recurrent unit (GRU) neural network is a variant of the long short-term memory (LSTM) recurrent neural network and performs well in the field of natural language processing, with particularly large advantages on problems with temporal structure, such as semantic structure.
Summary of the invention
The present invention provides an advertisement recommendation method that combines a GRU neural network with a Wide & Deep model. The method extracts text features from advertisements more effectively and realizes end-to-end training and prediction.
To achieve the above technical effect, the technical solution of the present invention is as follows:
An advertisement recommendation method combining a gated recurrent unit neural network, comprising:
S1: pre-train a gated recurrent unit neural network and use the trained model as the text feature extraction layer of the advertisement recommendation model;
The gated recurrent unit neural network is a variant of the long short-term memory (LSTM) recurrent neural network. On one hand, the GRU inherits the advantage of the LSTM in handling long-range feature dependencies while avoiding vanishing and exploding gradients; on the other hand, the GRU simplifies the structure of the LSTM and is simpler than the LSTM. Specifically, the GRU layer needs to be pre-trained, and the trained network will handle text features as a network layer of the whole model. Pre-training performs forward propagation through the GRU network, computing the weights and biases of each layer's neuron connections. The input passes through the hidden layers to the output layer, giving the error against the true output. The connection parameters are updated by error back-propagation, and the final trained GRU network is responsible for handling text features as one layer of the final model;
S2: input the features other than the advertisement text features to the input layer of the advertisement recommendation model;
Specifically, ordinary continuous features and discrete features are fed directly into the Wide & Deep model for training;
S3: input the advertisement text content to the text feature extraction layer;
S4: extract text features with the text feature extraction layer and use them as input to the advertisement recommendation model;
Specifically, the text information of the advertisement is fed into the pre-trained GRU layer, and the extracted text features are passed on to the Wide & Deep model for training;
S5: train the Wide & Deep model, where the deep neural network part performs forward propagation and computes its error, and the linear part is trained directly and its error is computed;
Specifically, the deep neural network part performs a forward pass on the input data, from the input layer through the hidden layers to the output layer; the output is obtained through the computation between layers, and the error between the actual output and the true data is computed for the subsequent update of the network parameters. The linear model part applies gradient descent to the loss function to obtain the optimal parameters;
S6: the training error obtained in step S5 is fed back to both the linear model and the deep neural network model for parameter updates, where the deep neural network model updates its parameters by error back-propagation;
Specifically, the deep neural network part performs back-propagation according to the parameters and error returned by the linear part, updating the parameters of every layer;
S7: repeat steps S5 and S6 until the model converges.
Specifically, the output of the linear part and the output of the deep neural network part are combined into one output by a weighting function, which gives the prediction result of the model; a schematic sketch of this overall flow is given after the notes below.
Further, the error back-propagation in steps S1 and S6 requires taking the derivative of the error with respect to each parameter separately.
Further, the gradient descent of the linear model in step S5 requires taking the derivative of the loss function with respect to the parameters; the direction of the negative derivative is chosen as the fitting direction, which converges faster.
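As an illustration of steps S1 to S7, the following is a minimal sketch in Python, assuming PyTorch; the module and variable names (AdRecommender, wide_feats, text_ids and so on) and all dimensions are hypothetical and only mirror the flow described above, not the exact architecture of the invention.

```python
import torch
import torch.nn as nn

class AdRecommender(nn.Module):
    """Wide & Deep recommender with a GRU text feature extraction layer (S1-S4)."""
    def __init__(self, vocab_size, emb_dim, gru_hidden, wide_dim, deep_dims):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, gru_hidden, batch_first=True)  # text feature extraction layer
        self.wide = nn.Linear(wide_dim, 1)                        # linear (memorization) part
        layers, d_in = [], wide_dim + gru_hidden
        for d in deep_dims:
            layers += [nn.Linear(d_in, d), nn.ReLU()]
            d_in = d
        layers.append(nn.Linear(d_in, 1))
        self.deep = nn.Sequential(*layers)                        # deep (generalization) part

    def forward(self, wide_feats, text_ids):
        _, h_n = self.gru(self.embed(text_ids))       # S3/S4: ad text -> text features
        deep_x = torch.cat([wide_feats, h_n[-1]], dim=1)
        # Combine the wide and deep outputs into one prediction via a weighted sum (S7)
        return torch.sigmoid(self.wide(wide_feats) + self.deep(deep_x))

model = AdRecommender(vocab_size=10000, emb_dim=32, gru_hidden=64,
                      wide_dim=16, deep_dims=[128, 64])
# In the method, model.gru would first be pre-trained on ad text (S1);
# here we go straight to the joint training loop (S5-S7) on toy data.
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()
wide_feats = torch.randn(8, 16)                       # S2: continuous/discrete features
text_ids = torch.randint(0, 10000, (8, 20))           # S3: tokenized ad text
labels = torch.randint(0, 2, (8, 1)).float()
for _ in range(3):                                    # S7: repeat until convergence
    opt.zero_grad()
    loss = loss_fn(model(wide_feats, text_ids), labels)
    loss.backward()                                   # S6: error back-propagation
    opt.step()
```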
Compared with the prior art, the beneficial effects of the technical solution of the present invention are:
The present invention realizes an end-to-end training method from text information to prediction output, whereas the prior art mostly requires processing the text information and extracting text features in advance, which adds intermediate steps to the model training pipeline and does not conform to the idea of end-to-end training. The present invention places a pre-trained GRU network before the input layer as a text information preprocessing layer, i.e., the text feature extraction layer; its output is integrated with the other input data and then enters Wide & Deep training. The GRU network, as a variant of the LSTM network, can extract the contextual relations in text, solve the long- and short-term dependency problem, and prevent vanishing and exploding gradients, while also simplifying the structure of the LSTM: the original forget, input and output gates are reduced to an update gate and a reset gate, which is more concise. In addition, the Wide & Deep model used as the main model merges the advantages of both the linear model and the deep neural network, possessing both extremely strong memorization and extremely strong generalization, so that the whole model performs well.
Detailed description of the invention
Fig. 1 is a flow chart of the present invention;
Fig. 2 is a structural diagram of the GRU unit block of the present invention;
Fig. 3 is the Wide & Deep model structure of the present invention.
Specific embodiment
The attached figures are for illustrative purposes only and shall not be understood as limiting this patent;
For better illustration of the embodiment, certain components in the drawings are omitted, enlarged or reduced, and do not represent the size of the actual product;
It will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted in the drawings.
The technical solution of the present invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, an advertisement recommendation method combining a gated recurrent unit neural network proceeds as follows:
A. Pre-train the GRU network layer, i.e., the text feature extraction layer;
The structure of the GRU network is similar to that of an RNN, but in a GRU network, blocks composed of multiple GRU units form what is the hidden-layer network in a traditional network. The structure of a GRU unit is shown in Fig. 2: the three gates of the LSTM are merged into two, an update gate (synthesized from the forget gate and the input gate) and a reset gate, i.e. z_t and r_t in the figure. The update gate controls how much state information from the previous moment is carried into the current state: the larger the value of the update gate, the more state information from the previous moment is carried in. The reset gate controls how much state information from the previous moment is ignored: the smaller the value of the reset gate, the more is ignored. The presence of the gates enables the GRU to store and access information over long spans, preventing the vanishing-gradient and exploding-gradient problems.
As shown in Fig. 2, $x_t$ denotes the input of the network at time $t$, $h_{t-1}$ the hidden state at time $t-1$, $h_t$ the hidden state at time $t$, $r_t$ the reset gate at time $t$, and $z_t$ the update gate at time $t$. The forward propagation of the GRU is computed as follows:

Reset gate:

$$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$$

Update gate:

$$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$$

Candidate hidden state:

$$\tilde{h}_t = \tanh(W_h \cdot [r_t * h_{t-1}, x_t])$$

Hidden state:

$$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$$

Output:

$$y_t = \sigma(W_o \cdot h_t)$$

where $[\cdot,\cdot]$ denotes the concatenation of two vectors, $*$ denotes element-wise multiplication, and $\sigma$ denotes the sigmoid function.
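For concreteness, the following is a minimal NumPy sketch of one GRU forward step under the formulas above; the function name gru_step and the toy dimensions are illustrative assumptions, not part of the invention.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, W_r, W_z, W_h, W_o):
    """One GRU forward step implementing the formulas above."""
    xh = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    r_t = sigmoid(W_r @ xh)                     # reset gate
    z_t = sigmoid(W_z @ xh)                     # update gate
    h_cand = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))  # candidate state
    h_t = (1 - z_t) * h_prev + z_t * h_cand     # hidden state
    y_t = sigmoid(W_o @ h_t)                    # output
    return h_t, y_t

# Toy dimensions: input size 4, hidden size 3, output size 2.
rng = np.random.default_rng(0)
n_in, n_h, n_out = 4, 3, 2
W_r, W_z, W_h = (rng.standard_normal((n_h, n_h + n_in)) for _ in range(3))
W_o = rng.standard_normal((n_out, n_h))
h = np.zeros(n_h)
for x in rng.standard_normal((5, n_in)):        # run over a length-5 sequence
    h, y = gru_step(x, h, W_r, W_z, W_h, W_o)
```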
According to the forward-propagation computation of the GRU, the parameters to be trained are $W_r$, $W_z$, $W_h$ and $W_o$. The first three are concatenation weights, which are handled separately during learning by splitting each into the block acting on the hidden state and the block acting on the input, e.g.

$$W_r = [W_{rh}, W_{rx}]$$

and likewise for $W_z$ and $W_h$.

For the output layer of the GRU network, the input is $W_o \cdot h_t$ and the output of the output layer is $y_t = \sigma(W_o \cdot h_t)$.

From the output, the loss at time $t$, i.e. the gap between the model prediction and the true value, can be computed; summing over time gives the error of the sample, which is used in the subsequent error back-propagation to update the network parameters.

In the error back-propagation step, partial derivatives of the error obtained above are taken with respect to each parameter: the concatenation-weight blocks $W_{rh}, W_{rx}, W_{zh}, W_{zx}, W_{hh}, W_{hx}$ mentioned above, as well as the output-layer neuron weight $W_o$. The gradients follow from the chain rule, where $\sigma'$ denotes the derivative of the sigmoid function.

The network parameters are updated by repeating the above forward propagation and error back-propagation. When training is complete, a network is obtained that, given a text input, outputs text features; this network serves as the text feature extraction layer of the whole model, completing the GRU pre-training.
B. Feed the text data in the input to the text feature extraction layer;
The input data is divided into two parts: text data and general data. The text data enters, on its own, the text feature extraction layer pre-trained in step A, which extracts the text features; since this GRU layer has already been pre-trained, the text features can be obtained quickly.
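As a minimal sketch of this step, assuming PyTorch and hypothetical names and dimensions, the pre-trained layer maps a batch of tokenized advertisement texts to fixed-length text feature vectors:

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained text feature extraction layer from step A;
# in practice its weights would be loaded from the pre-training run.
embed = nn.Embedding(10000, 32)
gru = nn.GRU(32, 64, batch_first=True)

text_ids = torch.randint(0, 10000, (8, 20))  # a batch of tokenized ad texts
with torch.no_grad():                         # the layer is already trained,
    _, h_n = gru(embed(text_ids))             # so the features come quickly
text_features = h_n[-1]                       # one 64-dim text feature vector per ad
```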
C. Input the text features and the other features to the Wide & Deep model;
The text features extracted by the text feature extraction layer in step B, together with the other general features, such as continuous features and discrete features, enter the Wide & Deep model as input for training.
D. Train the Wide & Deep model and update the parameters;
The Wide & Deep model combines a linear model and a deep neural network, exploiting both the good memorization of the linear model and the powerful generalization of the deep neural network. By passing parameters and errors between the two parts, it realizes joint training of the two models, allowing each to concentrate on the work it is best at, thereby obtaining good results.
The linear model part uses logistic regression. Gradient descent is applied to the objective function of the logistic regression for iterative optimization, finding the parameter values that minimize the objective function, i.e. the loss function, which serve as the final parameters of the model. The method is as follows:
The likelihood function to be maximized is

$$L(\theta) = \prod_{i=1}^{m} h_\theta(x_i)^{y_i}\,\bigl(1 - h_\theta(x_i)\bigr)^{1 - y_i}$$

Taking the logarithm of the likelihood and negating it gives the loss function

$$J(\theta) = -\sum_{i=1}^{m} \Bigl[ y_i \log h_\theta(x_i) + (1 - y_i)\log\bigl(1 - h_\theta(x_i)\bigr) \Bigr]$$

written in matrix form as

$$J(\theta) = -Y \odot \log h_\theta(X) - (E - Y) \odot \log\bigl(E - h_\theta(X)\bigr)$$

Differentiating with respect to the parameter $\theta$ for gradient descent and simplifying gives

$$\frac{\partial J(\theta)}{\partial \theta} = X^{T}\bigl(h_\theta(X) - Y\bigr)$$

Therefore the parameter $\theta$ is iterated as

$$\theta = \theta - \alpha\, X^{T}\bigl(h_\theta(X) - Y\bigr)$$

where $\alpha$ is the step size.
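The following is a minimal NumPy sketch of this iteration, assuming a design matrix X with a bias column and labels Y in {0, 1}; the data and names are illustrative only.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def logistic_gd(X, Y, alpha=0.001, iters=2000):
    """Gradient descent for logistic regression: theta <- theta - alpha * X^T (h - Y)."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        h = sigmoid(X @ theta)          # h_theta(X)
        theta -= alpha * X.T @ (h - Y)  # the iteration given above
    return theta

# Toy data: 100 samples, a bias column plus 2 features.
rng = np.random.default_rng(1)
X = np.hstack([np.ones((100, 1)), rng.standard_normal((100, 2))])
true_theta = np.array([0.5, 2.0, -1.0])
Y = (rng.random(100) < sigmoid(X @ true_theta)).astype(float)
theta = logistic_gd(X, Y)
```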
For the deep neural network part, the network parameters are updated by forward propagation and error back-propagation. Forward propagation proceeds as follows.

Given the total number of layers $L$, the weight matrices $W$ and bias vectors $b$ of all hidden layers and the output layer, and the input vector $x$, initialize $a^{1} = x$.

For each subsequent layer, compute

$$a^{l} = \sigma(z^{l}) = \sigma(W^{l} a^{l-1} + b^{l})$$

Then $a^{L}$ is the output. From the output value, the error against the true value can be computed:

$$J(W, b) = \frac{1}{2}\,\bigl\lVert a^{L} - y \bigr\rVert_{2}^{2}$$
Substituting $a^{l} = \sigma(z^{l}) = \sigma(W^{l} a^{l-1} + b^{l})$ into the above and differentiating with respect to $W$ and $b$ gives, for the output layer,

$$\frac{\partial J}{\partial W^{L}} = \bigl(a^{L} - y\bigr) \odot \sigma'(z^{L})\,\bigl(a^{L-1}\bigr)^{T}, \qquad \frac{\partial J}{\partial b^{L}} = \bigl(a^{L} - y\bigr) \odot \sigma'(z^{L})$$

Using the chain rule, the parameters of the earlier hidden layers can be differentiated as well. Letting

$$\delta^{L} = \bigl(a^{L} - y\bigr) \odot \sigma'(z^{L}), \qquad \delta^{l} = \bigl(W^{l+1}\bigr)^{T} \delta^{l+1} \odot \sigma'(z^{l}),$$

gradient descent then updates each layer's parameters simultaneously:

$$W^{l} = W^{l} - \alpha\, \delta^{l} \bigl(a^{l-1}\bigr)^{T}, \qquad b^{l} = b^{l} - \alpha\, \delta^{l}$$
The above operations are repeated until both parts of the model converge. In this way the linear part and the deep neural network part complete the training of their parameters; once both parts are trained, the final result is computed by combining their outputs through a weighting function.
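To make the forward and backward passes above concrete, the following is a minimal NumPy sketch for a small fully connected network; the layer widths and names are illustrative assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def d_sigmoid(a):
    s = sigmoid(a)
    return s * (1.0 - s)

rng = np.random.default_rng(2)
sizes = [4, 5, 3, 1]  # layer widths; a^1 = x has width 4
W = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
b = [np.zeros((sizes[i + 1], 1)) for i in range(len(sizes) - 1)]
x = rng.standard_normal((4, 1))
y = np.array([[1.0]])
alpha = 0.1

for _ in range(100):
    # Forward propagation: a^l = sigma(W^l a^{l-1} + b^l)
    a, zs = [x], []
    for Wl, bl in zip(W, b):
        zs.append(Wl @ a[-1] + bl)
        a.append(sigmoid(zs[-1]))
    # Back-propagation: delta^L = (a^L - y) * sigma'(z^L), then the chain rule backwards
    delta = (a[-1] - y) * d_sigmoid(zs[-1])
    for l in range(len(W) - 1, -1, -1):
        grad_W = delta @ a[l].T            # dJ/dW^l = delta^l (a^{l-1})^T
        grad_b = delta
        if l > 0:
            delta = (W[l].T @ delta) * d_sigmoid(zs[l - 1])
        W[l] -= alpha * grad_W
        b[l] -= alpha * grad_b
```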
The present invention is intended to propose an advertisement recommendation method combining a gated recurrent unit neural network. The method mainly adds a gated recurrent unit neural network before the Wide & Deep model as a feature extraction layer for text information, realizing the extraction of the text information and thereby completing end-to-end training.
Its features and advantages are as follows: compared with general advertisement recommendation methods, it extracts the text features in advertisements better, and through the pre-training of the GRU network it realizes an end-to-end pipeline from text to final prediction, reducing intermediate steps. Meanwhile, compared with methods such as TF-IDF, the GRU network better captures the long- and short-term dependencies and contextual relations in text, so the extracted text features are more accurate. The Wide & Deep model can simultaneously exploit the memorization ability of the linear model and the superior generalization ability of the deep neural network, each performing its own function, with higher accuracy.
Therefore, combining the GRU network with the Wide & Deep model can extract the text features in advertisements well, realize end-to-end training, and combine generalization with memorization to mine the relations between features.
The positional relations described in the drawings are for illustration only and shall not be understood as limiting this patent.

Claims (4)

1. An advertisement recommendation method combining a gated recurrent unit neural network, characterized in that the advertisement recommendation model is a Wide & Deep model ("wide and deep"), the model comprising a linear part and a deep neural network part, and the method comprising the following steps:
S1: pre-train a gated recurrent unit neural network and use the trained model as the text feature extraction layer of the advertisement recommendation model;
S2: input the features other than the advertisement text features to the input layer of the advertisement recommendation model;
S3: input the advertisement text content to the text feature extraction layer;
S4: extract text features with the text feature extraction layer and use them as input to the advertisement recommendation model;
S5: train the Wide & Deep model, wherein the deep neural network part performs forward propagation and computes its error, and the linear part is trained directly and its error is computed;
S6: feed the training error obtained in step S5 back to both the linear model and the DNN model for parameter updates, wherein the DNN model updates its parameters by error back-propagation;
S7: repeat steps S5 and S6 until the model converges.
2. The advertisement recommendation method combining a gated recurrent unit neural network according to claim 1, characterized in that the pre-training of the gated recurrent unit neural network in step S1 computes the network error by forward propagation through the neural network, and then updates the parameters of each layer of the network by error back-propagation until a convergent result is obtained.
3. The advertisement recommendation method combining a gated recurrent unit neural network according to claim 1, characterized in that the text feature extraction layer in step S4 is implemented by a GRU network and can extract the semantic structure information in the text, with results better than those obtained by common bag-of-words models.
4. The advertisement recommendation method combining a gated recurrent unit neural network according to claim 1, characterized in that the error feedback of step S6 between the linear model and the DNN model is mutual, and the weight update of either single model is jointly affected by the training error of both the wide part and the deep part.
CN201811183803.XA 2018-10-11 2018-10-11 Advertisement recommendation method combining a gated recurrent unit neural network Pending CN109447244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811183803.XA CN109447244A (en) Advertisement recommendation method combining a gated recurrent unit neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811183803.XA CN109447244A (en) Advertisement recommendation method combining a gated recurrent unit neural network

Publications (1)

Publication Number Publication Date
CN109447244A 2019-03-08

Family

ID=65545961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811183803.XA Pending CN109447244A (en) Advertisement recommendation method combining a gated recurrent unit neural network

Country Status (1)

Country Link
CN (1) CN109447244A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682217A * 2016-12-31 2017-05-17 成都数联铭品科技有限公司 Enterprise secondary industry classification method based on automatic information screening and learning
US20180336889A1 * 2017-05-19 2018-11-22 Baidu Online Network Technology (Beijing) Co., Ltd. Method and Apparatus of Building Acoustic Feature Extracting Model, and Acoustic Feature Extracting Method and Apparatus
CN107577662A * 2017-08-08 2018-01-12 上海交通大学 Semantic understanding system and method for Chinese text
CN107807971A * 2017-10-18 2018-03-16 北京信息科技大学 Automatic image semantic description method
CN108416625A * 2018-02-28 2018-08-17 阿里巴巴集团控股有限公司 Recommendation method and apparatus for marketing products
CN108596645A * 2018-03-13 2018-09-28 阿里巴巴集团控股有限公司 Information recommendation method, apparatus and device

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993668B * 2019-04-09 2021-07-13 桂林电子科技大学 Scenic spot recommendation method based on gated recurrent unit neural network
CN109993668A * 2019-04-09 2019-07-09 桂林电子科技大学 Scenic spot recommendation method based on gated recurrent unit neural network
CN110225460B * 2019-06-05 2021-03-23 三维通信股份有限公司 Indoor positioning method and device based on deep neural network
CN110225460A * 2019-06-05 2019-09-10 三维通信股份有限公司 Indoor positioning method and device based on deep neural network
CN110299194A * 2019-06-06 2019-10-01 昆明理工大学 Similar case recommendation method based on comprehensive feature representation and an improved wide & deep model
CN110222894A * 2019-06-06 2019-09-10 阿里巴巴集团控股有限公司 Advertisement delivery method, device and equipment
CN110716964A * 2019-09-19 2020-01-21 卓尔智联(武汉)研究院有限公司 Newborn naming method based on GRU network, electronic device and storage medium
CN111222628A * 2019-11-20 2020-06-02 深圳前海微众银行股份有限公司 Method, device, system and readable storage medium for optimizing recurrent neural network training
CN111222628B * 2019-11-20 2023-09-26 深圳前海微众银行股份有限公司 Method, device, system and readable storage medium for optimizing recurrent neural network training
CN113288050A * 2021-04-23 2021-08-24 山东师范大学 Multidimensional enhanced epileptic seizure prediction system based on graph convolutional network
CN114500197A * 2022-01-24 2022-05-13 华南理工大学 Method, system, device and storage medium for post-equalization in visible light communication
CN114500197B * 2022-01-24 2023-05-23 华南理工大学 Method, system, device and storage medium for post-equalization in visible light communication
CN114462584A * 2022-04-11 2022-05-10 北京达佳互联信息技术有限公司 Recommendation model training method, recommendation method, device, server and medium
CN114462584B * 2022-04-11 2022-07-22 北京达佳互联信息技术有限公司 Recommendation model training method, recommendation method, device, server and medium

Similar Documents

Publication Title
CN109447244A (en) Advertisement recommendation method combining a gated recurrent unit neural network
CN111488734B (en) Emotion feature representation learning system and method based on global interaction and syntactic dependency
CN110163299B (en) Visual question answering method based on a bottom-up attention mechanism and a memory network
CN107239444B (en) Word vector training method and system fusing part of speech and position information
CN106022237B (en) End-to-end convolutional neural network pedestrian detection method
CN109299396A (en) Convolutional neural network collaborative filtering recommendation method and system fusing an attention model
CN110609899B (en) Specific target sentiment classification method based on an improved BERT model
CN110728541B (en) Information streaming media advertising creative recommendation method and device
CN108664632A (en) Text sentiment classification algorithm based on convolutional neural networks and an attention mechanism
CN109740154A (en) Fine-grained sentiment analysis method for online comments based on multi-task learning
CN108038205B (en) Viewpoint analysis prototype system for Chinese microblogs
CN109597891A (en) Text sentiment analysis method based on bidirectional long short-term memory neural networks
CN110083700A (en) Enterprise public opinion sentiment classification method and system based on convolutional neural networks
Chennupati et al. Auxnet: Auxiliary tasks enhanced semantic segmentation for automated driving
CN109582956A (en) Text representation method and device applied to sentence embedding
CN107066445A (en) Deep learning method for attribute sentiment word vectors
CN111460157B (en) Cyclic convolution multitask learning method for multi-domain text classification
CN110866542A (en) Deep representation learning method based on controllable feature fusion
CN108427665A (en) Automatic text generation method based on LSTM-type RNN models
CN109284361A (en) Entity extraction method and system based on deep learning
CN108647191A (en) Sentiment dictionary construction method based on supervised sentiment text and word vectors
CN110263165A (en) User comment sentiment analysis method based on semi-supervised learning
CN109101629A (en) Network representation method based on deep network structure and node attributes
CN109584006A (en) Cross-platform product matching method based on a deep matching model
CN109117943A (en) Method for enhancing network representation learning using multi-attribute information

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20190308)