CN113688309B - Training method for a recommendation reason generation model, and recommendation reason generation method and device

Training method for a recommendation reason generation model, and recommendation reason generation method and device

Info

Publication number
CN113688309B
Authority
CN
China
Prior art keywords
network model
word
recommendation
comment
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110838589.2A
Other languages
Chinese (zh)
Other versions
CN113688309A (en)
Inventor
王姿雯
王思睿
易根良
张富峥
武威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202110838589.2A
Publication of CN113688309A
Application granted
Publication of CN113688309B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a training method for a recommendation reason generation model, and a recommendation reason generation method and device. The training method comprises: training a generator network model, a first discriminator network model and a second discriminator network model according to training sample data until a convergence condition is met. The first discriminator network model judges whether the recommendation reason output by the generator network model belongs to the comment annotation text; the second discriminator network model judges whether the recommendation reason output by the generator network model conforms to the tag click features. The embodiment of the invention uses user features as training sample data, i.e. tag click features are introduced into the training process of the generation model. The second discriminator network model judges whether the recommendation reason output by the generator network model conforms to the tag click features, so that the tag click features guide the training of the generation model and the generation model can generate recommendation reasons with a high click-through rate.

Description

Training method for a recommendation reason generation model, and recommendation reason generation method and device
Technical Field
The invention relates to the field of internet technology, and in particular to a training method and device for a recommendation reason generation model, and a recommendation reason generation method and device.
Background
Recommendation reasons play a large role in helping users quickly understand the characteristics of a merchant, assisting users in making visit decisions, and promoting content consumption. At present, recommendation reasons are enabled in many product sections such as search and recommendation, and they have a positive effect on click-through rate and conversion rate.
In the related art, the merchant recommendation reason is mainly obtained through the following schemes:
(1) Manual writing: professionally generated content (PGC) written by professional operators can guarantee recommendation reasons of high quality and rich expression.
(2) Comment extraction: recommendation reasons are extracted from the merchant's high-quality user comments. This scheme can make full use of the massive user generated content (UGC) of the comment service to obtain recommendation reasons that are closer to the user's perspective and more convincing.
(3) Template filling: recommendation reasons are obtained by filling user and merchant information into templates designed by professional operators, for example, "users from [city name] like this old store with a history of X years". This scheme has controllable quality, can display personalized information of the user, and gives the user a sense of pleasant surprise.
(4) Text generation: a sequence-to-sequence (Seq2Seq) model is trained with merchant information, user comments and the like as input and existing high-quality recommendation reasons as samples, and the model generates recommendation reasons.
However, the above solutions all have technical drawbacks:
(1) Manual writing: this scheme requires a lot of time and labor cost, and recommendation reasons cannot be written individually and customized for users with different preferences.
(2) Comment extraction: this scheme depends on the number of high-quality UGC items a merchant has; for lower-tier cities or new stores, it is difficult to extract a sufficient amount of high-quality UGC.
(3) Template filling: the language form of this scheme is relatively monotonous.
(4) Text generation: existing text generation schemes rarely take user features into consideration, and the generation target usually only considers language-model metrics. The quality of the language model and the performance of online metrics are not fully equivalent, so when this scheme is used alone the quality of online generation is uncontrollable and bad cases are easily produced.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a method and an apparatus for training a recommendation reason generation model, and a recommendation reason generation method and an apparatus, which overcome the above problems or at least partially solve the above problems.
In order to solve the above problem, according to a first aspect of the embodiments of the present invention, a training method for a recommendation reason generation model is disclosed, including: acquiring training sample data, wherein the training sample data comprises user features and comment annotation texts of a POI, and the user features comprise tag click features; training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet a preset convergence condition; wherein the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text, and the second discriminator network model is used for judging whether the recommendation reason output by the generator network model conforms to the tag click features.
Optionally, the training the generator network model, the first discriminator network model and the second discriminator network model according to the training sample data includes: inputting the training sample data to the generator network model; coding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason; generating word embedding vectors of recommended words of the recommendation reason according to the probability distribution result; and inputting the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment labeling text and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
Optionally, the encoding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommended reason includes: respectively coding the word embedded vector of the user characteristic and the word embedded vector of the comment labeling text based on the generator network model to obtain a coding result of the training sample data; and decoding the coding result based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason.
Optionally, the obtaining an encoding result of the training sample data by respectively encoding the word embedding vector of the user feature and the word embedding vector of the comment annotation text based on the generator network model includes: coding the word embedded vector of the user characteristic based on the generator network model to obtain a coding result of the user characteristic; coding the word embedded vector of the comment annotation text based on the generator network model to obtain a coding result of the comment annotation text; and splicing the coding result of the user characteristic and the coding result of the comment labeling text into the coding result of the training sample data.
Optionally, the decoding the encoding result based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason includes: decoding the encoding result in copy mode based on the generator network model to obtain an attention distribution result for each recommended word of the recommendation reason; extracting comment words from the comment annotation text according to the attention distribution result of each recommended word, so that the candidate set of each recommended word of the recommendation reason is restricted to the comment words of the comment annotation text; and taking the attention distribution result of each recommended word as the probability distribution result of the corresponding recommended word.
Optionally, the generating a word embedding vector of each recommended word of the reason for recommendation according to the probability distribution result includes: and performing weighted summation on the word embedding vector of each comment word according to the probability distribution result of each recommendation word to obtain the word embedding vector of each recommendation word.
Optionally, the training the generator network model, the first discriminator network model and the second discriminator network model according to the training sample data includes: training the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model satisfy the convergence condition; then, keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged, adjusting the parameters of the first discriminator network model; and keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged, adjusting the parameters of the generator network model, until the generator network model and the first discriminator network model satisfy the convergence condition.
According to the second aspect of the embodiments of the present invention, there is also disclosed a method for generating a reason for recommendation, including: acquiring user characteristics, wherein the user characteristics comprise a label clicking characteristic; inputting the user features into the generated model trained according to the method of the first aspect, and outputting POI recommendation reasons for the user features.
Optionally, the inputting the user feature into the generative model trained according to the method of the first aspect, and outputting a POI recommendation reason for the user feature includes: generating a probability distribution result of each recommended word of the POI recommendation reason according to a generator network model of the generation model; and decoding the probability distribution result to obtain the POI recommendation reason.
Optionally, the decoding the probability distribution result to obtain the POI recommendation reason includes: decoding the probability distribution result by means of beam search decoding to obtain a locally optimal solution; and taking the locally optimal solution as the POI recommendation reason.
Optionally, the method further comprises: inputting the POI recommendation reason into a trained text classification model and a perplexity-based language model, and outputting a linguistic judgment result of the POI recommendation reason.
Optionally, the method further comprises: and performing category offset judgment and entity existence judgment on the POI recommendation reason so as to ensure the correlation between the POI recommendation result and the user characteristics.
According to the third aspect of the embodiments of the present invention, there is also disclosed a training apparatus for a recommendation reason generation model, including: an acquisition module, configured to acquire training sample data, wherein the training sample data comprises user features and comment annotation texts of a POI, and the user features comprise tag click features; a training module, configured to train a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet a preset convergence condition; wherein the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text, and the second discriminator network model is used for judging whether the recommendation reason output by the generator network model conforms to the tag click features.
Optionally, the training module comprises: a sample input module for inputting the training sample data to the generator network model; the coding and decoding module is used for coding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason; the word embedding module is used for generating word embedding vectors of the recommended words of the recommendation reasons according to the probability distribution result; a word embedding input module, configured to input the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment tagging text, and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
Optionally, the encoding and decoding module includes: the coding module is used for respectively coding the word embedded vector of the user characteristic and the word embedded vector of the comment labeling text based on the generator network model to obtain a coding result of the training sample data; and the decoding module is used for decoding the coding result based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason.
Optionally, the encoding module includes: the user coding module is used for coding the word embedded vector of the user characteristics based on the generator network model to obtain a coding result of the user characteristics; the comment encoding module is used for encoding the word embedded vector of the comment tagged text based on the generator network model to obtain an encoding result of the comment tagged text; and the result splicing module is used for splicing the coding result of the user characteristic and the coding result of the comment labeling text into the coding result of the training sample data.
Optionally, the decoding module includes: the attention decoding module is used for decoding the coding result according to a copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason; the word extraction module is used for extracting each comment word from the comment annotation text according to the attention distribution result of each recommendation word so as to reduce the number of each recommendation word of the recommendation reason to be equal to the number of each comment word of the comment annotation text; and the probability distribution determining module is used for taking the attention distribution result of each recommended word as the probability distribution result of each corresponding recommended word.
Optionally, the word embedding module is configured to perform weighted summation on the word embedding vector of each of the comment words according to a result of probability distribution of each of the recommendation words, so as to obtain a word embedding vector of each of the recommendation words.
Optionally, the training module is configured to train the generator network model and the second determiner network model according to the training sample data until the generator network model and the second determiner network model satisfy the convergence condition; keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged, adjusting the parameters of the first discriminator network model, keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged, and adjusting the parameters of the generator network model until the generator network model and the first discriminator network model meet the convergence condition.
According to a fourth aspect of the embodiments of the present invention, there is also disclosed an apparatus for generating a reason for recommendation, including: the system comprises a characteristic acquisition module, a characteristic acquisition module and a characteristic acquisition module, wherein the characteristic acquisition module is used for acquiring user characteristics which comprise a label click characteristic; an input/output module, configured to input the user feature into the generated model trained according to the method of the first aspect, and output a POI recommendation reason for the user feature.
Optionally, the input/output module includes: a probability distribution result generation module for generating a probability distribution result of each recommended word of the POI recommendation reason according to the generator network model of the generation model; and the probability distribution result decoding module is used for decoding the probability distribution result to obtain the POI recommendation reason.
Optionally, the probability distribution result decoding module is configured to decode the probability distribution result by means of beam search decoding to obtain a locally optimal solution, and take the locally optimal solution as the POI recommendation reason.
Optionally, the apparatus further comprises: a linguistic processing module, configured to input the POI recommendation reason into a trained text classification model and a perplexity-based language model and output a linguistic judgment result of the POI recommendation reason.
Optionally, the apparatus further comprises: and the correlation processing module is used for carrying out category deviation judgment and entity existence judgment on the POI recommendation reason so as to ensure the correlation between the POI recommendation result and the user characteristics.
Compared with the prior art, the technical scheme provided by the embodiment of the invention has the following advantages:
the training scheme for the recommendation reason generation model provided by the embodiment of the invention acquires training sample data containing user features and comment annotation texts of a Point of Interest (POI), wherein the user features contain tag click features. The generator network model, the first discriminator network model and the second discriminator network model are trained according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet a preset convergence condition. The first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text; the second discriminator network model is used for judging whether the recommendation reason output by the generator network model conforms to the tag click features. The embodiment of the invention uses user features as training sample data, i.e. tag click features are introduced into the training process of the generation model. The second discriminator network model judges whether the recommendation reason output by the generator network model conforms to the tag click features, so that the tag click features guide the training of the generation model and the generation model can generate recommendation reasons with a high click-through rate.
Drawings
FIG. 1 is a flowchart illustrating the steps of a method for training a recommendation reason generation model according to an embodiment of the present invention;
FIG. 2 is a flowchart of the steps for training a generator network model, a first discriminator network model and a second discriminator network model in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a network structure of a generative model according to an embodiment of the invention;
FIG. 4 is a flowchart illustrating steps of a method for generating a reason for recommendation according to an embodiment of the present invention;
FIG. 5 is a block diagram of a training apparatus for generating a model of a reason for recommendation according to an embodiment of the present invention;
fig. 6 is a block diagram showing a configuration of a recommendation reason generation apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
Referring to fig. 1, a flowchart illustrating steps of a training method for a reason for recommendation generative model according to an embodiment of the present invention is shown. The method for training the recommendation reason generation model specifically includes the following steps:
step 101, obtaining training sample data.
In an embodiment of the invention, the training sample data may contain user features and comment annotation texts of a POI, wherein the user features may comprise tag click features. In practical applications, the comment annotation text may be a sentence, for example, "I stay here on every business trip; it is a good choice for business travel". The tag click features may be historical high-frequency click features, such as "business", "comfort", "meeting", "high-end", "luxury", "swimming pool", "parking lot". The POI may be a merchant, such as a restaurant, hotel, or amusement venue.
And 102, training the generator network model, the first discriminator network model and the second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions.
In an embodiment of the invention, the generation model may comprise a generator network model, a first discriminator network model and a second discriminator network model. In practical applications, the generator network model may adopt the network structure of a Pointer Network within a sequence-to-sequence framework. The first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text; the second discriminator network model is used for judging whether the recommendation reason output by the generator network model conforms to the tag click features. In practical applications, both the first discriminator network model and the second discriminator network model may adopt a text classification (TextCNN) network structure.
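As an illustration of the discriminator structure mentioned above, the following is a minimal TextCNN-style binary classifier sketched in PyTorch; the class name, filter sizes and embedding dimension are assumptions for illustration and are not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNNDiscriminator(nn.Module):
    """Illustrative TextCNN binary classifier (assumed structure).

    Both discriminators (real/fake judgment and click prediction) can use this
    kind of structure; it consumes word embedding vectors directly, so the
    generator's weighted (soft) embeddings keep the whole path differentiable.
    """
    def __init__(self, emb_dim=128, num_filters=64, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), 1)

    def forward(self, emb_seq):                 # emb_seq: (batch, seq_len, emb_dim)
        x = emb_seq.transpose(1, 2)             # Conv1d expects (batch, emb_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)     # concatenated max-pooled filter responses
        return torch.sigmoid(self.fc(features)).squeeze(-1)   # P(positive class)
```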
According to the training scheme for the recommendation reason generation model, training sample data containing user features and comment annotation texts of a POI are acquired, wherein the user features contain tag click features. The generator network model, the first discriminator network model and the second discriminator network model are trained according to the training sample data until they meet a preset convergence condition. The first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text; the second discriminator network model is used for judging whether the recommendation reason output by the generator network model conforms to the tag click features. The embodiment of the invention uses user features as training sample data, i.e. tag click features are introduced into the training process of the generation model. The second discriminator network model judges whether the recommendation reason output by the generator network model conforms to the tag click features, so that the tag click features guide the training of the generation model and the generation model can generate recommendation reasons with a high click-through rate.
In a preferred embodiment of the present invention, referring to fig. 2, a flowchart of the steps of training the generator network model, the first discriminator network model and the second discriminator network model according to an embodiment of the present invention is shown. One embodiment of training the generator network model, the first discriminator network model and the second discriminator network model according to training sample data includes the following steps.
Step 201, inputting training sample data to a generator network model.
In an embodiment of the present invention, the training sample data may contain a plurality of comment annotation texts of POIs and user features. The user features include identity features and tag click features, where the identity features may include gender, occupation, consumption level, and the like, and the tag click features indicate the POI tags that users matching the identity features click frequently. The comment annotation text is a comment text generated by users matching the user features.
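For concreteness, one training sample might be organized as below; the field names and example values are hypothetical and only illustrate the combination of identity features, tag click features and a comment annotation text described above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrainingSample:
    identity_features: List[str]   # e.g. gender, occupation, consumption level
    tag_click_features: List[str]  # POI tags this user group clicks at high frequency
    comment_text: str              # high-quality POI review used as the comment annotation text

sample = TrainingSample(
    identity_features=["female", "office worker", "mid-high consumption level"],
    tag_click_features=["business", "comfort", "meeting", "swimming pool"],
    comment_text="I stay here on every business trip; it is a good choice for business travel.",
)
```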
And 202, carrying out encoding processing and decoding processing on training sample data based on a generator network model to obtain a probability distribution result of each recommended word of a recommendation reason.
In the embodiment of the invention, the word embedding vector of the user features and the word embedding vector of the comment annotation text can be encoded separately based on the generator network model to obtain the encoding result of the training sample data. The encoding result is then decoded based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason. When generating the word embedding vector of the user features and the word embedding vector of the comment annotation text, a single set of word embedding parameters may be shared.
In one implementation of encoding the word embedding vector of the user features and the word embedding vector of the comment annotation text separately based on the generator network model to obtain the encoding result of the training sample data, the word embedding vector of the user features is encoded based on the generator network model to obtain the encoding result of the user features, the word embedding vector of the comment annotation text is encoded based on the generator network model to obtain the encoding result of the comment annotation text, and the encoding result of the user features and the encoding result of the comment annotation text are then concatenated into the encoding result of the training sample data.
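A minimal sketch of such an encoder, assuming PyTorch, a shared embedding table and bidirectional LSTM encoders; the hidden sizes and module names are illustrative.

```python
import torch
import torch.nn as nn

class SampleEncoder(nn.Module):
    """Encode user features and comment text separately, then concatenate."""
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # shared word embedding parameters
        self.user_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.comment_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, user_ids, comment_ids):
        # user_ids, comment_ids: (batch, len) token-id tensors for the two inputs
        user_enc, _ = self.user_rnn(self.embedding(user_ids))
        comment_enc, _ = self.comment_rnn(self.embedding(comment_ids))
        # Concatenate along the sequence axis to form the encoding of the whole sample.
        return torch.cat([user_enc, comment_enc], dim=1)     # (batch, len_u + len_c, 2*hidden)
```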
In practical applications, the encoding process may adopt a bidirectional Long Short-Term Memory (LSTM) network structure, or may adopt a convolutional structure or a Transformer (a natural language processing model) structure.
In one implementation of decoding the encoding result based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason, the encoding result is decoded in copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason; comment words are extracted from the comment annotation text according to the attention distribution result of each recommended word, so that the candidate set of each recommended word of the recommendation reason is restricted to the comment words of the comment annotation text; and the attention distribution result of each recommended word is taken as the probability distribution result of the corresponding recommended word. In the embodiment of the invention, by multiplexing the parameters of the Pointer Network structure, the attention distribution computed at each decoding step is directly used as the output probability distribution of the recommended word, which reduces the complexity of the generator network model.
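A sketch of one copy-mode decoding step under these assumptions: the attention weights over the encoder outputs are returned directly as the probability distribution of the next recommended word, so the output vocabulary at each step is exactly the set of input words. The projection layer and tensor shapes are illustrative, not the patent's exact formulation.

```python
import torch
import torch.nn.functional as F

def copy_step(decoder_state, encoder_outputs, attn_proj):
    """One decoding step in pure copy mode (illustrative).

    decoder_state:   (batch, hidden)            current decoder hidden state
    encoder_outputs: (batch, src_len, hidden)   encoder outputs over the input words
    attn_proj:       torch.nn.Linear(hidden, hidden), an assumed attention projection
    """
    query = attn_proj(decoder_state).unsqueeze(2)          # (batch, hidden, 1)
    scores = torch.bmm(encoder_outputs, query).squeeze(2)  # (batch, src_len) attention scores
    attn = F.softmax(scores, dim=1)                        # attention distribution
    return attn                                            # reused as the word probability distribution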
Step 203, generating word embedding vectors of each recommended word of the recommendation reason according to the probability distribution result.
In the embodiment of the invention, the word embedding vectors of the comment words are subjected to weighted summation according to the probability distribution result of each recommended word, so that the word embedding vectors of the recommended words are obtained.
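A one-line sketch of that weighted summation; the shapes are illustrative, and the resulting soft embedding keeps the generator-to-discriminator path differentiable.

```python
import torch

def soft_word_embedding(word_dist, input_embeddings):
    # word_dist:        (batch, src_len)           probability over the input comment words
    # input_embeddings: (batch, src_len, emb_dim)  embeddings of those words
    # Returns the weighted-sum embedding of one generated recommendation word.
    return torch.bmm(word_dist.unsqueeze(1), input_embeddings).squeeze(1)
```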
Step 204, inputting the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment labeling text, and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
In the embodiment of the invention, the word embedding vector of each recommended word and the word embedding vector of the user characteristic can be spliced and then input into the first discriminator network model and the second discriminator network model.
In a preferred embodiment of the present invention, in one implementation of training the generator network model, the first discriminator network model and the second discriminator network model according to training sample data, the generator network model and the second discriminator network model are first trained according to the training sample data until the generator network model and the second discriminator network model satisfy the convergence condition; then, keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged, the parameters of the first discriminator network model are adjusted; and keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged, the parameters of the generator network model are adjusted, until the generator network model and the first discriminator network model satisfy the convergence condition.
In a preferred embodiment of the present invention, referring to fig. 3, a network structure diagram of the generation model of an embodiment of the present invention is shown. In FIG. 3, a Pointer Network structure is used as the generator network model G of the generation model, and a TextCNN network structure is used as the discriminator network model D of the generation model. The input items of the generator network model G contain a number of high-quality comments (comment annotation texts) of the POI and the user features. The user features include identity features (profile features) such as gender, occupation and consumption level, and real-time features such as high-frequency display tags of POIs clicked by the user historically; both inputs share one set of word embedding parameters, and the corresponding word embedding vectors are generated. The word embedding vectors are then encoded separately and the encodings are concatenated. A bidirectional LSTM encoding structure, a convolutional structure, or a Transformer structure may be used for encoding. The decoding process adopts the copy mode of an attention-based decoder and takes words from the comments and the user features according to the attention distribution result. In each decoding step, the attention distribution result calculated by the generator network model G is directly used as the probability distribution result output by the Pointer Network, and this parameter multiplexing greatly reduces the complexity of the generator network model G. According to the probability distribution result output by the generator network model G for each word of the comment, the word embedding vectors of all input words are weighted and summed to obtain the word embedding vector of each recommended word of the recommendation reason. The loss function of the attention-based decoder is Loss_s. The word embedding vector of the recommended word is concatenated with the word embedding vector of the user features and then input into the discriminator network model D. The discriminator network model D performs two classification tasks. Task 1 is to judge whether the generated result is a real sample (real/fake); the corresponding network structure is denoted as discriminator network model D_1, whose loss function is Loss_c1. Task 2 is to judge whether the generated result would be clicked by the current user (CTR prediction); the corresponding network structure is denoted as discriminator network model D_2, whose loss function is Loss_c2. The discriminator network model D_1 and the discriminator network model D_2 may adopt a general text classification network structure. The loss function of the generation model is Loss = Loss_s + Loss_c1 + Loss_c2.
In the training stage of the generation model, the generator network model G and the discriminator network model D_2 are first pre-trained on the input items until they converge. Then, in each training round, the generator network model G and the discriminator network model D_2 are fixed while the discriminator network model D_1 is optimized, after which the discriminator network model D_1 and the discriminator network model D_2 are fixed while the generator network model G is optimized, until the generator network model G and the discriminator network model D_1 converge.
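The alternating schedule and the combined loss Loss = Loss_s + Loss_c1 + Loss_c2 could be organized roughly as below. This is a sketch under the assumption that G returns the soft embeddings of the generated words together with its attention-decoder loss, and that D1 and D2 output P(positive); all interfaces, keys and tensor names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def train_round(G, D1, D2, batch, opt_G, opt_D1):
    """One round after G and D2 have been pre-trained to convergence."""
    batch_size = batch["user_emb"].size(0)
    real = torch.ones(batch_size)
    fake = torch.zeros(batch_size)

    # Step 1: fix G and D2, optimize D1 (real vs. generated).
    gen_emb, _ = G(batch)                                     # soft embeddings of generated words
    fake_in = torch.cat([gen_emb.detach(), batch["user_emb"]], dim=1)
    real_in = torch.cat([batch["real_emb"], batch["user_emb"]], dim=1)
    d1_loss = (F.binary_cross_entropy(D1(fake_in), fake)
               + F.binary_cross_entropy(D1(real_in), real))
    opt_D1.zero_grad(); d1_loss.backward(); opt_D1.step()

    # Step 2: fix D1 and D2, optimize G with Loss = Loss_s + Loss_c1 + Loss_c2.
    gen_emb, loss_s = G(batch)                                # Loss_s: attention-decoder loss
    gen_in = torch.cat([gen_emb, batch["user_emb"]], dim=1)
    loss_c1 = F.binary_cross_entropy(D1(gen_in), real)        # look real to D1
    loss_c2 = F.binary_cross_entropy(D2(gen_in), real)        # look clickable to D2
    opt_G.zero_grad(); (loss_s + loss_c1 + loss_c2).backward(); opt_G.step()
```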
Referring to fig. 4, a flowchart illustrating steps of an embodiment of a method for generating a reason for recommendation according to an embodiment of the present invention is shown. The method for generating the recommendation reason may specifically include the following steps:
step 401, obtaining user characteristics.
In embodiments of the present invention, the user characteristics may include a tag click characteristic and an identity characteristic.
Step 402, inputting the user features into the generation model trained by the above training method for the recommendation reason generation model, and outputting a POI recommendation reason for the user features.
In an embodiment of the present invention, the generative model may be generated according to the steps shown in FIG. 1. The output POI recommendation reason may be POI premium reviews.
In a preferred embodiment of the present invention, in one implementation of inputting the user features into the generation model trained by the above training method and outputting a POI recommendation reason for the user features, a probability distribution result of each recommended word of the POI recommendation reason is generated according to the generator network model of the generation model, and the probability distribution result is decoded to obtain the POI recommendation reason. In practical applications, when decoding the probability distribution result, the probability distribution result can be decoded by beam search decoding to obtain a locally optimal solution, and the locally optimal solution is then taken as the POI recommendation reason. Compared with decoding to obtain the globally optimal solution, decoding to obtain the locally optimal solution reduces the search space over candidate recommended words, is fast, and meets the requirement of generating POI recommendation reasons online in real time.
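The beam search idea can be sketched as follows; `step_log_probs` is a hypothetical callback that scores next-token candidates for a prefix, and the beam size and length limit are illustrative.

```python
def beam_search(step_log_probs, beam_size=4, max_len=20, eos_id=2):
    """Keep only the best `beam_size` partial sequences per step.

    This yields a locally optimal sequence far faster than exhaustive
    (globally optimal) decoding, which suits real-time online generation.
    """
    beams = [([], 0.0)]                                   # (token prefix, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == eos_id:
                candidates.append((prefix, score))        # finished hypothesis carries over
                continue
            for token, logp in step_log_probs(prefix):
                candidates.append((prefix + [token], score + logp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0][0]                                    # best locally optimal sequence
```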
In a preferred embodiment of the present invention, after the POI recommendation reason is generated, quality control may be performed on it, which mainly addresses the following two problems:
1) Linguistic problems: the POI recommendation reason is input into the trained text classification model and a perplexity-based language model, and a linguistic judgment result of the POI recommendation reason is output. The judgment result indicates whether the POI recommendation reason is linguistically unsmooth or incomplete.
The text classification model can be trained on negative samples constructed by word dropping, word insertion and word-order swapping.
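A rough sketch of constructing such negative samples from a fluent sentence by word dropping, word insertion and word-order swapping; it operates on whitespace-separated tokens and the filler words are placeholders, so it only illustrates the idea rather than the patent's actual procedure.

```python
import random

def make_negative_sample(sentence, rng=random):
    """Corrupt a fluent sentence into a linguistically broken negative example."""
    words = sentence.split()
    op = rng.choice(["drop", "insert", "swap"])
    if op == "drop" and len(words) > 1:
        words.pop(rng.randrange(len(words)))                 # word dropping
    elif op == "insert":
        filler = rng.choice(["the", "very", "table", "purple"])
        words.insert(rng.randrange(len(words) + 1), filler)  # random word insertion
    elif len(words) > 1:
        i, j = rng.sample(range(len(words)), 2)              # word-order swapping
        words[i], words[j] = words[j], words[i]
    return " ".join(words)
```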
2) Relevance problems: category deviation judgment and entity existence judgment are performed on the POI recommendation reason to ensure the relevance between the POI recommendation reason and the user features.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 5, a block diagram of a training apparatus for generating a model of a reason for recommendation according to an embodiment of the present invention is shown, where the training apparatus for generating a model of a reason for recommendation specifically includes the following modules:
the obtaining module 51 is configured to obtain training sample data, where the training sample data includes user features and comment labeling texts of POIs, and the user features include a tag click feature;
a training module 52, configured to train a generator network model, a first discriminator network model, and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model, and the second discriminator network model satisfy a preset convergence condition;
the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text or not; the second discriminator is used for judging whether the recommendation reason output by the generator network model belongs to the label clicking characteristics.
In a preferred embodiment of the present invention, the training module 52 includes:
a sample input module for inputting the training sample data to the generator network model;
the coding and decoding module is used for coding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason;
the word embedding module is used for generating word embedding vectors of the recommended words of the recommendation reasons according to the probability distribution result;
a word embedding input module, configured to input the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment labeling text, and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
In a preferred embodiment of the present invention, the encoding/decoding module includes:
the coding module is used for respectively coding the word embedded vector of the user characteristic and the word embedded vector of the comment labeling text based on the generator network model to obtain a coding result of the training sample data;
and the decoding module is used for decoding the coding result based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason.
In a preferred embodiment of the present invention, the encoding module includes:
the user coding module is used for coding the word embedding vector of the user characteristic based on the generator network model to obtain a coding result of the user characteristic;
the comment encoding module is used for encoding the word embedding vector of the comment labeling text based on the generator network model to obtain an encoding result of the comment labeling text;
and the result splicing module is used for splicing the coding result of the user characteristic and the coding result of the comment labeling text into the coding result of the training sample data.
In a preferred embodiment of the present invention, the decoding module includes:
the attention decoding module is used for decoding the coding result according to a copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason;
the word extraction module is used for extracting each comment word from the comment annotation text according to the attention distribution result of each recommendation word so as to reduce the number of each recommendation word of the recommendation reason to be the same as the number of each comment word of the comment annotation text;
and the probability distribution determining module is used for taking the attention distribution result of each recommended word as the probability distribution result of each corresponding recommended word.
In a preferred embodiment of the present invention, the word embedding module is configured to perform weighted summation on the word embedding vectors of the comment words according to a result of probability distribution of each of the recommended words, so as to obtain the word embedding vector of each of the recommended words.
In a preferred embodiment of the present invention, the training module is configured to train the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model satisfy the convergence condition; keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged, adjusting the parameters of the first discriminator network model, keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged, and adjusting the parameters of the generator network model until the generator network model and the first discriminator network model meet the convergence condition.
Referring to fig. 6, a block diagram of a device for generating a reason for recommendation according to an embodiment of the present invention is shown, where the device for generating a reason for recommendation specifically includes the following modules:
the feature obtaining module 61 is configured to obtain a user feature, where the user feature includes a tag click feature;
and an input/output module 62, configured to input the user characteristics into the generated model trained according to the training method of the generated model of the recommendation reason described above, and output POI recommendation reasons for the user characteristics.
In a preferred embodiment of the present invention, the input/output module 62 includes:
a probability distribution result generation module used for generating probability distribution results of each recommended word of the POI recommendation reason according to a generator network model of the generation model;
and the probability distribution result decoding module is used for decoding the probability distribution result to obtain the POI recommendation reason.
In a preferred embodiment of the present invention, the probability distribution result decoding module is configured to decode the probability distribution result by means of beam search decoding to obtain a locally optimal solution, and to take the locally optimal solution as the POI recommendation reason.
In a preferred embodiment of the present invention, the apparatus further comprises:
and the linguistic processing module is used for inputting the POI recommendation reason into the trained text classification model and a perplexity-based language model and outputting a linguistic judgment result of the POI recommendation reason.
In a preferred embodiment of the present invention, the apparatus further comprises:
and the correlation processing module is used for carrying out category offset judgment and entity existence judgment on the POI recommendation reason so as to ensure the correlation between the POI recommendation result and the user characteristics.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment. The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the true scope of the embodiments of the present invention.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises the element.
The training method and device for a recommendation reason generation model and the recommendation reason generation method and device provided by the invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, the specific embodiments and the application scope may be changed according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (24)

1. A method for training a generative model of a recommendation reason, comprising:
acquiring training sample data, wherein the training sample data comprises user characteristics and comment labeling texts of POI, and the user characteristics comprise a label clicking characteristic;
training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions;
the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text; the second discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the label clicking characteristic;
the first discriminator network model is trained according to word embedding vectors of recommended words and the comment labeling texts; the second discriminator network model is trained according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic; and generating a word embedding vector of the recommended word according to a probability distribution result of the recommended word, wherein the probability distribution result of the recommended word is obtained by encoding and decoding the training sample data through the generator network model.
2. The method of claim 1, wherein training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data comprises:
inputting the training sample data to the generator network model;
coding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason;
generating word embedding vectors of recommended words of the recommendation reason according to the probability distribution result;
inputting the word embedding vector of each recommended word and the word embedding vector of the user feature to the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment tagging text, and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user feature.
3. The method according to claim 2, wherein the encoding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason comprises:
respectively coding the word embedded vector of the user characteristic and the word embedded vector of the comment labeling text based on the generator network model to obtain a coding result of the training sample data;
and decoding the coding result based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason.
4. The method of claim 3, wherein the encoding the word embedding vector of the user feature and the word embedding vector of the comment labeling text respectively based on the generator network model to obtain the encoding result of the training sample data comprises:
coding the word embedding vector of the user characteristic based on the generator network model to obtain a coding result of the user characteristic;
coding the word embedded vector of the comment annotation text based on the generator network model to obtain a coding result of the comment annotation text;
and splicing the coding result of the user characteristic and the coding result of the comment labeling text into the coding result of the training sample data.
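As a rough illustration of the separate encoding and splicing in claim 4, the snippet below encodes the two inputs independently and concatenates the results; the encoder types and all tensor shapes are assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

emb_dim, hid_dim = 128, 256
user_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
comment_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)

user_emb = torch.randn(4, 6, emb_dim)       # word embeddings of the user features (batch=4, 6 tokens)
comment_emb = torch.randn(4, 50, emb_dim)   # word embeddings of the comment annotation text

user_states, _ = user_encoder(user_emb)           # (4, 6, hid_dim)
comment_states, _ = comment_encoder(comment_emb)  # (4, 50, hid_dim)

# Splice the two encoding results along the time axis to form the encoding
# result of the training sample data (the concatenation step of claim 4).
sample_encoding = torch.cat([user_states, comment_states], dim=1)  # (4, 56, hid_dim)
```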
5. The method according to claim 3, wherein the decoding the encoded result based on the generator network model to obtain a probability distribution result of each recommended word of the reason for recommendation comprises:
decoding the coding result according to a copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason;
extracting each comment word from the comment annotation text according to the attention distribution result of each recommended word, so that the number of recommended words of the recommendation reason is reduced to equal the number of comment words of the comment annotation text;
and taking the attention distribution result of each recommended word as the probability distribution result of each corresponding recommended word.
6. The method according to claim 5, wherein the generating a word embedding vector for each recommended word of the reason for recommendation according to the probability distribution result comprises:
and carrying out weighted summation on the word embedding vectors of the comment words according to the probability distribution result of each recommended word to obtain the word embedding vectors of each recommended word.
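Claims 5 and 6 together describe a copy-style decoding step: the attention that the decoder places on the comment words is reused directly as the output distribution, and the word-embedding vector handed to the discriminators is a weighted sum of the comment word embeddings under that distribution. The following is a hedged sketch of one decoding step; the dot-product attention and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

batch, n_comment, hid_dim, emb_dim = 4, 50, 256, 128

decoder_state = torch.randn(batch, hid_dim)              # decoder hidden state at one step
comment_states = torch.randn(batch, n_comment, hid_dim)  # encoder states of the comment words
comment_embs = torch.randn(batch, n_comment, emb_dim)    # word embeddings of the comment words

# Attention of the current recommended word over the comment words (the "copy mode"):
scores = torch.bmm(comment_states, decoder_state.unsqueeze(-1)).squeeze(-1)  # (batch, n_comment)
attn = F.softmax(scores, dim=-1)

# Claim 5: the attention distribution doubles as the probability distribution of the
# recommended word, restricted to words that occur in the comment annotation text.
probs = attn

# Claim 6: weighted summation of the comment word embeddings yields a differentiable
# "word embedding" of the recommended word that can be fed to the discriminators.
soft_word_embedding = torch.bmm(probs.unsqueeze(1), comment_embs).squeeze(1)  # (batch, emb_dim)
```

Using the weighted sum instead of sampling a discrete word keeps the generator-to-discriminator path differentiable, which is the usual motivation for this construction in adversarial text generation.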
7. The method of claim 1, wherein training a generator network model, a first discriminator network model and a second discriminator network model from the training sample data comprises:
training the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model meet the convergence condition;
keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged and adjusting the parameters of the first discriminator network model; then keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged and adjusting the parameters of the generator network model, until the generator network model and the first discriminator network model meet the convergence condition.
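The alternating schedule in claim 7 (first fit the generator together with the second discriminator, then alternately freeze and update the first discriminator and the generator) could be organized roughly as below. The losses, data loop, and convergence tests are deliberately left as placeholders because the patent does not fix them; the function and parameter names are assumptions.

```python
# Hypothetical outline of the claim 7 training schedule; loss terms, data loading and
# convergence tests are placeholders, not the patent's actual procedure.
def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

def train(generator, d_text, d_tag, sample_batches, stage1_steps=1000, stage2_rounds=100):
    # Stage 1: jointly train the generator and the second (tag) discriminator.
    for _ in range(stage1_steps):
        ...  # update generator and d_tag on the training sample data until both converge

    # Stage 2: alternate between the first (text) discriminator and the generator.
    for _ in range(stage2_rounds):
        # Freeze the generator and the second discriminator, adjust the first discriminator.
        set_trainable(generator, False); set_trainable(d_tag, False); set_trainable(d_text, True)
        ...  # first-discriminator update step(s)

        # Freeze both discriminators, adjust the generator.
        set_trainable(d_text, False); set_trainable(d_tag, False); set_trainable(generator, True)
        ...  # generator update step(s); stop once generator and d_text meet the convergence condition
```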
8. A method for generating a reason for recommendation, comprising:
acquiring user characteristics, wherein the user characteristics comprise a label clicking characteristic;
inputting the user features into a generative model trained according to the method of any one of claims 1 to 7, and outputting POI recommendation reasons for the user features.
9. The method according to claim 8, wherein the inputting the user features into a generative model trained according to the method of any one of claims 1 to 7 and outputting POI recommendation reasons for the user features comprises:
generating a probability distribution result of each recommended word of the POI recommendation reason according to a generator network model of the generation model;
and decoding the probability distribution result to obtain the POI recommendation reason.
10. The method of claim 9, wherein the decoding the probability distribution result to obtain the POI recommendation reason comprises:
decoding the probability distribution result in a beam search decoding manner to obtain a locally optimal solution;
and taking the locally optimal solution as the POI recommendation reason.
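Claim 10's decoding step is a beam search over the per-word probability distributions: only the top-scoring partial sequences are kept, and the result is a locally (not globally) optimal reason. A simplified, self-contained sketch follows; in the real decoder each step's distribution depends on the words already chosen, whereas here fixed per-step distributions are assumed to keep the example short.

```python
import math

def beam_search(step_probs, beam_width=3):
    """Generic beam search over a sequence of per-step probability distributions.
    step_probs[t][w] is the probability of word w at step t."""
    beams = [([], 0.0)]  # (word indices so far, accumulated log-probability)
    for probs in step_probs:
        candidates = []
        for seq, score in beams:
            for w, p in enumerate(probs):
                if p > 0:
                    candidates.append((seq + [w], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]  # highest-scoring (locally optimal) word sequence

# Toy usage with a hypothetical 4-word vocabulary and a 2-step reason:
print(beam_search([[0.1, 0.6, 0.2, 0.1], [0.3, 0.3, 0.3, 0.1]]))  # -> [1, 0]
```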
11. The method according to any one of claims 8 to 10, further comprising:
and inputting the POI recommendation reasons to a trained text classification model and a perplexity language model, and outputting a linguistic judgment result of the POI recommendation reasons.
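One simple way to realize the linguistic judgment in claim 11 is to threshold the reason's perplexity under a language model and combine it with a text classifier's score; the sketch below is an assumed illustration of that idea and does not name any particular model. The thresholds and function names are invented for the example.

```python
import math

def perplexity(token_probs):
    """Perplexity of a candidate reason given per-token probabilities from some
    language model (any model that scores tokens will do; none is assumed here)."""
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(-log_sum / len(token_probs))

def linguistic_check(token_probs, classifier_score, ppl_threshold=80.0, cls_threshold=0.5):
    """Combine a fluency signal (perplexity) with a text-classifier score into a single
    pass/fail linguistic judgment; the threshold values are made-up illustration values."""
    return perplexity(token_probs) < ppl_threshold and classifier_score > cls_threshold
```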
12. The method according to any one of claims 8 to 10, further comprising:
and performing category deviation judgment and entity existence judgment on the POI recommendation reason so as to ensure the correlation between the POI recommendation result and the user characteristics.
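For claim 12, the category deviation and entity existence judgments amount to two relevance filters over the generated reason. The toy sketch below shows one way such filters could look; both heuristics, the function signature, and the example values are assumptions, not the patent's method.

```python
def relevance_check(reason_entities, poi_category, reason_category, poi_entities):
    """Illustrative relevance filters: the category inferred from the reason must not
    drift from the POI's category, and every entity mentioned in the reason must
    actually exist for the POI (e.g. dish or facility names drawn from its comments)."""
    category_ok = (reason_category == poi_category)
    entities_ok = all(e in poi_entities for e in reason_entities)
    return category_ok and entities_ok

# Toy usage with hypothetical values:
print(relevance_check({"garlic crayfish"}, "hotpot", "hotpot", {"garlic crayfish", "beef tripe"}))
```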
13. A training device for generating a model for a reason for recommendation, comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring training sample data, the training sample data comprises user characteristics and comment labeling texts of POI, and the user characteristics comprise a label clicking characteristic;
the training module is used for training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions;
the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text or not; the second discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the label clicking characteristic or not;
the first discriminator network model is trained according to word embedding vectors of recommended words and the comment labeling texts; the second discriminator network model is trained according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic; and generating a word embedding vector of the recommended word according to a probability distribution result of the recommended word, wherein the probability distribution result of the recommended word is obtained by encoding and decoding the training sample data through the generator network model.
14. The apparatus of claim 13, wherein the training module comprises:
a sample input module for inputting the training sample data to the generator network model;
the coding and decoding module is used for coding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason;
the word embedding module is used for generating word embedding vectors of the recommended words of the recommendation reasons according to the probability distribution result;
a word embedding input module, configured to input the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment labeling text, and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
15. The apparatus of claim 14, wherein the codec module comprises:
the coding module is used for respectively coding the word embedded vector of the user characteristic and the word embedded vector of the comment labeling text based on the generator network model to obtain a coding result of the training sample data;
and the decoding module is used for decoding the coding result based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason.
16. The apparatus of claim 15, wherein the encoding module comprises:
the user coding module is used for coding the word embedding vector of the user characteristic based on the generator network model to obtain a coding result of the user characteristic;
the comment encoding module is used for encoding the word embedded vector of the comment tagged text based on the generator network model to obtain an encoding result of the comment tagged text;
and the result splicing module is used for splicing the coding result of the user characteristics and the coding result of the comment annotation text into the coding result of the training sample data.
17. The apparatus of claim 15, wherein the decoding module comprises:
the attention decoding module is used for decoding the coding result according to a copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason;
the word extraction module is used for extracting each comment word from the comment annotation text according to the attention distribution result of each recommended word, so that the number of recommended words of the recommendation reason is reduced to equal the number of comment words of the comment annotation text;
and the probability distribution determining module is used for taking the attention distribution result of each recommended word as the probability distribution result of each corresponding recommended word.
18. The apparatus of claim 17, wherein the word embedding module is configured to perform weighted summation on the word embedding vector of each of the comment words according to the probability distribution result of each of the recommended words to obtain the word embedding vector of each of the recommended words.
19. The apparatus of claim 13, wherein the training module is configured to train the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model satisfy the convergence condition; to keep the parameters of the generator network model and the parameters of the second discriminator network model unchanged and adjust the parameters of the first discriminator network model; and then to keep the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged and adjust the parameters of the generator network model, until the generator network model and the first discriminator network model meet the convergence condition.
20. An apparatus for generating a reason for recommendation, comprising:
the characteristic acquisition module is used for acquiring user characteristics, and the user characteristics comprise a label clicking characteristic;
an input and output module, configured to input the user characteristics into a generative model trained according to the method of any one of claims 1 to 7, and output a POI recommendation reason for the user characteristics.
21. The apparatus of claim 20, wherein the input-output module comprises:
a probability distribution result generation module used for generating probability distribution results of each recommended word of the POI recommendation reason according to a generator network model of the generation model;
and the probability distribution result decoding module is used for decoding the probability distribution result to obtain the POI recommendation reason.
22. The apparatus of claim 21, wherein the probability distribution result decoding module is configured to decode the probability distribution result in a beam search decoding manner to obtain a locally optimal solution, and to take the locally optimal solution as the POI recommendation reason.
23. The apparatus as claimed in any one of claims 20 to 22, further comprising:
and the linguistic processing module is used for inputting the POI recommendation reasons to the trained text classification model and the perplexity language model and outputting a linguistic judgment result of the POI recommendation reasons.
24. The apparatus of any one of claims 20 to 22, further comprising:
and the correlation processing module is used for carrying out category deviation judgment and entity existence judgment on the POI recommendation reason so as to ensure the correlation between the POI recommendation result and the user characteristics.
CN202110838589.2A 2021-07-23 2021-07-23 Training method for generating model and generation method and device for recommendation reason Active CN113688309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110838589.2A CN113688309B (en) 2021-07-23 2021-07-23 Training method for generating model and generation method and device for recommendation reason

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110838589.2A CN113688309B (en) 2021-07-23 2021-07-23 Training method for generating model and generation method and device for recommendation reason

Publications (2)

Publication Number Publication Date
CN113688309A CN113688309A (en) 2021-11-23
CN113688309B true CN113688309B (en) 2022-11-29

Family

ID=78577793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110838589.2A Active CN113688309B (en) 2021-07-23 2021-07-23 Training method for generating model and generation method and device for recommendation reason

Country Status (1)

Country Link
CN (1) CN113688309B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457452A (en) * 2019-07-08 2019-11-15 汉海信息技术(上海)有限公司 Rationale for the recommendation generation method, device, electronic equipment and readable storage medium storing program for executing
CN112308650A (en) * 2020-07-01 2021-02-02 北京沃东天骏信息技术有限公司 Recommendation reason generation method, device, equipment and storage medium
WO2021023249A1 (en) * 2019-08-06 2021-02-11 北京三快在线科技有限公司 Generation of recommendation reason
CN112667813A (en) * 2020-12-30 2021-04-16 北京华宇元典信息服务有限公司 Method for identifying sensitive identity information of referee document

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150371262A1 (en) * 2014-06-23 2015-12-24 Pure Auto Llc Dba Purecars Internet Search Engine Advertisement Optimization
EP3792830A1 (en) * 2019-09-10 2021-03-17 Robert Bosch GmbH Training a class-conditional generative adverserial network
CN110727844B (en) * 2019-10-21 2022-07-01 东北林业大学 Online commented commodity feature viewpoint extraction method based on generation countermeasure network
CN111046138B (en) * 2019-11-15 2023-06-27 北京三快在线科技有限公司 Recommendation reason generation method and device, electronic equipment and storage medium
CN112905776B (en) * 2021-03-17 2023-03-31 西北大学 Emotional dialogue model construction method, emotional dialogue system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457452A (en) * 2019-07-08 2019-11-15 汉海信息技术(上海)有限公司 Rationale for the recommendation generation method, device, electronic equipment and readable storage medium storing program for executing
WO2021023249A1 (en) * 2019-08-06 2021-02-11 北京三快在线科技有限公司 Generation of recommendation reason
CN112308650A (en) * 2020-07-01 2021-02-02 北京沃东天骏信息技术有限公司 Recommendation reason generation method, device, equipment and storage medium
CN112667813A (en) * 2020-12-30 2021-04-16 北京华宇元典信息服务有限公司 Method for identifying sensitive identity information of referee document

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"考虑评级信息的音乐评论文本自动生成";严丹 等;《计算机科学与探索》;20191104;第1389-1396页 *

Also Published As

Publication number Publication date
CN113688309A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN106328147B (en) Speech recognition method and device
CN115329127A (en) Multi-mode short video tag recommendation method integrating emotional information
CN111263238B (en) Method and equipment for generating video comments based on artificial intelligence
JP7150842B2 (en) Multilingual Document Retrieval Based on Document Structure Extraction
CN110688832B (en) Comment generation method, comment generation device, comment generation equipment and storage medium
CN110188158B (en) Keyword and topic label generation method, device, medium and electronic equipment
US10915756B2 (en) Method and apparatus for determining (raw) video materials for news
CN111553159B (en) Question generation method and system
CN113408287B (en) Entity identification method and device, electronic equipment and storage medium
US11533495B2 (en) Hierarchical video encoders
CN112016320A (en) English punctuation adding method, system and equipment based on data enhancement
CN112287687B (en) Case tendency extraction type summarization method based on case attribute perception
CN115630145A (en) Multi-granularity emotion-based conversation recommendation method and system
CN110738059A (en) text similarity calculation method and system
CN115525744A (en) Dialog recommendation system based on prompt learning method
CN114117041B (en) Attribute-level emotion analysis method based on specific attribute word context modeling
CN114611520A (en) Text abstract generating method
CN110852103A (en) Named entity identification method and device
CN114281948A (en) Summary determination method and related equipment thereof
CN113688309B (en) Training method for generating model and generation method and device for recommendation reason
CN116910251A (en) Text classification method, device, equipment and medium based on BERT model
CN115905585A (en) Keyword and text matching method and device, electronic equipment and storage medium
CN114677165A (en) Contextual online advertisement delivery method, contextual online advertisement delivery device, contextual online advertisement delivery server and storage medium
Wang et al. Distill-AER: Fine-Grained Address Entity Recognition from Spoken Dialogue via Knowledge Distillation
CN114328902A (en) Text labeling model construction method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant