CN111209725A - Text information generation method and device and computing equipment - Google Patents

Text information generation method and device and computing equipment Download PDF

Info

Publication number
CN111209725A
CN111209725A
Authority
CN
China
Prior art keywords
text
information
title
generation model
commodity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811377243.1A
Other languages
Chinese (zh)
Other versions
CN111209725B (en)
Inventor
严玉良
王勇臻
黄恒
刘晓钟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201811377243.1A priority Critical patent/CN111209725B/en
Publication of CN111209725A publication Critical patent/CN111209725A/en
Application granted granted Critical
Publication of CN111209725B publication Critical patent/CN111209725B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a text information generation method and device and computing equipment. The method comprises the following steps: acquiring title information of a commodity; inputting the title information into a text generation model to generate a plurality of description texts for the commodity; acquiring click information for the plurality of description texts; and training the text generation model according to at least the acquired click information so as to adjust the network parameters of the text generation model.

Description

Text information generation method and device and computing equipment
Technical Field
The invention relates to the field of natural language processing, in particular to a text information generation method and device and computing equipment.
Background
At present, in the content transformation of e-commerce, the description information of commodities must be written manually, so text generation efficiency is extremely low. It is therefore desirable to generate commodity descriptions automatically, freeing human operators from this task and improving the efficiency of text generation.
Disclosure of Invention
In view of the above, the present invention has been made to provide a text generation method, apparatus and computing device that overcome or at least partially address the above-mentioned problems.
According to an aspect of the present invention, there is provided a text information generating method including:
acquiring the title information of the commodity;
inputting the title information into a text generation model to generate a plurality of description texts for the commodity;
acquiring click information of the plurality of description texts;
and training the text generation model at least according to the acquired click information so as to adjust the network parameters of the text generation model.
Alternatively, in the text information generating method according to the present invention, the text generating model includes a title encoder adapted to encode title information into a semantic vector for the title information, and a title decoder adapted to generate a word distribution vector describing each position of a text from at least the semantic vector, and generate a plurality of description texts for the commodity from the word distribution vector.
Optionally, the text information generating method according to the present invention further includes: acquiring attribute information associated with a commodity; generating an attention vector according to the attribute information; inputting the attention vector to the title decoder to cause the title decoder to generate the word distribution vector from the semantic vector and the attention vector.
Optionally, in the text information generating method according to the present invention, the attribute information associated with the article includes at least one of a brand, a color, a size, and a price of the article.
Optionally, in the text information generating method according to the present invention, the training the text generating model according to at least the obtained click information to adjust a network parameter of the text generating model includes: acquiring a target description text of the title information; calculating a first cross entropy loss of the word distribution vector and the target description text; calculating a second cross entropy loss of a predetermined number of description texts with the highest click rate and the title information in the plurality of description texts; and adjusting the network parameters of the text generation model by taking the sum of the first cross entropy loss and the second cross entropy loss as a loss function value.
Optionally, in the text information generating method according to the present invention, the generating a plurality of description texts for the commodity according to the word distribution vector includes: and searching the word distribution vector of each position by adopting a cluster searching algorithm so as to generate a plurality of description texts of the commodity.
Optionally, in the text information generating method according to the present invention, the title encoder and the title decoder employ at least one of a recurrent neural network (RNN), a gated recurrent unit (GRU), or a long short-term memory network (LSTM).
Optionally, the text information generating method according to the present invention further includes: and sending the description text to a client for display.
According to another aspect of the present invention, there is also provided a text information generating apparatus including:
the first acquisition module is suitable for acquiring the title information of the commodity;
a text generation module adapted to input the title information into a text generation model to generate a plurality of description texts for the commodity;
the second acquisition module is suitable for acquiring click information of the description texts;
and the parameter adjusting module is suitable for training the text generation model according to the acquired click information so as to adjust the network parameters of the text generation model.
According to yet another aspect of the present invention, there is also provided a computing device comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing a method according to any of the methods described above.
According to the method, the text generation model first generates a plurality of versions of description texts, and each version is put on line. The user behavior for the online versions (the users' clicks on the description texts) is then acquired and added into the loss function of the text generation model for continued training, so that the performance indexes of the text generation model can be improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a schematic diagram of a textual information generation system 100, according to one embodiment of the present invention;
FIG. 2 shows a schematic diagram of a computing device 200, according to one embodiment of the invention;
FIG. 3 illustrates a flow diagram of a text message generation method 300 according to one embodiment of the invention;
fig. 4 shows a schematic diagram of a text information generating apparatus 400 according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic diagram of a text information generating system 100 according to an embodiment of the invention. As shown in fig. 1, the text information generating system 100 includes a user terminal 110 and a computing device 200.
The user terminal 110 is a terminal device used by a user, and may specifically be a personal computer such as a desktop or notebook computer, or a mobile phone, tablet computer, multimedia device, smart wearable device, and the like, but is not limited thereto. The computing device 200 is used to provide services to the user terminal 110, and may be implemented as a server, such as an application server or a Web server; it may also be implemented as a desktop computer, notebook computer, processor chip, tablet computer, and the like, but is not limited thereto.
According to an embodiment, the computing device 200 may perform commodity information queries, and the terminal device 110 may establish a connection with the computing device 200 via the Internet, so that the user may query commodity information via the terminal device 110. For example, a user opens a browser or shopping application (APP) on the terminal device 110 and enters a query phrase (query) in a search box, thereby initiating a query request to the computing device 200. After receiving the query request, the computing device 200 queries the commodity information according to the query phrase input by the user and returns the query result to the terminal device 110, where the query result may include the title information of the commodity and the description text for the commodity. The terminal device 110 displays the title information and the description text of the commodity in its interface, and the user can click on the description text of a commodity of interest to enter the commodity detail page. Meanwhile, the computing device 200 records the user's click behavior on the commodity description text. Here, the commodity description text is automatically generated by the computing device 200 using a text generation tool (text generation model) based on the title information of the commodity.
In one embodiment, the text information generating system 100 further includes a data storage device 120. The data storage device 120 may be a relational database such as MySQL or ACCESS, or a non-relational (NoSQL) database. It may be a local database residing in the computing device 200, or may be deployed at a plurality of geographic locations as a distributed database such as HBase. In short, the data storage device 120 is used for storing data, and the present invention does not limit its specific deployment and configuration. The computing device 200 may connect with the data storage device 120 and retrieve the data stored in it. For example, the computing device 200 may directly read the data in the data storage device 120 (when the data storage device 120 is a local database of the computing device 200), or may access the Internet in a wired or wireless manner and obtain the data in the data storage device 120 through a data interface.
In an embodiment of the present invention, the data storage device 120 is adapted to store commodity information, for example, title information of the commodity, attribute information associated with the commodity, detailed description of the commodity, description text for the commodity, and click data of the user for the commodity (including click behavior of the user on the description text). The description text for the commodity is generated by the text generation model based on the title information of the commodity and the click data of the user for the commodity. In order to train the text generation model, a training data set may be further stored in the data storage device, each training sample of the training data set includes title information and an associated target description text, and the target description text may be manually generated by a person based on the title information, that is, the person writes the description text for the commodity as the target description text by reading the commodity title information, the relevant attributes and detailed description of the commodity.
The text information generating method of the present invention may be executed in a computing device. FIG. 2 shows a block diagram of a computing device 200, according to one embodiment of the invention. As shown in FIG. 2, in a basic configuration 202, a computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level one cache 210 and a level two cache 212, a processor core 214, and registers 216. An example processor core 214 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 206 may include an operating system 220, one or more applications 222, and program data 224. The application 222 actually comprises a plurality of program instructions that direct the processor 204 to perform corresponding operations. In some embodiments, the application 222 may be arranged to cause the processor 204 to operate with the program data 224 on the operating system.
Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to the basic configuration 202 via the bus/interface controller 230. The example output device 242 includes a graphics processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 252. Example peripheral interfaces 244 can include a serial interface controller 254 and a parallel interface controller 256, which can be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 258. An example communication device 246 may include a network controller 260, which may be arranged to facilitate communications with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or direct-wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In a computing device 200 according to the present invention, the application 222 includes a text information generating apparatus 400, and the apparatus 400 includes a plurality of program instructions that may direct the processor 204 to perform the text information generating method 300.
Fig. 3 shows a flow diagram of a text information generation method 300 according to one embodiment of the invention. The method 300 is suitable for execution in a computing device, such as the computing device 200 described above.
As shown in fig. 3, the method 300 begins at step S310, and in step S310, title information of a product is acquired. As described above, when a user browses a shopping website or queries a shopping website for information about a product, the shopping website usually displays information about the product, including title information of the product, a description text for the product, and a product detail page, to the user. The computing device may retrieve the merchandise information from locally or from a data storage device communicatively coupled thereto. In addition, when training the text generation model, the title information of the product may be acquired from the training data set.
The commodities in the embodiment of the invention include, but are not limited to, any type of commodity which can be provided to a market for human consumption or use. In some embodiments, the merchandise may include physical products such as clothing, coffee, automobiles, etc., and in other embodiments, the merchandise may include intangible products such as services, education, games, virtual resources, etc.
In step S320, the acquired title information is input to the text generation model, and a plurality of description texts for the product are generated by the text generation model. The text generation model is a sequence-to-sequence (Seq2Seq) model, which typically includes an encoder that converts source text into a vector and a decoder that converts the vector into target text. In addition, the text generation model may also be an attention-based sequence-to-sequence model, i.e. an attention mechanism is introduced in the sequence-to-sequence model.
In an embodiment of the present invention, the text generation model includes a title encoder and a title decoder; the title encoder is adapted to encode the title information into a semantic vector for the title information, and the title decoder is adapted to generate the description text for the commodity according to the semantic vector. The title encoder and the title decoder may each employ a recurrent neural network (RNN), a gated recurrent unit (GRU), a long short-term memory network (LSTM), or another type of neural network, which is not limited herein.
The title encoder comprises a plurality of encoding units connected in sequence. Each encoding unit outputs a state vector for one word in the title information, and the last encoding unit can also output a state vector corresponding to the whole title. The semantic vector may be the state vector corresponding to the whole title, or may be a vector sequence including the state vector corresponding to each word together with the state vector corresponding to the whole title. The title decoder comprises a plurality of decoding units connected in sequence, each of which generates the word distribution vector for one position of the description text: the first decoding unit generates the word distribution vector corresponding to the first word of the description text, the second decoding unit generates the word distribution vector corresponding to the second word, and so on, finally yielding the word distribution vector for each position of the description text.
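The encoder-decoder structure described above can be sketched as follows. This is an illustrative toy implementation in plain NumPy with randomly initialized weights; the names, dimensions, and the expected-embedding feedback step are assumptions for illustration, not the patent's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class GRUCell:
    """Minimal GRU cell (sketch; a real system would use a framework's GRU)."""
    def __init__(self, dim, rng):
        init = lambda: rng.standard_normal((dim, dim)) * 0.1
        self.Wz, self.Uz = init(), init()
        self.Wr, self.Ur = init(), init()
        self.Wh, self.Uh = init(), init()

    def step(self, x, h):
        z = 1 / (1 + np.exp(-(self.Wz @ x + self.Uz @ h)))  # update gate
        r = 1 / (1 + np.exp(-(self.Wr @ x + self.Ur @ h)))  # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))   # candidate state
        return (1 - z) * h + z * h_cand

def encode_title(cell, word_embeddings):
    """Title encoder: one state vector per word; the last state serves as
    the semantic vector for the whole title."""
    h = np.zeros(len(word_embeddings[0]))
    states = []
    for x in word_embeddings:
        h = cell.step(x, h)
        states.append(h)
    return states, h

def decode_description(cell, W_out, semantic_vec, start_vec, num_positions):
    """Title decoder: one word-distribution vector per description position."""
    h, x = semantic_vec, start_vec
    dists = []
    for _ in range(num_positions):
        h = cell.step(x, h)
        p = softmax(W_out @ h)   # word distribution for this position
        dists.append(p)
        x = W_out.T @ p          # feed back an expected-embedding proxy
    return dists

# Demo: toy 5-word title, hidden size 8, vocabulary size 12
rng = np.random.default_rng(0)
H, V = 8, 12
encoder, decoder = GRUCell(H, rng), GRUCell(H, rng)
W_out = rng.standard_normal((V, H)) * 0.1
title_words = [rng.standard_normal(H) for _ in range(5)]
states, semantic = encode_title(encoder, title_words)
dists = decode_description(decoder, W_out, semantic, np.zeros(H), 4)
```

Each of the four word distribution vectors sums to one over the 12-word toy vocabulary, matching the per-position outputs of the decoding units described above.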
After generating the word distribution vectors for the respective locations, the word distribution vectors for each location may be searched using various search algorithms, for example, using a beam search (BeamSearch) algorithm, to generate a plurality of description texts for the product.
Assuming that the parameter of the BeamSearch is k, k optimal candidate versions are maintained simultaneously at each time step of the calculation: at the t-th step, the k decoded sequences with the maximum probability are computed from the k results of step t-1 together with the word distribution vector at step t, and this continues until an end tag is decoded, yielding k description texts. That is, by performing the BeamSearch, k description texts are obtained, and at the same time the text probability corresponding to each description text is obtained.
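The cluster (beam) search over the word distribution vectors can be sketched as follows. One simplification: in a real decoder each step's distribution is conditioned on the decoded prefix, whereas here the per-position distributions are fixed, which is enough to show how the k best candidates and their text probabilities are maintained:

```python
import math

def beam_search(position_dists, k):
    """Keep the k highest-probability sequences while expanding one word
    distribution per description position."""
    beams = [([], 0.0)]  # (token sequence, log-probability)
    for dist in position_dists:
        candidates = [
            (seq + [tok], score + math.log(p))
            for seq, score in beams
            for tok, p in dist.items()
        ]
        # retain the k best partial sequences
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:k]
    return beams  # k texts together with their log-probabilities

# Toy example: two positions, each with its own word distribution
dists = [{"a": 0.7, "b": 0.3}, {"c": 0.6, "d": 0.4}]
results = beam_search(dists, k=2)
```

With k = 2 the search returns the sequences ["a", "c"] and ["a", "d"], each paired with its log-probability, illustrating how both the texts and their probabilities fall out of the search.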
In one embodiment, the method 300 further comprises: acquiring attribute information associated with the commodity, generating an attention vector according to the acquired attribute information of the commodity, inputting the generated attention vector into a title decoder, and generating a word distribution vector describing each position in the text by the title decoder according to the semantic vector and the attention vector. Specifically, the attention vector includes a plurality of dimensions corresponding to the attributes and attribute values of the commodity, and the attention vector may be input to the title decoder as an initialization state, so that the title decoder focuses more on the relevant attributes of the commodity in the process of generating the description text of the commodity.
The attribute information associated with the article is, for example, a brand, a color, a size, a price, and the like of the article. Likewise, the computing device may obtain the attribute information for the item from local or from a data store communicatively coupled thereto. The attribute information of the goods may be expressed in the form of a key-value, for example: brand-nike, color-red, size: s, m, l, xs, price-600, etc.
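One plausible way to turn the key-value attribute pairs into the attention vector is sketched below. The embedding tables, dimensions, and the sum-then-average construction are illustrative assumptions, since the patent does not specify how the attention vector is computed from the attributes:

```python
import numpy as np

# Hypothetical embedding tables for attribute keys and values
# (names and dimension are illustrative only)
rng = np.random.default_rng(1)
DIM = 6
key_emb = {k: rng.standard_normal(DIM) for k in ["brand", "color", "size", "price"]}
val_emb = {v: rng.standard_normal(DIM) for v in ["nike", "red", "s", "600"]}

def attribute_attention_vector(attrs):
    """One possible construction: sum each key embedding with its value
    embedding, then average over all key-value pairs."""
    vecs = [key_emb[k] + val_emb[v] for k, v in attrs.items()]
    return np.mean(np.stack(vecs), axis=0)

# Attributes in key-value form, as in the example above
attn = attribute_attention_vector({"brand": "nike", "color": "red", "price": "600"})
```

The resulting fixed-size vector can then be fed to the title decoder as its initialization state, as described above.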
After generating a plurality of description texts for the commodity through the text generation model, the description texts may be sent to the client for display, for example, the description text with the highest text probability may be sent to the client for display.
In addition, the text generation model may be trained in advance before text processing using the text generation model. The training process is as follows:
1) the method comprises the steps of obtaining a training data set, wherein each training sample of the training data set comprises title information and an associated target description text, and the target description text can be manually generated by a person based on the title information, namely, the person writes the description text of a commodity as the target description text by reading the commodity title information, the relevant attributes and the detailed description of the commodity.
2) And inputting the title information in the training sample into a text generation model, and outputting a word distribution vector by the text generation model.
3) And calculating the cross entropy loss of the word distribution vector and the target description text.
4) And adjusting the network parameters of the text generation model by adopting a back propagation algorithm according to the cross entropy loss.
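Steps 2) and 3) of the pre-training procedure can be sketched as follows. This is a minimal sketch: whether the loss is summed or averaged over positions is an assumption, and step 4)'s back propagation is left to a framework's automatic differentiation in practice:

```python
import math

def first_cross_entropy(pred_dists, target_words):
    """H(seq): negative log-probability the model assigns to each word of
    the target description text, averaged over the m positions (steps 2-3)."""
    m = len(target_words)
    return -sum(math.log(dist[w]) for dist, w in zip(pred_dists, target_words)) / m

# Toy check: two positions, target text "a b"
dists = [{"a": 0.5, "b": 0.5}, {"a": 0.25, "b": 0.75}]
loss = first_cross_entropy(dists, ["a", "b"])
```

A lower value means the word distribution vectors put more mass on the target words; the training loop of step 4) would adjust the network parameters to reduce this value.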
In order to further improve the performance index of the text generation model, the embodiment of the present invention further performs training on the text generation model in combination with the user behavior, specifically as described in steps S330 and S340.
Before continuing to train the model, a plurality of description texts of the commodity may be put on line, that is, each description text is returned to the corresponding one or more users, and then, in step S330, click information of the user on the plurality of description texts is obtained. Here, the click information may be the number of clicks or the click rate corresponding to each descriptive text.
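Gathering the click information for the online description texts reduces to simple counting; a minimal sketch (the description identifiers and counts are illustrative):

```python
def click_rates(impressions, clicks):
    """Click rate per description text = clicks / impressions shown."""
    return {d: clicks.get(d, 0) / n for d, n in impressions.items()}

def top_t_by_click_rate(rates, t):
    """The t description texts with the highest click rate."""
    return sorted(rates, key=rates.get, reverse=True)[:t]

# Illustrative online statistics for three candidate descriptions
impressions = {"desc_a": 100, "desc_b": 100, "desc_c": 100}
clicks = {"desc_a": 3, "desc_c": 12}
rates = click_rates(impressions, clicks)
best = top_t_by_click_rate(rates, t=2)
```

Either the raw click counts or these rates can serve as the click information used in the following training step.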
In step S340, a text generation model is trained at least according to the obtained click information, so as to adjust network parameters of the text generation model. The training process is as follows:
1) acquiring a target description text of the title information;
and the target description text is the description text associated with the title information in the training sample.
2) Calculating a first cross entropy loss of a word distribution vector output by a text generation model and a target description text;
how to calculate the cross entropy loss of the word distribution vector and the text can adopt the prior art. In the embodiment of the present invention, the first cross entropy loss h (seq) can be expressed as:
Figure BDA0001871083760000091
therein, prediAnd representing a distribution vector corresponding to the ith word in the predicted description text, wherein m is the number of words included in the predicted description text, and glod represents the target description text.
3) Calculating a second cross entropy loss of a predetermined number (e.g. t) of description texts with the highest click rate and the title information in the plurality of description texts;
how to calculate the cross entropy loss between texts can adopt the prior art. In the embodiment of the present invention, the second cross entropy loss h (ce) can be expressed as:
Figure BDA0001871083760000101
wherein desciThe description text which represents the ith highest dot rate, title represents title information, and a is a preset weighting coefficient (which can be set according to experience or experiment). The probability value in the formula can be calculated according to the similarity between texts.
For example, if a certain piece of title information corresponds to 10 description texts, the t = 4 description texts with the highest click rate may be taken from them, and the cross entropy loss between these 4 description texts and the title information is then calculated.
4) And adjusting the network parameters of the text generation model by taking the sum of the first cross entropy loss and the second cross entropy loss as a loss function value.
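The combined loss of step 4) can be sketched as follows, with the probability of each high-click-rate description given the title supplied as a precomputed text-similarity value. The averaging in the first term and the exact form of the second term are assumptions, since the original formulas are described only in prose:

```python
import math

def combined_loss(pred_dists, target_words, desc_probs, a):
    """Sum of the first cross entropy (model vs. target text) and the second
    cross entropy over the t highest-click-rate descriptions, weighted by a."""
    m = len(target_words)
    h_seq = -sum(math.log(d[w]) for d, w in zip(pred_dists, target_words)) / m
    h_ce = -a * sum(math.log(p) for p in desc_probs)
    return h_seq + h_ce

# Toy values: two target words, t = 2 online descriptions with
# similarity-based probabilities 0.9 and 0.8, weighting coefficient 0.1
dists = [{"a": 0.5, "b": 0.5}, {"a": 0.25, "b": 0.75}]
loss = combined_loss(dists, ["a", "b"], desc_probs=[0.9, 0.8], a=0.1)
```

Minimizing this sum pulls the model both toward the human-written target text and toward the descriptions users actually clicked.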
In summary, in the embodiments of the present invention, on the basis that the text generation model generates a plurality of versions of description texts, the description texts of each version are put on line, the user behavior for the online versions (the users' clicks on the description texts) is obtained, and this user behavior is added to the loss function of the model for continued training, so that the model continuously evolves toward the results users need most.
For example, suppose that for a certain title, "V-neck chiffon one-piece dress 2018 new-style Korean lace-edged retro white dress airy lacing dress in spring and summer", the text generation model according to the invention generates the following 3 versions of results:
a) the one-piece dress adopts high-quality lace fabrics, and is very fresh and comfortable after being worn.
b) The one-piece dress adopts high-quality cotton and linen fabric, and is comfortable and fresh after being worn.
c) The white one-piece dress is suitable for being worn in spring and summer, adopts high-quality lace fabrics, and shows the antique atmosphere style after being worn on the body.
After these versions are put on line, if version c) has the highest click rate, subsequent training of the model will continue primarily toward version c).
Fig. 4 shows a schematic diagram of a text information generating apparatus 400 according to an embodiment of the present invention. Referring to fig. 4, the apparatus 400 includes:
a first obtaining module 410, adapted to obtain title information of a commodity;
a text generation module 420 adapted to input the title information into a text generation model to generate a plurality of description texts for the commodity;
a second obtaining module 430, adapted to obtain click information of the plurality of description texts;
the parameter adjusting module 440 is adapted to train the text generation model according to the obtained click information, so as to adjust the network parameters of the text generation model.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Claims (10)

1. A text information generating method includes:
acquiring the title information of the commodity;
inputting the title information into a text generation model to generate a plurality of description texts for the commodity;
acquiring click information of the plurality of description texts;
and training the text generation model at least according to the acquired click information so as to adjust the network parameters of the text generation model.
2. The method of claim 1, wherein the text generation model comprises a title encoder and a title decoder, the title encoder adapted to encode title information as a semantic vector for the title information, the title decoder adapted to generate a word distribution vector describing each location of text based at least on the semantic vector, and to generate a plurality of descriptive texts for the good based on the word distribution vector.
3. The method of claim 2, further comprising:
acquiring attribute information associated with the commodity;
generating an attention vector according to the attribute information;
inputting the attention vector to the title decoder to cause the title decoder to generate the word distribution vector from the semantic vector and the attention vector.
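A minimal sketch of how claim 3's attention vector might be formed from attribute information: attribute embeddings are scored against a decoder state and combined with softmax weights. The embedding function, dimensions, and decoder state here are hypothetical assumptions, not the patent's parameters:

```python
import math

# Hypothetical attention over commodity attributes (claim 3).
def embed(token, dim=4):
    """Deterministic toy embedding from character codes; values in [0, 1)."""
    return [(sum(ord(c) for c in token) % (i + 5)) / (i + 5)
            for i in range(dim)]

def attention_vector(attributes, decoder_state):
    """Softmax-weighted sum of attribute embeddings, scored against
    the current decoder state (dot-product attention)."""
    embs = [embed(a) for a in attributes]
    scores = [sum(e * s for e, s in zip(emb, decoder_state)) for emb in embs]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(decoder_state)
    return [sum(w * emb[i] for w, emb in zip(weights, embs))
            for i in range(dim)]

# Toy attributes: brand, color, size, price (illustrative values only).
attn = attention_vector(["acme", "red", "xl", "19.9"], [0.2, 0.5, 0.1, 0.7])
```

The resulting vector is a convex combination of the attribute embeddings, so it stays in the same range as the embeddings and can be fed to the decoder alongside the semantic vector.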
4. The method of claim 3, wherein the attribute information associated with the commodity includes at least one of a brand, a color, a size, and a price of the commodity.
5. The method of claim 2 or 3, wherein the training the text generation model to adjust network parameters of the text generation model according to at least the obtained click information comprises:
acquiring a target description text of the title information;
calculating a first cross entropy loss of the word distribution vector and the target description text;
calculating a second cross entropy loss between the title information and a predetermined number of the description texts having the highest click-through rates among the plurality of description texts;
and adjusting the network parameters of the text generation model by taking the sum of the first cross entropy loss and the second cross entropy loss as a loss function value.
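The claim-5 loss can be sketched as a sum of two cross-entropy terms. In this toy version both terms reuse the same word distributions, and the distributions, target indices, and per-position averaging are illustrative assumptions rather than the patent's exact formulation:

```python
import math

# Toy sketch of the two-term loss in claim 5 (all numbers illustrative).
def cross_entropy(distributions, target_indices):
    """Average negative log-likelihood of the target token at each position."""
    nll = [-math.log(dist[t]) for dist, t in zip(distributions, target_indices)]
    return sum(nll) / len(nll)

# Word distributions the decoder produced for a 3-token output.
word_dists = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
target_text = [0, 1, 2]   # token indices of the target description text
top_clicked = [0, 1, 1]   # token indices from the most-clicked generated text

first_loss = cross_entropy(word_dists, target_text)    # vs. target description
second_loss = cross_entropy(word_dists, top_clicked)   # vs. top-clicked texts
loss = first_loss + second_loss  # value used to adjust network parameters
```

Summing the two terms lets click feedback (the second term) pull generation toward descriptions users actually engage with, while the first term keeps the output anchored to the target description.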
6. The method of claim 2 or 3, wherein the generating a plurality of description texts for the commodity from the word distribution vector comprises:
and searching the word distribution vectors at each position using a beam search algorithm to generate the plurality of description texts for the commodity.
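A beam-search-style search over per-position word distributions, as in claim 6, can be sketched as follows; the beam width and the example distributions are hypothetical:

```python
import math

# Minimal beam search over per-position word distributions (claim 6).
def beam_search(distributions, beam_width=2):
    """Keep the beam_width highest log-probability sequences at each step."""
    beams = [([], 0.0)]  # (token-index sequence, cumulative log-probability)
    for dist in distributions:
        candidates = []
        for seq, score in beams:
            for idx, p in enumerate(dist):
                if p > 0:
                    candidates.append((seq + [idx], score + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams

# Hypothetical stand-ins for the decoder's per-position distributions.
dists = [
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.5, 0.4, 0.1],
]
results = beam_search(dists, beam_width=2)
```

Keeping several beams rather than greedily taking the arg-max at each position is what lets the model surface a plurality of candidate description texts, whose click rates can then feed back into training.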
7. The method of claim 2 or 3, wherein the title encoder and the title decoder employ at least one of a recurrent neural network (RNN), a gated recurrent unit (GRU), or a long short-term memory (LSTM) network.
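For claim 7, a single GRU cell update can be sketched in scalar form as below; real encoders and decoders use learned weight matrices over vectors, not the fixed toy scalars assumed here:

```python
import math

# Scalar GRU cell update (claim 7 lists GRU as one encoder/decoder option).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w=0.5, u=0.3):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(w * x + u * h)                 # update gate
    r = sigmoid(w * x - u * h)                 # reset gate (toy weights)
    h_tilde = math.tanh(w * x + u * (r * h))   # candidate hidden state
    return (1 - z) * h + z * h_tilde           # blend old and candidate state

h = 0.0
for x in [0.4, -0.2, 0.9]:  # a toy input sequence (e.g. title token features)
    h = gru_cell(x, h)
```

The gating keeps the hidden state bounded and lets the unit decide, token by token, how much of the previous state to keep, which is what makes GRU and LSTM units suitable for encoding variable-length titles.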
8. The method of claim 1, further comprising:
and sending the description text to a client for display.
9. A text information generating apparatus comprising:
a first acquisition module, adapted to acquire title information of a commodity;
a text generation module, adapted to input the title information into a text generation model to generate a plurality of description texts for the commodity;
a second acquisition module, adapted to acquire click information for the plurality of description texts;
and a parameter adjusting module, adapted to train the text generation model according to the acquired click information, so as to adjust network parameters of the text generation model.
10. A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-8.
CN201811377243.1A 2018-11-19 2018-11-19 Text information generation method and device and computing equipment Active CN111209725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811377243.1A CN111209725B (en) 2018-11-19 2018-11-19 Text information generation method and device and computing equipment

Publications (2)

Publication Number Publication Date
CN111209725A true CN111209725A (en) 2020-05-29
CN111209725B CN111209725B (en) 2023-04-25

Family

ID=70787604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811377243.1A Active CN111209725B (en) 2018-11-19 2018-11-19 Text information generation method and device and computing equipment

Country Status (1)

Country Link
CN (1) CN111209725B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157910A (en) * 2021-04-28 2021-07-23 北京小米移动软件有限公司 Commodity description text generation method and device and storage medium
CN113256379A (en) * 2021-05-24 2021-08-13 北京小米移动软件有限公司 Method for correlating shopping demands for commodities
CN115250365A (en) * 2021-04-28 2022-10-28 京东科技控股股份有限公司 Commodity text generation method and device, computer equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017162074A1 (en) * 2016-03-25 2017-09-28 阿里巴巴集团控股有限公司 Method, apparatus and device for mapping products
CN107526725A (en) * 2017-09-04 2017-12-29 北京百度网讯科技有限公司 The method and apparatus for generating text based on artificial intelligence
CN107577763A (en) * 2017-09-04 2018-01-12 北京京东尚科信息技术有限公司 Search method and device
CN107977363A (en) * 2017-12-20 2018-05-01 北京百度网讯科技有限公司 Title generation method, device and electronic equipment
CN108024005A (en) * 2016-11-04 2018-05-11 北京搜狗科技发展有限公司 Information processing method, device, intelligent terminal, server and system
CN108052512A (en) * 2017-11-03 2018-05-18 同济大学 An image description generation method based on a deep attention mechanism
CN108108449A (en) * 2017-12-27 2018-06-01 哈尔滨福满科技有限责任公司 An implementation method for a question answering system based on multi-source heterogeneous data, and such a system for the medical field
CN108319585A (en) * 2018-01-29 2018-07-24 北京三快在线科技有限公司 Data processing method and device, electronic equipment, computer-readable medium
CN108763211A (en) * 2018-05-23 2018-11-06 中国科学院自动化研究所 Automatic abstracting method and system fusing implied knowledge
US20180329883A1 (en) * 2017-05-15 2018-11-15 Thomson Reuters Global Resources Unlimited Company Neural paraphrase generator

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王帅;赵翔;李博;葛斌;汤大权;: "TP-AS: A Two-Stage Automatic Summarization Method for Long Text" *
郑雄风;丁立新;万润泽;: "A Hierarchical BGRU Model Based on User and Product Attention Mechanisms" *


Also Published As

Publication number Publication date
CN111209725B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
JP7296387B2 (en) Content generation method and apparatus
CN111784455A (en) Article recommendation method and recommendation equipment
CN107644036B (en) Method, device and system for pushing data object
CN109583970A (en) Advertisement placement method, device, computer equipment and storage medium
US9727906B1 (en) Generating item clusters based on aggregated search history data
CN111209725A (en) Text information generation method and device and computing equipment
JP6698040B2 (en) Generation device, generation method, and generation program
CN110909536A (en) System and method for automatically generating articles for a product
US11315165B2 (en) Routine item recommendations
CN109410001B (en) Commodity recommendation method and system, electronic equipment and storage medium
CN110428295A (en) Method of Commodity Recommendation and system
CN111695960A (en) Object recommendation system, method, electronic device and storage medium
CN111967924A (en) Commodity recommendation method, commodity recommendation device, computer device, and medium
JP2018013925A (en) Information processing device, information processing method, and program
CN109993619B (en) Data processing method
CN113344648B (en) Advertisement recommendation method and system based on machine learning
US20180113919A1 (en) Graphical user interface rendering predicted query results to unstructured queries
JP6037540B1 (en) Search system, search method and program
US20210233150A1 (en) Trending item recommendations
CN111797622B (en) Method and device for generating attribute information
CN116030466B (en) Image text information identification and processing method and device and computer equipment
CN111787042B (en) Method and device for pushing information
CN112069404A (en) Commodity information display method, device, equipment and storage medium
CN115511546A (en) Behavior analysis method, system, equipment and readable medium for E-commerce users
CN112184250B (en) Method and device for generating retrieval page, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant