CN111444725B - Statement generation method, device, storage medium and electronic device - Google Patents


Info

Publication number
CN111444725B
CN111444725B (application CN202010209182.9A)
Authority
CN
China
Prior art keywords
sentence
text
word
web page
keywords
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010209182.9A
Other languages
Chinese (zh)
Other versions
CN111444725A (en
Inventor
张海松
宋彦
史树明
黎婷
洪成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010209182.9A priority Critical patent/CN111444725B/en
Publication of CN111444725A publication Critical patent/CN111444725A/en
Application granted granted Critical
Publication of CN111444725B publication Critical patent/CN111444725B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a sentence generation method, a sentence generation apparatus, a storage medium, and an electronic apparatus. The method comprises the following steps: acquiring keywords input by a user on a first web page of a mobile terminal; acquiring a click operation on a first button on the first web page; generating a first sentence and a second sentence according to the keywords, wherein the first sentence comprises at least one word of the keywords, the second sentence comprises at least one word of the keywords, and the second sentence has the same number of words as the first sentence and a symmetric structure; and displaying a second web page on the mobile terminal, wherein the second web page displays the first sentence and the second sentence. The invention solves the technical problem that couplets cannot be generated automatically.

Description

Statement generation method, device, storage medium and electronic device
Technical Field
The present invention relates to the field of data processing, and in particular, to a statement generation method, apparatus, storage medium, and electronic apparatus.
Background
The existing mainstream couplet product is the computer couplet system developed by a certain company, referred to below simply as the computer couplet. From a product perspective, the mode it adopts is simple: the user manually inputs the first line and clicks the "match couplet" button, the system generates several candidate second lines, the user selects one of them and clicks the "match horizontal scroll" button, and the user then selects one of the given horizontal scrolls, thereby completing a full couplet with a horizontal scroll.
From a technical point of view, generating the second line in the computer couplet can be understood as a statistical machine translation (SMT) task, and the system uses a phrase-based statistical machine translation method to generate the second sentence. First, the user inputs the first sentence, and the system outputs a candidate set of the N best second sentences from a phrase-based SMT decoder. Next, a set of filters eliminates the candidates that violate linguistic constraints. Finally, a ranking support vector machine sorts the remaining candidates.
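The filter-then-rank pipeline described above can be illustrated with a minimal Python sketch. Everything here is a toy stand-in: the candidate sentences, the length-based filter, and the character-diversity scoring function are illustrative assumptions, not the actual SMT decoder or ranking SVM.

```python
def filter_candidates(first_sentence, candidates):
    """Keep only candidates satisfying a simple linguistic constraint:
    same length as the first sentence (a toy stand-in for the filter set)."""
    return [c for c in candidates if len(c) == len(first_sentence)]

def rank(candidates, score_fn):
    """Stand-in for the ranking SVM: sort candidates by a scoring function."""
    return sorted(candidates, key=score_fn, reverse=True)

# Illustrative example: one 6-character candidate violates the length constraint.
first = "海内存知己"
n_best = ["天涯若比邻", "天涯共此时了", "万里尚为邻"]
valid = filter_candidates(first, n_best)              # drops the 6-char candidate
best = rank(valid, score_fn=lambda c: len(set(c)))[0]  # toy diversity score
```

A real ranking model would score fluency and parallelism; the point here is only the decode-filter-rank structure.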
However, this method of generating couplets requires the user's participation at multiple steps and cannot generate a couplet automatically.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present invention provide a sentence generation method, apparatus, storage medium, and electronic apparatus, so as to at least solve the technical problem that couplets cannot be generated automatically.
According to an aspect of the embodiments of the present invention, there is provided a sentence generation method, including: acquiring a target text; selecting a first text and a second text from the target text, wherein the first text comprises at least one word of the target text and the second text comprises at least one word of the target text; generating a first sentence according to the first text, wherein the generated first sentence comprises the first text; and generating a second sentence according to the second text and the first sentence, wherein the generated second sentence comprises the second text, and the second sentence has the same number of words as the first sentence and a symmetric structure.
According to another aspect of the embodiments of the present invention, there is further provided a sentence generation apparatus, including: an acquisition unit configured to acquire a target text; a selection unit configured to select a first text and a second text from the target text, wherein the first text comprises at least one word of the target text and the second text comprises at least one word of the target text; a first generating unit configured to generate a first sentence according to the first text, wherein the generated first sentence comprises the first text; and a second generating unit configured to generate a second sentence according to the second text and the first sentence, wherein the generated second sentence comprises the second text, and the second sentence has the same number of words as the first sentence and a symmetric structure.
According to an aspect of the embodiments of the present invention, there is also provided a sentence generation method, including: acquiring keywords input by a user on a first web page of a mobile terminal; acquiring a click operation on a first button on the first web page; generating a first sentence and a second sentence according to the keywords, wherein the first sentence comprises at least one word of the keywords, the second sentence comprises at least one word of the keywords, and the second sentence has the same number of words as the first sentence and a symmetric structure; and displaying a second web page on the mobile terminal, wherein the first sentence and the second sentence are displayed on the second web page.
According to an aspect of embodiments of the present invention, there is provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above-mentioned method when executed.
According to an aspect of the embodiments of the present invention, there is provided an electronic apparatus, including a memory and a processor, the memory having a computer program stored therein, the processor being configured to execute the above method via the computer program.
In the embodiments, the first text and the second text are selected from the target text, the first sentence is generated from the first text, and the second sentence is generated from the first sentence and the second text, so that the generated first and second sentences have the same number of words and a symmetric structure. Because no user intervention is needed while generating the first and second sentences, the technical problem that couplets cannot be generated automatically is solved, achieving the technical effect of automatic couplet generation. Moreover, whereas couplet generation in the prior art can only produce the second line from a first line input by the user, here the upper and lower lines can both be generated automatically from vocabulary supplied by the user, enriching the diversity of couplets.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method of generating statements according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a home page for generating a couplet according to an embodiment of the invention;
FIG. 4 is a schematic diagram of generating an animation of a couplet, according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a sample of generating couplets, according to an embodiment of the invention;
FIG. 6 is a schematic diagram of generating input keywords for couplets, according to an embodiment of the invention;
FIG. 7 is a schematic diagram showing generated couplets, according to an embodiment of the invention;
FIG. 8 is a schematic diagram of a sharing and antithetical interface according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of generating a statement from words, according to an embodiment of the invention;
FIG. 10 is a schematic diagram of a generative model according to an embodiment of the invention;
FIG. 11 is a flow diagram of generating couplet logic according to an embodiment of the invention;
FIG. 12 is a schematic diagram of a server architecture according to an embodiment of the invention;
FIG. 13 is a flow chart of a server according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of a sentence generation apparatus according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Definition of terms:
Couplet: one of the traditional Chinese art forms, also called spring couplet, pair, or antithetical couplet, written on paper or cloth or carved on bamboo, wood, pillars, and the like. A couplet is concise yet profound, neat and orderly in structure, and harmonious in tone; its two lines have the same number of characters and the same structure. It is an artistic form unique to the Chinese language.
Head-hidden couplet: the two keywords input by the user are hidden at the beginning positions of the upper and lower lines respectively, and reading the first characters of the two lines together conveys a particular thought of the author, forming a personalized head-hidden couplet. By form, head-hidden couplets can be divided into general head-hidden couplets, blessing head-hidden couplets, name head-hidden couplets, and so on.
Intelligent head-hidden couplet: using AI to learn the human skill of couplet writing so as to create head-hidden couplets automatically. Also called artificial intelligence (AI) head-hidden couplets, intelligent head-hidden spring couplets, AI head-hidden spring couplets, and so on; the term "intelligent head-hidden couplet" is used uniformly herein.
According to an aspect of an embodiment of the present invention, a sentence generation method is provided. In the present embodiment, the sentence generation method can be applied to a hardware environment formed by the terminal 101, the terminal 102, and the server 103 shown in fig. 1. As shown in fig. 1, the terminals 101 and 102 are connected to the server 103 through a network; the terminal 101 may be a mobile phone terminal, and the terminal 102 may be a PC, notebook, or tablet terminal. The server 103 may generate a sentence according to an instruction from a terminal and return the sentence to the terminal.
Fig. 2 is a flowchart of a sentence generation method according to an embodiment of the present invention. As shown in fig. 2, the generation method of the statement includes:
s202, acquiring a target text. The target text can be Chinese characters or English characters input by the user, the number of the Chinese characters or the English characters input by the user is not limited, and 2 to 4 Chinese characters or English characters can be input in general. The target text may be pure chinese, pure english, or a combination of chinese and english. The target text may be one word or a combination of words, and a word may be one word or a plurality of words. For example, "like" is a word of one word, "beautiful" and "very good" are words of multiple words.
S204, selecting a first text and a second text from the target texts, wherein the first text comprises at least one word in the target texts, and the second text comprises at least one word in the target texts.
When the target text is a two-character word, the first text and the second text are each one character. When the target text has three or more characters, the first text and the second text may each be any two or more characters of the target text. The first text and the second text may have the same or different numbers of characters. For example, if the target text is "the four seas and eight directions", the first text and second text may be "eight" and "directions" respectively, or "four seas" and "eight directions" respectively. If the target text is "very good", the first text and second text may be "very" and "good" respectively. The number of characters of the first and second texts is not limited, but is generally the same.
When the target text is the english word "good day", the first text and the second text may be "good" and "day", respectively, and the first sentence and the second sentence generated are "good good study" and "day day up", respectively.
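The text-selection step above can be sketched minimally, assuming the simple case where the target text is split in half into the first and second texts (the patent also allows other selections):

```python
def split_keywords(target_text):
    """Split the target text in half to obtain the first and second texts,
    e.g. a 4-character input yields two 2-character texts. This even split
    is a simplifying assumption; the embodiment permits other choices."""
    mid = len(target_text) // 2
    return target_text[:mid], target_text[mid:]
```

For instance, `split_keywords("四海八方")` gives `("四海", "八方")`, and a two-character input yields one character per text.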
S206, generating a first sentence according to the first text, wherein the generated first sentence comprises the first text.
Optionally, generating a first sentence from the first text comprises: setting the position of the first text in the first sentence; inputting the first text into a neural network language model, wherein the neural network language model is obtained by training according to couplet samples and/or poetry samples; and acquiring the first sentence output by the neural network language model, wherein the first text is positioned at the position in the first sentence.
The system determines the position of the first text in the first sentence according to a preset setting. Setting the position of the first text in the first sentence comprises setting the position to any one of the following: the starting text of the first sentence (i.e., its first word), an intermediate text of the first sentence, or the ending text of the first sentence (i.e., its last word).
The position of the first text in the first sentence may be the position of the starting text, the position of an intermediate text, or the position of the ending text. In general, the position of the second text corresponds to that of the first text: once the position of the first text in the first sentence is determined, the position of the second text in the second sentence is determined. That is, when the first text is the starting text of the first sentence, the second text is the starting text of the second sentence; when the first text is an intermediate text of the first sentence, the second text is an intermediate text of the second sentence; and when the first text is the ending text of the first sentence, the second text is the ending text of the second sentence. Moreover, when the first and second texts are both intermediate texts, the position of the second text in the second sentence is the same as the position of the first text in the first sentence; for example, if the first text is the 4th word of the first sentence, the second text is the 4th word of the second sentence. It should be noted that the two positions may also differ: for example, the first text may be at the beginning of the first sentence while the second text is at the end of the second sentence, or the first text may be the 2nd word of the first sentence while the second text is the 3rd word of the second sentence. This embodiment does not limit the positions of the first and second texts.
After determining the location of the first text, the first text is input into a neural network language model, which outputs a first sentence. In the generated first sentence, the first text is positioned at a preset position. The neural network language model is obtained by training according to couplet samples and/or poetry samples.
In this embodiment, when training the neural network language model, the corpus is collected by crawling couplet data from across the Internet. The collected corpus often contains impurities such as punctuation and special symbols, and its formats are inconsistent; therefore a series of data operations must be performed on the collected couplet data in its various formats, including data cleaning, format normalization, removal of sensitive words, conversion of traditional characters to simplified characters, and deduplication, finally yielding training data that conforms to the couplet form.
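The cleaning steps above can be sketched as follows. This is a simplified illustration under stated assumptions: only punctuation/symbol stripping, sensitive-word filtering, and deduplication are shown; format normalization and traditional-to-simplified conversion are omitted.

```python
import re

def clean_corpus_line(line, sensitive_words=()):
    """Strip everything except Chinese characters, and drop lines that
    contain a sensitive word or end up empty (returns None in those cases)."""
    line = re.sub(r"[^\u4e00-\u9fff]", "", line)  # remove punctuation/symbols
    if any(w in line for w in sensitive_words):
        return None
    return line or None

def dedupe(lines):
    """Remove empty entries and duplicates while preserving order."""
    seen = set()
    return [l for l in lines if l and not (l in seen or seen.add(l))]
```

For example, `clean_corpus_line("春回大地！")` yields `"春回大地"`, and `dedupe` collapses repeated lines that often appear in crawled corpora.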
A large number of ancient poems and couplets are used as training samples; starting from the most basic rules, the model gradually learns the matching transformation from the first line to the second line, and so on. In this embodiment, a large number of couplets are specially screened from the couplet corpus for dedicated small-sample learning, so as to learn the auspicious words, imagery, and expressions commonly used in the specific context of the Spring Festival. By learning a huge word stock, such as a high-frequency word stock, the model can automatically judge what content should follow a given word, making the sentences smoother and the semantics more fluent.
This embodiment can generate couplets suited to different situations for different application scenarios; for example, spring couplets are generated for the Spring Festival, couplets generated for some festivals express blessing and health, couplets generated for the Mid-Autumn Festival can express the meaning of Mid-Autumn reunion, and so on. Different neural network language models can be used to generate couplets for different scenes and festivals; during training, the models are differentiated using small samples, with different small samples selected for training in different scenarios.
S208, generating a second sentence according to the second text and the first sentence, wherein the generated second sentence comprises the second text, and the second sentence has the same word number and a symmetrical structure with the first sentence.
After the first sentence is generated, a second sentence is generated from the second text and the first sentence; the generated second sentence has the same number of words as the first sentence and a symmetric structure. Structural symmetry includes matching parts of speech of the words at the same positions and prosodic harmony.
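The symmetry constraint (equal length, matching part of speech position by position) can be expressed as a small check. The part-of-speech table here is a toy assumption; a real system would use a POS tagger and would also check prosody.

```python
def is_symmetric(first, second, pos_of):
    """Check the symmetry described above: equal length and matching part
    of speech at each position. `pos_of` is a toy character-to-POS lookup;
    characters missing from the table are treated as matching (a known
    simplification of this sketch)."""
    if len(first) != len(second):
        return False
    return all(pos_of.get(a) == pos_of.get(b) for a, b in zip(first, second))

# Illustrative POS table (assumption, not from the patent).
pos = {"天": "noun", "地": "noun", "高": "adj", "阔": "adj"}
```

With this table, "天高" and "地阔" are symmetric (noun-adj against noun-adj), while a length mismatch or a noun paired against an adjective fails the check.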
In one embodiment, the first sentence and the second sentence form the upper and lower lines of a couplet. When the first text is the starting text of the first sentence and the second text is the starting text of the second sentence, the couplet formed by the two sentences is a head-hidden couplet; that is, the first characters of the upper and lower lines of the head-hidden couplet form a phrase. The user can also set the position at which the target text is displayed in the couplet to obtain a personalized, customized couplet.
Optionally, generating a second sentence according to the second text and the first sentence includes: inputting the second text and the first sentence into a generative model, wherein the generative model is used for generating a second sentence which has the same word number and symmetrical structure as the first sentence, and the position of the second text in the second sentence is the same as that of the first text in the first sentence; and acquiring the second statement output by the generative model.
The first sentence is generated from a word; the second sentence is generated from a word together with the already generated first sentence, and may be produced by a generative model. The generative model may be a sequence-to-sequence model with an attention mechanism, or a sequence-to-sequence model with a memory mechanism.
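The input/output contract of the generative model (an output of the same length as the first sentence, with the second text forced into its position) can be mimicked with a toy function. This is not a sequence-to-sequence model; the character-pairing table is an illustrative stand-in for the learned decoder.

```python
def generate_second_sentence(first_sentence, second_text, position, pair_table):
    """Toy stand-in for the generative model: emit one paired character per
    input character (guaranteeing equal length), then overwrite the slot at
    `position` with `second_text`. `pair_table` is an illustrative mapping."""
    out = [pair_table.get(ch, ch) for ch in first_sentence]
    out[position:position + len(second_text)] = list(second_text)
    return "".join(out)

# Illustrative pairing table (assumption, not learned weights).
pair = {"天": "地", "高": "阔", "风": "雨"}
```

For a first sentence "天高风" and second text "雪" at position 0, this yields "雪阔雨": the same length as the input, with the second text at the required position, which is exactly the constraint the real model must satisfy.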
In this embodiment, the first text and the second text are selected from the target text, the first sentence is generated from the first text, and the second sentence is generated from the first sentence and the second text, so that the generated first and second sentences have the same number of words and a symmetric structure. Because no user intervention is needed while generating the first and second sentences, the technical problem that couplets cannot be generated automatically is solved, achieving the technical effect of automatic couplet generation. Moreover, whereas couplet generation in the prior art can only produce the second line from a first line input by the user, here the upper and lower lines can both be generated automatically from vocabulary supplied by the user, enriching the diversity of couplets.
The following describes a sentence generating method according to the present embodiment with reference to fig. 3 to 8 and fig. 11.
1. The user opens the AI New Year scroll H5 page. The H5 page is a web page for mobile terminals, as shown in fig. 3. The user may open the H5 page through an instant messaging application or a browser; the user first sees the large "fu" (blessing) character of fig. 3.
2. Chinese-style animation elements are displayed. The user clicks "click to open" in fig. 3 to jump to the animation page shown in fig. 4, which displays Chinese-style animation elements. If no skip operation is performed, the animation page plays the opening scene animation.
3. A sample couplet is displayed. If "skip" is selected, or after the animation finishes playing, the page displays a head-hidden couplet sample, as shown in fig. 5 (the page shown in fig. 5 can be regarded as a fourth web page).
4. The user enters keywords. After the user clicks "write my spring couplet" in fig. 5 (this button can be regarded as the sixth button), the page jumps to one waiting for the user to enter keywords, as shown in fig. 6 (the page shown in fig. 6 can be regarded as the first web page). On this page the user can input 2 to 4 keywords, such as a name, a company name, or blessing words.
5. A head-hidden couplet is generated. Clicking the "next" button shown in fig. 6 (which can be regarded as the first button) generates a corresponding head-hidden couplet from the keywords input by the user, as shown in fig. 7 (the page shown in fig. 7 can be regarded as a second web page). The upper and lower lines of the head-hidden couplet can be regarded as a first sentence and a second sentence: the first sentence comprises at least one word of the keywords, the second sentence comprises at least one word of the keywords, and the second sentence has the same number of words as the first sentence and a symmetric structure. The page shown in fig. 7 displays two buttons. If the user is not satisfied, the user can click "change another" (which can be regarded as the second button) to update the couplet and generate a new one; the upper and lower lines of the new couplet can be regarded as a fourth sentence and a fifth sentence, where the fourth sentence comprises at least one word of the keywords, the fifth sentence comprises at least one word of the keywords, and the fourth and fifth sentences have the same number of words and a symmetric structure. If the displayed couplet is a satisfactory spring couplet, the user may click "that's it" on the page shown in fig. 7 (which can be regarded as a third button) to jump to the new page of fig. 8 (the page shown in fig. 8 can be regarded as a third web page).
6. The head-hidden couplet is sent. The user may click the "long press to save blessing" button on the page of fig. 8 (which can be regarded as the fourth button) to save the page of fig. 8 to the mobile phone or send it directly to a friend through an application. To write a couplet for a friend, the user may click "write another pair for a friend" on the page of fig. 8 (which can be regarded as the fifth button) and jump back to the keyword-input page shown in fig. 6; repeating the above process generates a new couplet.
Alternatively, the keywords may be regarded as the target text shown in fig. 2, and the first sentence and the second sentence can then be generated from the keywords through the steps shown in fig. 2 and the related description. For ease of understanding, generating the first sentence and the second sentence from the keywords includes: selecting a first text and a second text from a target text, wherein the keywords are the target text, the first text comprises at least one word of the target text, and the second text comprises at least one word of the target text; generating the first sentence from the first text, wherein the generated first sentence includes the first text; and generating the second sentence from the second text and the first sentence, wherein the generated second sentence includes the second text.
Optionally, the generating the first sentence according to the first text includes: setting the position of the first text in the first sentence; inputting the first text into a neural network language model, wherein the neural network language model is used for generating a sentence related to the input text according to the input text; and acquiring the first sentence output by the neural network language model, wherein the first text is positioned at the position in the first sentence.
Optionally, the generating the second sentence according to the second text and the first sentence includes: inputting the second text and the first sentence into a generative model, wherein the generative model is used for generating a second sentence with the same word number and symmetrical structure as the first sentence, and the position of the second text in the second sentence is the same as the position of the first text in the first sentence; and acquiring the second statement output by the generative model.
Optionally, after the second sentence is generated from the second text and the first sentence, the method further includes: generating a third sentence (for example, a horizontal scroll) from the first sentence and the second sentence, wherein the third sentence matches the semantics of the first sentence and the second sentence.
In this embodiment, generating the first sentence from the first text may be performed by a neural network language model; fig. 9 is a schematic diagram of generating a sentence from a word according to an embodiment of the present invention. As shown in fig. 9, after the character "not" is obtained, it is input into the neural network language model, which determines the probability of each possible next character; if "known" has the maximum probability, it is output as the second character. The two characters "not known" are then taken as the input of the model to determine the probability of the next character; "day" has the maximum probability and is output. This is repeated until the end of the sentence is reached, completing one sentence.
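The greedy decoding loop of fig. 9 can be sketched in a few lines. The next-character probability table here is a toy stand-in for the neural network language model.

```python
def greedy_generate(prefix, next_char_probs, max_len=7):
    """Greedy decoding as in fig. 9: repeatedly append the most probable
    next character given the text so far. `next_char_probs` maps a prefix
    to a {char: probability} table (a toy stand-in for the language model);
    generation stops when no continuation is known or max_len is reached."""
    sent = prefix
    while len(sent) < max_len:
        probs = next_char_probs.get(sent)
        if not probs:
            break
        sent += max(probs, key=probs.get)
    return sent

# Toy probability tables mimicking the "not" -> "known" -> "day" example.
probs = {"不": {"知": 0.6, "是": 0.4}, "不知": {"天": 0.7, "何": 0.3}}
```

Starting from "不", the sketch picks "知" (0.6 > 0.4), then "天" (0.7 > 0.3), and stops when the table has no entry for the grown prefix.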
After the first sentence is obtained, a sequence-to-sequence generative model is employed to generate the second sentence. FIG. 10 is a schematic diagram of a generative model according to an embodiment of the invention. As shown in fig. 10, the first sentence serves as the input sequence and the model emits the second sentence as the output sequence (the figure illustrates this with a concrete couplet pair).
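The input/output contract of that second-line generator can be sketched as below. The `toy_pair` lookup table is an invented stand-in for the sequence-to-sequence decoder; only the shape of the computation (fixed first character, one output character per input character) reflects the description above.

```python
def generate_lower_line(upper_line, second_char, pair_model):
    """Produce a lower line with the same character count as the upper
    line. `pair_model(upper_line, pos, prefix)` stands in for the
    seq2seq decoder step; here it is any callable returning the next
    character."""
    lower = [second_char]  # hidden-head constraint: the first character is fixed
    for pos in range(1, len(upper_line)):
        lower.append(pair_model(upper_line, pos, lower))
    return "".join(lower)

# Toy stand-in: pair each upper-line character via a fixed table.
PAIRS = {"B": "b", "C": "c"}

def toy_pair(upper, pos, prefix):
    return PAIRS[upper[pos]]

print(generate_lower_line("ABC", "x", toy_pair))  # "xbc"
```

The real decoder would condition on the whole upper line through attention rather than pairing characters independently.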
The hidden-text couplet in this embodiment may embed the keywords themselves in the couplet, may embed the meanings of the keywords in the couplet, or may combine both approaches. When the meanings of the keywords are embedded, other words with similar meanings are generated from the selected words, and those similar words are displayed in the couplet to express the meanings of the selected words.
Optionally, after the second sentence is generated from the second text and the first sentence, the method further includes: generating a third sentence from the first sentence and the second sentence, wherein the third sentence matches the semantics of the first sentence and the second sentence.
After the upper and lower lines of the couplet are generated, the horizontal scroll can also be generated automatically. Given the generated hidden-head upper and lower lines, a matching horizontal scroll is selected by retrieval and semantic similarity calculation, so that the selected scroll corresponds to the hidden-head couplet: the scroll whose semantics match those expressed by the upper and lower lines is retrieved.
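The retrieval-plus-similarity selection can be sketched as follows. The candidate names and embedding vectors are invented for illustration; the patent does not specify the encoder, so a trained sentence encoder is assumed to produce the vectors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pick_scroll(couplet_vec, candidates):
    """Retrieve the candidate horizontal scroll whose embedding is most
    similar to the couplet's embedding, as described above."""
    return max(candidates, key=lambda name: cosine(couplet_vec, candidates[name]))

# Hypothetical scroll library with toy 3-dimensional embeddings.
SCROLLS = {
    "spring": [1.0, 0.1, 0.0],
    "harvest": [0.0, 1.0, 0.2],
}
print(pick_scroll([0.9, 0.2, 0.1], SCROLLS))  # "spring"
```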
Optionally, after the second sentence is generated from the second text and the first sentence, the method further includes: receiving an update instruction instructing that the first sentence and the second sentence be updated; and displaying a fourth sentence and a fifth sentence according to the update instruction, wherein the fourth sentence and the fifth sentence have the same word count and a symmetrical structure. That is, the fourth and fifth sentences are the updated couplet. If the couplet formed by the first and second sentences is a hidden-head couplet, the couplet formed by the fourth and fifth sentences is also a hidden-head couplet, and the first text and second text embedded in the fourth and fifth sentences are the same as those embedded in the first and second sentences. It should be noted that the third sentence is also updated when the fourth and fifth sentences are updated; that is, the horizontal scroll is updated along with the couplet.
The sentence generation method of the embodiment mainly includes the following functions:
1. The embodiment of the invention automatically generates personalized couplets. Taking as an example a user inputting two characters to generate a hidden-head couplet: the first character must be hidden at the head of the upper line, so the head of the upper line is fixed, and the system generates an upper line beginning with that character using the neural network language model. The quality of the upper line directly affects the quality of the lower line, and in turn the quality of the whole couplet. The embodiment therefore does not require the user to manually input a complete upper line; instead, the hidden-head upper line is generated automatically from the first character, ensuring the quality of the generated upper line. In addition, the length of the hidden-head upper line is flexibly configurable, from 5 to 11 Chinese characters, enriching the diversity of the generated upper lines. It is further noted that couplets are traditionally written right to left, but to suit the modern left-to-right reading habit, the modern order is used for typesetting; of course, the layout can also be changed from left-to-right back to right-to-left.
2. After the hidden-head upper line is determined, the system generates the hidden-head lower line using a sequence-to-sequence generation model with an attention mechanism, conditioned on the hidden-head upper line and the first character of the lower line. Through continuous training, the generation model learns the special expression forms required between the upper and lower lines of a couplet, such as parallel antithesis, coordinated prosody, equal length, and the level-and-oblique (ping-ze) tonal rules. Given the same hidden-head upper line, the model can generate not just one but several different hidden-head lower lines, greatly enriching the diversity of the couplets.
3. After the upper and lower lines are completed, a matching horizontal scroll is selected for the generated hidden-head couplet by retrieval and semantic similarity calculation, forming the horizontal scroll corresponding to the couplet. Generation of the upper line, the lower line and the horizontal scroll works as a whole: after the user inputs the keywords, a complete hidden-head couplet with its horizontal scroll is displayed directly.
4. If the user is not satisfied with the generated couplet, a different couplet can be obtained through the "change for another" function. Because each character generated by the generation model has a list of candidates, the system can produce many distinct complete hidden-head couplets, which guarantees diversity and caters to the preferences of different users. It should be noted that, after receiving the user's instruction to change the couplet, the system may either generate a new couplet with the neural network language model and the generation model, or generate several couplets at once and, upon each change instruction, randomly select one of them for display.
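The "change for another" selection from a pre-generated candidate pool, avoiding repeats, can be sketched as follows; the pool contents are placeholders, and the real system would populate them from the generation model's candidate lists.

```python
import random

def change_one(candidates, shown):
    """Return a couplet the user has not seen yet, tracking shown
    couplets in the `shown` set; return None when the pool is
    exhausted."""
    remaining = [c for c in candidates if c not in shown]
    if not remaining:
        return None
    choice = random.choice(remaining)  # random pick among unseen candidates
    shown.add(choice)
    return choice

pool = ["couplet-1", "couplet-2", "couplet-3"]
shown = set()
results = {change_one(pool, shown) for _ in pool}
print(results == set(pool))  # True: three requests yield three distinct couplets
```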
The server architecture of this embodiment is shown in fig. 12. The user interacts with the front-end H5 page, whose delivery is accelerated through a content delivery network (CDN) to speed up user access, and communicates with the back-end server through that page. The front-end H5 module is mainly responsible for displaying the pages of the associated service and for the user-experience logic; given the large volume of concurrent access, CDN acceleration of the front-end service is also involved. On the back-end deployment side, to increase the overall concurrent access capacity of the back-end service, this embodiment adopts load balancing, which enhances the data processing capacity of the network, improves its flexibility and availability, and expands the server bandwidth; the load-balancing server runs in a dual-machine hot-standby configuration to ensure high availability of the back-end service.
The server is provided with an input preprocessing module, a couplet generation module and a sensitive information detection module. The input preprocessing module removes punctuation, special symbols and similar content from the input characters. The couplet generation module is the core module; it mainly uses a neural network language model and a sequence-to-sequence generation model, so that the generated couplets exhibit clear antithesis, level-and-oblique tonal prosody and the other characteristics of couplets. The sensitive information detection module checks whether the content input by the user, or the couplet produced by the couplet generation module, contains sensitive material.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to another aspect of the embodiments of the present invention, there is also provided a sentence generation apparatus for implementing the sentence generation method, as shown in fig. 14, the apparatus including:
an acquisition unit 92 configured to acquire a target text. The target text can be Chinese characters or English words input by the user; the amount of input is not limited, and typically 2 to 4 Chinese characters or English words are input. The target text may be purely Chinese, purely English, or a mixture of Chinese and English. The target text may be a single word or a combination of words, and a word may consist of one or several characters. For example, "like" is a one-character word, while "beautiful" and "very good" are multi-character words.
A selecting unit 94, configured to select a first text and a second text from the target texts, where the first text includes at least one word in the target texts, and the second text includes at least one word in the target texts;
When the target text is a two-character word, the first text and the second text are one character each. When the target text has 3 or more characters, the first text and the second text may each be any one or more characters of the target text. The character counts of the first text and the second text may be the same or different. For example, if the target text is "eight directions in the four seas", the first text and second text may be "eight" and "directions", or "four seas" and "eight directions", respectively. When the target text is "very good", the first text and the second text may be "very" and "good", respectively. The sizes of the first text and the second text are not limited, but their character counts are generally the same.
When the target text is the English phrase "good day", the first text and the second text may be "good" and "day", respectively, and the generated first and second sentences may be "good good study" and "day day up", respectively.
A first generating unit 96, configured to generate a first sentence according to the first text, where the generated first sentence includes the first text;
the system is set up in advance to determine the position of the first text in the first sentence. Setting a position of the first text in the first sentence comprises: setting the position of the first text in the first sentence to be any one of the following positions: the method comprises the steps of obtaining a first sentence, wherein the first sentence is a first word of the first sentence, obtaining a second sentence of the first sentence, obtaining a first starting text of the first sentence, obtaining a second sentence, obtaining a first middle text of the first sentence, obtaining a second middle text of the second sentence, obtaining a second middle text of the first sentence, obtaining a second middle text of the second sentence, obtaining a third sentence, obtaining a fourth sentence, and obtaining a fourth sentence.
The position of the first text in the first sentence may be a position where the starting text is located, a position where the intermediate text is located, and a position where the ending text is located. The position of the second text is the same as the position of the first text, and after the position of the first text in the first sentence is determined, the position of the second text in the second sentence is determined. That is, when the first text is the starting text of the first sentence, the second text is the starting text of the second sentence; when the first text is the intermediate text of the first sentence, the second text is the intermediate text of the second sentence; when the first text is the end text of the first sentence, the second text is the end text of the second sentence. And, when the first text and the second text are both intermediate texts, the position of the second text in the second sentence is the same as the position of the first text in the first sentence. For example, the first text is in the position of the 4 th word in the first sentence, and the second text is in the position of the 4 th word in the second sentence. It should be noted that the position of the first text in the first sentence and the position of the second text in the second sentence may also be different positions, for example, the first text is at the beginning of the first sentence, the second text is at the end of the second sentence, or the first text is the 2 nd word in the first sentence, and the second text is the 3 rd word in the second sentence, and the positions of the first text and the second text are not limited in this embodiment.
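The position rules above (start, middle, end, with the second text normally mirroring the first) can be sketched as a small helper. The function names and the choice of centering the "middle" position are illustrative assumptions; the patent only requires that the position be configurable.

```python
def keyword_index(position, length, width=1):
    """Index at which a keyword of `width` characters sits in a line of
    `length` characters, for the three positions discussed above."""
    if position == "start":
        return 0
    if position == "end":
        return length - width
    if position == "middle":
        return (length - width) // 2  # assumed: centered placement
    raise ValueError(position)

def embeds_keyword(line, keyword, position):
    """Check that `line` contains `keyword` at the requested position."""
    i = keyword_index(position, len(line), len(keyword))
    return line[i:i + len(keyword)] == keyword

print(embeds_keyword("Xbcde", "X", "start"))  # True
print(embeds_keyword("abcdX", "X", "end"))    # True
```

The mirrored constraint for the second sentence is then simply `embeds_keyword(second_sentence, second_text, same_position)`.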
After determining the location of the first text, the first text is input into a neural network language model, which outputs a first sentence. In the generated first sentence, the first text is positioned at a preset position. The neural network language model is obtained by training according to couplet samples and/or poetry samples.
In this embodiment, the corpus for training the neural network language model is collected by crawling couplet data across the internet. The collected corpus often contains noise such as punctuation and special symbols, and its formats are inconsistent; the couplet data in its various formats therefore undergoes a series of data operations such as cleaning, format normalization, sensitive word removal, traditional-to-simplified character conversion and de-duplication, finally yielding training data in proper couplet form.
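The cleaning steps named above can be sketched as one pipeline. The sensitive word list and the two-entry traditional-to-simplified table are placeholders; a real pipeline would use a full conversion table (e.g. one from the OpenCC project) and a curated word list.

```python
import re

SENSITIVE = {"badword"}            # placeholder sensitive-word list
T2S = {"萬": "万", "歲": "岁"}     # tiny traditional→simplified map (illustrative)

def clean_corpus(raw_lines):
    """Strip punctuation/special symbols, convert traditional characters
    to simplified, drop lines containing sensitive words, and
    de-duplicate while preserving order."""
    seen, out = set(), []
    for line in raw_lines:
        line = re.sub(r"[^\w]", "", line)            # remove punctuation & symbols
        line = "".join(T2S.get(ch, ch) for ch in line)
        if not line or line in seen:
            continue                                  # empty or duplicate
        if any(w in line for w in SENSITIVE):
            continue                                  # sensitive content
        seen.add(line)
        out.append(line)
    return out

print(clean_corpus(["萬事如意！", "萬事如意！", "ok, fine"]))  # ['万事如意', 'okfine']
```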
A large number of ancient poems and couplets are used as training samples; starting from the most basic rules, the model gradually learns the matching transformation from upper line to lower line, and so on. In this embodiment, a large number of couplets are specially screened from the couplet corpus for dedicated small-sample learning, so as to learn the words, imagery and expressions commonly used in the specific setting of the Spring Festival. By learning from a huge word stock, such as a high-frequency word stock, the embodiment can automatically judge what content should follow a given character, making the sentence more fluent and the semantics smoother.
This embodiment can generate couplets suited to different occasions for different application scenarios: for example, a Spring Festival couplet for the Spring Festival, a couplet expressing blessing and health for the Dragon Boat Festival, a couplet expressing mid-autumn reunion for the Mid-Autumn Festival, and so on. Different neural network language models can be used to generate couplets for different scenes and festivals; during training, these models are differentiated through small-sample learning, with different small samples selected for different scenarios.
A second generating unit 98, configured to generate a second sentence according to the second text and the first sentence, wherein the generated second sentence includes the second text, and the second sentence has the same word number and a symmetrical structure with the first sentence.
After the first sentence is generated, the second sentence is generated from the second text and the first sentence; the generated second sentence has the same word count as the first sentence and a symmetrical structure. Structural symmetry means that words at the same position have the same part of speech and coordinated prosody.
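The two symmetry constraints just stated (equal length, matching part of speech position by position) can be checked as follows. The character-to-tag map is a toy stand-in; a real system would use a POS tagger, and prosody coordination is omitted here.

```python
def symmetric(upper, lower, pos_tags):
    """True when the two lines have equal character counts and matching
    part-of-speech tags at every position."""
    if len(upper) != len(lower):
        return False
    return all(pos_tags.get(u) == pos_tags.get(l) for u, l in zip(upper, lower))

# Toy tag map: 山/水 are nouns, 高/深 are adjectives.
TAGS = {"山": "n", "水": "n", "高": "adj", "深": "adj"}
print(symmetric("山高", "水深", TAGS))  # True
print(symmetric("山高", "水水", TAGS))  # False: the adjective slot holds a noun
```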
In one embodiment, the first sentence and the second sentence form the upper and lower lines of a couplet. When the first text is the starting text of the first sentence and the second text is the starting text of the second sentence, the couplet they form is a hidden-head couplet; that is, the first characters of the upper and lower lines of a hidden-head couplet together form a phrase. The user can also set the position at which the target text is displayed in the couplet, obtaining a personalized, customized couplet.
Optionally, the second generating unit includes: a second input module, configured to input the second text and the first sentence into a generative model, where the generative model is used to generate a second sentence with a same word number and a symmetrical structure as the first sentence, and a position of the second text in the second sentence is the same as a position of the first text in the first sentence; and the second acquisition module is used for acquiring the second statement output by the generative model.
The first sentence is generated from a single word, while the second sentence is generated from a word together with an already-generated sentence, so the second sentence may be produced with a generative model. The generative model may be a sequence-to-sequence model with an attention mechanism, or a sequence-to-sequence model with a memory mechanism.
In this embodiment, the first text and the second text are selected from the target text, the first sentence is generated from the first text, and the second sentence is generated from the first sentence and the second text, so that the two generated sentences have the same word count and a symmetrical structure. Because no user intervention is needed while the sentences are generated, the technical problem that couplets cannot be generated automatically is solved, achieving the technical effect of automatic couplet generation. Moreover, whereas prior-art couplet generation could only produce the lower line from a complete upper line input by the user, here both the upper and lower lines are generated automatically from the vocabulary the user supplies, enriching the diversity of the couplets.
The hidden-text couplet in this embodiment may embed the keywords themselves in the couplet, may embed the meanings of the keywords in the couplet, or may combine both approaches. When the meanings of the keywords are embedded, other words with similar meanings are generated from the selected words, and those similar words are displayed in the couplet to express the meanings of the selected words.
Optionally, the first generating unit includes: the setting module is used for setting the position of the first text in the first sentence; the first input module is used for inputting the first text into a neural network language model, wherein the neural network language model is obtained by training a couplet sample and/or a poetry sample; a first obtaining module, configured to obtain the first sentence output by the neural network language model, where the first text is located at the position in the first sentence.
Optionally, the setting module includes a setting submodule, configured to set the position of the first text in the first sentence to any one of the following positions: the starting text of the first sentence (the first word), an intermediate text of the first sentence, or the ending text of the first sentence (the last word).
Optionally, the apparatus further comprises: a third generating unit, configured to generate a third sentence according to the first sentence and the second sentence after generating a second sentence according to the second text and the first sentence, where the third sentence matches the semantics of the first sentence and the second sentence.
After the upper and lower lines of the couplet are generated, the horizontal scroll can also be generated automatically. Given the generated hidden-head upper and lower lines, a matching horizontal scroll is selected by retrieval and semantic similarity calculation, so that the selected scroll corresponds to the hidden-head couplet: the scroll whose semantics match those expressed by the upper and lower lines is retrieved.
Optionally, the apparatus further comprises: a receiving unit configured to receive an update instruction for instructing to update the first sentence and the second sentence after generating a second sentence from the second text and the first sentence; and the display unit is used for displaying a fourth sentence and a fifth sentence according to the updating instruction, wherein the fourth sentence and the fifth sentence have the same word number and are symmetrical in structure.
The fourth sentence and the fifth sentence are the updated couplet. If the couplet formed by the first and second sentences is a hidden-head couplet, the couplet formed by the fourth and fifth sentences is also a hidden-head couplet, and the first text and second text embedded in the fourth and fifth sentences are the same as those embedded in the first and second sentences. It should be noted that the third sentence is also updated when the fourth and fifth sentences are updated; that is, the horizontal scroll is updated along with the couplet.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the sentence generating method, as shown in fig. 15, the electronic device includes a memory and a processor, the memory stores a computer program, and the processor is configured to execute the steps in any one of the method embodiments by the computer program.
Alternatively, fig. 15 is a block diagram of an electronic device according to an embodiment of the invention. As shown in fig. 15, the electronic device may include: one or more processors 1001 (only one of which is shown), at least one communication bus 1002, a user interface 1003, at least one transmitting device 1004, and memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include, among other things, a display 1006 and a keyboard 1007. The transmission means 1004 may optionally include standard wired and wireless interfaces.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring a target text;
s2, selecting a first text and a second text from the target texts, wherein the first text comprises at least one word in the target texts, and the second text comprises at least one word in the target texts;
s3, generating a first sentence according to the first text, wherein the generated first sentence comprises the first text;
and S4, generating a second sentence according to the second text and the first sentence, wherein the generated second sentence comprises the second text, and the second sentence has the same word number and symmetrical structure with the first sentence.
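Steps S1-S4 above can be sketched as one pipeline. The two lambdas stand in for the neural network language model (S3) and the generative model (S4), and the even split of the target text is an illustrative assumption; the patent allows other selections.

```python
def generate_couplet(target_text, make_first, make_second):
    """Run S1-S4: split the target text, generate the first sentence,
    then generate a second sentence of equal length."""
    # S2: select the first and second texts from the target text
    mid = len(target_text) // 2
    first_text, second_text = target_text[:mid], target_text[mid:]
    # S3: generate the first sentence, which contains the first text
    first_sentence = make_first(first_text)
    # S4: generate the second sentence from the second text and the
    # first sentence, with matching length
    second_sentence = make_second(second_text, first_sentence)
    assert len(second_sentence) == len(first_sentence)
    return first_sentence, second_sentence

# Toy stand-ins for the two models:
pair = generate_couplet(
    "AB",
    lambda t: t + "xyz",                   # first sentence starts with the first text
    lambda t, s: t + "pqr"[: len(s) - 1],  # same length, starts with the second text
)
print(pair)  # ('Axyz', 'Bpqr')
```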
Alternatively, as those skilled in the art will understand, the structure shown in fig. 15 is only illustrative; the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID) or a PAD. Fig. 15 does not limit the structure of the electronic device: for example, the electronic device may include more or fewer components than shown in fig. 15 (e.g., a network interface or display device), or have a configuration different from that shown in fig. 15.
The memory 1005 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for generating statements in the embodiments of the present invention, and the processor 1001 executes various functional applications and data processing by running the software programs and modules stored in the memory 1005, that is, implements the above-described method for generating statements. The memory 1005 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1005 may further include memory located remotely from the processor 1001, which may be connected to a terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 1004 is used for receiving or transmitting data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1004 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmission device 1004 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The memory 1005 is used, among other things, for storing neural network language models and generative models.
In this embodiment, the first text and the second text are selected from the target text, the first sentence is generated from the first text, and the second sentence is generated from the first sentence and the second text, so that the two generated sentences have the same word count and a symmetrical structure. Because no user intervention is needed while the sentences are generated, the technical problem that couplets cannot be generated automatically is solved, achieving the technical effect of automatic couplet generation. Moreover, whereas prior-art couplet generation could only produce the lower line from a complete upper line input by the user, here both the upper and lower lines are generated automatically from the vocabulary the user supplies, enriching the diversity of the couplets.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a target text;
s2, selecting a first text and a second text from the target texts, wherein the first text comprises at least one word in the target texts, and the second text comprises at least one word in the target texts;
S3, generating a first sentence according to the first text, wherein the generated first sentence comprises the first text;
and S4, generating a second sentence according to the second text and the first sentence, wherein the generated second sentence comprises the second text, and the second sentence has the same word number and symmetrical structure with the first sentence.
Optionally, the storage medium is further arranged to store a computer program for performing the steps of: generating a first sentence from the first text comprises: setting the position of the first text in the first sentence; inputting the first text into a neural network language model, wherein the neural network language model is obtained by training according to couplet samples and/or poetry samples; and acquiring the first sentence output by the neural network language model, wherein the first text is positioned at the position in the first sentence.
Optionally, the storage medium is further arranged to store a computer program for performing the following steps: setting the position of the first text in the first sentence includes setting it to any one of the following positions: the starting text of the first sentence (the first word), an intermediate text of the first sentence, or the ending text of the first sentence (the last word).
Optionally, the storage medium is further arranged to store a computer program for performing the steps of: generating a second sentence from the second text and the first sentence comprises: inputting the second text and the first sentence into a generative model, wherein the generative model is used for generating a second sentence which has the same word number and symmetrical structure as the first sentence, and the position of the second text in the second sentence is the same as that of the first text in the first sentence; and acquiring the second statement output by the generative model.
Optionally, the storage medium is further arranged to store a computer program for performing the following steps: after the second sentence is generated from the second text and the first sentence, the method further includes generating a third sentence from the first sentence and the second sentence, wherein the third sentence matches the semantics of the first sentence and the second sentence.
Optionally, the storage medium is further arranged to store a computer program for performing the steps of: after generating a second sentence from the second text and the first sentence, the method further comprises: receiving an update instruction for instructing to update the first statement and the second statement; and displaying a fourth sentence and a fifth sentence according to the updating instruction, wherein the fourth sentence and the fifth sentence have the same word number and are symmetrical in structure.
Optionally, the storage medium is further configured to store a computer program for executing the steps included in the method in the foregoing embodiment, which is not described in detail in this embodiment.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing relevant hardware of the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (12)

1. A sentence generation method, comprising:
acquiring keywords input by a user from a first web page of a mobile terminal;
acquiring a click operation on a first button on the first web page;
generating a first sentence and a second sentence according to the keywords, wherein the first sentence comprises at least one word in the keywords, the second sentence has the same word number as the first sentence and has a symmetrical structure, the word number of the first sentence and the positions of the keywords in the first sentence and the second sentence are set by the user, and in a case where the keywords form a phrase, the second sentence is a sentence generated according to the meanings of the first sentence and the keywords;
The method further comprises the following steps: hiding the keywords in the first sentence and the second sentence, wherein, when the meanings of the keywords are to be hidden in the first sentence and the second sentence, other words with similar meanings are generated according to the selected keywords, and those similar-meaning words are displayed in the first sentence and the second sentence so as to express the meanings of the selected keywords;
displaying a second web page on the mobile terminal, wherein the first statement and the second statement are displayed in the second web page;
generating a first statement and a second statement according to the keyword comprises:
setting a position of a first text in the first sentence, wherein the first text comprises at least one word in the keywords;
inputting the first text into a neural network language model, wherein the neural network language model is used for generating a sentence related to the input text according to the input text;
acquiring the first sentence output by the neural network language model, wherein the first text is positioned at the position in the first sentence;
the inputting the first text into a neural network language model comprises:
When the first text is a first single-word text, inputting the first single-word text into the neural network language model, and determining the prediction probability of the next single-word text of the first single-word text;
determining the single-word text with the maximum prediction probability as a second single-word text, inputting a first word formed by the first single-word text and the second single-word text into the neural network language model, and determining the prediction probability of the next single-word text of the first word;
determining the single-word text with the maximum prediction probability as a third single-word text, and repeating the steps until the first sentence is generated;
the method further comprises the following steps: and setting a sensitive information monitoring module in the server, wherein the sensitive information monitoring module is used for detecting whether the content input by the user and the antithetical couplet have sensitive problems.
2. The method of claim 1, wherein if the first sentence and the second sentence are displayed in the second web page, the method further comprises:
displaying a second button in the second web page;
acquiring clicking operation on the second button;
generating a fourth sentence and a fifth sentence according to the keywords, wherein the fourth sentence comprises at least one word in the keywords, the fifth sentence comprises at least one word in the keywords, and the fourth sentence and the fifth sentence have the same word number and are symmetrical in structure;
Displaying the fourth sentence and the fifth sentence in the second web page.
3. The method of claim 1, wherein if the first sentence and the second sentence are displayed in the second web page, the method further comprises:
displaying a third button in the second web page;
acquiring click operation on the third button;
displaying a third web page on the mobile terminal, wherein a fourth button is displayed in the third web page;
acquiring click operation on the fourth button;
and saving the third web page to the mobile terminal or directly sending the third web page to a friend through an application program.
4. The method of claim 1, wherein if the first sentence and the second sentence are displayed in the second web page, the method further comprises:
displaying a third button in the second web page;
acquiring click operation on the third button;
displaying a third web page on the mobile terminal, wherein a fifth button is displayed in the third web page;
acquiring click operation on the fifth button;
and displaying the first web page on the mobile terminal.
5. The method of claim 1,
generating a first statement and a second statement according to the keyword comprises: generating a corresponding hidden-head couplet with a horizontal scroll according to the keywords;
the second web page displays the first sentence and the second sentence, and includes: the second web page displays the hidden-head couplet with the horizontal scroll.
6. The method according to claim 1, before acquiring the keyword input by the user in the first web page of the mobile terminal, the method further comprises:
displaying a fourth web page on the mobile terminal, wherein a sixth button is displayed in the fourth web page;
acquiring click operation on the sixth button;
and displaying the first web page on the mobile terminal.
7. The method according to claim 6, wherein in case of displaying a fourth web page on the mobile terminal, the method further comprises:
and displaying a hidden-head couplet sample in the fourth web page.
8. The method of any one of claims 1 to 7, wherein the generating a first statement and a second statement from the keyword comprises:
Selecting a first text and a second text from target texts, wherein the keywords are the target texts, the first text comprises at least one word in the target texts, and the second text comprises at least one word in the target texts;
generating the first sentence according to the first text, wherein the generated first sentence comprises the first text;
generating the second sentence according to the second text and the first sentence, wherein the generated second sentence comprises the second text.
9. The method of claim 8, wherein the generating the second sentence from the second text and the first sentence comprises:
inputting the second text and the first sentence into a generative model, wherein the generative model is used for generating a second sentence which has the same word number and symmetrical structure as the first sentence, and the position of the second text in the second sentence is the same as that of the first text in the first sentence;
and acquiring the second statement output by the generative model.
10. The method of claim 8, wherein after the generating the second sentence from the second text and the first sentence, the method further comprises:
and generating a third sentence according to the first sentence and the second sentence, wherein the third sentence is generated to match the semantics of the first sentence and the second sentence.
11. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 10 when executed.
12. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 10 by means of the computer program.
CN202010209182.9A 2018-06-22 2018-06-22 Statement generation method, device, storage medium and electronic device Active CN111444725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010209182.9A CN111444725B (en) 2018-06-22 2018-06-22 Statement generation method, device, storage medium and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010209182.9A CN111444725B (en) 2018-06-22 2018-06-22 Statement generation method, device, storage medium and electronic device
CN201810654922.2A CN108874789B (en) 2018-06-22 2018-06-22 Statement generation method, device, storage medium and electronic device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201810654922.2A Division CN108874789B (en) 2018-06-22 2018-06-22 Statement generation method, device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN111444725A CN111444725A (en) 2020-07-24
CN111444725B true CN111444725B (en) 2022-07-29

Family

ID=64294648

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810654922.2A Active CN108874789B (en) 2018-06-22 2018-06-22 Statement generation method, device, storage medium and electronic device
CN202010209182.9A Active CN111444725B (en) 2018-06-22 2018-06-22 Statement generation method, device, storage medium and electronic device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201810654922.2A Active CN108874789B (en) 2018-06-22 2018-06-22 Statement generation method, device, storage medium and electronic device

Country Status (1)

Country Link
CN (2) CN108874789B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191439A (en) * 2019-12-16 2020-05-22 浙江大搜车软件技术有限公司 Natural sentence generation method and device, computer equipment and storage medium
CN111797611B (en) * 2020-07-24 2023-07-25 中国平安人寿保险股份有限公司 Antithetical couplet generation model, antithetical couplet generation method, antithetical couplet generation device, computer equipment and medium
CN111984783B (en) * 2020-08-28 2024-04-02 达闼机器人股份有限公司 Training method of text generation model, text generation method and related equipment
CN116702834B (en) * 2023-08-04 2023-11-03 深圳市智慧城市科技发展集团有限公司 Data generation method, data generation device, and computer-readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336803A (en) * 2013-06-21 2013-10-02 杭州师范大学 Method for generating name-embedded spring festival scrolls through computer
CN106569995A (en) * 2016-09-26 2017-04-19 天津大学 Method for automatically generating Chinese poetry based on corpus and metrical rule
CN106776517A (en) * 2016-12-20 2017-05-31 科大讯飞股份有限公司 Automatic compose poem method and apparatus and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070005345A1 (en) * 2005-07-01 2007-01-04 Microsoft Corporation Generating Chinese language couplets
CN101568917A (en) * 2006-12-20 2009-10-28 微软公司 Generating chinese language banners
US9092425B2 (en) * 2010-12-08 2015-07-28 At&T Intellectual Property I, L.P. System and method for feature-rich continuous space language models
CN102902362B (en) * 2011-07-25 2017-10-31 深圳市世纪光速信息技术有限公司 Character input method and system
US9830315B1 (en) * 2016-07-13 2017-11-28 Xerox Corporation Sequence-based structured prediction for semantic parsing
KR102630668B1 (en) * 2016-12-06 2024-01-30 한국전자통신연구원 System and method for expanding input text automatically
CN110516244B (en) * 2019-08-26 2023-03-24 西安艾尔洛曼数字科技有限公司 Automatic sentence filling method based on BERT

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336803A (en) * 2013-06-21 2013-10-02 杭州师范大学 Method for generating name-embedded spring festival scrolls through computer
CN106569995A (en) * 2016-09-26 2017-04-19 天津大学 Method for automatically generating Chinese poetry based on corpus and metrical rule
CN106776517A (en) * 2016-12-20 2017-05-31 科大讯飞股份有限公司 Automatic compose poem method and apparatus and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
#Smart Spring Couplets# Enter keywords to automatically generate couplets, come and play!; 甜豆腐脑吧; 《tieba.***.com/p/5546209859?red_tag=0640327608》; 20180209; entire web page *
36氪. Whoa! Tencent and Baidu are actually using AI to write Spring Festival couplets for you? | 唠氪儿. 《https://m.sohu.com/a/222455127_114778》. 2018, entire web page. *
Whoa! Tencent and Baidu are actually using AI to write Spring Festival couplets for you? | 唠氪儿; 36氪; 《https://m.sohu.com/a/222455127_114778》; 20180212; pp. 1-12 *
Baidu's latest viral product, Smart Spring Couplets, is taking WeChat Moments by storm, come and try it; 赤鹿君; 《https://v.qq.com/x/page/j0550eb8af4.html》; 20180210; video screenshots pp. 1-12 *

Also Published As

Publication number Publication date
CN108874789A (en) 2018-11-23
CN111444725A (en) 2020-07-24
CN108874789B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN110717017B (en) Method for processing corpus
Pohl et al. Beyond just text: semantic emoji similarity modeling to support expressive communication👫📲😃
KR102577514B1 (en) Method, apparatus for text generation, device and storage medium
CN108363697B (en) Text information generation method and device, storage medium and equipment
CN111444725B (en) Statement generation method, device, storage medium and electronic device
CN104050160B (en) Interpreter's method and apparatus that a kind of machine is blended with human translation
WO2019000326A1 (en) Generating responses in automated chatting
CN107767871B (en) Text display method, terminal and server
CN115082602B (en) Method for generating digital person, training method, training device, training equipment and training medium for model
CN110427614B (en) Construction method and device of paragraph level, electronic equipment and storage medium
CN104008091A (en) Sentiment value based web text sentiment analysis method
US9129216B1 (en) System, method and apparatus for computer aided association of relevant images with text
CN108153831A (en) Music adding method and device
CN109508448A (en) Short information method, medium, device are generated based on long article and calculate equipment
CN112084305A (en) Search processing method, device, terminal and storage medium applied to chat application
KR20210046594A (en) Method and device for pushing information
CN101556596A (en) Input method system and intelligent word making method
KR102146433B1 (en) Method for providing context based language learning service using associative memory
CN112667120A (en) Display method and device of interactive icon and electronic equipment
JP2024064941A (en) Display method, device, pen-type electronic dictionary, electronic device, and storage medium
CN113204624B (en) Multi-feature fusion text emotion analysis model and device
CN115994535A (en) Text processing method and device
CN110287413A (en) The display methods and electronic equipment of e-book description information
CN117436414A (en) Presentation generation method and device, electronic equipment and storage medium
CN108932069A (en) Input method candidate entry determines method, apparatus, equipment and readable storage medium storing program for executing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40025817

Country of ref document: HK

GR01 Patent grant