CN113360001A - Input text processing method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN113360001A
CN113360001A (application CN202110580302.0A)
Authority
CN
China
Prior art keywords
text
model
generation model
training
poem
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110580302.0A
Other languages
Chinese (zh)
Inventor
高钧亮
胡哲
赵晓蕾
范敏虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110580302.0A
Publication of CN113360001A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/543User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses a method and device for processing input text, an electronic device, and a storage medium, relating to the field of computer technology, and in particular to artificial intelligence fields such as natural language processing and deep learning. The specific implementation scheme is as follows: acquire a first text input by a user on an input interface; input the first text into a text generation model generated by training to obtain a second text; and display the second text on the input interface. In this way, when a user inputs text on the input interface, the input can be fed into the trained text generation model, which generates a new text that is displayed on the input interface for the user to select, improving the interestingness of the user's input content.

Description

Input text processing method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to the fields of artificial intelligence such as natural language processing, deep learning and the like, and specifically relates to a processing method and device of an input text, electronic equipment and a storage medium.
Background
With the development of computer and internet technologies, people increasingly favor convenient and fast auxiliary tools for life and work, and various social software and social platforms combining multiple service functions have appeared one after another.
How to make user input more interesting during social interaction is therefore an urgent problem to be solved.
Disclosure of Invention
The application provides a processing method and device for an input text, electronic equipment and a storage medium.
According to an aspect of the present application, there is provided a method for processing an input text, including:
acquiring a first text input by a user on an input interface;
inputting the first text into a text generation model generated by training to obtain a second text;
and displaying the second text on the input interface.
According to another aspect of the present application, there is provided a processing apparatus for inputting text, including:
the first acquisition module is used for acquiring a first text input by a user on an input interface;
the second acquisition module is used for inputting the first text into a text generation model generated by training so as to acquire a second text;
and the display module is used for displaying the second text on the input interface.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the above embodiments.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the above-described embodiments.
According to another aspect of the present application, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method according to the above embodiments.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a processing method for inputting a text according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another method for processing an input text according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another method for processing an input text according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another processing method for input text according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another processing method for inputting text according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another processing method for input text according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another processing method for input text according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a processing apparatus for inputting text according to an embodiment of the present application;
fig. 9 is a block diagram of an electronic device for implementing a method for processing input text according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
A method, an apparatus, an electronic device, and a storage medium for processing an input text according to an embodiment of the present application are described below with reference to the drawings.
Artificial intelligence is the discipline that studies the use of computers to simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), covering both hardware and software. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies include computer vision, speech recognition, natural language processing, deep learning, big data processing, knowledge graph technology, and the like.
NLP (Natural Language Processing) is an important direction in the fields of computer science and artificial intelligence, and the content of NLP research includes but is not limited to the following branch fields: text classification, information extraction, automatic summarization, intelligent question answering, topic recommendation, machine translation, subject word recognition, knowledge base construction, deep text representation, named entity recognition, text generation, text analysis (lexical, syntactic, grammatical, etc.), speech recognition and synthesis, and the like.
Deep learning is a new research direction in the field of machine learning. It learns the intrinsic laws and levels of representation of sample data, and the information obtained during learning is very helpful for interpreting data such as text, images, and sounds. Its ultimate goal is to enable machines to analyze and learn like humans and to recognize data such as text, images, and sounds.
Fig. 1 is a schematic flowchart of a method for processing an input text according to an embodiment of the present application.
The method for processing input text of the embodiments of the application can be executed by the apparatus for processing input text of the embodiments of the application, and the apparatus can be configured on an electronic device. In this way, when a user inputs text on an input interface, the input text is rewritten by a text generation model, and the generated text corresponding to the user's input is displayed on the input interface for the user to select, improving the interestingness of the input content.
As shown in fig. 1, the method for processing an input text includes:
step 101, acquiring a first text input by a user on an input interface.
In the application, when the user inputs text on the input interface, the text input by the user can be acquired; for convenience of distinction, this text is called the first text. The input interface here may be an input interface for information to be sent in social software, for content to be published, for comment content, and the like.
In practical applications, the first text may be typed by the user (for example, through an input method), written by hand, pasted in, or obtained by recognizing the user's voice input.
For example, when the user inputs content in the input box of the chat interface by using the input method, the input method application program may determine the content input by the user according to the characters input by the user and the selection operation, so that the input method application program may acquire the first text input by the user on the input interface.
And 102, inputting the first text into a text generation model generated by training to obtain a second text.
In the application, a text generation model can be trained in advance as needed, for example, a model for generating rhyming text, a model for generating a Tibetan poem (an acrostic, in which given characters are hidden at fixed positions of each line), a model for continuing a text, and the like. There may be one text generation model or multiple ones; the present application does not limit this.
After the first text input by the user is acquired, the first text can be input into the text generation model, so that the first text is processed by the text generation model to generate a second text.
That is, when the user inputs the first text on the input interface, the text generation model can be used to generate a text of high interest, such as a rhyming text, a Tibetan poem that uses each character of the first text as the first character of each line, or a Tibetan poem that uses each character of the first text as the last character of each line.
And 103, displaying the second text on the input interface.
After generating the second text, the second text may be displayed on the input interface for selection by the user. The second text may be one or more than one.
In the application, if the selection operation of the user is detected within the preset duration of displaying the second text, the second text selected by the user can be determined according to the selection operation of the user, and the first text input by the user in the input interface is replaced by the second text. Therefore, the interestingness of input content can be improved, and the interestingness in the social process can be improved.
When a plurality of second texts are generated, the second texts may be displayed in a random order, may be displayed based on a preset rule, may be displayed in an order of a weight from high to low, or the like. The weight of the second text may be determined based on the historical behavior data of the user for various types of text.
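The display-ordering options above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the per-type weight table standing in for "historical behavior data" and the `(text_type, text)` pair shape are assumptions.

```python
import random

def order_candidates(candidates, weights=None, shuffle=False):
    """Order generated second texts for display.

    candidates: list of (text_type, text) pairs.
    shuffle=True gives a random order; otherwise, if a weight table
    (text_type -> weight, e.g. derived from the user's historical
    behavior) is supplied, order by weight from high to low.
    """
    if shuffle:
        return random.sample(candidates, len(candidates))
    if weights:
        return sorted(candidates,
                      key=lambda c: weights.get(c[0], 0.0), reverse=True)
    return list(candidates)

ordered = order_candidates([("tibetan_poem", "..."), ("rhyme", "...")],
                           weights={"rhyme": 0.9, "tibetan_poem": 0.4})
# "rhyme" has the higher weight, so it is displayed first.
```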
The processing method of the input text can be applied to the application program of the input interface and can also be applied to an input method.
Taking an application program with an input interface as an example: when the user inputs text on the input interface, the application program to which the input interface belongs may obtain the text input by the user and feed it into the text generation model to generate a new text, which is generated based on the text input by the user. The application then displays the new text on the input interface for the user to select.
Taking an input method as an example: when a user inputs content on an input interface by using the input method, the input method application program can obtain the text input by the user from the characters entered and the selection operations, feed that text into the text generation model to generate a new text based on it, and display the new text on the input method interface for the user to select.
In the embodiment of the application, a first text input by a user on an input interface is obtained, the first text is input into a text generation model generated by training to obtain a second text, and the second text is displayed on the input interface. In this way, when a user inputs text on the input interface, the input can be fed into the trained text generation model, which generates a new text that is displayed on the input interface for the user to select, improving the interestingness of the user's input content.
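The three-step flow of fig. 1 (steps 101-103) can be sketched as below. The `generate` callable stands in for the trained text generation model, whose real interface the patent does not specify; the stand-in model is purely illustrative.

```python
def process_input_text(first_text, generate):
    """Step 101: `first_text` is what the user typed on the input
    interface. Step 102: run it through the text generation model.
    Step 103: return the second text(s) for the interface to display."""
    if not first_text.strip():
        return []                 # nothing to rewrite for empty input
    return generate(first_text)   # one or more candidate second texts

# Toy stand-in model: yields one "rhyming" candidate.
def toy_generate(text):
    return [text + ", every time"]

second_texts = process_input_text("you shine", toy_generate)
```

The caller (the input-method program or the application owning the interface) is then responsible for showing `second_texts` and replacing the first text if the user selects one.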
In order to make the obtained second text more meet the user requirements and improve the accuracy of the second text, in an embodiment of the present application, if there are a plurality of text generation models, when the second text is obtained, a corresponding text generation model may be determined based on the intention of the first text, so as to obtain the second text by using the determined text generation model. Fig. 2 is a schematic flow chart of another method for processing an input text according to an embodiment of the present application.
As shown in fig. 2, inputting the first text into the text generation model generated by training to obtain the second text includes:
step 201, determining an application scene corresponding to each text generation model.
In the application, there may be a plurality of text generation models generated by training, for example, a rhyming text generation model, a Tibetan poem generation model, a modern poem generation model, and the like.
In the application, the corresponding relationship between each of the text generation models and the application scene may be established in advance, and the application scene corresponding to each text generation model may be determined according to the corresponding relationship.
Alternatively, the application scenario corresponding to each text generation model may be determined based on the text it generates. For example, the sentences of a rhyming text share the same or similar finals, so the scenarios corresponding to the rhyming text generation model may be daily chat, content publishing, commenting, and the like; a Tibetan poem hides the intended message at the head of each line, so the scenarios corresponding to the Tibetan poem generation model may be livening up a dull moment in a chat, literary or artistic expression, and the like.
It should be noted that one text generation model may correspond to one or more application scenarios.
Step 202, performing intention recognition on the first text to acquire an intention corresponding to the first text.
In the application, the first text can be input into a pre-trained intention recognition model, and the intention of the first text can be recognized by using the intention recognition model. Alternatively, the intention corresponding to the first text may be determined by using a correspondence relationship between the word segmentation and the intention established in advance.
And step 203, extracting a target text generation model from the plurality of text generation models according to the matching degree between the intention and each application scene.
After the intention corresponding to the first text is obtained, the intention corresponding to the first text can be respectively matched with the application scenes of the text generation models, and the target text generation model is extracted from the text generation models according to the matching degree between the intention corresponding to the first text and each application scene. For example, the text generation model with the maximum matching degree may be used as the target text generation model, or a preset number of text generation models with the maximum matching degree may be used as the target text generation model.
The target text generation model may be one or more than one. For example, for a daily chat scene, a rhyme text generation model and a modern poem text generation model can be adopted.
Step 204, inputting the first text into a target text generation model to obtain a second text.
In the present application, step 204 is similar to step 102 described above, and therefore will not be described here again.
In the embodiment of the application, if there are a plurality of text generation models, the second text can be obtained by determining the application scene corresponding to each text generation model, performing intention recognition on the first text to determine its intention, extracting target text generation models from the plurality of text generation models according to the matching degree between that intention and each application scene, and obtaining the second text with the target text generation models. In this way, the text generation model matching the intention of the first text is selected, so the obtained second text better meets the user's needs and its accuracy is improved.
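A minimal sketch of steps 201-204's model selection: score each model by how well its application scenes match the recognized intent, then keep the top-scoring model(s). The scene labels, the 0/1 matching degree, and the model names are illustrative assumptions; the patent does not define the matching metric or the intent recognizer.

```python
# Hypothetical registry: model name -> its application scenes (step 201).
MODEL_SCENES = {
    "rhyme":        {"daily_chat", "publish_content", "comment"},
    "tibetan_poem": {"daily_chat", "literary_expression"},
    "modern_poem":  {"literary_expression"},
}

def match_degree(intent_scene, scenes):
    # Toy matching degree: 1.0 if the intent's scene is listed, else 0.0.
    return 1.0 if intent_scene in scenes else 0.0

def select_target_models(intent_scene, top_k=1):
    """Step 203: extract the target model(s) with the highest matching
    degree between the first text's intent and each model's scenes."""
    scored = sorted(MODEL_SCENES.items(),
                    key=lambda kv: match_degree(intent_scene, kv[1]),
                    reverse=True)
    return [name for name, scenes in scored[:top_k]
            if match_degree(intent_scene, scenes) > 0]

# An intent recognized as "literary_expression" selects the poem models.
models = select_target_models("literary_expression", top_k=2)
```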
The above embodiment determines the target generation model according to the matching degree of the intention of the first text and the application scene of each text generation model. In an embodiment of the application, the input interface may include a plurality of text processing controls, and when the second text is obtained, the target text generation model may be extracted from the plurality of text generation models according to the selected text processing control, and the second text is obtained by using the target text generation model. Fig. 3 is a schematic flow chart of another method for processing an input text according to an embodiment of the present application.
As shown in fig. 3, inputting the first text into the text generation model generated by training to obtain the second text includes:
step 301, under the condition that any text processing control is selected, determining a target text type according to a processing type corresponding to any selected text processing control.
In the present application, the input interface may include a plurality of text processing controls. And under the condition that any text processing control is selected, determining the target text type according to the processing type corresponding to the selected text processing control.
For example, the input interface may include a key for rhyming (rap) text, a key for Tibetan poem text, a key for modern poem text, and the like; the corresponding processing types may include processing the first text to obtain a rhyming text (such as rap content), a Tibetan poem text, or a modern poem text. When the user triggers the Tibetan poem key, it indicates that the user wants to see the Tibetan poem text corresponding to the input text, that is, the target text type is the Tibetan poem type.
Step 302, extracting a target text generation model from the plurality of text generation models according to the target text type and the corresponding relation between the plurality of text generation models and the text type.
In the application, different text generation models can generate texts of different text types, and then, after the text type of the text to be generated, namely the target text type, is determined, a text generation model corresponding to the target text type, namely the target text generation model, can be extracted from the plurality of text generation models according to the corresponding relationship between the plurality of text generation models and the text type.
For example, the target text type is a Tibetan poem type, and the target text generation model can be determined to be a Tibetan poem generation model.
Step 303, inputting the first text into a target text generation model to obtain a second text.
In the present application, step 303 is similar to step 102 described above, and therefore will not be described here again.
In this embodiment, the input interface may include a plurality of text processing controls, when the second text is obtained, the target text type is determined according to the processing type corresponding to any selected text processing control in the case where any text processing control is selected, the target text generation model is extracted from the plurality of text generation models according to the target text type and the correspondence between the plurality of text generation models and the text type, and the first text is input into the target text generation model to obtain the second text. Therefore, the user can select the text processing control according to the requirement through the plurality of text processing controls on the input interface so as to select the text type of the text to be generated, and therefore the personalized requirements of the user can be met.
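Steps 301-303 amount to two lookups: selected control to target text type, then target text type to generation model. A minimal sketch, with control ids and model names as illustrative assumptions:

```python
# Step 301: which text type each processing control requests.
CONTROL_TO_TYPE = {
    "btn_rap":          "rhyme",
    "btn_tibetan_poem": "tibetan_poem",
    "btn_modern_poem":  "modern_poem",
}
# Step 302: the correspondence between text types and generation models.
TYPE_TO_MODEL = {
    "rhyme":        "rhyme_text_generation_model",
    "tibetan_poem": "tibetan_poem_generation_model",
    "modern_poem":  "modern_poem_generation_model",
}

def target_model_for_control(control_id):
    target_type = CONTROL_TO_TYPE[control_id]   # step 301
    return TYPE_TO_MODEL[target_type]           # step 302

model = target_model_for_control("btn_tibetan_poem")
```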
In an embodiment of the application, the input interface may include a plurality of text processing controls. When obtaining the second text, the text type of the text to be generated may also be determined from the user's historical operation data on the plurality of text processing controls, and the second text then obtained through the corresponding text generation model. Fig. 4 is a schematic flowchart of another method for processing an input text according to an embodiment of the present application.
As shown in fig. 4, inputting the first text into the text generation model generated by training to obtain the second text includes:
step 401, obtaining historical operation data of a user on a plurality of text processing controls.
In the present application, the input interface may include a plurality of text processing controls. Each text processing control has a corresponding processing type, and the processing type has a corresponding text type. When a certain text processing control is selected, the text type of the text to be generated can be determined according to the processing type corresponding to the text processing control.
In the application, historical operation data of a user on each text processing control on the input interface can be acquired, for example, the number of times each text processing control is selected within the past preset time, the selected time and the like.
Step 402, extracting a target text generation model from a plurality of text generation models according to historical operation data.
After obtaining the historical operation data of the plurality of text processing controls, the target text type can be determined according to the historical operation data of the plurality of text processing controls, and the text generation model corresponding to the target text type, namely the target text generation model, is extracted from the plurality of text generation models according to the corresponding relation between the text generation models and the text types.
In practical application, the more times the user has historically operated a certain text processing control, the more the user prefers the corresponding type of generated text. Therefore, the text type corresponding to the text processing control selected the most times within the past preset duration can be used as the target text type, and after the target text type is obtained, the target text generation model can be extracted from the multiple text generation models according to the correspondence between the text generation models and the text types.
For example, the input interface may include a key for rhyming (rap) text, a key for Tibetan poem text, a key for modern poem text, and the like; if the user often selects the rap key, the rhyming text generation model that produces rap content may be used as the target text generation model.
Or determining the text processing control with the most use times in each time period according to the historical use time of each text processing control, taking the text processing control with the most use times in the time period to which the current time belongs as the target text processing control, and determining the target text type according to the processing type corresponding to the target text processing control. After the target text type is determined, a text generation model corresponding to the target text type, namely the target text generation model, can be determined according to the corresponding relationship between the text generation models and the text types.
Step 403, inputting the first text into a target text generation model to obtain a second text.
In the present application, step 403 is similar to step 102 described above, and therefore will not be described here again.
In the embodiment of the application, the input interface may include a plurality of text processing controls, and when the second text is obtained, the second text may be obtained by obtaining historical operation data of the user on the plurality of text processing controls, extracting the target text generation model from the plurality of text generation models according to the historical operation data, and inputting the first text into the target text generation model. Therefore, the target text generation model is extracted from the text generation models according to the historical operation data of the user on the text processing controls to obtain the second text, so that the obtained second text conforms to the habit of the user, and the accuracy of the second text is improved.
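A minimal sketch of the history-based selection in steps 401-403, using selection counts only (the time-of-day variant is omitted). The shape of the history record and the model names are assumptions, not specified by the patent.

```python
from collections import Counter

def target_model_from_history(history, type_to_model):
    """Pick the generation model for the text type whose control the
    user selected most often within the past preset duration.

    history: list of text-type strings, one per past control selection.
    type_to_model: correspondence between text types and models.
    """
    counts = Counter(history)
    target_type, _ = counts.most_common(1)[0]   # most-selected type
    return type_to_model[target_type]

history = ["rhyme", "tibetan_poem", "rhyme", "rhyme"]
model = target_model_from_history(history,
                                  {"rhyme": "rhyme_model",
                                   "tibetan_poem": "tibetan_model"})
# The rap/rhyme control was selected most often, so its model is chosen.
```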
In an embodiment of the present application, if the text generation model is a rhyming text generation model, it can be obtained through training in the manner of fig. 5. Fig. 5 is a flowchart illustrating another processing method for input text according to an embodiment of the present application.
As shown in fig. 5, before the first text is input into the text generation model generated by training to obtain the second text, the method for processing the input text further includes:
step 501, a first training data set is obtained, wherein the first training data set includes a plurality of training texts and a first rhyme foot of each training text.
In the application, lyrics, doggerel, nursery rhymes, and the like can be collected, and the rhyme foot of each sentence determined, so as to obtain a training data set, called the first training data set for convenience of distinction. The first training data set may include a plurality of training texts and the first rhyme foot of each training text. Each training text may be one sentence or several sentences; if a training text contains several sentences, the rhyme feet of those sentences are the same or similar.
Step 502, inputting each training text into the initial rhyming text generation model to obtain a first predicted text and a second rhyme foot.
In the application, each training text can be input into the initial rhyming text generation model, which processes the training text to obtain the first predicted text and its second rhyme foot. The first predicted text may include a plurality of sentences.
Step 503, in the case that the second rhyme foot does not match the corresponding first rhyme foot, correcting the initial rhyming text generation model according to the difference between the second rhyme foot and the corresponding first rhyme foot, until the second rhyme foot matches the corresponding first rhyme foot, so as to generate the rhyming text generation model.
Because rhyming text such as rap content generally rhymes from sentence to sentence, when the rhyming text generation model is trained, if the second rhyme foot of the first prediction text does not match the corresponding first rhyme foot, the initial rhyming text generation model can be corrected according to the difference between the second rhyme foot and the corresponding first rhyme foot, and the training texts are then processed with the corrected model, until the second rhyme foot of the first prediction text matches the first rhyme foot of the corresponding training text, so as to generate the rhyming text generation model.
For example, a mismatch may be that the rhyme position of the first prediction text differs from that of the training text, or that the rhyme feet themselves differ, or both.
Rhyming takes various forms, such as single rhyme, double rhyme, triple rhyme and so on, where single rhyme means that the last character of a sentence rhymes, double rhyme means that the last two characters rhyme, triple rhyme means that the last three characters rhyme, and so forth. Taking double rhyme as an example, the second rhyme foot of the first prediction text may fail to match the corresponding first rhyme foot because the rhyme feet at corresponding positions of the first prediction text and the training text differ, or because the rhyme forms differ, for example, the first prediction text uses single rhyme while the training text uses double rhyme.
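A toy sketch of this rhyme-foot comparison is given below. The character-to-final table is a tiny hand-made stand-in (a real system would derive finals from a full pinyin dictionary), and the strict equality check ignores the "similar finals" case mentioned earlier; only the matching logic for single/double rhyme is illustrated.

```python
# Tiny hand-made table mapping Chinese characters to their pinyin finals.
# A real implementation would use a complete pinyin dictionary.
FINALS = {"流": "iu", "秋": "iu", "楼": "ou", "头": "ou", "山": "an"}

def rhyme_feet(sentence, n=1):
    """Finals of the last n characters (n=1 single rhyme, n=2 double rhyme...)."""
    return [FINALS.get(ch) for ch in sentence[-n:]]

def rhymes_match(pred_sentence, ref_sentence, n=1):
    """True when the predicted sentence's rhyme feet equal the reference's."""
    return rhyme_feet(pred_sentence, n) == rhyme_feet(ref_sentence, n)
```

Under this check, a mismatch in either the final itself or the rhyme form (the chosen n) would trigger the model correction described in step 503.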
In the application, the rhyming text generation model can generate text in various rhyme forms, and may be a deep model obtained by training in a deep learning manner.
In the embodiment of the application, if the text generation model used to obtain the second text is the rhyming text generation model, the rhyming text generation model is generated through training as follows: a training data set comprising a plurality of training texts and the first rhyme foot of each training text is obtained, each training text is input into the initial rhyming text generation model to obtain the first prediction text and the second rhyme foot, and the initial rhyming text generation model is trained according to whether the second rhyme foot matches the corresponding first rhyme foot, so as to obtain the rhyming text generation model.
In an embodiment of the application, if the text generation model is a Tibetan poem (i.e., acrostic poem) generation model, the Tibetan poem generation model can be obtained through training in the manner of fig. 6. Fig. 6 is a flowchart illustrating another method for processing input text according to an embodiment of the present application.
As shown in fig. 6, before the first text is input into the text generation model generated by training to obtain the second text, the method for processing input text further includes:
step 601, a second training data set is obtained, wherein the second training data set comprises a plurality of poetry texts.
In this application, a large number of poems, such as ancient poems and modern poems, can be acquired to obtain the second training data set, wherein the second training data set includes a plurality of poem texts. Each poem text can be a whole poem or a group of adjacent sentences within a poem.
Step 602, obtaining the first character of each sentence in each poem text.
In the application, in order to enable the neural network to better learn the correspondence between the hidden head and the generated text, the first character of each sentence in each poem text can be extracted. For ease of distinction, this character is referred to as the first initial character.
For example, for the poem text "白日依山尽；黄河入海流；欲穷千里目；更上一层楼" ("The white sun sets behind the mountains; the Yellow River flows into the sea; to see a thousand li further, climb one more storey"), the first characters "白" (white), "黄" (yellow), "欲" (desire) and "更" (more) of each sentence of the poem text can be obtained.
Step 603, inputting each poem text and the first character of each sentence into the initial Tibetan poem generating model to obtain a second prediction text.
In the application, when each poem text is input into the initial Tibetan poem generating model, the first character of each sentence of the poem text can be simultaneously input into the initial Tibetan poem generating model so as to obtain the second prediction text.
As one implementation, the initial character of each sentence may be input to the initial Tibetan poem generating model as a prefix of the poem text. For example, "白黄欲更_SEP_白日依山尽；黄河入海流；欲穷千里目；更上一层楼" may be taken as the input for model training, wherein "_SEP_" denotes a special delimiter.
As another implementation, the initial character of each sentence may also be input to the initial Tibetan poem generating model as a suffix of the poem text.
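Both input layouts, together with the head-matching check used in step 604 below, can be sketched as follows. This is a minimal illustration under two assumptions not fixed by the application: that sentences are separated by "；", and that "_SEP_" is the delimiter shown in the example above.

```python
SEP = "_SEP_"  # special delimiter, as in the example above

def first_chars(poem):
    """First character of each sentence; sentences assumed separated by '；'."""
    return [s[0] for s in poem.split("；") if s]

def build_sample(poem, as_prefix=True):
    """Concatenate the hidden-head characters and the poem around the delimiter."""
    heads = "".join(first_chars(poem))
    return heads + SEP + poem if as_prefix else poem + SEP + heads

def heads_match(predicted_poem, target_heads):
    """True once every first character of the prediction equals the target head."""
    return "".join(first_chars(predicted_poem)) == target_heads

poem = "白日依山尽；黄河入海流；欲穷千里目；更上一层楼"
sample = build_sample(poem)  # "白黄欲更_SEP_" followed by the poem
```

During training, `heads_match` returning False would correspond to the mismatch case in which the model is corrected and training continues.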
In this application, by taking each poem text and the first character of each sentence as input, the model can learn to generate corresponding text content under specified hidden-head control, thereby improving the accuracy of the model.
Step 604, in the case that a second initial character in the second prediction text does not match the corresponding first initial character in the poem text, correcting the initial Tibetan poem generating model according to the difference between the second initial character and the corresponding first initial character, until the second initial character matches the corresponding first initial character, so as to generate the Tibetan poem generating model.
In the application, the second initial character of each sentence in the second prediction text is obtained and matched against the corresponding first initial character. In the case that a second initial character in the second prediction text does not match the first initial character in the corresponding poem text, the initial Tibetan poem generating model is corrected using the difference between the second initial character and the corresponding first initial character, and training continues with the corrected model until the second initial character matches the corresponding first initial character, so as to generate the Tibetan poem generating model.
A mismatch between a second initial character in the second prediction text and the first initial character of the corresponding poem text may mean that the first character of a certain sentence in the second prediction text differs from the first character of the sentence at the corresponding position in the poem text, for example, the first character of the second sentence in the second prediction text differs from the first character of the second sentence in the poem text.
Because the Tibetan poem is also a form of poetry and usually carries a rhyming requirement, when the Tibetan poem generating model is trained, the difference between the rhyme foot of the second prediction text and the rhyme foot of the poem text can also serve as a correction basis for the initial Tibetan poem generating model, so that in the text produced by the resulting Tibetan poem generating model, the first character of each sentence is the content input by the user and the sentences rhyme with one another.
In the embodiment of the application, if the text generation model used to obtain the second text is the Tibetan poem generation model, the Tibetan poem generation model is trained as follows: the first initial character of each sentence in each poem text is obtained, each poem text and the first initial character of each sentence are input into the initial Tibetan poem generating model to obtain the second prediction text, and in the case that a second initial character in the second prediction text does not match the first initial character in the corresponding poem text, the initial Tibetan poem generating model is trained according to the difference between the second initial character and the corresponding first initial character, so as to obtain the Tibetan poem generating model. By taking each poem text and the first character of each sentence as input, the model can learn to generate corresponding text content under specified hidden-head control, which improves the accuracy of the model.
In an embodiment of the present application, if the text generation model is a modern poem generation model, the modern poem generation model may be obtained through training in the manner of fig. 7. Fig. 7 is a flowchart illustrating another method for processing input text according to an embodiment of the present application.
As shown in fig. 7, before the first text is input into the text generation model generated by training to obtain the second text, the method for processing input text further includes:
step 701, a third training data set is obtained, wherein the third training data set may include a plurality of common texts and a modern poem text semantically matched with each common text.
In this application, a plurality of common texts and the modern poem text corresponding to each common text can be obtained and used as the training set of the modern poem generation model. The semantics of each common text match its corresponding modern poem text, that is, each common text has a modern poem text matching it in semantics.
Step 702, inputting each common text into the initial modern poem generating model to obtain a third prediction text.
In the application, in order to enable the model to generate a corresponding modern poem when a user inputs a common text, each common text can be input into the initial modern poem generating model, which processes the common text and outputs the third prediction text.
Step 703, in the case that the semantic similarity between the third prediction text and the corresponding modern poem text is smaller than a threshold, correcting the initial modern poem generating model according to the semantic similarity between the third prediction text and the corresponding modern poem text, until the semantic similarity between the third prediction text and the corresponding modern poem text is equal to or greater than the threshold, so as to generate the modern poem generating model.
In this application, in order to keep the semantics of the text generated by the modern poem generating model consistent with the semantics of the text input by the user, when the semantic similarity between the third prediction text and the corresponding modern poem text is smaller than the threshold, the semantics of the text output by the initial modern poem generating model do not match the semantics of the input common text. The initial modern poem generating model can then be corrected according to the semantic similarity between the third prediction text and the corresponding modern poem text, and training continues with the corrected model until that semantic similarity is equal to or greater than the threshold, so as to generate the modern poem generating model.
Because modern poems sometimes also carry a rhyming requirement, when the modern poem generating model is trained, the difference between the rhyme foot of the third prediction text and the rhyme foot of the modern poem text can additionally serve as a correction basis for the initial modern poem generating model, so that the text produced by the model semantically matches the content input by the user and its sentences rhyme with one another.
In the embodiment of the application, if the text generation model used to obtain the second text is the modern poem generation model, the modern poem generation model is trained as follows: a third training data set comprising a plurality of common texts and a modern poem text semantically matched with each common text is obtained, each common text is input into the initial modern poem generation model to obtain a third prediction text, and when the semantic similarity between the third prediction text and the corresponding modern poem text is smaller than the threshold, the initial modern poem generation model is trained using that semantic similarity until the third prediction text semantically matches the corresponding modern poem text, so as to generate the modern poem generation model. Because the model is trained according to the semantic similarity between the prediction text and the corresponding modern poem text, the text generated by the modern poem generation model semantically matches the text input by the user.
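The similarity-threshold criterion above can be sketched as follows. Here `embed()` is a deliberately trivial bag-of-characters embedding standing in for a real sentence encoder, and the 0.8 threshold is an assumed value, not one given by the application; only the correction criterion itself is illustrated.

```python
import math

def embed(text):
    """Toy bag-of-characters embedding; a stand-in for a real sentence encoder."""
    vec = {}
    for ch in text:
        vec[ch] = vec.get(ch, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def needs_correction(prediction, modern_poem, threshold=0.8):
    """Below the threshold the initial model is corrected and training continues."""
    return cosine(embed(prediction), embed(modern_poem)) < threshold
```

Training would loop, correcting the model, until `needs_correction` is False for the training pairs.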
In training the text generation model, the initial model used may be a pre-trained model. For example, a transformer-based network architecture can be pre-trained on a large amount of Chinese corpora. During pre-training, the self-supervised learning objective of structured-segment prediction can be used, yielding a pre-trained model with a simple structure and good performance.
In order to alleviate the problem of insufficient training data, a multi-stage training method can be adopted to improve the generation quality of the model. For example, in the first stage, a large pre-training corpus may be used so that the model gains the ability to generate fluent text; in the second stage, a large-scale literary corpus (such as prose and composition texts) can be used for continued training so that the model can generate literary text; in the third stage, the model can be further trained on specific tasks on the basis of the model obtained in the first two stages, so that it can generate high-quality results. During the third stage, a maximum-likelihood objective may be used for training.
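The three-stage schedule can be sketched as a skeleton. `train_on()` is a stub that merely records which corpus was used; a real implementation would run gradient updates (maximum-likelihood training in the third stage), and the corpus names are placeholders, not names from the application.

```python
def train_on(model_state, corpus_name):
    """Stub training step: records the corpus; real code would update weights."""
    return model_state + [corpus_name]

def multi_stage_train(task_corpus="rhyming-task-corpus"):
    model = []  # stand-in for model weights
    stages = [
        "general-pretraining-corpus",  # stage 1: fluent text
        "literary-corpus",             # stage 2: literary style (prose, compositions)
        task_corpus,                   # stage 3: task-specific, maximum-likelihood
    ]
    for corpus in stages:
        model = train_on(model, corpus)
    return model
```

Swapping `task_corpus` selects which third-stage model (rhyming, Tibetan poem, modern poem) is produced.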
For example, in the third stage, corresponding training data sets may be used to train the rhyming text generation model, the Tibetan poem generation model, the modern poem generation model, and so on.
It should be noted that these text generation models are only examples; other text generation models may also be obtained by training on the basis of the pre-trained model as needed, which is not limited in the present application.
In order to implement the foregoing embodiments, the embodiments of the present application further provide an apparatus for processing input text. Fig. 8 is a schematic structural diagram of an apparatus for processing input text according to an embodiment of the present application.
As shown in fig. 8, the processing apparatus 800 for inputting text includes:
a first obtaining module 810, configured to obtain a first text input by a user on an input interface;
a second obtaining module 820, configured to input the first text into a text generation model generated by training, so as to obtain a second text;
a display module 830, configured to display the second text on the input interface.
In a possible implementation manner of the embodiment of the present application, there are a plurality of text generation models, and the second obtaining module 820 is configured to:
determining an application scene corresponding to each text generation model;
performing intention recognition on the first text to acquire an intention corresponding to the first text;
extracting a target text generation model from a plurality of text generation models according to the matching degree between the intention and each application scene;
and inputting the first text into the target text generation model to obtain the second text.
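The intent/scene matching performed by this module can be sketched as follows. This is a hypothetical illustration: the matching degree is taken as simple keyword overlap, and the scene keywords below are invented for the example, not drawn from the application.

```python
# Invented application-scene keywords per model; for illustration only.
SCENES = {
    "rhyming text generation model": {"rap", "lyrics", "song"},
    "Tibetan poem generation model": {"blessing", "name", "acrostic"},
}

def match_degree(intent_keywords, scene_keywords):
    """Matching degree modeled as keyword overlap between intent and scene."""
    return len(set(intent_keywords) & scene_keywords)

def extract_target_model(intent_keywords):
    """Pick the model whose application scene best matches the recognized intent."""
    return max(SCENES, key=lambda m: match_degree(intent_keywords, SCENES[m]))
```

The first text would then be input into the extracted target model to obtain the second text.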
In a possible implementation manner of the embodiment of the present application, the input interface includes a plurality of text processing controls, and the second obtaining module 820 is configured to:
under the condition that any text processing control is selected, determining a target text type according to a processing type corresponding to the selected text processing control;
extracting a target text generation model from the plurality of text generation models according to the target text type and the corresponding relation between the text generation models and the text types;
and inputting the first text into the target text generation model to obtain the second text.
In a possible implementation manner of the embodiment of the present application, the input interface includes a plurality of text processing controls, and the second obtaining module 820 is configured to:
acquiring historical operation data of the user on the plurality of text processing controls;
extracting a target text generation model from the plurality of text generation models according to the historical operation data;
and inputting the first text into the target text generation model to obtain the second text.
In a possible implementation manner of the embodiment of the present application, the text generation model is a rhyming text generation model, and the apparatus further includes:
the third acquisition module is used for acquiring a first training data set, wherein the first training data set comprises a plurality of training texts and a first rhyme foot of each training text;
the first training module is used for inputting each training text into an initial rhyming text generation model so as to obtain a first prediction text and a second rhyme foot; and in the case that the second rhyme foot does not match the corresponding first rhyme foot, correcting the initial rhyming text generation model according to the difference between the second rhyme foot and the corresponding first rhyme foot until the second rhyme foot matches the corresponding first rhyme foot, so as to generate the rhyming text generation model.
In a possible implementation manner of the embodiment of the present application, the text generating model is a Tibetan poem generating model, and the apparatus further includes:
the fourth acquisition module is used for acquiring a second training data set, wherein the second training data set comprises a plurality of poetry texts;
a fifth obtaining module, configured to obtain a first character of each sentence in each poetry text;
the second training module is used for inputting each poem text and the first character of each sentence into an initial Tibetan poem generating model so as to obtain a second prediction text; and in the case that a second initial character in the second prediction text does not match the first initial character in the corresponding poem text, correcting the initial Tibetan poem generating model according to the difference between the second initial character and the corresponding first initial character until the second initial character matches the corresponding first initial character, so as to generate the Tibetan poem generating model.
In a possible implementation manner of the embodiment of the present application, the text generation model is a modern poem generation model, and the apparatus further includes:
a sixth obtaining module, configured to obtain a third training data set, where the third training data set includes a plurality of common texts and modern poem texts semantically matched with each of the common texts;
the third training module is used for inputting each common text into the initial modern poetry generating model so as to obtain a third prediction text; and under the condition that the semantic similarity between the third predicted text and the corresponding modern poem text is smaller than a threshold value, correcting the initial modern poem generating model according to the semantic similarity between the third predicted text and the corresponding modern poem text until the semantic similarity between the third predicted text and the corresponding modern poem text is equal to or larger than the threshold value so as to generate the modern poem generating model.
It should be noted that the explanation of the foregoing embodiment of the method for processing an input text is also applicable to the apparatus for processing an input text of this embodiment, and therefore, the explanation is not repeated here.
In the embodiment of the application, a first text input by a user on an input interface is obtained, the first text is input into a text generation model generated by training to obtain a second text, and the second text is displayed on the input interface. Therefore, when a user inputs a text on the input interface, the text input by the user can be input into the text generation model generated by training, a new text is generated by utilizing the text generation model generated by training and displayed on the input interface for the user to select, and the interestingness of the input content of the user can be improved.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
FIG. 9 illustrates a schematic block diagram of an example electronic device 900 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 9, the device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a ROM (Read-Only Memory) 902 or a computer program loaded from a storage unit 908 into a RAM (Random Access Memory) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other via a bus 904. An I/O (Input/Output) interface 905 is also connected to the bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable processor, controller, microcontroller, and the like. The computing unit 901 executes the respective methods and processes described above, such as the method for processing input text. For example, in some embodiments, the method for processing input text may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the method for processing input text described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the method for processing input text in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be realized in digital electronic circuitry, Integrated circuitry, FPGAs (Field Programmable Gate arrays), ASICs (Application-Specific Integrated circuits), ASSPs (Application Specific Standard products), SOCs (System On Chip, System On a Chip), CPLDs (Complex Programmable Logic devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Electrically Programmable Read-Only-Memory) or flash Memory, an optical fiber, a CD-ROM (Compact Disc Read-Only-Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a Display device (e.g., a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LAN (Local Area Network), WAN (Wide Area Network), internet, and blockchain Network.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server may be a cloud Server, which is also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in a conventional physical host and a VPS (Virtual Private Server). The server may also be a server of a distributed system, or a server incorporating a blockchain.
According to an embodiment of the present application, there is also provided a computer program product which, when its instructions are executed by a processor, performs the method for processing input text set forth in the above embodiments of the present application.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (17)

1. A processing method of input text comprises the following steps:
acquiring a first text input by a user on an input interface;
inputting the first text into a text generation model generated by training to obtain a second text;
and displaying the second text on the input interface.
2. The method of claim 1, wherein there are a plurality of text generation models, and inputting the first text into the text generation model generated by training to obtain a second text comprises:
determining an application scene corresponding to each text generation model;
performing intention recognition on the first text to acquire an intention corresponding to the first text;
extracting a target text generation model from a plurality of text generation models according to the matching degree between the intention and each application scene;
and inputting the first text into the target text generation model to obtain the second text.
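The scene-matching selection described in claim 2 can be illustrated with a minimal, non-authoritative Python sketch. All names here (`recognize_intent`, the scene dictionary, the string-similarity "matching degree") are hypothetical stand-ins; a real embodiment would use a trained intent classifier and learned matching scores.

```python
# Hypothetical sketch of claim 2: pick the text generation model whose
# application scene best matches the intent recognized from the first text.
from difflib import SequenceMatcher

def recognize_intent(first_text):
    # Placeholder intent recognizer; a real system would use a classifier.
    return "poem" if "poem" in first_text else "chat"

def select_target_model(first_text, models):
    """models: dict mapping an application scene name to a generation model."""
    intent = recognize_intent(first_text)
    # Matching degree between the intent and each scene (string similarity here).
    best_scene = max(models, key=lambda s: SequenceMatcher(None, intent, s).ratio())
    return models[best_scene]

models = {"poem": lambda t: "A verse about " + t, "chat": lambda t: "Reply: " + t}
target = select_target_model("write a poem about rain", models)
print(target("rain"))  # the "second text" produced by the target model
```

The same extract-then-generate shape also covers claims 3 and 4; only the selection key changes (control type or historical operation data instead of intent).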
3. The method of claim 1, wherein the input interface includes a plurality of text processing controls, and the inputting the first text into the text generation model generated by training to obtain a second text comprises:
under the condition that any text processing control is selected, determining a target text type according to a processing type corresponding to the selected text processing control;
extracting a target text generation model from the plurality of text generation models according to the target text type and the corresponding relation between the text generation models and the text types;
and inputting the first text into the target text generation model to obtain the second text.
4. The method of claim 1, wherein the input interface includes a plurality of text processing controls, and the inputting the first text into the text generation model generated by training to obtain a second text comprises:
acquiring historical operation data of the user on the plurality of text processing controls;
extracting a target text generation model from a plurality of text generation models according to the historical operation data;
and inputting the first text into the target text generation model to obtain the second text.
5. The method of claim 1, wherein the text generation model is a rhyming text generation model, and before the inputting the first text into the text generation model generated by training to obtain the second text, the method further comprises:
acquiring a first training data set, wherein the first training data set comprises a plurality of training texts and a first rhyme of each training text;
inputting each training text into an initial rhyming text generation model to obtain a first predicted text and a second rhyme;
and in a case that the second rhyme does not match the corresponding first rhyme, correcting the initial rhyming text generation model according to the difference between the second rhyme and the corresponding first rhyme until the second rhyme matches the corresponding first rhyme, so as to generate the rhyming text generation model.
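The correct-until-matched loop of claim 5 can be sketched as a toy Python example. Everything here is hypothetical: the "model" is reduced to a suffix the generator appends, `rhyme_of` naively takes the final character as the rhyme, and the suffix update stands in for the gradient-based correction a real network would receive.

```python
# Toy sketch of the rhyming-text training loop in claim 5 (all names hypothetical):
# the model is "corrected" until the rhyme of its prediction matches the target rhyme.
def rhyme_of(text):
    # Simplification: treat the final character as the rhyme; a real system
    # would map characters to rhyme classes.
    return text[-1]

def train_rhyme_model(training_set, max_steps=10):
    """training_set: list of (training_text, first_rhyme) pairs."""
    suffix = "?"  # stands in for the model's learnable state
    for text, first_rhyme in training_set:
        for _ in range(max_steps):
            predicted = text + suffix            # first predicted text
            second_rhyme = rhyme_of(predicted)   # second rhyme
            if second_rhyme == first_rhyme:      # rhymes match: stop correcting
                break
            suffix = first_rhyme                 # "correct" toward the target rhyme
    return lambda t: t + suffix

model = train_rhyme_model([("the night is long", "g")])
print(rhyme_of(model("a line")))  # "g"
```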
6. The method of claim 1, wherein the text generation model is an acrostic poem generation model, and before the inputting the first text into the text generation model generated by training to obtain the second text, the method further comprises:
acquiring a second training data set, wherein the second training data set comprises a plurality of poem texts;
acquiring a first character of each sentence in each poem text;
inputting each poem text and the first character of each sentence into an initial acrostic poem generation model to obtain a second predicted text;
and in a case that a second initial character in the second predicted text does not match the corresponding first initial character in the poem text, correcting the initial acrostic poem generation model according to the difference between the second initial character and the corresponding first initial character until the second initial character matches the corresponding first initial character, so as to generate the acrostic poem generation model.
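The matching check at the heart of claim 6 — comparing the first character of each generated line against the target initial characters — can be sketched as follows. This is an assumed illustration, not the patented implementation; `initials` and `matches_acrostic` are invented names, and the example poem is fabricated for demonstration.

```python
# Toy sketch of the acrostic-poem training check in claim 6 (hypothetical names):
# the first character of each predicted line must match the target initials.
def initials(poem_lines):
    return [line[0] for line in poem_lines if line]

def matches_acrostic(predicted_lines, first_characters):
    # Second initial characters vs. the corresponding first initial characters.
    return initials(predicted_lines) == list(first_characters)

poem = [
    "Roses bloom in spring",
    "Air is warm and clear",
    "In gardens people sing",
    "Nothing else so dear",
]
print(matches_acrostic(poem, "RAIN"))  # True: the poem hides the word RAIN
```

In training, a `False` result would trigger a correction of the initial model before re-checking, exactly as the claim describes for the rhyme case.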
7. The method of claim 1, wherein the text generation model is a modern poem generation model, and before the inputting the first text into the text generation model generated by training to obtain the second text, the method further comprises:
acquiring a third training data set, wherein the third training data set comprises a plurality of common texts and modern poem texts semantically matched with the common texts;
inputting each common text into an initial modern poem generation model to obtain a third predicted text;
and in a case that the semantic similarity between the third predicted text and the corresponding modern poem text is smaller than a threshold, correcting the initial modern poem generation model according to the semantic similarity between the third predicted text and the corresponding modern poem text until the semantic similarity between them is equal to or greater than the threshold, so as to generate the modern poem generation model.
8. An apparatus for processing an input text, comprising:
the first acquisition module is used for acquiring a first text input by a user on an input interface;
the second acquisition module is used for inputting the first text into a text generation model generated by training so as to acquire a second text;
and the display module is used for displaying the second text on the input interface.
9. The apparatus of claim 8, wherein there are a plurality of text generation models, and the second obtaining module is configured to:
determining an application scene corresponding to each text generation model;
performing intention recognition on the first text to acquire an intention corresponding to the first text;
extracting a target text generation model from a plurality of text generation models according to the matching degree between the intention and each application scene;
and inputting the first text into the target text generation model to obtain the second text.
10. The apparatus of claim 8, wherein the input interface comprises a plurality of text processing controls, the second obtaining module to:
under the condition that any text processing control is selected, determining a target text type according to a processing type corresponding to the selected text processing control;
extracting a target text generation model from the plurality of text generation models according to the target text type and the corresponding relation between the text generation models and the text types;
and inputting the first text into the target text generation model to obtain the second text.
11. The apparatus of claim 8, wherein the input interface comprises a plurality of text processing controls, the second obtaining module to:
acquiring historical operation data of the user on the plurality of text processing controls;
extracting a target text generation model from a plurality of text generation models according to the historical operation data;
and inputting the first text into the target text generation model to obtain the second text.
12. The apparatus of claim 8, wherein the text generation model is a rhyming text generation model, the apparatus further comprising:
the third acquisition module is used for acquiring a first training data set, wherein the first training data set comprises a plurality of training texts and a first rhyme of each training text;
the first training module is used for inputting each training text into an initial rhyming text generation model to obtain a first predicted text and a second rhyme; and, in a case that the second rhyme does not match the corresponding first rhyme, correcting the initial rhyming text generation model according to the difference between the second rhyme and the corresponding first rhyme until the second rhyme matches the corresponding first rhyme, so as to generate the rhyming text generation model.
13. The apparatus of claim 8, wherein the text generation model is an acrostic poem generation model, the apparatus further comprising:
the fourth acquisition module is used for acquiring a second training data set, wherein the second training data set comprises a plurality of poetry texts;
a fifth obtaining module, configured to obtain a first character of each sentence in each poetry text;
the second training module is used for inputting each poem text and the first character of each sentence into an initial acrostic poem generation model to obtain a second predicted text; and, in a case that a second initial character in the second predicted text does not match the corresponding first initial character in the poem text, correcting the initial acrostic poem generation model according to the difference between the second initial character and the corresponding first initial character until the second initial character matches the corresponding first initial character, so as to generate the acrostic poem generation model.
14. The apparatus of claim 8, wherein the text generation model is a modern poem generation model, the apparatus further comprising:
a sixth obtaining module, configured to obtain a third training data set, where the third training data set includes a plurality of common texts and modern poem texts semantically matched with each of the common texts;
the third training module is used for inputting each common text into an initial modern poem generation model to obtain a third predicted text; and, in a case that the semantic similarity between the third predicted text and the corresponding modern poem text is smaller than a threshold, correcting the initial modern poem generation model according to the semantic similarity between the third predicted text and the corresponding modern poem text until the semantic similarity between them is equal to or greater than the threshold, so as to generate the modern poem generation model.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202110580302.0A 2021-05-26 2021-05-26 Input text processing method and device, electronic equipment and storage medium Pending CN113360001A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110580302.0A CN113360001A (en) 2021-05-26 2021-05-26 Input text processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110580302.0A CN113360001A (en) 2021-05-26 2021-05-26 Input text processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113360001A true CN113360001A (en) 2021-09-07

Family

ID=77527732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110580302.0A Pending CN113360001A (en) 2021-05-26 2021-05-26 Input text processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113360001A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114911553A (en) * 2022-03-28 2022-08-16 携程旅游信息技术(上海)有限公司 Text processing task construction method, device, equipment and storage medium
CN116861860A (en) * 2023-07-06 2023-10-10 百度(中国)有限公司 Text processing method and device, electronic equipment and storage medium
CN116861861A (en) * 2023-07-06 2023-10-10 百度(中国)有限公司 Text processing method and device, electronic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180286426A1 (en) * 2017-03-29 2018-10-04 Microsoft Technology Licensing, Llc Voice synthesized participatory rhyming chat bot
CN109086408A (en) * 2018-08-02 2018-12-25 腾讯科技(深圳)有限公司 Document creation method, device, electronic equipment and computer-readable medium
CN109977390A (en) * 2017-12-27 2019-07-05 北京搜狗科技发展有限公司 A kind of method and device generating text
CN110134968A (en) * 2019-05-22 2019-08-16 网易(杭州)网络有限公司 Poem generation method, device, equipment and storage medium based on deep learning
US20200051536A1 (en) * 2017-09-30 2020-02-13 Tencent Technology (Shenzhen) Company Limited Method and apparatus for generating music
CN111046648A (en) * 2019-10-29 2020-04-21 平安科技(深圳)有限公司 Rhythm-controlled poetry generating method, device and equipment and storage medium
CN111221940A (en) * 2020-01-03 2020-06-02 京东数字科技控股有限公司 Text generation method and device, electronic equipment and storage medium
CN111444679A (en) * 2020-03-27 2020-07-24 北京小米松果电子有限公司 Poetry generation method and device, electronic equipment and storage medium
CN112101006A (en) * 2020-09-14 2020-12-18 中国平安人寿保险股份有限公司 Poetry generation method and device, computer equipment and storage medium
CN112651235A (en) * 2020-12-24 2021-04-13 北京搜狗科技发展有限公司 Poetry generation method and related device
CN112784599A (en) * 2020-12-23 2021-05-11 北京百度网讯科技有限公司 Poetry sentence generation method and device, electronic equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU Feifei et al., "Research on Text Word Vectors and Pre-trained Language Models", Journal of Shanghai University of Electric Power, no. 04, 15 August 2020 (2020-08-15), pages 320-328 *


Similar Documents

Publication Publication Date Title
CN112560912B (en) Classification model training method and device, electronic equipment and storage medium
US20220215183A1 (en) Automatic post-editing model for neural machine translation
CN112560479B (en) Abstract extraction model training method, abstract extraction device and electronic equipment
CN113360001A (en) Input text processing method and device, electronic equipment and storage medium
KR102565673B1 (en) Method and apparatus for generating semantic representation model,and storage medium
CN113220836A (en) Training method and device of sequence labeling model, electronic equipment and storage medium
CN112580339B (en) Model training method and device, electronic equipment and storage medium
CN114416943B (en) Training method and device for dialogue model, electronic equipment and storage medium
CN113450759A (en) Voice generation method, device, electronic equipment and storage medium
CN116012481B (en) Image generation processing method and device, electronic equipment and storage medium
CN112633017A (en) Translation model training method, translation processing method, translation model training device, translation processing equipment and storage medium
CN115309877A (en) Dialog generation method, dialog model training method and device
CN110851601A (en) Cross-domain emotion classification system and method based on layered attention mechanism
CN113657100A (en) Entity identification method and device, electronic equipment and storage medium
CN111859953A (en) Training data mining method and device, electronic equipment and storage medium
CN112466289A (en) Voice instruction recognition method and device, voice equipment and storage medium
CN115688920A (en) Knowledge extraction method, model training method, device, equipment and medium
CN110991175A (en) Text generation method, system, device and storage medium under multiple modes
CN114399772B (en) Sample generation, model training and track recognition methods, devices, equipment and media
CN117290515A (en) Training method of text annotation model, method and device for generating text graph
CN112860995A (en) Interaction method, device, client, server and storage medium
CN110457691B (en) Script role based emotional curve analysis method and device
CN112466277A (en) Rhythm model training method and device, electronic equipment and storage medium
CN112784599B (en) Method and device for generating poem, electronic equipment and storage medium
CN114758649B (en) Voice recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210907