CN114021545A - Automatic poetry language model training method and device, and automatic poetry method and device

Info

Publication number
CN114021545A
Authority
CN
China
Prior art keywords: text, poetry, language model, complete, preset initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210003512.8A
Other languages
Chinese (zh)
Inventor
邹旭 (Zou Xu)
杨植麟 (Yang Zhilin)
殷达 (Yin Da)
丁铭 (Ding Ming)
唐杰 (Tang Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhiyuan Wudao Technology Co., Ltd.
Original Assignee
Beijing Zhiyuan Wudao Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhiyuan Wudao Technology Co., Ltd.
Priority to CN202210003512.8A
Publication of CN114021545A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses a method and device for training a language model for automatic poetry composition, and a method and device for automatic poetry composition, belonging to the technical field of natural language processing. The method mainly comprises the following steps: inputting a preset initial text into a preset language model to obtain at least one complete verse text corresponding to the preset initial text; calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; obtaining a target complete verse text from the complete verse texts according to the sorting result; and adjusting the language model according to the target complete verse text to obtain a target language model. After verses are generated by the preset language model from the preset initial text, reverse query and beam search are performed to screen out the verses most correlated with the preset initial text, which increases the correlation between the generated verses and the preset initial text and improves the quality of the verses generated by the language model.

Description

Automatic poetry language model training method and device, and automatic poetry method and device
Technical Field
The application relates to the technical field of natural language processing, and in particular to an automatic poetry language model training method and device, and an automatic poetry method and device.
Background
Attempts to compose traditional Chinese poetry with artificial intelligence date back to the early stages of the field, but most have been confined to the domain of ancient poems. In recent years, with the development of deep learning, researchers have begun to train models dedicated to poetry creation on dedicated poetry data, producing applications such as "Jiuge" ("Nine Songs") and "Three Hundred Poems". Constrained by their data, however, these applications can only learn the imagery common in ancient poetry and struggle to generalize beyond it; since imagery such as Beijing or New York rarely if ever appears in traditional poetry, they cannot produce good results on such subjects.
In recent years, large-scale pre-trained language models, by virtue of their scale and performance, have become a new highlight in the field of natural language processing. A language model trained on large-scale natural-language data collected from the internet, by predicting the next word from the preceding context, is not well suited to directly completing discriminative tasks, but it can learn the information contained in massive text without labels and therefore achieves surprisingly good results on generative tasks.
Although pre-trained models achieve excellent results on ordinary text generation, conventional generation methods remain limited to producing ordinary text similar to the training data; for cross-domain generation, quality is poor and little optimization work exists. If poetry is generated directly from such a language model, then no matter what query format is used, a suitable poem will most likely not be produced; even when a verse whose format barely meets the requirements is generated, it strays from the topic and is of low quality.
Disclosure of Invention
In view of the problems in the prior art that a suitable poem cannot be generated, or that quality is low even when a poem of the required format is generated, the present application mainly provides an automatic poetry language model training method and device, and an automatic poetry method and device.
To achieve the above object, the present application adopts the following technical solution: an automatic poetry language model training method is provided, comprising: inputting a preset initial text into a preset language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre; calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; obtaining a target complete verse text from the complete verse texts according to the sorting result; and adjusting the language model according to the target complete verse text to obtain a target language model.
Another technical solution adopted by the present application is as follows: an automatic poetry language model training device is provided, comprising: a module for inputting a preset initial text into a preset language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre; a module for calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; a module for obtaining a target complete verse text from the complete verse texts according to the sorting result; and a module for adjusting the language model according to the target complete verse text to obtain a target language model.
Another technical solution adopted by the present application is as follows: an automatic poetry method is provided, comprising: inputting a preset initial text into a preset target language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre; calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; and obtaining a target complete verse text from the complete verse texts according to the sorting result.
Another technical solution adopted by the present application is as follows: an automatic poetry device is provided, comprising: a module for inputting a preset initial text into a preset target language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre; a module for calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; and a module for obtaining a target complete verse text from the complete verse texts according to the sorting result.
The beneficial effects that the technical solution of the present application can achieve are as follows. The application provides an automatic poetry language model training method and device, and an automatic poetry method and device. After verses are generated by the preset language model from the preset initial text, reverse query and beam search are performed to screen out the verses most correlated with the preset initial text, which increases the correlation between the generated verses and the preset initial text and improves the quality of the verses generated by the language model.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an embodiment of the automatic poetry language model training method of the present application;
FIG. 2 is a schematic diagram of an example, in the automatic poetry language model training method of the present application, of obtaining first verse texts from a preset initial text and scoring them by reverse query;
FIG. 3 is a schematic diagram of an example, in the automatic poetry language model training method of the present application, of obtaining a target complete verse text from a preset initial text;
FIG. 4 is a schematic diagram of an embodiment of the automatic poetry language model training device of the present application;
FIG. 5 is a schematic diagram of an embodiment of the automatic poetry method of the present application;
FIG. 6 is a schematic diagram of an embodiment of the automatic poetry device of the present application.
The above figures show specific embodiments of the present application, which are described in more detail below. These drawings and the written description are not intended to limit the scope of the inventive concept in any way, but rather to illustrate the inventive concept to those skilled in the art by reference to specific embodiments.
Detailed Description
The preferred embodiments of the present application are described in detail below with reference to the accompanying drawings, so that those skilled in the art can better understand the advantages and features of the present application, and so that the scope of the present application is defined more clearly.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The inventive concept of the present application is as follows: an automatic poetry language model training method and device, and an automatic poetry method and device, are provided. A preset initial text, comprising a title and a genre, is input into a preset language model to obtain at least one complete verse text corresponding to it. Specifically, the preset language model first generates candidate first verses from the preset initial text; a reverse query then computes the perplexity between each candidate verse and the preset initial text, a preset first number of verses are screened out as first target verses, and for each first target verse the remaining verses of the complete poem are found by beam search. The perplexity between each complete verse text and the preset initial text is computed and the results are sorted; the target complete verse text is obtained from the complete verse texts according to the sorting result. Verses highly correlated with the preset initial text are thus screened out, increasing the correlation between the generated verses and the preset initial text and improving the quality of the verses generated by the language model. Finally, the language model is adjusted according to the target complete verse text to obtain a target language model, which conveniently improves the quality of poems composed subsequently.
The technical solutions of the present application, and how they solve the above technical problems, are described below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 shows a specific embodiment of the automatic poetry language model training method of the present application. In the embodiment shown in fig. 1, the method mainly comprises: step S101, inputting a preset initial text into a preset language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre; step S102, calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; step S103, obtaining a target complete verse text from the complete verse texts according to the sorting result; and step S104, adjusting the language model according to the target complete verse text to obtain a target language model.
In this implementation, a preset initial text is input into a preset language model to obtain at least one complete verse text related to it. The complete verse texts are sorted according to their respective perplexities against the preset initial text, and the final target complete verse text is obtained from the sorting result. The preset language model generates ordinary text well but is limited when generating verse; screening by perplexity yields the target complete verse text most correlated with the preset initial text. The relevant parameters of the language model are then adjusted according to the obtained target complete verse text to produce a target language model, so that poem quality is improved when poems are subsequently composed with the target language model. A minimal sketch of this loop follows.
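The following Python sketch outlines steps S101 to S104 at a high level; the function names (generate_fn, perplexity_fn, fine_tune_fn) and the calling conventions are illustrative assumptions, not the patent's actual implementation.
```python
from typing import Callable, List, Tuple

def train_poetry_model(model,
                       prompts: List[str],
                       generate_fn: Callable,
                       perplexity_fn: Callable,
                       fine_tune_fn: Callable):
    """Sketch of steps S101-S104 under the assumptions stated above."""
    targets: List[Tuple[str, str]] = []
    for prompt in prompts:
        # S101: generate candidate complete verse texts from the preset initial text.
        candidates = generate_fn(model, prompt)
        # S102: score each candidate by its reverse perplexity against the prompt
        # (lower perplexity = higher correlation) and sort.
        ranked = sorted(candidates,
                        key=lambda poem: perplexity_fn(model, prompt, poem))
        # S103: keep the best-ranked candidate as the target complete verse text.
        targets.append((prompt, ranked[0]))
    # S104: adjust (fine-tune) the language model on the selected targets.
    fine_tune_fn(model, targets)
    return model
```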
In the embodiment shown in fig. 1, step S101 of the automatic poetry language model training method inputs a preset initial text into a preset language model and obtains at least one complete verse text corresponding to the preset initial text, where the preset initial text comprises a title and a genre.
In this implementation, the preset initial text is input into the preset language model to obtain at least one first verse text corresponding to it; a second verse text corresponding to the preset initial text and the first verse text is then obtained, and so on until the last verse text is obtained, yielding a complete verse text for each first verse text. The title and genre are preset in the initial text to improve the quality of the first verse text, the second verse text, and the complete verse text. In an optional embodiment of the present application, inputting the preset initial text into the preset language model to obtain at least one complete verse text further comprises: in the language model, obtaining at least one first verse text corresponding to the preset initial text according to the preset initial text; calculating a first perplexity between each first verse text and the preset initial text, thereby obtaining a first score for each first verse text; and screening the first verse texts according to the first scores to obtain a preset first number of first target verse texts.
In this optional embodiment, the preset initial text is input into the preset language model so that the model outputs at least one first verse text corresponding to it. Each first verse text is then fed back into the model as a reverse query, and the first perplexity with which the model reproduces the preset initial text from that verse is obtained, giving a first score for each first verse text; the first verse texts are screened by these scores to obtain a preset first number of first target verse texts. Computing, by reverse query, the probability of recovering the preset initial text from a first verse text (i.e., the first perplexity) and screening on that basis raises the weight of the retained first target verse texts, increases their correlation with the preset initial text, and improves the quality of the first verse texts the language model derives from the preset initial text.
In an optional example of the present application, the preset initial text is "title: Ode to New York genre: poetry text:". When this preset initial text is input into the language model, the genre constraint makes the model's first output texts be verses with high probability, for example "clouds gather in Manhattan", "clouds and rain in Manhattan", and "admired for years, yet unknowing of the vitex" (the quoted English verses are rough renderings of the Chinese examples). With 3 first verse texts obtained, a reverse query is run on each of them simultaneously, computing the probability of producing the title "Ode to New York" from each, which gives the first perplexity for each of the 3 first verse texts. That is, "clouds gather in Manhattan, from the ancient poem:" is input into the language model and the perplexity of the model outputting "Ode to New York" is obtained; the same is done for "clouds and rain in Manhattan" and for "admired for years, yet unknowing of the vitex". The first verse texts are then screened by the three perplexities; for example, with the preset first number being 2, two first verse texts are kept as the first target verse texts. The probability corresponding to "clouds gather in Manhattan" is e^-25.31 (log-probability -25.31), giving a score of 24.69; the score for "clouds and rain in Manhattan" is 23.48; and the score for "admired for years, yet unknowing of the vitex" is 19.26. The two highest-scoring verses, "clouds gather in Manhattan" and "clouds and rain in Manhattan", are kept as the first target verse texts.
Fig. 2 shows an example, in the automatic poetry language model training method of the present application, of obtaining a first verse text from a preset initial text and scoring it by reverse query.
In fig. 2, the preset initial text is "title: Ode to New York genre: poetry text:". After this preset initial text is input into the language model, a reverse query is run on the first verse text "clouds and rain in Manhattan" output by the model, obtaining the corresponding first perplexity and then the first score. The reverse query proceeds as follows: given the input "clouds and rain in Manhattan, from the ancient poem:", the probability that the next word is "Ode" is computed; then, with "Ode" appended to the input, the probability that the next word is "to New York" is computed. The two probabilities are multiplied and the logarithm taken (equivalently, the logarithms are summed), giving -26.52, from which the score of the sentence is computed as 23.48.
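This reverse query can be sketched with a generic causal language model; since the patent's own model (a Megatron-LM GPT-2 variant) is not publicly named here, the gpt2 checkpoint, the English prompt template, and the score offset of 50 (inferred from the worked numbers, e.g. log p = -26.52 giving score 23.48) are all assumptions.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def reverse_query_score(verse: str, title: str, offset: float = 50.0) -> float:
    """Log-probability of the title given the verse, via the reverse prompt
    '<verse>, from the ancient poem:', plus an assumed constant offset."""
    prompt_ids = tokenizer(f"{verse}, from the ancient poem:",
                           return_tensors="pt").input_ids
    title_ids = tokenizer(f" {title}", return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, title_ids], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    total = 0.0
    for i in range(title_ids.size(1)):
        # Logits at position p predict the token at position p + 1.
        pred_pos = prompt_ids.size(1) + i - 1
        total += log_probs[0, pred_pos, title_ids[0, i]].item()
    return offset + total   # e.g. log p = -26.52 -> score 23.48, as in fig. 2

print(reverse_query_score("clouds and rain in Manhattan", "Ode to New York"))
```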
In an optional embodiment of the present application, inputting the preset initial text into the preset language model to obtain at least one complete verse text further comprises: in the language model, obtaining at least one Nth verse text corresponding to the first N-1 target verse texts according to the preset initial text and the first N-1 target verse texts, where N is a natural number greater than 1 and not greater than the number of verses; calculating an Nth perplexity between each sequence of the first N verse texts and the preset initial text, thereby obtaining an Nth score for each Nth verse text; and screening the Nth verse texts by the Nth scores to obtain a preset second number of Nth target verse texts, and thereby at least one complete verse text.
In this optional embodiment, the preset initial text and a first target verse text are input into the preset language model so that it outputs second verse texts that continue them. Each candidate, consisting of the first target verse text followed by a second verse text, is fed back as a reverse query; the second perplexity with which the model reproduces the preset initial text is obtained, giving a second score for each second verse text, and the second verse text with the highest second score is kept as the second target verse text. The preset initial text, the first target verse text corresponding to the second target verse text, and the second target verse text are then input into the model to obtain third verse texts, and a third target verse text is selected by reverse query in the same way; the process repeats until the Nth target verse text, and thus the complete target text, is obtained. After the first target verse texts are obtained, the subsequent second through Nth target verse texts are all found by beam search, keeping only the highest-scoring continuation at each step. This raises the weight of the Nth target verse texts, increases the correlation between the first N target verse texts and the preset initial text, and improves the quality of the Nth target verse texts the language model derives from the preset initial text.
In an optional example of the present application, each of the first target verse texts obtained above, "clouds gather in Manhattan" and "clouds and rain in Manhattan", is input into the language model together with the preset initial text, i.e., "title: Ode to New York genre: poetry text: clouds gather in Manhattan" and "title: Ode to New York genre: poetry text: clouds and rain in Manhattan". The second verse texts obtained for "clouds gather in Manhattan" are "towers beyond the jade terrace" and "the Baidi river hills, an empty kingfisher screen", and the second verse text obtained for "clouds and rain in Manhattan" is "a hundred histories build the imperial capital" (again, rough renderings of the Chinese examples). Each two-line candidate is reverse-queried: the perplexity of the model outputting "Ode to New York" from the first candidate gives a score of 26.75; from the second, 26.52; and from the third, 26.94. With the preset second number being 1, the highest-scoring candidate is kept, so "a hundred histories build the imperial capital" becomes the second target verse text, and its corresponding first target verse text is "clouds and rain in Manhattan".
In this manner, each sequence of the first N-1 verse texts is input into the language model together with the preset initial text to obtain Nth verse texts continuing it; the Nth perplexity between each resulting N-verse sequence and the preset initial text is computed to obtain an Nth score; the Nth target verse texts are screened from the Nth verse texts by these scores; and the first N-1 target verse texts together with their corresponding Nth target verse text form a complete verse text. A sketch of this line-by-line search follows.
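The line-by-line screening just described is essentially a beam search (the term the patent itself uses later). A minimal sketch, assuming a candidate generator extend_fn and a scorer such as reverse_query_score above wrapped to take a list of lines:
```python
from typing import Callable, List

def beam_search_poem(prompt: str,
                     first_verses: List[str],
                     extend_fn: Callable[[str, List[str]], List[str]],
                     score_fn: Callable[[str, List[str]], float],
                     n_lines: int,
                     first_beam: int = 2,
                     next_beam: int = 1) -> List[List[str]]:
    """Keep the top `first_beam` first verses by reverse-query score, then
    extend line by line, keeping `next_beam` continuations per step (the
    patent's worked example uses first_beam=2, next_beam=1)."""
    beams = sorted(first_verses,
                   key=lambda v: score_fn(prompt, [v]),
                   reverse=True)[:first_beam]
    poems = [[v] for v in beams]
    for _ in range(n_lines - 1):
        extended = []
        for poem in poems:
            for cand in extend_fn(prompt, poem):
                extended.append(poem + [cand])
        # Re-score every extended candidate against the preset initial text
        # and keep only the best continuation(s).
        extended.sort(key=lambda p: score_fn(prompt, p), reverse=True)
        poems = extended[:max(next_beam, 1)]
    return poems
```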
In an optional embodiment of the present application, obtaining, in the language model, at least one first verse text corresponding to the preset initial text further comprises: inputting the preset initial text into the language model to obtain at least one first text; and judging the text format of each first text, and determining from the first texts the first verse texts that conform to the verse format.
In this optional embodiment, although the genre of the subsequent text is constrained by the preset initial text, so that the language model outputs text of the set genre with high probability, the constraint only raises that probability; texts inconsistent with the genre are still output with low probability. Therefore, all texts the language model outputs from the preset initial text are taken as first texts, the text format of each first text is judged against the set genre, and the first verse texts conforming to the set genre are screened out, which increases the correlation of the model's output with verse given the preset initial text. A minimal sketch of such a filter follows.
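This sketch checks only character class and line length for a seven-character verse, which is an assumption; the patent also names tonal pattern, rhyme, and meter, which would require a pronunciation dictionary. The candidate strings are hypothetical stand-ins for the example outputs.
```python
def is_valid_verse(text: str, line_length: int = 7) -> bool:
    """Keep a candidate first text only if it looks like a verse line:
    exactly `line_length` CJK characters, no punctuation or Latin text."""
    return (len(text) == line_length
            and all("\u4e00" <= ch <= "\u9fff" for ch in text))

# Illustrative candidates; non-conforming texts are filtered out.
candidates = ["曼哈顿中云雨聚", "高楼林立如天际线", "美丽繁华都市"]
first_verses = [c for c in candidates if is_valid_verse(c)]
print(first_verses)   # -> ['曼哈顿中云雨聚']
```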
In an optional example of the present application, with the preset initial text "title: Ode to New York genre: poetry text:", the language model outputs "clouds gather in Manhattan", "clouds and rain in Manhattan", "buildings stand like the sky", "a super city", and "a beautiful and prosperous city". With these 5 first texts obtained, the text format of each is judged; verse-format criteria include, but are not limited to, tonal pattern (ping and ze), rhyme, metrical rules, and character count. Because "a beautiful and prosperous city" does not conform to the verse genre, it is deleted, and "clouds gather in Manhattan", "clouds and rain in Manhattan", "buildings stand like the sky", and "a super city" are taken as the first verse texts corresponding to the preset initial text.
In an optional embodiment of the present application, inputting the preset initial text into the language model to obtain at least one first text further comprises: inputting the preset initial text into the language model to obtain at least one first word text corresponding to it; generating, from the preset initial text and each first word text, the corresponding second word text; and generating, from the preset initial text and each sequence of the first M-1 word texts, the corresponding Mth word text, until the character count of the first M word texts equals the character count specified by the preset initial text, where M is a natural number greater than 1.
In an optional example of the present application, the language model predicts the next word from the preceding context. "title: Ode to New York genre: poetry text:" is therefore input into the language model, which gives a predicted probability distribution over all possible words at the next position, from which one word is randomly sampled. For example, when the model outputs "clouds and rain in Manhattan" and "clouds gather in Manhattan" from the preset initial text, the specific steps are as follows (the character-by-character steps follow the Chinese original and are rendered approximately here): inputting "title: Ode to New York genre: poetry text:" generates the first word text "Manhattan"; appending it and re-inputting generates the second word text "in"; re-inputting again generates the third word text "clouds"; from "clouds in Manhattan", two different fourth word texts are generated, and sampling continues word by word along each branch until the full lines "clouds and rain in Manhattan" and "clouds gather in Manhattan" are complete, yielding 2 first texts. The subsequent second through Nth texts are obtained in the same manner.
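The word-by-word generation just described is ordinary autoregressive sampling; a sketch under the same stand-in model assumption as before:
```python
import torch

def sample_verse(model, tokenizer, prompt: str, max_new_tokens: int = 7) -> str:
    """Sample one verse token by token: at each step the model yields a
    probability distribution over the next token and one token is drawn."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    prompt_len = ids.size(1)
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)  # random draw per the distribution
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return tokenizer.decode(ids[0, prompt_len:], skip_special_tokens=True)

# Sampling several times yields several distinct first texts, e.g. two
# different continuations after "clouds in Manhattan".
```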
In the embodiment shown in fig. 1, the automatic poetry language model training method further comprises step S102: calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores.
In this implementation, a reverse query is run on each complete verse text obtained in the previous step for each first target verse text; the perplexity between each complete verse text and the preset initial text is obtained, the score of each complete verse text is derived from its perplexity, and the complete verse texts are sorted by score. The computed perplexities provide the basis for subsequently obtaining the target complete verse text most relevant to the preset initial text.
In an optional embodiment of the present application, calculating the perplexity between each complete verse text and the preset initial text further comprises: computing the perplexity between each complete verse text and the preset initial text through the language model, and obtaining the perplexity corresponding to each complete verse text.
In this optional embodiment, a reverse query is run on each obtained complete verse text: the complete verse text, framed as "<complete verse text>, from the ancient poem:", is input into the language model, and the perplexity of the model outputting "Ode to New York" is obtained. The perplexity corresponding to each complete verse text, obtained in this way, provides the basis for subsequently obtaining the target complete verse text most relevant to the preset initial text.
In the embodiment shown in fig. 1, the method further comprises step S103: obtaining the target complete verse text from the complete verse texts according to the sorting result.
In this implementation, the complete verse texts are sorted by the scores derived from their respective perplexities, and according to the sorting result the highest-scoring complete verse text is taken as the target complete verse text, ensuring that the obtained target complete verse text best matches the preset initial text.
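Steps S102 and S103 thus reduce to scoring each complete candidate with the reverse query and taking the best; a sketch reusing reverse_query_score from the earlier block:
```python
def pick_target_poem(poems, title: str):
    """Score each complete verse text by the reverse query against the
    title and return the highest-scoring one (highest score = lowest
    perplexity of recovering the preset initial text)."""
    scored = sorted(
        ((reverse_query_score("".join(p), title), p) for p in poems),
        key=lambda t: t[0],
        reverse=True,
    )
    return scored[0][1]
```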
Fig. 3 shows a specific example, in the automatic poetry language model training method, of obtaining a target complete verse text from a preset initial text.
As shown in fig. 3, the preset initial text is input into the language model, which generates first verse texts related to it, such as "clouds gather in Manhattan", "clouds and rain in Manhattan", "buildings stand like the sky", and "a super city". A reverse query on each first verse text yields its first score, and the 2 highest-scoring texts in fig. 3, "clouds gather in Manhattan" and "clouds and rain in Manhattan", are taken as the first target verse texts. The preset initial text and each first target verse text are then input into the language model to generate the corresponding second verse texts: "towers beyond the jade terrace" and "the Baidi river hills, an empty kingfisher screen" for "clouds gather in Manhattan", and "a hundred histories build the imperial capital" for "clouds and rain in Manhattan". The three two-line candidates are reverse-queried for their second scores and only the highest-scoring text is kept, giving the second target verse text "a hundred histories build the imperial capital", whose first target verse text is "clouds and rain in Manhattan". The third verse texts obtained for this pair are "the financial center thrives in every aspect" and "converging toward a single day"; reverse query determines "the financial center thrives in every aspect" as the third target verse text, and so on until the target complete verse text is obtained: "Clouds and rain in Manhattan; a hundred histories build the imperial capital. The financial center thrives in every aspect, yielding sacred material at every moment. Amid the five continents within the city, the wealth of all the world tilts at dusk. Were it not kingly spirit descending from the heavens, its name would not resound throughout the world." (The English verses are rough renderings of the Chinese examples.)
In the embodiment shown in fig. 1, the method further comprises step S104: adjusting the language model according to the target complete verse text to obtain the target language model. In a specific embodiment of the present application, adjusting the language model according to the target complete verse text to obtain the target language model further comprises: adjusting relevant parameters of the language model according to the preset initial text and the target complete verse text to obtain the target language model, so that the probability of the target language model outputting the target complete verse text from the preset initial text reaches a preset threshold.
In this optional embodiment, because the parameters of the preset language model work well for generating ordinary text but are limited for generating verse text, the relevant parameters of the language model are fine-tuned according to the target complete verse texts obtained from the preset initial texts, yielding the target language model and raising the probability of obtaining the target complete verse text from the preset initial text.
In the automatic poetry language model training described above, the title is input into the preset language model, and among the generated texts only those whose format conforms to verse are kept. A reverse query method then exploits the language model's generalization over natural language: after each verse is generated, the model is used, via the reverse query, to predict the title the verses so far could correspond to. A beam search selects the verses from which the title can be reverse-predicted with maximum probability, increasing the correlation of the generated verses with the title and improving the quality of the verses generated by the preset language model. On this basis, self-training (iterative self-learning) further optimizes verse quality: the preset language model generates verses for different titles, and the generated verses are then used to fine-tune the language model, which improves its verse generation. After repeating this several times, the obtained target language model has a strong verse-generation capability, and combined with the reverse query method it finally generates verses that meet the requirements with very high quality.
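The self-training iteration can be sketched as follows; compose_poem stands for the generate-filter-reverse-query-beam-search pipeline above, fine_tune for one pass of causal-LM fine-tuning on the generated pairs, and the prompt template is illustrative.
```python
def self_train(model, titles, compose_poem, fine_tune, rounds: int = 3):
    """Repeat: generate the best poem per title with the current model,
    then fine-tune the model on its own selected outputs."""
    for _ in range(rounds):
        corpus = []
        for title in titles:
            prompt = f"title: {title} genre: poetry text:"  # illustrative template
            corpus.append((prompt, compose_poem(model, prompt)))
        fine_tune(model, corpus)  # raises p(poem | prompt) toward the preset threshold
    return model
```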
The language model used in the invention is based on a GPT-2 model reproduced with Megatron-LM, with the underlying model trained as a Transformer-XL rather than a plain Transformer. Compared with the prior-art Chinese pre-trained model CPM, this model has the following advantages. The underlying Transformer-XL improves on the ordinary Transformer in its low-level design: it uses relative position encoding, decoupling the word-representation and position-representation vectors, which gives better generalization and universality than absolute position encoding. The tokenization is finer: the vocabulary grows from 30,000 to 50,000 entries, and duplicate entries are merged where the same word appears both with and without a leading space, or with different representations in different training data. The training data is large and of high quality: 302 GB of data from Zhihu Q&A, Baidu Baike, Baidu Zhidao, and other sources, exceeding the 100 GB used by CPM in both quantity and quality and covering information from many industries. Tested on a held-out test set not used in training, the model reaches a perplexity of only 4.68 on real natural language, far better than CPM's 7.24.
For fine-tuning the language model, the training procedure follows that of Megatron-LM: joint training on 8 servers each loaded with 8 V100 GPUs, a batch size of 16 per server at each step, the Adam optimizer with a learning rate of 1e-4, beta values of (0.9, 0.95), cosine-mode learning-rate decay of 0.01, an epsilon value of 1e-8, and 160,000 training steps.
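In plain PyTorch, the stated optimizer settings look roughly like this; the Megatron-LM distributed setup is omitted, and the cosine schedule floor is an assumed reading of the "0.01" decay figure.
```python
import torch

model = torch.nn.Linear(8, 8)   # placeholder for the language model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4, betas=(0.9, 0.95), eps=1e-8)
TOTAL_STEPS = 160_000
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=TOTAL_STEPS,
    eta_min=1e-4 * 0.01,   # assumption: decay the learning rate to 1% of its peak
)
for step in range(TOTAL_STEPS):
    # forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()
```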
Fig. 4 shows an embodiment of an automatic poetry language model training device according to the present application.
In the embodiment shown in fig. 4, the automatic poetry language model training device mainly comprises: a module 401 for inputting a preset initial text into a preset language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre; a module 402 for calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; a module 403 for obtaining a target complete verse text from the complete verse texts according to the sorting result; and a module 404 for adjusting the language model according to the target complete verse text to obtain a target language model.
In this embodiment, as in the method embodiment, a preset initial text is input into a preset language model to obtain at least one related complete verse text; the complete verse texts are sorted by their respective perplexities against the preset initial text, and the final target complete verse text is obtained from the sorting result. Screening by perplexity yields the target complete verse text most correlated with the preset initial text, and the relevant parameters of the language model are adjusted accordingly to obtain a target language model, so that poem quality is improved when poems are subsequently composed with the target language model.
The automatic poetry language model training device provided by the present application can be used to execute the automatic poetry language model training method described in any of the above embodiments; the implementation principle and technical effect are similar and are not repeated here.
Fig. 5 shows an embodiment of an automatic poetry method of the present application.
In the embodiment shown in fig. 5, the automatic poetry method mainly comprises: step S501, inputting a preset initial text into a preset target language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre; step S502, calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; and step S503, obtaining a target complete verse text from the complete verse texts according to the sorting result.
In this implementation, a preset initial text is input into the target language model to obtain at least one complete verse text related to it; the complete verse texts are sorted by their respective perplexities against the preset initial text, and the final target complete verse text is obtained from the sorting result.
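A hypothetical usage at inference time, reusing the generate_fn and perplexity_fn sketches from the training section (the same generate-and-rank procedure, without the fine-tuning step):
```python
prompt = "title: Ode to New York genre: poetry text:"  # preset initial text (illustrative)
candidates = generate_fn(target_model, prompt)          # S501: complete verse texts
best = min(candidates,
           key=lambda poem: perplexity_fn(target_model, prompt, poem))  # S502-S503
print(best)
```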
The automatic poetry method provided by the present application is similar in implementation principle and technical effect to the automatic poetry language model training method described in any of the above embodiments, and is not repeated here.
Fig. 6 shows an embodiment of an automatic poetry apparatus of the present application.
In the embodiment shown in fig. 6, the automatic poetry device mainly comprises: a module 601 for inputting a preset initial text into a preset target language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre; a module 602 for calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score for each complete verse text, and sorting the scores; and a module 603 for obtaining a target complete verse text from the complete verse texts according to the sorting result.
In this embodiment, as in the method embodiment, a preset initial text is input into the target language model to obtain at least one related complete verse text; the complete verse texts are sorted by their respective perplexities against the preset initial text, and the final target complete verse text is obtained from the sorting result.
In one embodiment of the present application, the functional modules in the automatic poetry device of the present application may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
The automatic poetry device provided by the present application can be used to execute the automatic poetry method described in any of the above embodiments; the implementation principle and technical effect are similar and are not repeated here.
In another embodiment of the present application, a computer-readable storage medium stores computer instructions operable to perform the automatic poetry language model training method or the automatic poetry method described in the above embodiments.
In a specific embodiment of the present application, a computer device comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores computer instructions executable by the at least one processor, and the at least one processor executes the computer instructions to perform the automatic poetry language model training method or the automatic poetry method described in the above embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all equivalent structural changes made by using the contents of the specification and the drawings, which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. An automatic poetry language model training method, characterized by comprising:
inputting a preset initial text into a preset language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre;
calculating the perplexity between each complete verse text and the preset initial text, thereby obtaining a score corresponding to each complete verse text, and sorting the scores;
obtaining a target complete verse text from the complete verse texts according to the sorting result; and
adjusting the language model according to the target complete verse text to obtain a target language model.
2. The automatic poetry language model training method according to claim 1, wherein inputting a preset initial text into a preset language model to obtain at least one complete verse text corresponding to the preset initial text further comprises:
in the language model, obtaining at least one first verse text corresponding to the preset initial text according to the preset initial text;
calculating a first perplexity between each first verse text and the preset initial text, thereby obtaining a first score corresponding to each first verse text; and
screening the first verse texts according to the first scores to obtain a preset first number of first target verse texts.
3. The automatic poem-making language model training method according to claim 2, wherein inputting the preset initial text into the preset language model to obtain at least one complete verse text corresponding to the preset initial text further comprises:
in the language model, acquiring at least one Nth verse text corresponding to the first N-1 target verse texts according to the preset initial text and the first N-1 target verse texts, wherein N is a natural number greater than 1 and not greater than the number of verses;
calculating an Nth perplexity between each sequence of the first N verse texts and the preset initial text, and obtaining an Nth score corresponding to each Nth verse text;
and screening the Nth verse texts according to the Nth scores to obtain a preset second number of Nth target verse texts, thereby obtaining at least one complete verse text.
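Claims 2 and 3 together read as a beam-search-like procedure over whole verse lines: generate candidate first lines, keep a preset number of the best-scoring ones, then repeatedly extend each kept prefix by one more line until the poem is complete. The sketch below is one plausible shape for that loop; generate_lines and score are hypothetical hooks standing in for the language model and the perplexity-based score, and the beam sizes are arbitrary.

```python
from typing import List, Tuple


def generate_lines(prefix: str, n: int) -> List[str]:
    # Hypothetical hook: n candidate next verse lines given the prefix.
    return [f"verse({len(prefix)}, {i})" for i in range(n)]


def score(initial_text: str, verses: List[str]) -> float:
    # Hypothetical hook: stands in for the perplexity between the verses
    # generated so far and the preset initial text (lower is better).
    return abs(len("".join(verses)) - 4 * len(initial_text)) / 10.0


def line_beam_search(initial_text: str, num_verses: int,
                     beam_width: int = 4,
                     candidates_per_step: int = 16) -> List[str]:
    """Keep the `beam_width` best partial poems after each verse is added
    (the claims' 'preset first number' and 'preset second number')."""
    beams: List[Tuple[float, List[str]]] = [(0.0, [])]
    for _ in range(num_verses):
        expanded: List[Tuple[float, List[str]]] = []
        for _, verses in beams:
            prefix = initial_text + "".join(verses)
            for line in generate_lines(prefix, candidates_per_step):
                extended = verses + [line]
                expanded.append((score(initial_text, extended), extended))
        expanded.sort(key=lambda item: item[0])
        beams = expanded[:beam_width]
    return beams[0][1]  # best complete verse text


print(line_beam_search("title: plum blossom; genre: jueju", num_verses=4))
```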
4. The automatic poem-making language model training method according to claim 2, wherein, in the language model, acquiring at least one first verse text corresponding to the preset initial text according to the preset initial text further comprises:
inputting the preset initial text into the language model to obtain at least one first text;
and judging the text format of each first text, and determining, from the first texts, the first verse texts that conform to the verse format.
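Claim 4 filters the raw model outputs by text format before any scoring, so that only strings shaped like verse lines survive. The claim does not spell out the format rules, so the checker below assumes a simple convention for classical regulated verse (a fixed character count between pause marks); it is a sketch, not the patent's actual criterion.

```python
def conforms_to_verse_format(text: str, chars_per_line: int = 7) -> bool:
    """Require every segment between the pause marks 'comma' and 'full stop'
    to have exactly `chars_per_line` characters (e.g. 7 for qilu/qijue)."""
    segments = [seg for seg in text.replace("。", "，").split("，") if seg]
    return bool(segments) and all(len(seg) == chars_per_line
                                  for seg in segments)


# A well-formed seven-character couplet passes; a ragged one is rejected.
assert conforms_to_verse_format("两个黄鹂鸣翠柳，一行白鹭上青天。")
assert not conforms_to_verse_format("两个黄鹂鸣翠柳，白鹭上青天。")
```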
5. The automatic poem-making language model training method according to claim 4, wherein inputting the preset initial text into the language model to obtain at least one first text further comprises:
inputting the preset initial text into the language model, and acquiring at least one first word text corresponding to the preset initial text;
generating, according to the preset initial text and each first word text, a second word text corresponding to that first word text;
and generating, according to the preset initial text and the first M-1 word texts, an Mth word text corresponding to the first M-1 word texts, until the number of characters of the first M word texts equals the character count specified by the preset initial text, wherein M is a natural number greater than 1.
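Claim 5 builds each candidate text one character at a time, conditioning on the preset initial text plus everything generated so far, and stops when the required character count is reached. A minimal sketch follows; next_char is a hypothetical sampling hook, and in practice the target length would be derived from the genre named in the initial text.

```python
import random


def next_char(context: str) -> str:
    # Hypothetical hook: sample one character from the language model
    # conditioned on `context`. Here: a random placeholder character.
    return random.choice("山水风月花鸟云雪")


def generate_first_text(initial_text: str, target_length: int) -> str:
    """Append one sampled character at a time until the text reaches the
    character count required by the preset initial text."""
    generated = ""
    while len(generated) < target_length:
        generated += next_char(initial_text + generated)
    return generated


print(generate_first_text("title: spring dawn; genre: wujue", 5))
```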
6. The automatic poem-making language model training method according to claim 1, wherein calculating the perplexity between each complete verse text and the preset initial text further comprises:
calculating, through the language model, the perplexity between each complete verse text and the preset initial text, so as to obtain the perplexity corresponding to each complete verse text.
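Claim 6 has the language model itself supply the perplexity. Given per-token log-probabilities from the model, perplexity is the exponentiated average negative log-likelihood; a small self-contained helper (the log-probabilities in the example are made-up numbers):

```python
import math
from typing import Sequence


def perplexity_from_log_probs(log_probs: Sequence[float]) -> float:
    """PPL = exp(-(1/N) * sum_i log p(token_i | preceding tokens))."""
    if not log_probs:
        raise ValueError("need at least one token log-probability")
    return math.exp(-sum(log_probs) / len(log_probs))


# Illustrative per-token log-probabilities for a short verse:
print(perplexity_from_log_probs([-1.2, -0.7, -2.3, -0.9]))  # ≈ 3.58
```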
7. The automatic poem-making language model training method according to claim 1, wherein adjusting the language model according to the target complete verse text to obtain the target language model further comprises:
adjusting relevant parameters of the language model according to the preset initial text and the target complete verse text to obtain the target language model, so that the probability of the target language model outputting the target complete verse text given the preset initial text reaches a preset threshold.
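Claim 7 adjusts the model parameters so that the selected target complete verse text becomes a high-probability continuation of the preset initial text. One plausible realization, assuming a Hugging Face causal language model and PyTorch: the model name, step count, and learning rate are placeholders, and the claim's preset probability threshold is simplified here to a fixed number of gradient steps.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def fine_tune_on_target(model_name: str, initial_text: str,
                        target_poem: str, steps: int = 4,
                        lr: float = 1e-5) -> None:
    """Raise log P(target_poem | initial_text) by a few gradient steps."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)

    prompt_ids = tok(initial_text, return_tensors="pt").input_ids
    full_ids = tok(initial_text + target_poem, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # no loss on the prompt tokens

    model.train()
    for _ in range(steps):
        loss = model(input_ids=full_ids, labels=labels).loss
        loss.backward()
        optim.step()
        optim.zero_grad()
```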
8. An automatic poem-making language model training device, characterized by comprising:
a module for inputting a preset initial text into a preset language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre;
a module for calculating a perplexity between each complete verse text and the preset initial text, obtaining a score corresponding to each complete verse text, and ranking the scores;
a module for acquiring a target complete verse text from the complete verse texts according to the ranking result;
and a module for adjusting the language model according to the target complete verse text to obtain a target language model.
9. An automatic poem-making method, characterized by comprising:
inputting a preset initial text into a preset target language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre;
calculating a perplexity between each complete verse text and the preset initial text, obtaining a score corresponding to each complete verse text, and ranking the scores;
and acquiring a target complete verse text from the complete verse texts according to the ranking result.
10. An automatic poem-making device, characterized by comprising:
a module for inputting a preset initial text into a preset target language model to obtain at least one complete verse text corresponding to the preset initial text, wherein the preset initial text comprises a title and a genre;
a module for calculating a perplexity between each complete verse text and the preset initial text, obtaining a score corresponding to each complete verse text, and ranking the scores;
and a module for acquiring a target complete verse text from the complete verse texts according to the ranking result.
CN202210003512.8A 2022-01-05 2022-01-05 Automatic poem making language model training method and device and automatic poem making method and device Pending CN114021545A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210003512.8A CN114021545A (en) 2022-01-05 2022-01-05 Automatic poem making language model training method and device and automatic poem making method and device


Publications (1)

Publication Number Publication Date
CN114021545A (en) 2022-02-08

Family

ID=80069691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210003512.8A Pending CN114021545A (en) 2022-01-05 2022-01-05 Automatic poem making language model training method and device and automatic poem making method and device

Country Status (1)

Country Link
CN (1) CN114021545A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109582952A (en) * 2018-10-31 2019-04-05 腾讯科技(深圳)有限公司 Poem generation method, device, computer equipment and medium
CN110134968A (en) * 2019-05-22 2019-08-16 网易(杭州)网络有限公司 Poem generation method, device, equipment and storage medium based on deep learning
CN111368514A (en) * 2019-12-10 2020-07-03 爱驰汽车有限公司 Model training and ancient poetry generating method, ancient poetry generating model, equipment and medium
CN112101006A (en) * 2020-09-14 2020-12-18 中国平安人寿保险股份有限公司 Poetry generation method and device, computer equipment and storage medium
CN112183058A (en) * 2020-09-22 2021-01-05 甘肃农业大学 Poetry generation method and device based on BERT sentence vector input
CN112287678A (en) * 2020-11-03 2021-01-29 沈阳雅译网络技术有限公司 Ancient poetry automatic generation method based on pre-training model
US20210097141A1 (en) * 2019-09-27 2021-04-01 International Business Machines Corporation Artificial intelligence based word generation
CN112989812A (en) * 2021-03-04 2021-06-18 中山大学 Distributed poetry generation method based on cloud data center
CN113312448A (en) * 2021-04-02 2021-08-27 新大陆数字技术股份有限公司 Poetry generation method and system and readable storage medium
CN113761846A (en) * 2021-07-02 2021-12-07 北京智谱华章科技有限公司 Pre-training model text generation method based on reverse prompt



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Zou Xu; Yang Zhilin; Yin Da; Ding Ming
Inventor before: Zou Xu; Yang Zhilin; Yin Da; Ding Ming; Tang Jie
RJ01 Rejection of invention patent application after publication (application publication date: 20220208)