CN117933250A - New recipe generation method based on improved generative adversarial network - Google Patents

New recipe generation method based on improved generative adversarial network

Info

Publication number
CN117933250A
CN117933250A CN202410333828.2A CN202410333828A CN117933250A CN 117933250 A CN117933250 A CN 117933250A CN 202410333828 A CN202410333828 A CN 202410333828A CN 117933250 A CN117933250 A CN 117933250A
Authority
CN
China
Prior art keywords
recipe
network
ingredient
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410333828.2A
Other languages
Chinese (zh)
Other versions
CN117933250B (en)
Inventor
***
吴凯旋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Fanmeili Robot Technology Co ltd
Original Assignee
Nanjing Fanmeili Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Fanmeili Robot Technology Co ltd filed Critical Nanjing Fanmeili Robot Technology Co ltd
Priority to CN202410333828.2A priority Critical patent/CN117933250B/en
Publication of CN117933250A publication Critical patent/CN117933250A/en
Application granted granted Critical
Publication of CN117933250B publication Critical patent/CN117933250B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • General Preparation And Processing Of Foods (AREA)

Abstract

The invention discloses a new recipe generation method based on an improved generative adversarial network, in which a recipe data set S is collected and divided into four sub-data sets according to ingredient combination, ingredient pre-processing method, preparation steps and precautions. The method decouples the recipe generation process into four parts, namely an ingredient combination module, an ingredient pre-processing method module, a preparation step module and a precautions module, and uses a Transformer encoder-decoder structure based on multi-head attention to avoid the vanishing or exploding gradients caused by overly long data sequences during recipe generation, thereby improving the modeling of long-range context dependencies in recipe data. The method provides recipe developers with more innovative and diversified development ideas, shortens the development cycle and reduces trial-and-error costs.

Description

New recipe generation method based on improved generative adversarial network
Technical Field
The invention belongs to the technical field of artificial intelligence content generation and cooking, and relates to a new recipe generation method based on an improved generative adversarial network.
Background
In the rapidly developing modern catering industry, the development of new recipes is a key factor in attracting customers, satisfying taste demands and promoting brand innovation. At present, new recipe development usually depends on the experience and intuition of chefs, which limits the innovation and diversity of dishes. Moreover, developing a new recipe requires continuously trying new ingredients and combinations of different cooking methods, which increases the time and cost of development.
With the development of artificial intelligence content generation technology, generative adversarial networks have become a powerful tool and are widely applied in fields such as image, audio and text generation. In the field of cooking, however, due to the complexity and diversity of recipes, conventional generative adversarial networks have certain limitations in generating recipes, specifically embodied in the following two aspects:
Recipe data sequence length: recipe data generally consists of ingredients and seasonings, pre-processing methods, preparation steps and precautions, and therefore belongs to long text sequences. A generative adversarial network may suffer from vanishing or exploding gradients when processing long text sequences, so the model may lose the ability to model long-range dependencies in the sequence when generating a recipe.
Continuity of the recipe sample space: recipe text data is generally discrete and highly structured, but a generative adversarial network processing discrete text sequences cannot guarantee that the generated recipe data is consistent in grammar and semantics, so the generated recipes may be grammatically reasonable but semantically incoherent, or even self-contradictory.
Accordingly, a new recipe generation method based on an improved generative adversarial network is needed to solve the above problems.
Disclosure of Invention
In order to solve the problems set forth in the background art, the invention provides a new recipe generation method based on an improved generative adversarial network.
A new recipe generation method based on an improved generative adversarial network comprises the following steps:
Step one, collecting a recipe data set S and dividing it into four sub-data sets according to ingredient combination, ingredient pre-processing method, preparation steps and precautions: an ingredient combination data set S1, an ingredient pre-processing method data set S2, a preparation step data set S3 and a precautions data set S4;
Step two, performing word segmentation on the ingredient combination data set S1, the ingredient pre-processing method data set S2, the preparation step data set S3 and the precautions data set S4, constructing a Chinese recipe vocabulary, and converting Chinese words into word vectors that the model can process;
Step three, constructing a recipe generator network G comprising four Transformer decoder networks, the outputs of which correspond respectively to an ingredient combination module, an ingredient pre-processing method module, a preparation step module and a precautions module; initializing the recipe generator network G by maximum likelihood estimation, and pre-training a recipe discriminator network D with recipe data generated by the recipe generator network G and the recipe data set S;
Step four, performing cyclic adversarial training on the recipe generator network G and the recipe discriminator network D by reinforcement learning, and updating the network parameters of the recipe generator network G and the recipe discriminator network D until the recipe generator network G converges;
Step five, inputting ingredient keywords converted into word vectors into the recipe generator network G, which outputs a complete recipe containing the given ingredients.
Further, the ingredient combination data set S1 in step one includes the amounts of the main ingredients, auxiliary ingredients, side dishes and seasonings required to complete a dish. A main or auxiliary ingredient is used as a keyword input to the recipe generator network G to guide the generation of the whole recipe.
Further, in step two, word segmentation is performed on the recipe data set S, a Chinese recipe vocabulary is constructed, and Chinese words are converted into word vectors that the model can process, comprising the following steps:
S21, segmenting the Chinese sentence sequences in the recipe data set S into common Chinese words with a Chinese word segmentation tool, and mapping each word to a unique integer index in the Chinese recipe vocabulary through a hash table;
S22, converting the Chinese words into word vectors that the model can process using one-hot encoding, for training and inference of the subsequent models.
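The following minimal sketch illustrates steps S21 and S22, assuming the jieba segmenter and a toy two-sentence corpus; the vocabulary layout and the one-hot conversion shown here are illustrative rather than the exact implementation of the invention.

```python
import jieba
import numpy as np

corpus = ["西红柿炒鸡蛋需要西红柿和鸡蛋", "先将鸡蛋打散备用"]  # toy recipe sentences

# S21: segment each sentence into words and map every word to a unique integer index
vocab = {}
for sentence in corpus:
    for word in jieba.lcut(sentence):
        if word not in vocab:
            vocab[word] = len(vocab)  # hash table: word -> integer index

# S22: one-hot encode a word so that the model can consume it as a vector
def one_hot(word, vocab):
    vec = np.zeros(len(vocab), dtype=np.float32)
    vec[vocab[word]] = 1.0
    return vec

print(one_hot("鸡蛋", vocab))
```

In practice the integer indices are usually fed into trainable embedding layers, as in the generator and discriminator sketches further below.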
Further, the recipe generator network G in step three is a hierarchical recipe generator comprising a Transformer encoder network, four Transformer decoder networks and a fully connected layer. The recipe generator network G decouples the generation of the whole recipe into four parts, and the recipe information generated by the four modules is finally fused and output through the fully connected layer. The Transformer encoder network extracts features of the input ingredient keywords, and the outputs of the four Transformer decoder networks correspond respectively to the ingredient combination module, the ingredient pre-processing method module, the preparation step module and the precautions module.
Further, the recipe discriminator network D is a binary classification network composed of an embedding layer, convolution layers, pooling layers, a fully connected layer and an output layer, and is used to judge whether the input data is real or generated.
Further, in step three the recipe generator network G is initialized by maximum likelihood estimation, expressed by the following formulas:

\theta_{MLE} = \arg\max_{\theta} L(\theta),

L(\theta) = \sum_{i=1}^{N} \log P(x_i; \theta), \quad x_i \sim P_{data}(x),

where θ_MLE denotes the model parameters obtained by maximum likelihood estimation, θ denotes the model parameters, L(θ) is the log-likelihood function of the training data, N is the number of training samples, x_i is the i-th observation, P_data(x) is the data distribution, and P(x_i; θ) is the probability that the model with parameters θ generates sample x_i.
Further, in step three the recipe generator network G is initialized by maximum likelihood estimation as follows:
S31, performing parameter-initialization training on the ingredient combination data set S1 for the network formed by the Transformer encoder network and the ingredient combination module, using maximum likelihood estimation, while freezing the parameters of the ingredient pre-processing method module, the preparation step module and the precautions module;
S32, after the network formed by the Transformer encoder network and the ingredient combination module has been trained, freezing the parameters of the Transformer encoder network and the ingredient combination module, and performing parameter-initialization training on the ingredient pre-processing method data set S2 for the network formed by the ingredient pre-processing method module and the frozen Transformer encoder network, using maximum likelihood estimation;
S33, following steps S31 and S32, initializing the parameters of the preparation step module and the precautions module on the preparation step data set S3 and the precautions data set S4, respectively, using maximum likelihood estimation.
Further, in step four, cyclic adversarial training is performed on the recipe generator network G and the recipe discriminator network D by reinforcement learning, comprising the following steps:
S41, cyclically feeding the generated recipe sequence s to the recipe discriminator D, and selecting the action a by which the recipe generator G generates the next word using the ε-greedy strategy of the Q-learning algorithm;
S42, entering the next state s' and obtaining the current reward signal R, updating the value function Q until the sequences generated by the recipe generator network G reach the expected quality, and updating the network parameters of the recipe generator network G.
A Q-learning-based reinforcement learning method is introduced to perform cyclic adversarial training on the hierarchical recipe generation network: a value function is constructed and updated, actions are selected with the ε-greedy strategy, and the state of the sequences generated by the recipe generator network is updated iteratively, so that the direction of maximum reward is obtained from the authenticity judged by the recipe discriminator and used to guide the update of the recipe generator network parameters, thereby addressing the poor semantic and grammatical consistency of the recipe data generated by the generator network.
Further, the reinforcement learning method in step four is implemented with the Q-learning algorithm. The value function is defined as Q(s, a), and is updated by:

Q(s, a) \leftarrow Q(s, a) + \alpha \left[ R + \gamma \max_{a'} Q(s', a') - Q(s, a) \right],

where Q(s, a) is the value of the Q function, s is the state of the partial recipe sequence generated so far, a is the action of the generator generating the next word, α is the learning rate, R is the current reward, R = D(s), D(s) is the output probability of the recipe discriminator network for the generated partial recipe sequence, γ is the discount factor, s' is the next state, and a' is the action selected in state s'.
Further, in step four the network parameters of the recipe generator network G are updated by:

\theta_G \leftarrow \theta_G - \alpha \nabla_{\theta_G} \left[ -Q(s, a) \right],

where θ_G denotes the parameters of the recipe generator network, α is the learning rate, ∇_{θ_G} is the gradient operator, Q(s, a) is the value of the Q function, s is the state of the partial recipe sequence generated so far, and a is the action of the generator generating the next word.
Further, in step four the network parameters of the recipe discriminator network D are updated by:

L_D = -\left[ \log D(x) + \log\left(1 - D(G(z))\right) \right],

\theta_D \leftarrow \theta_D - \alpha \nabla_{\theta_D} L_D,

where D(x) is the output probability of the recipe discriminator for real recipe data, D(G(z)) is its output probability for generated recipe data, G(z) is the recipe data generated by the recipe generator, z is the ingredient keyword data input to the recipe generator, θ_D denotes the parameters of the recipe discriminator, α is the learning rate, and ∇_{θ_D} is the gradient operator.
Beneficial effects: the new recipe generation method based on an improved generative adversarial network decouples the recipe generation process into four parts, namely the ingredient combination module, the ingredient pre-processing method module, the preparation step module and the precautions module, and uses a Transformer encoder-decoder structure based on multi-head attention to avoid the vanishing or exploding gradients caused by overly long data sequences during recipe generation, thereby improving the modeling of long-range context dependencies in recipe data. The method provides recipe developers with more innovative and diversified development ideas, shortens the development cycle and reduces trial-and-error costs.
Drawings
FIG. 1 is a flow chart of the new recipe generation method based on an improved generative adversarial network of the present invention;
FIG. 2 is a block diagram of the hierarchical recipe generator network of the present invention;
FIG. 3 is a block diagram of the recipe discriminator of the present invention;
FIG. 4 is a flow chart of the reinforcement-learning-based cyclic adversarial training of the present invention;
FIG. 5 is a schematic diagram of recipe generation according to an embodiment of the present invention.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings. The drawings are simplified schematic representations that merely illustrate the basic structure of the invention, and therefore show only the structures relevant to the invention.
Referring to FIG. 1, the new recipe generation method based on an improved generative adversarial network of the present invention comprises the following steps:
s001, collecting a complete menu data set S, and dividing the complete menu data set S into four sub data sets S 1、S2、S3、S4 according to food material combination, a food material pretreatment method, manufacturing steps and notes;
Preferably, the ingredient combination data set S1 includes the main ingredients, auxiliary ingredients, side dishes, seasonings and corresponding amounts required to complete a dish. A main or auxiliary ingredient is used as a keyword input to the recipe generator network G to guide the generation of the whole recipe.
S002, performing word segmentation on the four sub-data sets, constructing a Chinese recipe vocabulary, and converting Chinese words into word vectors that the model can process;
Preferably, the Chinese recipe vocabulary is built by segmenting the Chinese sentence sequences in the recipe data set S into common words or phrases with a standard Chinese word segmentation tool and mapping them to unique integer indices in the constructed vocabulary through a hash table. Finally, the Chinese words are converted into word vectors that the model can process using one-hot encoding, for training and inference of the subsequent models.
S003, initializing the hierarchical recipe generator network G by maximum likelihood estimation, and pre-training the recipe discriminator network D with data generated by the recipe generator G and the recipe data set S;
Preferably, referring to FIG. 2, the hierarchical recipe generator network G is mainly composed of a single Transformer encoder and four Transformer decoder networks, so that the generation of the whole recipe is decoupled into four parts, and the recipe information generated by the four modules is finally fused and output through a fully connected layer. The encoder network extracts features of the input ingredient keywords, and the outputs of the four decoder networks correspond respectively to the ingredient combination module, the ingredient pre-processing method module, the preparation step module and the precautions module.
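A minimal PyTorch sketch of such a hierarchical generator is given below; the vocabulary size, model width, number of attention heads and number of layers are illustrative assumptions, since the patent does not specify them, and positional encodings and attention masks are omitted for brevity.

```python
import torch
import torch.nn as nn

class HierarchicalRecipeGenerator(nn.Module):
    """One shared Transformer encoder over the ingredient keywords and four
    Transformer decoders (ingredient combination, pre-processing method,
    preparation steps, precautions), fused by a fully connected layer."""
    def __init__(self, vocab_size=8000, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoders = nn.ModuleList(
            [nn.TransformerDecoder(dec_layer, num_layers) for _ in range(4)]
        )
        self.fuse = nn.Linear(4 * d_model, d_model)  # fully connected fusion layer
        self.out = nn.Linear(d_model, vocab_size)    # per-token vocabulary logits

    def forward(self, keyword_ids, part_ids):
        # keyword_ids: (batch, kw_len) ingredient keyword indices
        # part_ids:    (batch, 4, tgt_len) partial sequences of the four modules
        memory = self.encoder(self.embed(keyword_ids))
        parts = [dec(self.embed(part_ids[:, i]), memory)
                 for i, dec in enumerate(self.decoders)]
        fused = self.fuse(torch.cat(parts, dim=-1))  # fuse the four module outputs
        return self.out(fused)                       # (batch, tgt_len, vocab_size)
```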
Preferably, referring to FIG. 3, the recipe discriminator network D is a binary classification network composed of an embedding layer, convolution layers, pooling layers, a fully connected layer and an output layer, and is used to judge whether the input data is real or generated.
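A matching sketch of the discriminator, again with assumed sizes, following the embedding, convolution, pooling, fully connected and output layout described above:

```python
import torch
import torch.nn as nn

class RecipeDiscriminator(nn.Module):
    """Binary classifier over token sequences: embedding -> 1-D convolutions ->
    max pooling -> fully connected layer -> probability that the recipe is real."""
    def __init__(self, vocab_size=8000, d_model=256, num_filters=128,
                 kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.convs = nn.ModuleList(
            [nn.Conv1d(d_model, num_filters, k) for k in kernel_sizes]
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)         # (batch, d_model, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=-1).values  # global max pooling
                  for conv in self.convs]
        features = torch.cat(pooled, dim=-1)
        return torch.sigmoid(self.fc(features)).squeeze(-1)  # probability of "real"
```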
Preferably, the hierarchical recipe generator network G is initialized layer by layer using maximum likelihood estimation, expressed as follows:

\theta_{MLE} = \arg\max_{\theta} L(\theta),

L(\theta) = \sum_{i=1}^{N} \log P(x_i; \theta), \quad x_i \sim P_{data}(x),

where θ denotes the model parameters, L(θ) is the log-likelihood function of the training data, N is the number of training samples, x_i is the i-th observation, P_data(x) is the data distribution, and P(x_i; θ) is the probability that the model with parameters θ generates sample x_i. The specific initialization steps are as follows:
The network consisting of the encoder and the ingredient combination decoder is trained on the data set S1 by maximum likelihood estimation, while the parameters of the other three modules, namely the ingredient pre-processing method, preparation step and precautions modules, are frozen.
After this network has been trained, the parameters of the encoder and the ingredient combination decoder are frozen, and the network consisting of the frozen encoder and the ingredient pre-processing method module is parameter-initialized on the data set S2 by maximum likelihood estimation.
Similarly, the preparation step module and the precautions module are parameter-initialized on the data sets S3 and S4, respectively, by maximum likelihood estimation.
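A condensed sketch of this staged initialization, assuming the HierarchicalRecipeGenerator sketched above and hypothetical data loaders loader_s1 through loader_s4 that yield (keyword_ids, part_ids, target_ids) batches; the training loop and learning rate are illustrative.

```python
import torch
import torch.nn.functional as F

def freeze(module, frozen=True):
    for p in module.parameters():
        p.requires_grad = not frozen

def mle_stage(gen, decoder_idx, loader, lr=1e-4, freeze_encoder=False):
    """One stage of the staged MLE initialization: train the decoder_idx-th
    decoder (plus the encoder unless it is frozen) while all other decoders
    are frozen."""
    freeze(gen.encoder, frozen=freeze_encoder)
    for i, dec in enumerate(gen.decoders):
        freeze(dec, frozen=(i != decoder_idx))
    optim = torch.optim.Adam([p for p in gen.parameters() if p.requires_grad], lr=lr)
    for keyword_ids, part_ids, target_ids in loader:
        logits = gen(keyword_ids, part_ids)              # (batch, tgt_len, vocab)
        loss = F.cross_entropy(logits.flatten(0, 1), target_ids.flatten())
        optim.zero_grad()
        loss.backward()
        optim.step()

gen = HierarchicalRecipeGenerator()                      # from the earlier sketch
# Stage 1: encoder + ingredient combination module on S1
mle_stage(gen, decoder_idx=0, loader=loader_s1, freeze_encoder=False)
# Stages 2-4: remaining modules on S2, S3, S4 with the encoder frozen
for idx, loader in enumerate([loader_s2, loader_s3, loader_s4], start=1):
    mle_stage(gen, decoder_idx=idx, loader=loader, freeze_encoder=True)
```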
S004, referring to FIG. 4, performing cyclic adversarial training on the recipe generator network G and the recipe discriminator network D with an improved reinforcement learning method, and iteratively updating the parameters of the recipe generator network G and the recipe discriminator network D until the generator model G converges;
Preferably, the reinforcement learning method is implemented with the Q-learning algorithm. The value function is defined as Q(s, a), where s is the state of the partial recipe sequence generated so far and a is the action of the generator generating the next word. The reward function is defined as R = D(s), where D(s) is the output probability of the recipe discriminator network for the generated partial recipe sequence. The update rule of the value function Q is:

Q(s, a) \leftarrow Q(s, a) + \alpha \left[ R + \gamma \max_{a'} Q(s', a') - Q(s, a) \right],

where Q(s, a) is the value of the Q function, α is the learning rate, R is the current reward, γ is the discount factor, s' is the next state, and a' is the action selected in state s'.
Preferably, the cyclic adversarial training proceeds by repeatedly passing the generated recipe sequence s to the recipe discriminator D and selecting the action a of generating the next word with the ε-greedy strategy. The training then enters the next state s' and obtains the current reward signal R. The value function Q is updated continuously until the sequences generated by the recipe generator reach the expected quality, and the network parameters of the recipe generator G are updated. The recipe discriminator is then trained and its parameters updated using data from the currently trained recipe generator together with real data. The ratio of generator training steps to discriminator training steps is 2:5.
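A simplified sketch of one round of this Q-learning loop, assuming a tabular value function keyed by (state, action) tuples and a discriminator callable that maps a partial token sequence to a probability in [0, 1]; the reward R = D(s) follows the definition above, while the hyperparameter values are illustrative.

```python
import random
from collections import defaultdict

Q = defaultdict(float)                 # tabular value function Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def epsilon_greedy(state, candidate_tokens):
    """S41: pick the next-word action with the epsilon-greedy strategy."""
    if random.random() < epsilon:
        return random.choice(candidate_tokens)
    return max(candidate_tokens, key=lambda a: Q[(state, a)])

def q_update(state, action, next_state, reward, candidate_tokens):
    """S42: standard Q-learning update of the value function."""
    best_next = max(Q[(next_state, a)] for a in candidate_tokens)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def generate_episode(discriminator, vocab_tokens, max_len=50):
    """Grow one recipe token by token, rewarding each step with the
    discriminator's probability that the partial sequence is real."""
    state = ()                         # partial recipe sequence as a token tuple
    for _ in range(max_len):
        action = epsilon_greedy(state, vocab_tokens)
        next_state = state + (action,)
        reward = discriminator(next_state)   # R = D(s), a scalar in [0, 1]
        q_update(state, action, next_state, reward, vocab_tokens)
        state = next_state
    return state
```

In a full training schedule, generator and discriminator updates would be interleaved at the 2:5 ratio described above.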
Preferably, the parameters of the recipe generator network G are iteratively updated by minimizing the negative Q value with gradient descent, as follows:

\theta_G \leftarrow \theta_G - \alpha \nabla_{\theta_G} \left[ -Q(s, a) \right],

where θ_G denotes the parameters of the recipe generator network, α is the learning rate, and ∇_{θ_G} is the gradient operator.
Preferably, the parameters of the recipe discriminator network D are iteratively updated by minimizing the cross entropy L_D between the generated sequences and the real sequences with a gradient descent optimization algorithm, as follows:

L_D = -\left[ \log D(x) + \log\left(1 - D(G(z))\right) \right],

\theta_D \leftarrow \theta_D - \alpha \nabla_{\theta_D} L_D,

where D(x) is the output probability of the discriminator for real recipe data, D(G(z)) is its output probability for generated recipe data, G(z) is the recipe data generated by the generator, z is the ingredient keyword data input to the generator, θ_D denotes the parameters of the discriminator, α is the learning rate, and ∇_{θ_D} is the gradient operator.
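A brief sketch of this discriminator update, assuming the RecipeDiscriminator sketched earlier and hypothetical tensors real_ids (tokenized real recipes) and fake_ids (recipes sampled from the generator):

```python
import torch
import torch.nn.functional as F

def discriminator_step(disc, optimizer, real_ids, fake_ids):
    """One gradient-descent step on L_D = -[log D(x) + log(1 - D(G(z)))],
    i.e. binary cross entropy with label 1 for real and 0 for generated recipes."""
    d_real = disc(real_ids)            # D(x)
    d_fake = disc(fake_ids)            # D(G(z))
    loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
           F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```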
S005, referring to FIG. 5, the ingredient keywords converted into word vectors are fed into the recipe generator network G, which finally outputs a complete recipe containing the given ingredients.
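A minimal inference sketch under the same assumptions as the earlier generator sketch, with hypothetical bos_id and eos_id special tokens; for brevity it decodes each of the four parts greedily against the shared encoder memory and omits the fusion layer.

```python
import torch

@torch.no_grad()
def generate_recipe(gen, vocab, keywords, max_len=60, bos_id=1, eos_id=2):
    """Greedy decoding sketch: encode the ingredient keywords once, then let each
    of the four decoders (ingredient combination, pre-processing method,
    preparation steps, precautions) extend its own part token by token."""
    inv_vocab = {idx: word for word, idx in vocab.items()}
    keyword_ids = torch.tensor([[vocab[w] for w in keywords]])   # (1, kw_len)
    memory = gen.encoder(gen.embed(keyword_ids))
    recipe_parts = []
    for dec in gen.decoders:
        ids = [bos_id]
        for _ in range(max_len):
            tgt = gen.embed(torch.tensor([ids]))
            logits = gen.out(dec(tgt, memory))                   # (1, len, vocab)
            next_id = int(logits[0, -1].argmax())
            if next_id == eos_id:
                break
            ids.append(next_id)
        recipe_parts.append("".join(inv_vocab.get(t, "") for t in ids[1:]))
    return recipe_parts  # [ingredient combination, pre-processing, steps, precautions]
```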
Taking the above preferred embodiments of the present invention as an illustration, persons skilled in the relevant art may make various changes and modifications without departing from the technical idea of the present invention. The technical scope of the present invention is therefore not limited to the above description, but must be determined according to the scope of the claims.

Claims (10)

1. A new recipe generation method based on an improved generative adversarial network, comprising the following steps:
Step one, collecting a recipe data set S and dividing it into four sub-data sets according to ingredient combination, ingredient pre-processing method, preparation steps and precautions: an ingredient combination data set S1, an ingredient pre-processing method data set S2, a preparation step data set S3 and a precautions data set S4;
Step two, performing word segmentation on the ingredient combination data set S1, the ingredient pre-processing method data set S2, the preparation step data set S3 and the precautions data set S4, constructing a Chinese recipe vocabulary, and converting Chinese words into word vectors that the model can process;
Step three, constructing a recipe generator network G comprising four Transformer decoder networks, the outputs of which correspond respectively to an ingredient combination module, an ingredient pre-processing method module, a preparation step module and a precautions module; initializing the recipe generator network G by maximum likelihood estimation, and pre-training a recipe discriminator network D with recipe data generated by the recipe generator network G and the recipe data set S;
Step four, performing cyclic adversarial training on the recipe generator network G and the recipe discriminator network D by reinforcement learning, and updating the network parameters of the recipe generator network G and the recipe discriminator network D until the recipe generator network G converges;
Step five, inputting ingredient keywords converted into word vectors into the recipe generator network G, which outputs a complete recipe containing the given ingredients.
2. The new recipe generation method based on an improved generative adversarial network according to claim 1, wherein the ingredient combination data set S1 in step one includes the amounts of the main ingredients, auxiliary ingredients, side dishes and seasonings required to complete a dish.
3. The new recipe generation method based on an improved generative adversarial network according to claim 1, wherein in step two the recipe data set S is subjected to word segmentation, a Chinese recipe vocabulary is constructed, and Chinese words are converted into word vectors that the model can process, comprising the following steps:
S21, segmenting the Chinese sentence sequences in the recipe data set S into common Chinese words with a Chinese word segmentation tool, and mapping each word to a unique integer index in the Chinese recipe vocabulary through a hash table;
S22, converting the Chinese words into word vectors that the model can process using one-hot encoding.
4. The new recipe generation method based on an improved generative adversarial network according to claim 1, wherein in step three the recipe generator network G is a hierarchical recipe generator comprising a Transformer encoder network, four Transformer decoder networks and a fully connected layer; the recipe generator network G decouples the generation of the whole recipe into four parts, and the recipe information generated by the four modules is finally fused and output through the fully connected layer; the Transformer encoder network extracts features of the input ingredient keywords, and the outputs of the four Transformer decoder networks correspond respectively to the ingredient combination module, the ingredient pre-processing method module, the preparation step module and the precautions module.
5. The new recipe generation method based on an improved generative adversarial network according to claim 1, wherein the recipe generator network G is initialized by maximum likelihood estimation in step three, expressed by:

\theta_{MLE} = \arg\max_{\theta} L(\theta),

L(\theta) = \sum_{i=1}^{N} \log P(x_i; \theta), \quad x_i \sim P_{data}(x),

where θ_MLE denotes the model parameters obtained by maximum likelihood estimation, θ denotes the model parameters, L(θ) is the log-likelihood function of the training data, N is the number of training samples, x_i is the i-th observation, P_data(x) is the data distribution, and P(x_i; θ) is the probability that the model with parameters θ generates sample x_i.
6. The new recipe generation method based on an improved generative adversarial network according to claim 4, wherein the initialization of the recipe generator network G by maximum likelihood estimation in step three comprises the following steps:
S31, performing parameter-initialization training on the ingredient combination data set S1 for the network formed by the Transformer encoder network and the ingredient combination module, using maximum likelihood estimation, while freezing the parameters of the ingredient pre-processing method module, the preparation step module and the precautions module;
S32, after the network formed by the Transformer encoder network and the ingredient combination module has been trained, freezing the parameters of the Transformer encoder network and the ingredient combination module, and performing parameter-initialization training on the ingredient pre-processing method data set S2 for the network formed by the ingredient pre-processing method module and the frozen Transformer encoder network, using maximum likelihood estimation;
S33, following steps S31 and S32, initializing the parameters of the preparation step module and the precautions module on the preparation step data set S3 and the precautions data set S4, respectively, using maximum likelihood estimation.
7. The new recipe generation method based on an improved generative adversarial network according to claim 1, wherein the cyclic adversarial training of the recipe generator network G and the recipe discriminator network D by reinforcement learning in step four comprises the following steps:
S41, cyclically feeding the generated recipe sequence s to the recipe discriminator D, and selecting the action a by which the recipe generator G generates the next word using the ε-greedy strategy of the Q-learning algorithm;
S42, entering the next state s' and obtaining the current reward signal R, updating the value function Q until the sequences generated by the recipe generator network G reach the expected quality, and updating the network parameters of the recipe generator network G.
8. The new recipe generation method based on an improved generative adversarial network according to claim 1, wherein the reinforcement learning method in step four is implemented with the Q-learning algorithm, the value function is defined as Q(s, a), and the value function Q is updated by:

Q(s, a) \leftarrow Q(s, a) + \alpha \left[ R + \gamma \max_{a'} Q(s', a') - Q(s, a) \right],

where Q(s, a) is the value of the Q function, s is the state of the partial recipe sequence generated so far, a is the action of the generator generating the next word, α is the learning rate, R is the current reward, R = D(s), D(s) is the output probability of the recipe discriminator network for the generated partial recipe sequence, γ is the discount factor, s' is the next state, and a' is the action selected in state s'.
9. The new recipe generation method based on an improved generative adversarial network according to claim 1, wherein the network parameters of the recipe generator network G in step four are updated by:

\theta_G \leftarrow \theta_G - \alpha \nabla_{\theta_G} \left[ -Q(s, a) \right],

where θ_G denotes the parameters of the recipe generator network, α is the learning rate, ∇_{θ_G} is the gradient operator, Q(s, a) is the value of the Q function, s is the state of the partial recipe sequence generated so far, and a is the action of the generator generating the next word.
10. The new recipe generation method based on an improved generative adversarial network according to claim 1, wherein the network parameters of the recipe discriminator network D in step four are updated by:

L_D = -\left[ \log D(x) + \log\left(1 - D(G(z))\right) \right],

\theta_D \leftarrow \theta_D - \alpha \nabla_{\theta_D} L_D,

where D(x) is the output probability of the recipe discriminator for real recipe data, D(G(z)) is its output probability for generated recipe data, G(z) is the recipe data generated by the recipe generator, z is the ingredient keyword data input to the recipe generator, θ_D denotes the parameters of the recipe discriminator, α is the learning rate, and ∇_{θ_D} is the gradient operator.
CN202410333828.2A 2024-03-22 2024-03-22 New recipe generation method based on improved generative adversarial network Active CN117933250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410333828.2A CN117933250B (en) New recipe generation method based on improved generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410333828.2A CN117933250B (en) New recipe generation method based on improved generative adversarial network

Publications (2)

Publication Number Publication Date
CN117933250A true CN117933250A (en) 2024-04-26
CN117933250B CN117933250B (en) 2024-06-18

Family

ID=90763321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410333828.2A Active CN117933250B (en) New recipe generation method based on improved generative adversarial network

Country Status (1)

Country Link
CN (1) CN117933250B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135567A (en) * 2019-05-27 2019-08-16 中国石油大学(华东) Image caption generation method based on multi-attention generative adversarial network
CN112259247A (en) * 2020-10-22 2021-01-22 平安科技(深圳)有限公司 Method, device, equipment and medium for adversarial network training and medical data supplementation
WO2021189960A1 (en) * 2020-10-22 2021-09-30 平安科技(深圳)有限公司 Method and apparatus for training adversarial network, method and apparatus for supplementing medical data, and device and medium
CN112560438A (en) * 2020-11-27 2021-03-26 同济大学 Text generation method based on generative adversarial network
CN112488301A (en) * 2020-12-09 2021-03-12 孙成林 Food inversion method based on multitask learning and attention mechanism
CN112699288A (en) * 2020-12-31 2021-04-23 天津工业大学 Recipe generation method and system based on conditional generative adversarial network
CN115587909A (en) * 2021-07-06 2023-01-10 南京大学 Judicial text data augmentation method based on generative adversarial network
CN113987808A (en) * 2021-10-29 2022-01-28 国网辽宁省电力有限公司阜新供电公司 Electricity user complaint early warning method of feature weighted Bayesian network
CN114298455A (en) * 2021-11-11 2022-04-08 国网辽宁省电力有限公司经济技术研究院 Comprehensive energy hybrid modeling method based on GAN technology
CN115795011A (en) * 2022-11-24 2023-03-14 北京工业大学 Emotional dialogue generation method based on improved generative adversarial network
CN116049275A (en) * 2022-12-27 2023-05-02 北京师范大学珠海校区 Menu management method and device, electronic equipment and storage medium
CN116991968A (en) * 2023-09-26 2023-11-03 济南大学 Menu generation method, system, storage medium and device based on tree structure

Also Published As

Publication number Publication date
CN117933250B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
CN108280064B (en) Combined processing method for word segmentation, part of speech tagging, entity recognition and syntactic analysis
CN110222164B (en) Question-answer model training method, question and sentence processing device and storage medium
CN111462750B (en) Semantic and knowledge enhanced end-to-end task type dialogue system and method
CN110442705A (en) A kind of abstract automatic generation method based on conceptual hands network
CN112069328B (en) Method for establishing entity relation joint extraction model based on multi-label classification
JP2004272243A (en) Method for recognizing speech
CN110457661B (en) Natural language generation method, device, equipment and storage medium
CN113254616B (en) Intelligent question-answering system-oriented sentence vector generation method and system
KR20190143415A (en) Method of High-Performance Machine Reading Comprehension through Feature Selection
CN114692602A (en) Drawing convolution network relation extraction method guided by syntactic information attention
Pichl et al. Alquist 2.0: Alexa prize socialbot based on sub-dialogue models
CN111899766B (en) Speech emotion recognition method based on optimization fusion of depth features and acoustic features
CN112766507A (en) Complex question knowledge base question-answering method based on embedded and candidate subgraph pruning
CN108363685B (en) Self-media data text representation method based on recursive variation self-coding model
CN114896371A (en) Training method and device of natural language processing model
CN109979461A (en) A kind of voice translation method and device
CN117933250B (en) New recipe generation method based on improved generative adversarial network
CN111199152A (en) Named entity identification method based on label attention mechanism
CN116681078A (en) Keyword generation method based on reinforcement learning
CN115600584A (en) Mongolian emotion analysis method combining DRCNN-BiGRU dual channels with GAP
CN116910190A (en) Method, device and equipment for acquiring multi-task perception model and readable storage medium
CN115495566A (en) Dialog generation method and system for enhancing text features
CN114596843A (en) Fusion method based on end-to-end voice recognition model and language model
CN112951270A (en) Voice fluency detection method and device and electronic equipment
CN110879833B (en) Text prediction method based on light weight circulation unit LRU

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant