WO2022206091A1 - Data generation method and apparatus - Google Patents

Data generation method and apparatus


Publication number
WO2022206091A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
model
corpus
training
data generation
Prior art date
Application number
PCT/CN2022/070250
Other languages
French (fr)
Chinese (zh)
Inventor
刘瑞雪
陈蒙
Original Assignee
京东科技控股股份有限公司
Priority date
Filing date
Publication date
Application filed by 京东科技控股股份有限公司 filed Critical 京东科技控股股份有限公司
Publication of WO2022206091A1 publication Critical patent/WO2022206091A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/42 Data-driven translation
    • G06F40/49 Data-driven translation using very large corpora, e.g. the web

Definitions

  • Embodiments of the present disclosure relate to the field of computer technology, in particular to the field of artificial intelligence, and specifically to a method and apparatus for generating data.
  • Data augmentation is a technique that generates more equivalent data from limited data to expand a training data set, and is an effective means of overcoming a shortage of training data.
  • Deep learning methods usually require a large amount of training data to avoid overfitting.
  • In practice, sufficient data sometimes cannot be obtained, and data augmentation is needed to solve this problem.
  • Existing text data augmentation approaches fall into two categories.
  • One is to locally modify a sentence to generate a new sentence while preserving the sentence's original structure.
  • For example, new sentences are generated by simple synonym replacement, random word swapping, random word deletion, and the like.
  • Another example is the recently proposed masked language model, which performs masked prediction on words and conditions on class labels to achieve data expansion.
  • The other is to pre-train a text generation model on a large amount of data and then generate complete sentences with the text generation model, rather than making local changes.
  • An example is back translation: the corpus is first translated into another language and then translated back into the source language to generate more varied sentences.
  • Another example is paraphrasing, which generates more sentences by adding noise to the input of the text generation model.
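The local-edit operations mentioned above (synonym replacement, random swapping, random deletion) can be sketched in a few lines. This is a minimal illustration, not the patent's method; the tiny synonym table and the operation-sampling scheme are assumptions chosen only to make the sketch runnable.

```python
import random

# Illustrative-only synonym table; a real system would use a thesaurus
# or embedding-based neighbors.
SYNONYMS = {
    "buy": ["purchase", "order"],
    "cheap": ["inexpensive", "affordable"],
}

def synonym_replace(tokens, rng):
    # Replace each token that has a synonym entry with a random synonym.
    return [rng.choice(SYNONYMS[t]) if t in SYNONYMS else t for t in tokens]

def random_swap(tokens, rng):
    # Swap two token positions chosen at random.
    out = tokens[:]
    if len(out) >= 2:
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_delete(tokens, rng, p=0.2):
    # Drop each token with probability p, keeping at least one token.
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]

def augment(sentence, n=3, seed=0):
    # Produce n locally edited variants of one sentence.
    rng = random.Random(seed)
    tokens = sentence.split()
    ops = [synonym_replace, random_swap, random_delete]
    return [" ".join(rng.choice(ops)(tokens, rng)) for _ in range(n)]
```

Because every edit stays inside the original sentence, this family of methods preserves sentence structure but cannot produce genuinely new sentence patterns, which motivates the generation-model approach described next.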
  • Embodiments of the present disclosure propose methods and apparatus for generating data.
  • An embodiment of the present disclosure provides a method for generating data, the method including: acquiring target training data and target data generation conditions, where the target training data includes a corpus of a target domain marked with feature labels; determining the corpus marked with feature labels as the target sample corpus and the feature labels of the target sample corpus as the target sample labels, to obtain a target sample set; and, based on the target sample set, training a pre-training model and adjusting the parameters of the pre-training model.
  • This yields the retrained target data generation model, where the pre-training model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-training model. The target data generation model is then used to generate target data based on the target data generation conditions.
  • Training the pre-training model based on the target sample set includes: inputting the target sample labels into the pre-training model, using the target sample corpus as the expected output, and training the pre-training model to obtain the target data generation model.
  • The target data generation conditions include target feature labels; and using the target data generation model to generate target data based on the target data generation conditions includes: inputting the target feature labels into the target data generation model to obtain a target corpus, and determining the target corpus as the target data.
  • The target feature label is a classification label estimated by a pre-built classification model based on a corpus to be recognized; and before the target corpus is determined as the target data, the method further includes: inputting the target corpus into the classification model to obtain a classification label of the target corpus; and, in response to determining that the preset label set of the classification model includes the classification label of the target corpus, determining the target corpus as the target data, where the target data is used to construct training samples for the classification model.
  • Alternatively, training the pre-training model based on the target sample set includes: inputting the target sample corpus into the pre-training model, using the target sample labels as the expected output, and training the pre-training model to obtain the target data generation model.
  • The target data generation condition includes a target corpus to be recognized; and using the target data generation model to generate the target data based on the target data generation condition includes: inputting the target corpus to be recognized into the target data generation model to obtain the feature labels of the target corpus to be recognized, and determining those feature labels as the target data.
  • Embodiments of the present disclosure provide an apparatus for generating data, the apparatus comprising: a data acquisition unit configured to acquire target training data and target data generation conditions, where the target training data includes a corpus of a target domain marked with feature labels;
  • a sample construction unit configured to determine the corpus marked with feature labels in the target training data as the target sample corpus and the feature labels of the target sample corpus as the target sample labels, to obtain a target sample set;
  • a model adjustment unit configured to train the pre-training model based on the target sample set and adjust the parameters of the pre-training model to obtain a retrained target data generation model, where the pre-training model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-training model;
  • and a data generation unit configured to use the target data generation model to generate target data based on the target data generation conditions.
  • the model adjustment unit is further configured to: input the target sample label into the pre-training model, use the target sample corpus as the expected output, train the pre-training model, and obtain the target data generation model.
  • the target data generation conditions include target feature labels; and the data generation unit is further configured to: input the target feature labels into the target data generation model to obtain target corpus; and determine the target corpus as target data.
  • The target feature label is a classification label estimated by a pre-built classification model based on the corpus to be recognized; and the data generation unit further includes a data verification module configured to: input the target corpus into the classification model to obtain the classification label of the target corpus; and, in response to determining that the classification label of the target corpus is included in the preset label set of the classification model, determine the target corpus as the target data, where the target data is used to construct training samples for the classification model.
  • the model adjustment unit is further configured to: input the target sample corpus into the pre-training model, use the target sample label as the expected output, train the pre-training model, and obtain the target data generation model.
  • The target data generation condition includes the target corpus to be recognized; and the data generation unit is further configured to: input the target corpus to be recognized into the target data generation model to obtain the feature labels of the target corpus to be recognized, and determine those feature labels as the target data.
  • Embodiments of the present disclosure provide an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method in any of the above embodiments.
  • Embodiments of the present disclosure also provide a computer-readable medium on which a computer program is stored, wherein when the program is executed by a processor, the method in any of the foregoing embodiments is implemented.
  • FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present disclosure may be applied;
  • FIG. 2 is a flowchart of one embodiment of a method for generating data according to the present disclosure
  • FIG. 3 is a flowchart of yet another embodiment of a method for generating data according to the present disclosure
  • FIG. 4 is a flowchart of yet another embodiment of a method for generating data according to the present disclosure
  • FIG. 5 is a schematic structural diagram of an embodiment of an apparatus for generating data according to the present disclosure
  • FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
  • FIG. 1 illustrates an exemplary system architecture 100 of a method for generating data or an apparatus for generating data to which embodiments of the present disclosure may be applied.
  • the system architecture 100 may include terminal devices 101 , 102 , and 103 , a network 104 and a server 105 .
  • the network 104 is a medium used to provide a communication link between the terminal devices 101 , 102 , 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send data, etc.
  • For example, the user can send raw data of the target domain to the server, and can also receive from the server the target data generated by the target data generation model.
  • the terminal devices 101, 102, and 103 may be hardware or software.
  • the terminal devices 101, 102, and 103 may be electronic devices with communication functions, including but not limited to smart phones, tablet computers, e-book readers, laptop computers, and desktop computers.
  • If the terminal devices 101, 102, and 103 are software, they can be installed in the electronic devices listed above, and may be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or a single software module. There is no specific limitation here.
  • the server 105 may be a server that provides various services, such as a background data server that processes the raw data uploaded by the terminal devices 101 , 102 , and 103 (eg, constructs training samples based on target training data).
  • the background data server can use the received raw data to adjust the pre-training model, and the obtained data generation model is used to generate new data, and feed back the processing result (eg, the generated target data) to the terminal device.
  • the server may be hardware or software.
  • the server can be implemented as a distributed server cluster composed of multiple servers, or can be implemented as a single server.
  • If the server is software, it may be implemented as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or a single software module. There is no specific limitation here.
  • the method for generating data provided by the embodiments of the present disclosure may be executed by the terminal devices 101 , 102 , and 103 , or may be executed by the server 105 .
  • the means for generating data may be provided in the terminal devices 101 , 102 , and 103 , and may also be provided in the server 105 . There is no specific limitation here.
  • the method for generating data includes the following steps:
  • Step 201 Obtain target training data and target data generation conditions.
  • the target training data includes the corpus of the target domain marked with feature labels.
  • Feature tags represent the features of the corpus and can include multiple dimensions.
  • structural feature tags can represent the structural features of the corpus
  • intent tags can represent the intent features of the corpus
  • semantic tags can represent the semantic features of the corpus.
  • the target data generation condition represents the user's expectation for the generated data, for example, it may be data including entity information in the target domain, and may also be data including specific syntactic structure or semantic information.
  • When an operator receives a data generation task in a certain technical field, the operator can obtain the target training data and target data generation conditions of that field directly from the business party through the execution subject (for example, the server 105 shown in FIG. 1 ). The operator can also obtain real corpus of the field from the network and mark each real corpus with corresponding feature labels to obtain the target training data.
  • target training data may also include unlabeled corpus.
  • Step 202 Determine the corpus marked with the feature label in the target training data as the target sample corpus, and determine the feature label of the target sample corpus as the target sample label to obtain the target sample set.
  • the corpus of the target domain may include features of multiple dimensions of the real corpus in the target domain, such as sentence structure features, word features, semantic features, and the like.
  • The target sample labels can characterize the target sample corpus from multiple dimensions; for example, the target sample corpus can be labeled from the sentence structure dimension, from the keyword dimension, or from the named-entity dimension.
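Step 202 amounts to splitting the target training data into (corpus, labels) pairs, setting aside any unlabeled corpus. The sketch below is a hypothetical illustration of that split; the dictionary field names (`corpus`, `feature_labels`) are assumptions, not terms defined by the patent.

```python
def build_target_sample_set(target_training_data):
    """Split target training data into the target sample set and leftovers.

    Each item is a dict; items carrying feature labels become target samples
    (target sample corpus + target sample labels), the rest are set aside.
    """
    samples, unlabeled = [], []
    for item in target_training_data:
        labels = item.get("feature_labels")
        if labels:  # corpus marked with feature labels -> target sample
            samples.append({"corpus": item["corpus"], "labels": labels})
        else:       # target training data may also include unlabeled corpus
            unlabeled.append(item["corpus"])
    return samples, unlabeled
```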
  • Step 203 based on the target sample set, train the pre-training model, adjust the parameters of the pre-training model, and obtain the target data generation model after retraining.
  • the pre-training model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain a pre-training model.
  • the training data in the general sample set is easily obtained training data in various fields.
  • Models such as ELMo (Embeddings from Language Models), BERT (Bidirectional Encoder Representation from Transformers) or GPT (Generative Pre-Training) can be selected as the initial model.
  • The pre-training model obtained after the initial model is trained on the general sample set can learn basic data generation rules (for example, it can generate coherent, realistic corpus), but for fields where data acquisition is difficult, the similarity between the data generated by the pre-training model and the real data of the field is low.
  • Therefore, the pre-training model is retrained based on the target sample set, and the parameters of the pre-training model are adjusted so that it learns the rules for generating data in the target field; in this way, the data generated by the retrained target data generation model is closer to the real data.
  • Step 204 using the target data generation model, and based on the target data generation conditions, to generate target data.
  • the target data generation model represents the corresponding relationship between target data generation conditions and target data.
  • When the amount of data in a specific field is small and the data in this field needs to be enhanced, this field can be used as the target field.
  • The execution body can build a general sample set based on public data on the Internet (for example, Chinese novels or dialogue materials) and then train the initial GPT model on the general sample set; the resulting pre-trained GPT model can generate coherent, realistic sentences.
  • the execution body can obtain the corpus of the target domain as the target training data, and construct the target sample set, and then retrain the GPT model based on the target sample set, adjust the parameters of the GPT model, and make it learn the generation rules of the real corpus in the target domain,
  • the GPT model obtained after training is the target data generation model.
  • Finally, the execution body can obtain the target data generation conditions (such as keyword labels, sentence structure labels, semantic labels, etc.) and input the target feature labels into the GPT model, and GPT can generate new corpus, thus expanding the amount of data in the target field.
  • Table 1 shows the target training data (including input labels and training corpus) and the target corpus generated by GPT in this example.
  • In this way, the pre-trained model is retrained with a small amount of data from the target domain so that the resulting data generation model learns the data generation rules of the target domain, which enhances the data and improves the authenticity and pertinence of the generated data.
  • FIG. 3 is a flowchart of another embodiment of the method for generating data according to the present disclosure.
  • the following steps are included:
  • Step 301 Obtain target training data and target data generation conditions.
  • Step 302 Determine the corpus marked with the feature label in the target training data as the target sample corpus, and determine the feature label of the target sample corpus as the target sample label to obtain the target sample set. Steps 301 and 302 are similar to the foregoing steps 201 and 202 and will not be repeated here.
  • Step 303 input the target sample label into the pre-training model, use the target sample corpus as the expected output, train the pre-training model, and obtain the target data generation model.
  • the target sample label can represent the characteristics of the target sample corpus.
  • The pre-training model uses the target sample label as a conditional label that constrains the corpus generation process; the loss function is then determined by comparing the target sample corpus with the generated corpus, yielding the target data generation model.
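The pairing in step 303 can be sketched as a serialization step: the sample label becomes the model's conditional input and the sample corpus becomes the expected output against which the loss is computed. This is an assumed illustration; the `<cond>` marker and the `" ; "` separator are invented here for the sketch, not part of the patent.

```python
def make_conditional_example(sample_labels, sample_corpus):
    # The label string conditions generation; the corpus is the expected
    # output compared against the model's generated corpus during training.
    condition = "<cond> " + " ; ".join(sample_labels)
    return {"input": condition, "expected_output": sample_corpus}
```

A fine-tuning loop would feed `input` to the pre-trained model and compute the loss between the generated text and `expected_output`, adjusting the model's parameters as described above.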
  • the target data generation model characterizes the correspondence between the conditional labels and the generated corpus.
  • Step 304 input the target feature label into the target data generation model to obtain the target corpus.
  • the target data generation conditions include target feature labels, which represent the user's expectation of the generated corpus in one or more dimensions.
  • The execution body inputs the target feature label into the target data generation model as its conditional label, constraining the corpus generation process so as to generate target corpus that meets the user's expectation.
  • Step 305 determining the target corpus as target data.
  • The flow 300 of the method for generating data in this embodiment highlights the step of generating corpus data using the target data generation model.
  • The method for generating data in this embodiment needs only a small amount of training data in the target field to ensure that the generated corpus is close to the real expectation of the target field, so the data can be enhanced in a more targeted manner.
  • In some embodiments, the target feature label is a classification label estimated by a pre-built classification model based on the corpus to be recognized, and before the target corpus is determined as the target data, the above process 300 may further include: inputting the target corpus into the classification model to obtain the classification label of the target corpus; and, in response to determining that the classification label of the target corpus is included in the preset label set of the classification model, determining the target corpus as the target data, where the target data is used to construct training samples for the classification model.
  • the target data generation model is used to expand the training data of the classification model. If the corpus generated by the target data generation model can be correctly identified by the classification model, it proves that the authenticity of the corpus generated by the target data generation model meets the training requirements of the classification model.
  • Generally, the data volume of training samples is positively correlated with the accuracy of a model. Therefore, to ensure the accuracy of the classification model, a sufficient corpus of classification samples is required, while for some specific fields a large volume of corpus is difficult to obtain.
  • In this case, the field can be used as the target field. The execution subject constructs the target sample set based on the small number of classification samples obtained and obtains the target data generation model through retraining. Then, the sample classification labels of the classification model are input into the target data generation model to obtain the target corpus, and the target corpus is input into the classification model. If the classification label output by the classification model is consistent with the sample classification label, the authenticity of the target corpus meets the training requirements of the classification model. In this way, the obtained target data can effectively expand the sample data of the classification model.
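The verification loop just described can be sketched as a filter: a corpus generated under a sample label is kept only if the classifier's predicted label falls in the preset label set. The `generator` and `classifier` callables below are stand-ins (assumptions) for the trained target data generation model and the pre-built classification model.

```python
def filter_generated(generator, classifier, sample_labels, preset_labels):
    """Keep only generated corpus that the classification model recognizes.

    generator: label -> generated target corpus (stands in for the GPT model)
    classifier: corpus -> predicted classification label
    """
    accepted = []
    for label in sample_labels:
        corpus = generator(label)                 # generate corpus for this label
        predicted = classifier(corpus)            # classify the generated corpus
        if predicted in preset_labels:            # label check passes
            accepted.append((corpus, predicted))  # usable as a training sample
    return accepted
```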
  • a corpus can correspond to multiple feature tags, which respectively represent the features of the corpus from multiple dimensions.
  • Multiple target sample labels can be input into the pre-training model at the same time, so that the target data generation model learns data generation rules for multiple dimensions.
  • the target data generation condition may include target feature labels of multiple dimensions, and each target feature label represents a data generation condition of one dimension.
  • the executive body can constrain its corpus generation process from multiple dimensions, thus realizing data augmentation that integrates multiple dimensions.
  • the target data generation conditions may include intent tags, structure tags, entity tags, and technical field tags at the same time, which respectively represent the user's expectations of the generated corpus from the dimensions of intent, structure, entity, and technical field.
  • the execution body can input the above-mentioned multiple feature tags into the target data generation model at the same time, and constrain the generation process of the corpus from the above-mentioned multiple dimensions, so as to obtain the target corpus that meets the user's needs.
  • For example, a user uses the target data generation model to expand corpus data in the field of air conditioners. The user can set the target data generation conditions according to their own needs as: "air conditioner", "green", and "purchase", where "air conditioner" is the domain label, "green" is the entity label, and "purchase" is the intent label.
  • the execution body inputs the above three feature labels into the target data generation model at the same time to generate the target corpus.
  • The target corpus can be, for example, "I want to buy a green air conditioner" or "How to buy a green air conditioner".
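Combining the domain, entity, and intent labels into a single generation condition might look like the sketch below, mirroring the "air conditioner"/"green"/"purchase" example. The tag-style prompt format is an assumption made for illustration; the patent does not specify how the labels are serialized.

```python
def build_condition(domain, entity, intent):
    # Serialize the three label dimensions into one condition string that
    # would be fed to the target data generation model.
    return (f"<domain>{domain}</domain>"
            f"<entity>{entity}</entity>"
            f"<intent>{intent}</intent>")

# Condition for the air-conditioner example above.
condition = build_condition("air conditioner", "green", "purchase")
```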
  • Table 2 shows the correspondence between multi-dimensional labels, training corpus, and target corpus in this example.
  • the process 400 of the method for generating data includes the following steps:
  • Step 401 Obtain target training data and target data generation conditions.
  • Step 402 Determine the corpus marked with the feature label in the target training data as the target sample corpus, and determine the feature label of the target sample corpus as the target sample label to obtain the target sample set. Steps 401 and 402 are similar to the aforementioned steps 201 and 202, and are not described again here.
  • Step 403 Input the target sample corpus into the pre-training model, use the target sample label as the expected output, train the pre-training model, and obtain the target data generation model.
  • Step 404 input the target corpus to be recognized into the target data generation model, and obtain the feature label of the target corpus to be recognized.
  • the target data generation condition includes the target corpus to be recognized.
  • the target data generation model characterizes the correspondence between corpus and labels.
  • the execution body inputs the target corpus to be recognized into the target data generation model, identifies the features of the target corpus to be recognized, and outputs a target feature label representing the feature of the target corpus to be recognized.
  • Step 405 determining the feature tag of the target corpus to be recognized as target data.
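In this reverse configuration the corpus is the input and the feature label is the expected output. As a sketch of the data flow only, a keyword-rule stub stands in below for the trained target data generation model; the rule table and the `"unknown"` fallback are assumptions for illustration.

```python
def label_corpus(corpus_to_recognize, rules):
    """Assign a feature label to a corpus; stub for the trained model.

    rules maps keyword -> feature label; the first matching keyword wins.
    """
    for keyword, label in rules.items():
        if keyword in corpus_to_recognize:
            return label
    return "unknown"
```

In the actual method of steps 403 to 405, the same pre-training model is fine-tuned with the target sample corpus as input and the target sample label as expected output, so the trained model replaces this rule stub.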
  • the process 400 of the method for generating data in this embodiment embodies the step of identifying the feature labels of the corpus through the target data generation model.
  • This is applicable, for example, to scenarios in which the amount of corpus data is large but only a small number of labels are available.
  • The method for generating data in this embodiment needs only a small amount of training data in the target field to ensure the accuracy of recognition, and can enhance the data more effectively.
  • the present disclosure provides an embodiment of an apparatus for generating data.
  • the apparatus embodiment corresponds to the method embodiment shown in FIG. 2 .
  • the device can be specifically applied to various electronic devices.
  • The apparatus 500 for generating data in this embodiment includes: a data acquisition unit 501 configured to acquire target training data and target data generation conditions, where the target training data includes a corpus of the target domain marked with feature labels;
  • the sample construction unit 502 is configured to determine the corpus marked with the feature label in the target training data as the target sample corpus, and the feature label of the target sample corpus is determined as the target sample label to obtain the target sample set;
  • the model adjustment unit 503 is configured to train the pre-training model based on the target sample set and adjust the parameters of the pre-training model to obtain a retrained target data generation model, where the pre-training model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-training model;
  • the data generation unit 504 is configured to use the target data generation model to generate target data based on the target data generation conditions.
  • the model adjustment unit 503 is further configured to: input the target sample label into the pre-training model, use the target sample corpus as the expected output, train the pre-training model, and obtain the target data generation model.
  • the target data generation conditions include target feature labels; and, the data generation unit 504 is further configured to: input the target feature labels into the target data generation model to obtain target corpus; and determine the target corpus as target data.
  • The target feature label is a classification label estimated by a pre-built classification model based on the corpus to be recognized; and the data generation unit 504 further includes a data verification module configured to: input the target corpus into the classification model to obtain the classification label of the target corpus; and, in response to determining that the preset label set of the classification model includes the classification label of the target corpus, determine the target corpus as the target data, where the target data is used to construct training samples for the classification model.
  • the model adjustment unit 503 is further configured to: input the target sample corpus into the pre-training model, use the target sample label as the expected output, train the pre-training model, and obtain the target data generation model.
  • The target data generation conditions include the target corpus to be recognized; and the data generation unit 504 is further configured to: input the target corpus to be recognized into the target data generation model to obtain the feature labels of the target corpus to be recognized, and determine those feature labels as the target data.
  • FIG. 6 it shows a schematic structural diagram of an electronic device (eg, the server or terminal device in FIG. 1 ) 600 suitable for implementing the embodiments of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), and PADs (tablet computers), as well as fixed terminals such as digital TVs and desktop computers.
  • the terminal device shown in FIG. 6 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • An electronic device 600 may include a processing device (eg, a central processing unit, a graphics processor, etc.) 601, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to bus 604 .
  • The following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), speakers and vibrators; storage devices 608 including, for example, magnetic tape and hard disks; and a communication device 609.
  • The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 6 shows the electronic device 600 with various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in FIG. 6 may represent one device, or may represent multiple devices as required.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from a network via the communication device 609, or installed from the storage device 608, or installed from the ROM 602.
  • When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transport the program for use by or in connection with an instruction execution system, apparatus or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire target training data and target data generation conditions, the target training data including corpus of the target domain marked with feature labels; determine the corpus marked with feature labels in the target training data as target sample corpus, and determine the feature labels of the target sample corpus as target sample labels, to obtain a target sample set; train a pre-training model based on the target sample set and adjust the parameters of the pre-training model to obtain a retrained target data generation model, where the pre-training model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-training model; and use the target data generation model to generate target data based on the target data generation conditions.
  • Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in software or hardware.
  • the described units may also be provided in a processor, which may, for example, be described as: a processor including a data acquisition unit, a sample construction unit, a model adjustment unit and a data generation unit. The names of these units do not constitute a limitation on the units themselves under certain circumstances.
  • the data acquisition unit can also be described as "a unit that acquires target training data and target data generation conditions".


Abstract

Disclosed in embodiments of the present disclosure are a data generation method and apparatus. A specific embodiment of the method comprises: acquiring target training data and a target data generation condition, wherein the target training data comprises language materials in the target field, and each language material is marked with a feature tag; constructing a target sample set on the basis of the target training data; training a pre-training model on the basis of the target sample set, and adjusting parameters of the pre-training model to obtain a re-trained target data generation model, wherein the pre-training model is obtained by the following steps: constructing an initial model and training the initial model on the basis of a general sample set to obtain the pre-training model; and using the target data generation model to generate target data on the basis of the target data generation condition.

Description

Method and Apparatus for Generating Data

Cross Reference

This application claims priority to Chinese patent application No. 202110340188.4, filed on March 30, 2021 and entitled "Method and Apparatus for Generating Data", the entire contents of which are incorporated herein by reference.

Technical Field

Embodiments of the present disclosure relate to the field of computer technology, in particular to the field of artificial intelligence, and more particularly to a method and apparatus for generating data.

Background

Data augmentation is a technique that derives additional, equivalent data from limited data to expand a training data set, and is an effective means of overcoming a shortage of training data. For example, deep learning methods usually require a large amount of training data to avoid overfitting; in practice, however, sufficient data sometimes cannot be obtained, and data augmentation is needed to solve such problems.

In the related art, text data augmentation methods fall into two categories. The first locally modifies a sentence while preserving its original structure to generate a new sentence, for example by simple synonym replacement, random word swapping, or random word deletion. Another example is the recently proposed masked language model, which performs masked prediction of words while conditioning on class labels to achieve data expansion. The second category pre-trains a text generation model on a large amount of data and then uses the model to generate complete sentences rather than making local changes. Examples include back translation, in which a corpus is first translated into another language and then translated back into the source language to generate more varied sentences, and paraphrasing, in which noise is added at the input of the text generation model to generate more sentences.

SUMMARY OF THE INVENTION

Embodiments of the present disclosure propose a method and apparatus for generating data.

An embodiment of the present disclosure provides a method for generating data, the method comprising: acquiring target training data and a target data generation condition, the target training data including corpus of a target domain marked with feature labels; determining the corpus marked with feature labels in the target training data as target sample corpus, and determining the feature labels of the target sample corpus as target sample labels, to obtain a target sample set; training a pre-training model based on the target sample set and adjusting parameters of the pre-training model to obtain a retrained target data generation model, wherein the pre-training model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-training model; and generating target data based on the target data generation condition by using the target data generation model.

In some embodiments, training the pre-training model based on the target sample set includes: inputting the target sample labels into the pre-training model, taking the target sample corpus as the expected output, and training the pre-training model to obtain the target data generation model.

In some embodiments, the target data generation condition includes a target feature label; and generating the target data based on the target data generation condition by using the target data generation model includes: inputting the target feature label into the target data generation model to obtain a target corpus, and determining the target corpus as the target data.

In some embodiments, the target feature label is a classification label estimated by a pre-built classification model based on a corpus to be recognized; and before determining the target corpus as the target data, the method further includes: inputting the target corpus into the classification model to obtain a classification label of the target corpus; and in response to determining that a preset label set of the classification model includes the classification label of the target corpus, determining the target corpus as the target data, the target data being used to construct training samples for the classification model.
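The validation step described above can be sketched in a few lines. The following is an illustrative sketch, not the patent's implementation: the classifier is a hypothetical stand-in for the pre-built classification model, and the label set and keyword rules are invented for the example.

```python
# A generated sentence is kept as target data only if the classification
# model assigns it a label that belongs to the model's preset label set.

PRESET_LABELS = {"查询订单", "申请退款", "修改地址"}  # assumed preset label set

def stub_classifier(sentence: str) -> str:
    """Stand-in for the pre-built classification model (keyword rules only)."""
    if "退款" in sentence:
        return "申请退款"
    if "订单" in sentence:
        return "查询订单"
    return "其他"

def validate_generated(sentences):
    """Keep only generated corpus whose predicted label is in the preset set."""
    accepted = []
    for s in sentences:
        label = stub_classifier(s)
        if label in PRESET_LABELS:
            accepted.append((s, label))  # usable as a new training sample
    return accepted

generated = ["我想申请退款", "帮我查一下订单", "今天天气不错"]
samples = validate_generated(generated)
# the third sentence is rejected because its label is outside the preset set
```

In this way, only generated corpus that the downstream classifier can actually label is recycled into its training samples.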
In some embodiments, training the pre-training model based on the target sample set includes: inputting the target sample corpus into the pre-training model, taking the target sample labels as the expected output, and training the pre-training model to obtain the target data generation model.

In some embodiments, the target data generation condition includes a target corpus to be recognized; and generating the target data based on the target data generation condition by using the target data generation model includes: inputting the target corpus to be recognized into the target data generation model to obtain feature labels of the target corpus to be recognized, and determining the feature labels of the target corpus to be recognized as the target data.

An embodiment of the present disclosure provides an apparatus for generating data, the apparatus comprising: a data acquisition unit configured to acquire target training data and a target data generation condition, the target training data including corpus of a target domain marked with feature labels; a sample construction unit configured to determine the corpus marked with feature labels in the target training data as target sample corpus and determine the feature labels of the target sample corpus as target sample labels, to obtain a target sample set; a model adjustment unit configured to train a pre-training model based on the target sample set and adjust parameters of the pre-training model to obtain a retrained target data generation model, wherein the pre-training model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-training model; and a data generation unit configured to generate target data based on the target data generation condition by using the target data generation model.

In some embodiments, the model adjustment unit is further configured to: input the target sample labels into the pre-training model, take the target sample corpus as the expected output, and train the pre-training model to obtain the target data generation model.

In some embodiments, the target data generation condition includes a target feature label; and the data generation unit is further configured to: input the target feature label into the target data generation model to obtain a target corpus, and determine the target corpus as the target data.

In some embodiments, the target feature label is a classification label estimated by a pre-built classification model based on a corpus to be recognized; and the data generation unit further includes a data verification module configured to: input the target corpus into the classification model to obtain a classification label of the target corpus, and in response to determining that a preset label set of the classification model includes the classification label of the target corpus, determine the target corpus as the target data, the target data being used to construct training samples for the classification model.

In some embodiments, the model adjustment unit is further configured to: input the target sample corpus into the pre-training model, take the target sample labels as the expected output, and train the pre-training model to obtain the target data generation model.

In some embodiments, the target data generation condition includes a target corpus to be recognized; and the data generation unit is further configured to: input the target corpus to be recognized into the target data generation model to obtain feature labels of the target corpus to be recognized, and determine the feature labels of the target corpus to be recognized as the target data.

An embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method in any of the above embodiments.

Embodiments of the present disclosure also provide a computer-readable medium storing a computer program which, when executed by a processor, implements the method in any of the above embodiments.
Brief Description of the Drawings

Other features, objects and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:

FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present disclosure may be applied;

FIG. 2 is a flowchart of one embodiment of a method for generating data according to the present disclosure;

FIG. 3 is a flowchart of another embodiment of a method for generating data according to the present disclosure;

FIG. 4 is a flowchart of yet another embodiment of a method for generating data according to the present disclosure;

FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for generating data according to the present disclosure;

FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description

The present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the relevant solutions, not to limit them. It should also be noted that, for ease of description, only the parts related to the relevant solutions are shown in the drawings.

It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.

FIG. 1 shows an exemplary system architecture 100 to which a method for generating data or an apparatus for generating data according to embodiments of the present disclosure may be applied.

As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 is the medium that provides communication links between the terminal devices 101, 102 and 103 and the server 105, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.

A user may use the terminal devices 101, 102 and 103 to interact with the server 105 through the network 104 to receive or send data. For example, the user may send raw data of the target domain to the server, and may receive from the server the target data generated by the target data generation model.

The terminal devices 101, 102 and 103 may be hardware or software. When the terminal devices 101, 102 and 103 are hardware, they may be electronic devices with communication functions, including but not limited to smartphones, tablet computers, e-book readers, laptop computers and desktop computers. When the terminal devices 101, 102 and 103 are software, they may be installed in the electronic devices listed above, and may be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. No specific limitation is made here.

The server 105 may be a server that provides various services, for example a background data server that processes the raw data uploaded by the terminal devices 101, 102 and 103 (e.g., constructs training samples based on the target training data). The background data server may use the received raw data to adjust the pre-training model, use the resulting data generation model to generate new data, and feed the processing result (e.g., the generated target data) back to the terminal device.

It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. No specific limitation is made here.

It should be noted that the method for generating data provided by the embodiments of the present disclosure may be executed by the terminal devices 101, 102 and 103, or by the server 105. Correspondingly, the apparatus for generating data may be provided in the terminal devices 101, 102 and 103, or in the server 105. No specific limitation is made here.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating data according to the present disclosure is shown. The method for generating data includes the following steps:

Step 201: Acquire target training data and a target data generation condition.

In this embodiment, the target training data includes corpus of the target domain marked with feature labels. A feature label characterizes features of the corpus and may cover multiple dimensions; for example, a structural feature label may characterize structural features of the corpus, an intent label may characterize its intent features, and a semantic label may characterize its semantic features. The target data generation condition represents the user's expectation for the generated data; for example, it may call for data including entity information of the target domain, or data containing a specific syntactic structure or semantic information.

As an example, when an operator receives a data generation task for a certain technical field, the execution body (e.g., the server 105 shown in FIG. 1) may directly acquire the target training data and the target data generation condition of that field from the business side; alternatively, real corpus of the field may be obtained from the network and each real corpus marked with its corresponding feature labels to obtain the target training data.

It should be noted that the target training data may also include unlabeled corpus.

Step 202: Determine the corpus marked with feature labels in the target training data as target sample corpus, and determine the feature labels of the target sample corpus as target sample labels, to obtain a target sample set.

In this embodiment, the corpus of the target domain may exhibit features of real corpus in the target domain along multiple dimensions, such as sentence structure features, word features and semantic features. Correspondingly, the target sample labels may characterize the target sample corpus along multiple dimensions; for example, the target sample corpus may be labeled along the sentence structure dimension, along the keyword dimension, or along the dimension of named entities of the target domain.
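Steps 201 and 202 amount to plain data manipulation, and can be sketched as follows. This is a minimal illustration under assumed field names ("text", "labels" — the disclosure does not specify a data layout); it only shows how labeled corpus is split into (target sample label, target sample corpus) pairs while unlabeled corpus is set aside.

```python
# Hypothetical target training data: each entry is a corpus item, optionally
# marked with feature labels along several dimensions (intent, keyword, ...).
target_training_data = [
    {"text": "怎么查询我的订单", "labels": {"intent": "查询订单", "keyword": "订单"}},
    {"text": "这是一条未标注语料", "labels": None},  # unlabeled corpus is allowed
    {"text": "我要申请退款", "labels": {"intent": "申请退款", "keyword": "退款"}},
]

def build_target_sample_set(training_data):
    """Return (target_sample_label, target_sample_corpus) pairs."""
    sample_set = []
    for item in training_data:
        if item["labels"]:  # only corpus marked with feature labels
            sample_set.append((item["labels"], item["text"]))
    return sample_set

samples = build_target_sample_set(target_training_data)
# two labeled pairs; the unlabeled entry is excluded from the sample set
```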
Step 203: Train the pre-training model based on the target sample set, and adjust parameters of the pre-training model to obtain a retrained target data generation model.

In this embodiment, the pre-training model is obtained through the following steps: constructing an initial model, and training the initial model based on a general sample set to obtain the pre-training model. The training data in the general sample set is easily obtained training data from various fields. As an example, a model such as ELMo (Embeddings from Language Models), BERT (Bidirectional Encoder Representations from Transformers) or GPT (Generative Pre-Training) may be selected as the initial model; the execution body may then obtain public data through the network and construct the general sample set based on the public data, thereby ensuring that the initial model has enough training samples in the pre-training stage.

The pre-training model obtained by training the initial model on the general sample set can learn basic data generation rules (for example, it can generate coherent, realistic corpus), but for fields where data is difficult to obtain, the similarity between the data generated by the pre-training model and the real data of the field is low. Therefore, the pre-training model is retrained based on the target sample set and its parameters are adjusted, so that it learns the data generation rules of the target domain and the data generated by the retrained target data generation model is closer to real data.
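As a numerical analogy for this retraining step (an illustration only, not the patent's actual model), the sketch below "fine-tunes" a one-parameter model by gradient descent: the parameter learned on general data is shifted by a few steps on a tiny target-domain sample set, which is the same shape of adjustment applied to the pre-training model's parameters.

```python
def fine_tune(w_pretrained, target_xs, target_ys, lr=0.1, steps=50):
    """Fine-tune a 1-parameter linear model y = w * x by gradient descent
    on mean squared error, starting from the pre-trained parameter."""
    w = w_pretrained
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x
                   for x, y in zip(target_xs, target_ys)) / len(target_xs)
        w -= lr * grad
    return w

w0 = 1.0                          # "pre-trained" parameter from general data
xs, ys = [1.0, 2.0], [3.0, 6.0]   # tiny target-domain set; true slope is 3
w1 = fine_tune(w0, xs, ys)
# w1 moves from 1.0 toward 3.0: the small target-domain set reshapes the model
```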
Step 204: Generate target data based on the target data generation condition by using the target data generation model.

In this embodiment, the target data generation model characterizes the correspondence between target data generation conditions and target data.

In a specific example, the amount of data in a particular field is small, and to augment the data of that field it is taken as the target domain. The execution body may construct a general sample set based on public data on the network (for example, Chinese novels or dialogue corpus) and train an initial GPT model on it; the resulting pre-trained GPT model can generate coherent, realistic sentences. The execution body may then acquire corpus of the target domain as target training data and construct the target sample set, retrain the GPT model based on the target sample set, and adjust the parameters of the GPT model so that it learns the generation rules of real corpus in the target domain; the GPT model obtained after this training is the target data generation model. Afterwards, the execution body may acquire target data generation conditions (for example, keyword labels, sentence structure labels or semantic labels) and input the target data labels into the GPT model, which generates new corpus, thereby expanding the amount of data in the target domain. Table 1 shows the target training data (including the input labels and the training corpus) and the target corpus generated by GPT in this example.
Table 1
Figure PCTCN2022070250-appb-000001
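The retraining and generation procedure in this example can be sketched as follows. This is a minimal illustration assuming a hypothetical prompt format in which condition tags are prepended to the corpus with separator tokens; the tag names, separator tokens, and helper functions are illustrative assumptions, not part of the original disclosure.

```python
# Minimal sketch of building label-conditioned training samples for a
# GPT-style model. The [TAGS]/[SEP]/[EOS] format is a hypothetical
# serialization choice, not one mandated by the disclosure.

def build_training_sample(tags, corpus):
    """Prepend condition tags to the corpus so the language model
    learns p(corpus | tags) during fine-tuning."""
    condition = "[TAGS]" + ";".join(tags) + "[SEP]"
    return condition + corpus + "[EOS]"

def build_generation_prompt(tags):
    """At generation time, only the condition prefix is supplied;
    the fine-tuned model continues it with a new corpus."""
    return "[TAGS]" + ";".join(tags) + "[SEP]"

sample = build_training_sample(["purchase", "air conditioner"],
                               "How do I buy an air conditioner?")
prompt = build_generation_prompt(["purchase", "air conditioner"])
print(sample)  # → [TAGS]purchase;air conditioner[SEP]How do I buy an air conditioner?[EOS]
print(prompt)  # → [TAGS]purchase;air conditioner[SEP]
```

In this layout the same prefix format is used at training and generation time, so the condition tags given by the user play exactly the role the target sample tags played during retraining.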
In the method and apparatus for generating data provided by the embodiments of the present disclosure, a pre-trained model is retrained with a small amount of data from the target field, so that the resulting data generation model learns the data generation rules of the target field; this augments the data and improves the authenticity and pertinence of the generated data.
Continuing to refer to FIG. 3, which is a flowchart of another embodiment of the method for generating data according to the present disclosure, the process 300 shown in FIG. 3 includes the following steps:
Step 301: Obtain target training data and target data generation conditions.
Step 302: Determine the corpora marked with feature tags in the target training data as target sample corpora, and determine the feature tags of the target sample corpora as target sample tags, to obtain a target sample set. Steps 301 and 302 are similar to the foregoing steps 201 and 202 and are not repeated here.
Step 303: Input the target sample tags into the pre-trained model, take the target sample corpora as the expected output, train the pre-trained model, and obtain the target data generation model.
In this embodiment, the target sample tags can characterize the features of the target sample corpora. The pre-trained model takes a target sample tag as a conditional tag, and the conditional tag constrains the corpus generation process; the loss function is then determined by comparing the target sample corpus with the generated corpus, yielding the target data generation model. The target data generation model characterizes the correspondence between conditional tags and generated corpora.
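In a standard language-model formulation, the objective described here (constrain generation with a conditional tag, then score the model's output against the target sample corpus) is a token-level cross-entropy taken only over the corpus tokens, with the conditioning-tag positions masked out. A toy sketch under that assumption, with fabricated token probabilities; the disclosure itself does not fix the loss formula:

```python
import math

def conditional_lm_loss(token_probs, loss_mask):
    """Mean negative log-likelihood of the gold tokens, with the
    condition-tag positions masked out so only corpus tokens
    contribute to the loss."""
    losses = [-math.log(p) for p, m in zip(token_probs, loss_mask) if m]
    return sum(losses) / len(losses)

# Probability the model assigned to the gold next token at each position;
# the first two positions belong to the conditional tag and are masked.
probs = [0.9, 0.8, 0.5, 0.25, 0.4]
mask  = [0,   0,   1,   1,    1]
print(conditional_lm_loss(probs, mask))
```

Masking the condition prefix means the model is never penalized for the tags it was given, only for how well it reproduces the target sample corpus under those tags.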
Step 304: Input the target feature tags into the target data generation model to obtain a target corpus.
In this embodiment, the target data generation conditions include target feature tags, which characterize the user's expectations, in one or more dimensions, for the generated corpus.
The execution body inputs the target feature tags into the target data generation model as conditional tags that constrain its corpus generation process, so as to generate a target corpus that meets the user's expectations.
Step 305: Determine the target corpus as the target data.
As can be seen from FIG. 3, the process 300 of the method for generating data in this embodiment highlights the step of generating corpus data by using the target data generation model. For application scenarios in which the feature-tag data of a corpus is sufficient but the corpus data itself is scarce, the method for generating data in this embodiment needs only a small amount of training data from the target field to ensure that the generated corpora are close to the real corpora of the target field, so the data can be augmented in a more targeted manner.
In some optional implementations of this embodiment, the target feature tag is a classification tag estimated by a pre-built classification model based on a corpus to be recognized, and before the target corpus is determined as the target data (step 305), the above process 300 may further include: inputting the target corpus into the classification model to obtain a classification tag of the target corpus; and, in response to determining that a preset tag set of the classification model includes the classification tag of the target corpus, determining the target corpus as the target data, the target data being used to construct training samples of the classification model.
In this implementation, the target data generation model is used to expand the training data of the classification model. If a corpus generated by the target data generation model can be correctly recognized by the classification model, this shows that the authenticity of the generated corpus meets the training requirements of the classification model.
As an example, suppose a classification model for recognizing corpora of a particular field needs to be built. It will be appreciated that the amount of training-sample data is positively correlated with the accuracy of the model; therefore, to ensure the accuracy of the classification model, a sufficient corpus of classification samples is required, while for some particular fields such corpus data is difficult to obtain. In this case, the field may be taken as the target field, the execution body constructs a target sample set based on the small number of classification sample corpora obtained, and the target data generation model is obtained through retraining. The sample classification tags used to build the classification model are then input into the target data generation model to obtain target corpora, and the target corpora are fed back into the classification model; if the classification tag output by the classification model is consistent with the sample classification tag, the authenticity of the target corpus meets the training requirements of the classification model. In this way, the obtained target data can effectively expand the sample data of the classification model.
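The validation loop in this example keeps a generated corpus only when the classification model assigns it the same tag that conditioned its generation. A sketch of that filter is below; the classifier here is a stub keyword matcher standing in for the real pre-built classification model, which the disclosure does not specify.

```python
def stub_classifier(text):
    """Placeholder for the pre-built classification model: returns a
    classification tag for the input corpus (here, by keyword match)."""
    return "purchase" if "buy" in text else "other"

def filter_generated(candidates, condition_tag, classifier, label_set):
    """Keep only generated corpora whose predicted tag is in the
    classifier's preset label set and matches the conditioning tag."""
    return [c for c in candidates
            if classifier(c) in label_set and classifier(c) == condition_tag]

generated = ["I want to buy a green air conditioner",
             "The weather is nice today"]
kept = filter_generated(generated, "purchase", stub_classifier,
                        {"purchase", "inquiry"})
print(kept)  # → ['I want to buy a green air conditioner']
```

Each surviving corpus is then a new (tag, corpus) training sample for the classification model, which is how the generation model and the classifier bootstrap each other.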
It should be noted that one corpus may correspond to multiple feature tags, each characterizing the features of the corpus from a different dimension. In some optional implementations of the above embodiments, when the pre-trained model is retrained based on the target sample set, multiple target sample tags may be input into the pre-trained model simultaneously, so that the target data generation model can learn data generation rules in multiple dimensions.
Correspondingly, the target data generation conditions may include target feature tags of multiple dimensions, each target feature tag characterizing a data generation condition of one dimension. In this way, the execution body can constrain its corpus generation process from multiple dimensions, thereby achieving data augmentation that fuses multiple dimensions.
As an example, the target data generation conditions may simultaneously include an intent tag, a structure tag, an entity tag, and a technical-field tag, respectively characterizing the user's expectations for the generated corpus from the dimensions of intent, structure, entity, and technical field. The execution body may input these multiple feature tags into the target data generation model at the same time to constrain the corpus generation process from the above dimensions, so as to obtain a target corpus that meets the user's needs.
In a specific example, a user uses the target data generation model to expand corpus data in the field of air conditioners. The user may set the target data generation conditions according to their own needs as: "air conditioner", "green", and "purchase", where "air conditioner" is a field tag, "green" is an entity tag, and "purchase" is an intent tag. The execution body then inputs these three feature tags into the target data generation model at the same time to generate a target corpus; for example, the target corpus may be "I want to buy a green air conditioner", "How to buy a green air conditioner", and so on. Table 2 shows the correspondence between the multi-dimensional tags, the training corpora, and the target corpora in this example.
Table 2
Figure PCTCN2022070250-appb-000002
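The multi-dimensional conditioning illustrated in Table 2 can be sketched as follows. The dimension names and the prompt layout are illustrative assumptions, since the disclosure does not fix a serialization format for combining tags from several dimensions.

```python
def build_multi_label_condition(tags):
    """Serialize one tag per dimension (e.g. field/entity/intent) into a
    single condition prefix, so that all dimensions constrain generation
    simultaneously. Sorting keeps the prefix layout deterministic."""
    parts = ["[{}]{}".format(dim.upper(), value)
             for dim, value in sorted(tags.items())]
    return "".join(parts) + "[SEP]"

condition = build_multi_label_condition(
    {"field": "air conditioner", "entity": "green", "intent": "purchase"})
print(condition)
# → [ENTITY]green[FIELD]air conditioner[INTENT]purchase[SEP]
```

Feeding such a prefix to the fine-tuned model would constrain the generated corpus from all three dimensions at once, matching the "air conditioner / green / purchase" example above.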
Referring next to FIG. 4, a flow 400 of yet another embodiment of the method for generating data is shown. The process 400 of the method for generating data includes the following steps:
Step 401: Obtain target training data and target data generation conditions.
Step 402: Determine the corpora marked with feature tags in the target training data as target sample corpora, and determine the feature tags of the target sample corpora as target sample tags, to obtain a target sample set. Steps 401 and 402 are similar to the foregoing steps 201 and 202 and are not repeated here.
Step 403: Input the target sample corpora into the pre-trained model, take the target sample tags as the expected output, train the pre-trained model, and obtain the target data generation model.
Step 404: Input the target corpus to be recognized into the target data generation model to obtain the feature tags of the target corpus to be recognized.
In this embodiment, the target data generation conditions include the target corpus to be recognized. The target data generation model characterizes the correspondence between corpora and tags. The execution body inputs the target corpus to be recognized into the target data generation model, which identifies the features of that corpus and outputs target feature tags characterizing those features.
Step 405: Determine the feature tags of the target corpus to be recognized as the target data.
As can be seen from FIG. 4, the process 400 of the method for generating data in this embodiment embodies the step of recognizing the feature tags of a corpus through the target data generation model. For cases in which the amount of corpus data is large but only a small portion is marked with feature tags, the method for generating data in this embodiment needs only a small amount of training data from the target field to ensure recognition accuracy, so the data can be augmented more effectively.
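In flow 400 the training direction is reversed: the corpus is the model input and the feature tags are the expected output, which amounts to swapping the two halves of the training sample. A minimal sketch, with the separator tokens again being illustrative assumptions rather than part of the disclosure:

```python
def build_tagging_sample(corpus, tags):
    """Flow 400 direction: the corpus is the prompt and the feature
    tags are the expected continuation, so the fine-tuned model learns
    to emit tags for an unlabeled corpus."""
    return corpus + "[SEP]" + ";".join(tags) + "[EOS]"

def parse_predicted_tags(model_output):
    """Recover the tag list from the model's generated continuation."""
    return model_output.removesuffix("[EOS]").split(";")

sample = build_tagging_sample("I want to buy a green air conditioner",
                              ["purchase", "green", "air conditioner"])
print(sample)
print(parse_predicted_tags("purchase;green[EOS]"))  # → ['purchase', 'green']
```

Because the same pre-trained model underlies both directions, whether it learns tags-to-corpus (flows 200/300) or corpus-to-tags (flow 400) is decided entirely by which side of the sample is treated as the expected output.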
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating data. The apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various electronic devices.
As shown in FIG. 5, the apparatus 500 for generating data in this embodiment includes: a data acquisition unit 501, configured to obtain target training data and target data generation conditions, the target training data including corpora of the target field marked with feature tags; a sample construction unit 502, configured to determine the corpora marked with feature tags in the target training data as target sample corpora and determine the feature tags of the target sample corpora as target sample tags, to obtain a target sample set; a model adjustment unit 503, configured to train a pre-trained model based on the target sample set and adjust the parameters of the pre-trained model to obtain a retrained target data generation model, where the pre-trained model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-trained model; and a data generation unit 504, configured to generate target data by using the target data generation model based on the target data generation conditions.
In this embodiment, the model adjustment unit 503 is further configured to: input the target sample tags into the pre-trained model, take the target sample corpora as the expected output, train the pre-trained model, and obtain the target data generation model.
In this embodiment, the target data generation conditions include target feature tags; and the data generation unit 504 is further configured to: input the target feature tags into the target data generation model to obtain a target corpus, and determine the target corpus as the target data.
In this embodiment, the target feature tag is a classification tag estimated by a pre-built classification model based on a corpus to be recognized; and the data generation unit 504 further includes a data verification module configured to: input the target corpus into the classification model to obtain a classification tag of the target corpus; and, in response to determining that a preset tag set of the classification model includes the classification tag of the target corpus, determine the target corpus as the target data, the target data being used to construct training samples of the classification model.
In this embodiment, the model adjustment unit 503 is further configured to: input the target sample corpora into the pre-trained model, take the target sample tags as the expected output, train the pre-trained model, and obtain the target data generation model.
In this embodiment, the target data generation conditions include a target corpus to be recognized; and the data generation unit 504 is further configured to: input the target corpus to be recognized into the target data generation model to obtain feature tags of the target corpus to be recognized, and determine the feature tags of the target corpus to be recognized as the target data.
Referring next to FIG. 6, it shows a schematic structural diagram of an electronic device 600 (for example, the server or terminal device in FIG. 1) suitable for implementing embodiments of the present disclosure. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), and PADs (tablet computers), as well as fixed terminals such as digital TVs and desktop computers. The terminal device shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing device (for example, a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Typically, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), speakers, and vibrators; storage devices 608 including, for example, a magnetic tape or hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 having various devices, it should be understood that it is not required to implement or provide all of the illustrated devices; more or fewer devices may alternatively be implemented or provided. Each block shown in FIG. 6 may represent one device or multiple devices as required.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 609, installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed. It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, by contrast, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to a wire, an optical cable, RF (radio frequency), or any suitable combination of the foregoing.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: obtain target training data and target data generation conditions, the target training data including corpora of the target field marked with feature tags; determine the corpora marked with feature tags in the target training data as target sample corpora and determine the feature tags of the target sample corpora as target sample tags, to obtain a target sample set; train a pre-trained model based on the target sample set and adjust the parameters of the pre-trained model to obtain a retrained target data generation model, where the pre-trained model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-trained model; and generate target data by using the target data generation model based on the target data generation conditions.
Computer program code for performing the operations of embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architectures, functions, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including a data acquisition unit, a sample construction unit, a model adjustment unit, and a data generation unit. The names of these units do not, in some cases, limit the units themselves; for example, the data acquisition unit may also be described as "a unit that obtains target training data and target data generation conditions".
The above description is merely a preferred embodiment of the present disclosure and an illustration of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (14)

  1. A method for generating data, comprising:
    obtaining target training data and target data generation conditions, the target training data comprising corpora of a target field marked with feature tags;
    determining the corpora marked with feature tags in the target training data as target sample corpora, and determining the feature tags of the target sample corpora as target sample tags, to obtain a target sample set;
    training a pre-trained model based on the target sample set and adjusting parameters of the pre-trained model to obtain a retrained target data generation model, wherein the pre-trained model is obtained through the following steps: constructing an initial model and training the initial model based on a general sample set to obtain the pre-trained model; and
    generating target data by using the target data generation model based on the target data generation conditions.
  2. The method according to claim 1, wherein training the pre-trained model based on the target sample set comprises: inputting the target sample tags into the pre-trained model, taking the target sample corpora as an expected output, and training the pre-trained model to obtain the target data generation model.
  3. The method according to any one of claims 1-2, wherein the target data generation conditions comprise a target feature tag; and,
    generating the target data by using the target data generation model based on the target data generation conditions comprises: inputting the target feature tag into the target data generation model to obtain a target corpus; and determining the target corpus as the target data.
  4. The method according to claim 3, wherein the target feature tag is a classification tag estimated by a pre-built classification model based on a corpus to be recognized; and,
    before determining the target corpus as the target data, the method further comprises: inputting the target corpus into the classification model to obtain a classification tag of the target corpus; and, in response to determining that a preset tag set of the classification model includes the classification tag of the target corpus, determining the target corpus as the target data, the target data being used to construct training samples of the classification model.
  5. The method according to any one of claims 1-4, wherein training the pre-trained model based on the target sample set comprises:
    inputting the target sample corpora into the pre-trained model, taking the target sample tags as an expected output, and training the pre-trained model to obtain the target data generation model.
  6. The method according to any one of claims 1-5, wherein the target data generation condition comprises a target corpus to be recognized; and,
    generating target data using the target data generation model and based on the target data generation condition comprises: inputting the target corpus to be recognized into the target data generation model to obtain a feature label of the target corpus to be recognized; and determining the feature label of the target corpus to be recognized as the target data.
  7. An apparatus for generating data, comprising:
    a data acquisition unit, configured to acquire target training data and a target data generation condition, the target training data comprising corpora of a target field marked with feature labels;
    a sample construction unit, configured to determine the corpora marked with feature labels in the target training data as target sample corpora, and determine the feature labels of the target sample corpora as target sample labels, to obtain a target sample set;
    a model adjustment unit, configured to train a pre-training model based on the target sample set and adjust parameters of the pre-training model to obtain a retrained target data generation model, wherein the pre-training model is obtained by constructing an initial model and training the initial model based on a general sample set;
    a data generation unit, configured to generate target data using the target data generation model and based on the target data generation condition.
  8. The apparatus according to claim 7, wherein the model adjustment unit is further configured to: input the target sample label into the pre-training model, use the target sample corpus as an expected output, and train the pre-training model to obtain the target data generation model.
  9. The apparatus according to any one of claims 7-8, wherein the target data generation condition comprises a target feature label; and,
    the data generation unit is further configured to: input the target feature label into the target data generation model to obtain a target corpus; and determine the target corpus as the target data.
  10. The apparatus according to claim 9, wherein the target feature label is a classification label estimated by a pre-built classification model based on a corpus to be recognized; and,
    the data generation unit further comprises a data verification module configured to: input the target corpus into the classification model to obtain a classification label of the target corpus; and in response to determining that a preset label set of the classification model includes the classification label of the target corpus, determine the target corpus as the target data, the target data being used to construct a training sample of the classification model.
  11. The apparatus according to any one of claims 7-10, wherein the model adjustment unit is further configured to: input the target sample corpus into the pre-training model, use the target sample label as an expected output, and train the pre-training model to obtain the target data generation model.
  12. The apparatus according to any one of claims 7-11, wherein the target data generation condition comprises a target corpus to be recognized; and,
    the data generation unit is further configured to: input the target corpus to be recognized into the target data generation model to obtain a feature label of the target corpus to be recognized; and determine the feature label of the target corpus to be recognized as the target data.
  13. An electronic device, comprising:
    one or more processors; and
    a storage device on which one or more programs are stored,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
  14. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
PCT/CN2022/070250 2021-03-30 2022-01-05 Data generation method and apparatus WO2022206091A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110340188.4A CN115146624A (en) 2021-03-30 2021-03-30 Method and apparatus for generating data
CN202110340188.4 2021-03-30

Publications (1)

Publication Number Publication Date
WO2022206091A1

Family

Family ID: 83403542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/070250 WO2022206091A1 (en) 2021-03-30 2022-01-05 Data generation method and apparatus

Country Status (2)

Country Link
CN (1) CN115146624A (en)
WO (1) WO2022206091A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170358295A1 (en) * 2016-06-10 2017-12-14 Conduent Business Services, Llc Natural language generation, a hybrid sequence-to-sequence approach
CN111339278A (en) * 2020-02-28 2020-06-26 支付宝(杭州)信息技术有限公司 Method and device for generating training speech generating model and method and device for generating answer speech
CN111898369A (en) * 2020-08-17 2020-11-06 腾讯科技(深圳)有限公司 Article title generation method, model training method and device and electronic equipment
CN112182210A (en) * 2020-09-25 2021-01-05 四川华空天行科技有限公司 Language generation model based on composition data feature classifier and writing support method
CN112541346A (en) * 2020-12-24 2021-03-23 北京百度网讯科技有限公司 Abstract generation method and device, electronic equipment and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116029492A (en) * 2022-12-01 2023-04-28 广州云趣信息科技有限公司 Order sending method and device
CN116029492B (en) * 2022-12-01 2023-12-01 广州云趣信息科技有限公司 Order sending method and device

Also Published As

Publication number Publication date
CN115146624A (en) 2022-10-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22778280

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 140224)