CN111046677B - Method, device, equipment and storage medium for obtaining translation model - Google Patents


Info

Publication number
CN111046677B
CN111046677B
Authority
CN
China
Prior art keywords
parallel corpus
translation model
target
pairs
language
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911250280.0A
Other languages
Chinese (zh)
Other versions
CN111046677A (en)
Inventor
潘骁
王明轩
李磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201911250280.0A priority Critical patent/CN111046677B/en
Publication of CN111046677A publication Critical patent/CN111046677A/en
Application granted granted Critical
Publication of CN111046677B publication Critical patent/CN111046677B/en

Landscapes

  • Machine Translation (AREA)

Abstract

The embodiments of the disclosure disclose a method, an apparatus, a device, and a storage medium for obtaining a translation model. The method comprises: obtaining a multilingual parallel corpus pair set; initially training a universal translation model on the multilingual parallel corpus pair set to obtain a trained universal translation model; and obtaining a parallel corpus pair set of a target category, and directionally training the trained universal translation model on the parallel corpus pair set of the target category to obtain a target translation model matched with the language of the target category. With the technical solution of the embodiments, a universal translation model with a multilingual inter-translation capability can draw on the vocabulary associations and grammatical structures shared across languages while strengthening those of the directional translation. Moreover, even when few parallel corpus pairs of the target category are available, the corresponding language translation model can still be established quickly, which greatly improves the accuracy of the language translation model.

Description

Method, device, equipment and storage medium for obtaining translation model
Technical Field
The present disclosure relates to computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for obtaining a translation model.
Background
With the continuous development of computer technology, various translation software appears in the visual field of people, and becomes an important channel for people to acquire external information.
In existing translation software, the language translation model is usually obtained by continuous training on a large number of parallel corpora of a single language pair (for example, parallel corpora composed of Chinese documents and their corresponding English documents) in order to implement directional translation (for example, Chinese-to-English). However, acquiring a large number of such parallel corpora is not easy, and for low-resource ("small") languages in particular it is very difficult; as a result, a language translation model built without a large number of parallel corpora has extremely poor accuracy.
Disclosure of Invention
The disclosure provides a method, an apparatus, a device, and a storage medium for obtaining a translation model, which are used to quickly establish a directional language translation model and to improve the accuracy of the language translation model.
In a first aspect, an embodiment of the present disclosure provides a method for obtaining a translation model, including:
acquiring a multilingual parallel corpus pair set; each parallel corpus pair comprises paired source language corpora and target language corpora, the multilingual parallel corpus pair set comprises at least two parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different;
initially training a universal translation model through the multilingual parallel corpus pair set to obtain a trained universal translation model;
acquiring a parallel corpus pair set of a target type, and performing directional training on the trained general translation model through the parallel corpus pair set of the target type to acquire a target translation model matched with the language of the target type; and the source languages of the parallel corpus pairs in the target type are the same, and the target languages are also the same.
In a second aspect, an embodiment of the present disclosure provides an apparatus for obtaining a translation model, including:
the parallel corpus acquiring module is used for acquiring a multilingual parallel corpus pair set; each parallel corpus pair comprises paired source language corpora and target language corpora, the multilingual parallel corpus pair set comprises at least two parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different;
the initial training execution module is used for carrying out initial training on the universal translation model through the multilingual parallel corpus pair set so as to obtain a trained universal translation model;
the directional training execution module is used for acquiring a parallel corpus pair set of a target type, and directionally training the trained general translation model through the parallel corpus pair set of the target type to acquire a target translation model matched with the language of the target type; and the source languages of the parallel corpus pairs in the target type are the same, and the target languages are also the same.
In a third aspect, an embodiment of the present disclosure provides an electronic device, which includes a memory, a processing apparatus, and a computer program stored in the memory and executable on the processing apparatus, where the processing apparatus implements a method for obtaining a translation model according to any embodiment of the present disclosure when executing the computer program.
In a fourth aspect, embodiments of the present disclosure provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the method for obtaining a translation model according to any of the embodiments of the present disclosure.
In the technical solution of this embodiment, the universal translation model is first trained on the multilingual parallel corpus pair set and then directionally trained on the parallel corpus pair set of the target category. The universal translation model, with its multilingual inter-translation capability, can thus draw on the vocabulary associations and grammatical structures shared across languages while strengthening those of the directional translation. Moreover, even when few parallel corpus pairs of the target category are available, the corresponding language translation model can still be established quickly, which greatly improves the accuracy of the language translation model.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of a method for obtaining a translation model in a first embodiment of the present disclosure;
fig. 2 is a block diagram of a translation model obtaining apparatus in a second embodiment of the disclosure;
fig. 3 is a block diagram of a device in a third embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Example one
Fig. 1 is a flowchart of a method for obtaining a translation model according to a first embodiment of the present disclosure. The embodiment is applicable to cases where few parallel corpus pairs of a target category are available. The method may be executed by the translation-model obtaining apparatus provided in the embodiments of the present disclosure, which may be implemented in software and/or hardware and integrated into an application program. The method specifically includes the following steps:
s110, acquiring a multilingual parallel corpus pair set; each parallel corpus pair comprises paired source language corpus and target language corpus, the multilingual parallel corpus pair set comprises at least two parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different.
A parallel corpus pair is a pair of corresponding corpora in two languages, comprising a source-language corpus and a target-language corpus. For example, a Chinese-English parallel corpus pair comprises a Chinese document and its corresponding English document; when a Chinese-to-English translation is performed by the translation model, the Chinese document is the source-language corpus and the English document is the target-language corpus.
The multilingual parallel corpus pair set comprises at least two parallel corpus pairs, and the source and target languages may differ partially across the pairs. For example, the set may comprise Chinese-English parallel corpus pairs and Chinese-Japanese parallel corpus pairs, where both kinds of pairs have Chinese as the source language while the target languages are English and Japanese, respectively. Alternatively, both languages may differ: for example, the set may comprise Chinese-Japanese parallel corpus pairs and English-Korean parallel corpus pairs, where the source languages are Chinese and English, respectively, and the target languages are Japanese and Korean, respectively. The present disclosure does not specifically limit the language types included in the multilingual parallel corpus pair set.
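To make the data layout concrete, the set described above can be sketched as a small collection of tagged pairs. This is an illustrative sketch only: the `ParallelPair` dataclass, the language codes, and the sample sentences are assumptions of this example, not structures defined by the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParallelPair:
    """One parallel corpus pair: a source corpus and its paired translation."""
    src_lang: str   # e.g. "zh"
    tgt_lang: str   # e.g. "en"
    src_text: str
    tgt_text: str

# A multilingual set: the source/target languages are at least partially
# different across the pairs, as required above.
multilingual_set = [
    ParallelPair("zh", "en", "苹果和梨", "Apple and pear"),
    ParallelPair("zh", "ja", "苹果和梨", "りんごと梨"),
    ParallelPair("ko", "ja", "사과와 배", "りんごと梨"),
]

# The language directions covered by the set.
lang_pairs = {(p.src_lang, p.tgt_lang) for p in multilingual_set}
```

A set like this satisfies the stated requirement: it has at least two pairs, and the directions (here three of them) are not all identical.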
Optionally, in the embodiments of the present disclosure, a parallel corpus pair in the multilingual parallel corpus pair set comprises a text-text parallel corpus pair or a speech-text corpus pair; that is, the multilingual parallel corpus pair set comprising the parallel corpus pairs may be composed of text data, of speech data, or of both.
S120, initially training the universal translation model on the multilingual parallel corpus pair set to obtain the trained universal translation model.
Optionally, in the embodiments of the present disclosure, the universal translation model comprises a sequence-to-sequence model. A sequence-to-sequence (seq2seq) model is a neural network with an Encoder-Decoder structure whose input and output are both sequences: the Encoder converts a variable-length input sequence into a fixed-length vector representation, and the Decoder converts that fixed-length vector into a variable-length target sequence, thereby mapping an input of indefinite length to an output of indefinite length. Sequence-to-sequence models come in several variants, for example seq2seq models based on recurrent neural networks (RNN), on convolution (CNN), or on the Transformer; the embodiments of the present disclosure do not specifically limit the type of the sequence-to-sequence model.
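The Encoder-Decoder contract described above (variable-length input, fixed-length vector, variable-length output) can be illustrated with a toy, non-learning stand-in. Both functions here are hypothetical placeholders for a real RNN/CNN/Transformer encoder and decoder; only the interface shape matches the description.

```python
def encode(tokens, dim=4):
    """Toy 'Encoder': fold a variable-length token sequence into a
    fixed-length vector of `dim` numbers (a stand-in for the encoder's
    hidden state; no learning happens here)."""
    vec = [0.0] * dim
    for i, tok in enumerate(tokens):
        # Deterministic per-token score (a real model would use embeddings).
        vec[i % dim] += (sum(ord(c) for c in tok) % 97) / 97.0
    return vec

def decode(vec, vocab, max_len=10):
    """Toy 'Decoder': expand the fixed-length vector back into a
    variable-length output sequence, mimicking indefinite-length
    generation capped at `max_len` steps."""
    n = max(1, min(max_len, int(sum(vec)) + 1))
    return [vocab[i % len(vocab)] for i in range(n)]
```

Whatever the input length, `encode` always returns a `dim`-sized vector, and `decode` produces an output whose length is not fixed in advance; that is the property the Encoder-Decoder description above hinges on.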
The universal translation model is initially trained on the multilingual parallel corpus pair set so that it learns vocabulary associations and grammatical structures and acquires a multilingual inter-translation capability. Optionally, in the embodiments of the present disclosure, in the multilingual parallel corpus pair set, the source-language corpora and/or target-language corpora of at least some parallel corpus pairs are doped with vocabulary from other languages. For example, the source-language corpus may be the Chinese sentence "apple and [Korean word for 'pear']", i.e. a Chinese sentence doped with the Korean word for "pear", while the target-language corpus is the English "Apple and Pear". Doping a Korean word into a Chinese-English parallel corpus pair increases the diversity of the parallel corpus pairs; after the universal translation model is trained, this improves the adaptability of the universal translation model and strengthens its multilingual inter-translation capability.
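The vocabulary-doping ("code-switching") augmentation described above can be sketched as a token-level substitution driven by a bilingual lexicon. The function name, the substitution rate, and the tiny Chinese-Korean lexicon in the usage example are illustrative assumptions, not details from the patent.

```python
import random

def dope_with_lexicon(tokens, lexicon, rate=0.3, rng=None):
    """Replace a fraction of source tokens with their counterparts from
    another language, producing a code-switched ('doped') sentence.
    `lexicon` maps a token to its foreign-language equivalent; tokens
    without an entry are always kept as-is."""
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        if tok in lexicon and rng.random() < rate:
            out.append(lexicon[tok])   # swap in the other-language word
        else:
            out.append(tok)
    return out

# Illustrative usage, mirroring the example above: dope the Korean word
# for 'pear' (배) into the Chinese source "苹果和梨" ("apple and pear").
doped = dope_with_lexicon(["苹果", "和", "梨"], {"梨": "배"}, rate=1.0)
```

With `rate=1.0` every token that has a lexicon entry is swapped, yielding the mixed-language source while the paired English target "Apple and Pear" is left unchanged.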
S130, acquiring a parallel corpus pair set of a target category, and directionally training the trained universal translation model on the parallel corpus pair set of the target category to obtain a target translation model matched with the language of the target category, where the source languages of the parallel corpus pairs of the target category are all the same, and the target languages are also all the same.
Every parallel corpus pair in the target-category set shares the same source language and the same target language; for example, if the target category is Korean, each pair it contains is a Korean parallel corpus pair. Training the initially trained universal translation model again on the target-category set lets the universal translation model, with its multilingual inter-translation capability, draw on the vocabulary associations and grammatical structures shared across languages while strengthening those of the directional translation.
Optionally, in the embodiments of the present disclosure, at least some parallel corpus pairs in the target-category set are a subset of the multilingual parallel corpus pair set; that is, some or all of the target-category pairs are drawn from the multilingual set. This reinforces the effect of the initial training of the universal translation model and avoids losing the multilingual inter-translation capability when the target-category set, i.e. a parallel corpus pair set of one specific language direction, is introduced. Alternatively, the target-category set may contain no subset of the multilingual set at all, so that every pair it contains is a new parallel corpus pair of the target category, which strengthens the directional training. In particular, the source-language corpora of the target-category pairs may come from the multilingual parallel corpus pair set while the target-language corpora come from other corpus collections; or the target-language corpora may come from the multilingual set while the source-language corpora come from other corpus collections.
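The paragraph above amounts to a recipe for assembling the directional fine-tuning set: optionally reuse the subset of the multilingual set that already matches the wanted direction, and optionally add new pairs of the same direction. A minimal sketch, with pairs represented as plain `(src_lang, tgt_lang, src_text, tgt_text)` tuples (a representation assumed for this example):

```python
def build_target_set(multilingual_set, src_lang, tgt_lang, extra_pairs=()):
    """Assemble the target-category parallel corpus pair set.

    Takes the subset of the multilingual set whose pairs already match
    the wanted direction, then appends any new pairs of that direction;
    pairs of other directions in `extra_pairs` are filtered out."""
    subset = [p for p in multilingual_set if (p[0], p[1]) == (src_lang, tgt_lang)]
    extras = [p for p in extra_pairs if (p[0], p[1]) == (src_lang, tgt_lang)]
    return subset + extras

multi = [
    ("zh", "en", "你好", "hello"),
    ("zh", "ja", "你好", "こんにちは"),
    ("ko", "ja", "안녕", "こんにちは"),
]
extra = [
    ("zh", "en", "谢谢", "thanks"),
    ("zh", "ja", "谢谢", "ありがとう"),
]
target_set = build_target_set(multi, "zh", "en", extra)
```

Passing an empty `multilingual_set` corresponds to the second option above, where the target-category set consists entirely of new pairs.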
Optionally, in the embodiments of the present disclosure, after the parallel corpus pair set of the target category is obtained, the method further comprises: performing a noising operation on the set to generate parallel corpus pairs containing noise, where the noising operation comprises adding words to a corpus, deleting words, and/or perturbing the word order within a corpus. Specifically, the noising operation is applied to the source-language corpora in the set while the target-language corpora are left unchanged; for example, the source-language corpus, the Chinese for "I love life", may be corrupted (for instance by duplicating or dropping a word) while the target-language corpus, the English "I Love Life", is not modified. Training the universal translation model on the noised target-category set increases its noise resistance (i.e. its resistance to interference) and improves its error-correction capability.
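The three noising operations named above (adding words, deleting words, perturbing word order) can be sketched as a single source-side corruption step. The operation mix, the adjacent-swap form of the perturbation, and the seeded `random.Random` are assumptions of this sketch; the patent does not specify probabilities or an implementation.

```python
import random

def add_noise(tokens, rng=None):
    """Corrupt the source side of a parallel corpus pair by applying one of
    the three operations described above; the paired target side is left
    untouched during training."""
    rng = rng or random.Random(42)
    tokens = list(tokens)
    op = rng.choice(["insert", "delete", "shuffle"])
    if op == "insert" and tokens:
        i = rng.randrange(len(tokens))
        tokens.insert(i, tokens[i])                # duplicate an existing word
    elif op == "delete" and len(tokens) > 1:
        tokens.pop(rng.randrange(len(tokens)))     # drop one word
    elif op == "shuffle" and len(tokens) > 1:
        i = rng.randrange(len(tokens) - 1)
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]  # swap neighbours
    return tokens
```

For example, `add_noise(["我", "爱", "生活"])` ("I love life") yields a corrupted source sentence of nearly the same length, built only from the original words, while the English target stays "I Love Life".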
Optionally, in the embodiments of the present disclosure, after the parallel corpus pair set of the target category is obtained, the method further comprises: performing a word-order reversal operation on the source-language corpora and the target-language corpora in the target-category set, respectively, to obtain new parallel corpus pairs. For example, the source-language corpus, the Chinese for "apple and pear", becomes the Chinese for "pear and apple" after reversal, and the target-language corpus, the English "Apple and Pear", becomes "Pear and Apple". This makes the parallel corpus pairs more diverse, and training the universal translation model on the reversed target-category set strengthens the vocabulary associations and grammatical structures of the directional translation.
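The word-order reversal augmentation is the simplest of these operations and can be sketched directly: both sides of the pair are reversed token by token, matching the "Apple and Pear" becoming "Pear and Apple" example above (the Chinese tokenization shown is an assumption of this sketch).

```python
def reverse_pair(src_tokens, tgt_tokens):
    """Reverse the word order of BOTH sides of a parallel corpus pair,
    yielding a new augmented pair for directional training."""
    return list(reversed(src_tokens)), list(reversed(tgt_tokens))

# "apple and pear" / "Apple and Pear" becomes "pear and apple" / "Pear and Apple".
new_src, new_tgt = reverse_pair(["苹果", "和", "梨"], ["Apple", "and", "Pear"])
```

Because both sides are reversed together, the pair remains a valid translation pair at the token-alignment level, which is what lets it serve as extra training data rather than noise.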
In the technical solution of this embodiment, the universal translation model is first trained on the multilingual parallel corpus pair set and then directionally trained on the parallel corpus pair set of the target category. The universal translation model, with its multilingual inter-translation capability, can thus draw on the vocabulary associations and grammatical structures shared across languages while strengthening those of the directional translation. Moreover, even when few parallel corpus pairs of the target category are available, the corresponding language translation model can still be established quickly, which greatly improves the accuracy of the language translation model.
Example two
Fig. 2 is a block diagram of a structure of an apparatus for obtaining a translation model according to a second embodiment of the present disclosure, which specifically includes: a parallel corpus acquisition module 210, an initial training execution module 220, and a directional training execution module 230.
A parallel corpus acquiring module 210, configured to acquire a multilingual parallel corpus pair set; each parallel corpus pair comprises paired source language corpora and target language corpora, the multilingual parallel corpus pair set comprises at least two parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different;
an initial training execution module 220, configured to perform initial training on a general translation model through the multi-language parallel corpus pair set to obtain a trained general translation model;
the directional training execution module 230 is configured to obtain a set of parallel corpus pairs of a target category, and perform directional training on the trained general translation model through the set of parallel corpus pairs of the target category to obtain a target translation model matched with the language of the target category; and the source languages of the parallel corpus pairs in the target type are the same, and the target languages are also the same.
In the technical solution of this embodiment, the universal translation model is first trained on the multilingual parallel corpus pair set and then directionally trained on the parallel corpus pair set of the target category. The universal translation model, with its multilingual inter-translation capability, can thus draw on the vocabulary associations and grammatical structures shared across languages while strengthening those of the directional translation. Moreover, even when few parallel corpus pairs of the target category are available, the corresponding language translation model can still be established quickly, which greatly improves the accuracy of the language translation model.
Optionally, on the basis of the above technical solution, the general translation model includes a sequence-to-sequence model.
Optionally, on the basis of the above technical solution, the parallel corpus pair in the multilingual parallel corpus pair set includes a text parallel corpus pair or a speech text corpus pair.
Optionally, on the basis of the above technical solution, the apparatus for obtaining a translation model further includes:
the noise adding execution module is used for performing noise adding operation on the set of the parallel corpus pairs of the target type to generate parallel corpus pairs containing noise; wherein the noise adding operation comprises adding words and phrases in the corpus, deleting words and phrases and/or disturbing the word and phrase sequence in the corpus.
Optionally, on the basis of the above technical solution, the apparatus for obtaining a translation model further includes:
and the word order reversal execution module is used for respectively carrying out word order reversal operation on the source language linguistic data and the target language linguistic data in the target type parallel linguistic data pair set so as to obtain new parallel linguistic data pairs.
Optionally, on the basis of the above technical solution, in the set of parallel corpus pairs of the target category, at least a part of the parallel corpus pairs is a subset of the set of multilingual parallel corpus pairs.
Optionally, on the basis of the above technical solution, in the multilingual parallel corpus set, at least part of source language corpora and/or target language corpora of the parallel corpus pairs are doped with vocabularies of other languages.
The apparatus can execute the method for obtaining a translation model provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not elaborated in this embodiment, refer to the method provided by any embodiment of the present disclosure.
Example three
Fig. 3 shows a schematic structural diagram of an electronic device (e.g., the terminal device or the server in fig. 1) 300 suitable for implementing an embodiment of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a multilingual parallel corpus pair set; each parallel corpus pair comprises paired source language corpora and target language corpora, the multilingual parallel corpus pair set comprises at least two parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different; initially training a universal translation model through the multilingual parallel corpus pair set to obtain a trained universal translation model; acquiring a parallel corpus pair set of a target type, and performing directional training on the trained general translation model through the parallel corpus pair set of the target type to acquire a target translation model matched with the language of the target type; and the source languages of the parallel corpus pairs in the target type are the same, and the target languages are also the same.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the module does not in some cases constitute a limitation on the module itself, for example, the initial training execution module may be described as "a module for initially training the generic translation model through the set of multilingual parallel corpus pairs to obtain a trained generic translation model". The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example 1 ] there is provided a translation model acquisition method including:
acquiring a multilingual parallel corpus pair set; each parallel corpus pair comprises paired source language corpora and target language corpora, the multilingual parallel corpus pair set comprises at least two parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different;
initially training a universal translation model through the multilingual parallel corpus pair set to obtain a trained universal translation model;
acquiring a parallel corpus pair set of a target type, and performing directional training on the trained general translation model through the parallel corpus pair set of the target type to acquire a target translation model matched with the language of the target type; and the source languages of the parallel corpus pairs in the target type are the same, and the target languages are also the same.
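The two-stage procedure of example 1 (multilingual initial training followed by directional training) can be sketched as a data pipeline. The snippet below is only an illustrative Python sketch, not the patented implementation: the dictionary `model` and its updates stand in for a real neural model and its gradient steps, and prepending a target-language tag such as `<2en>` to each source sentence is an assumed (though common) way of letting one universal model serve many translation directions.

```python
import random

def tag_pairs(pairs):
    """Prepend a target-language token to each source sentence so that a
    single universal model can learn many translation directions at once.
    (The tagging scheme is an assumption for illustration.)"""
    return [(f"<2{tgt_lang}> {src}", tgt)
            for (src, src_lang), (tgt, tgt_lang) in pairs]

def pretrain(model, multilingual_pairs, epochs=1):
    """Initial training: all language pairs are pooled and shuffled together."""
    data = tag_pairs(multilingual_pairs)
    for _ in range(epochs):
        random.shuffle(data)
        for src, tgt in data:
            model.setdefault(src, tgt)  # stand-in for one gradient step
    return model

def finetune(model, target_type_pairs):
    """Directional training: every pair shares one source language and one
    target language, so the model specialises to that direction."""
    for src, tgt in tag_pairs(target_type_pairs):
        model[src] = tgt  # stand-in for one gradient step
    return model

multilingual = [
    (("ich liebe dich", "de"), ("I love you", "en")),
    (("je t'aime", "fr"), ("I love you", "en")),
    (("I love you", "en"), ("te amo", "es")),
]
target_type = [(("ich liebe dich", "de"), ("I love you", "en"))]

model = finetune(pretrain({}, multilingual), target_type)
```

Fine-tuning on a small directional set after multilingual pretraining is what lets the target model reuse the vocabulary associations and grammatical structures shared across languages, as the abstract describes.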
In accordance with one or more embodiments of the present disclosure, [ example 2 ] there is provided the method of example 1, further comprising:
the generic translation model includes a sequence-to-sequence model.
In accordance with one or more embodiments of the present disclosure, [ example 3 ] there is provided the method of example 1, further comprising:
the parallel corpus pairs in the multilingual parallel corpus pair set comprise text parallel corpus pairs or speech-text corpus pairs.
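A parallel corpus pair can therefore carry either text on both sides or speech on the source side. One way to represent the two pair types uniformly is sketched below; the class and field names (`TextCorpus`, `SpeechCorpus`, `transcript`) are illustrative assumptions, not terms from the disclosure.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class TextCorpus:
    language: str
    text: str

@dataclass(frozen=True)
class SpeechCorpus:
    language: str
    audio_path: str       # path to a recording
    transcript: str = ""  # filled in by an ASR step (assumed)

@dataclass(frozen=True)
class ParallelPair:
    source: Union[TextCorpus, SpeechCorpus]
    target: TextCorpus    # the target side is text in both pair types

pair = ParallelPair(
    source=TextCorpus("de", "guten morgen"),
    target=TextCorpus("en", "good morning"),
)
```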
In accordance with one or more embodiments of the present disclosure, [ example 4 ] there is provided the method of example 1, further comprising:
performing a noise adding operation on the set of parallel corpus pairs of the target category to generate parallel corpus pairs containing noise; wherein the noise adding operation comprises inserting words into the corpus, deleting words from the corpus, and/or perturbing the word order within the corpus.
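A minimal sketch of such a noise adding operation is given below. The probabilities, the `<unk>` filler token, and the adjacent-swap reordering are assumptions chosen for illustration; the disclosure only requires that words be added, deleted, and/or reordered.

```python
import random

def add_noise(sentence, rng, p_insert=0.1, p_delete=0.1, p_shuffle=0.5,
              filler="<unk>"):
    """Return a noisy copy of a sentence for data augmentation."""
    words = sentence.split()
    noisy = []
    for w in words:
        if rng.random() >= p_delete:   # randomly drop a word
            noisy.append(w)
        if rng.random() < p_insert:    # randomly insert a filler word
            noisy.append(filler)
    if rng.random() < p_shuffle and len(noisy) > 1:
        i = rng.randrange(len(noisy) - 1)   # randomly swap two neighbours
        noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
    return " ".join(noisy)

rng = random.Random(0)
noisy_pairs = [(add_noise(src, rng), tgt)
               for src, tgt in [("the cat sat on the mat",
                                 "die Katze sass auf der Matte")]]
```

Only the source side is made noisy here; the clean target side remains the training label, which is one plausible reading of "parallel corpus pairs containing noise".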
In accordance with one or more embodiments of the present disclosure, [ example 5 ] there is provided the method of example 1, further comprising:
and performing a word order reversal operation on the source language corpora and the target language corpora, respectively, in the set of parallel corpus pairs of the target type to obtain new parallel corpus pairs.
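The reversal augmentation of example 5 simply reverses the token order of both sides of each pair to create additional training pairs; a minimal sketch, assuming whitespace tokenisation:

```python
def reverse_pairs(pairs):
    """Reverse the word order of both the source and the target corpus of
    every pair, yielding new parallel corpus pairs for augmentation."""
    return [(" ".join(reversed(src.split())),
             " ".join(reversed(tgt.split())))
            for src, tgt in pairs]

augmented = reverse_pairs([("good morning", "guten Morgen")])
# augmented[0] == ("morning good", "Morgen guten")
```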
In accordance with one or more embodiments of the present disclosure, [ example 6 ] there is provided the method of example 1, further comprising:
at least part of the parallel corpus pairs in the set of parallel corpus pairs of the target category are subsets of the set of multilingual parallel corpus pairs.
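Example 6 says some target-type pairs may be drawn from the multilingual set itself. One way to extract such a directional subset is sketched below; the `((text, language), (text, language))` pair layout is an assumption for illustration.

```python
def directional_subset(multilingual_pairs, src_lang, tgt_lang):
    """Select from the multilingual set the pairs whose source and target
    languages match one fixed direction, for directional fine-tuning."""
    return [(src, tgt)
            for (src, sl), (tgt, tl) in multilingual_pairs
            if sl == src_lang and tl == tgt_lang]

multilingual = [
    (("bonjour", "fr"), ("hello", "en")),
    (("hallo", "de"), ("hello", "en")),
    (("bonjour", "fr"), ("hola", "es")),
]
fr_en = directional_subset(multilingual, "fr", "en")
# fr_en == [("bonjour", "hello")]
```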
In accordance with one or more embodiments of the present disclosure, [ example 7 ] there is provided the method of example 1, further comprising:
in the multilingual parallel corpus pair set, at least part of source language corpora and/or target language corpora of the parallel corpus pairs are doped with vocabularies of other languages.
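Doping corpora with vocabulary from other languages, as in example 7, is a form of code-switching augmentation. The sketch below substitutes source words using a small bilingual lexicon; the lexicon contents and the substitution rate are illustrative assumptions.

```python
import random

def dope_with_other_language(sentence, lexicon, rng, p_swap=0.3):
    """Replace some words with their counterparts in another language,
    so the corpus mixes vocabulary across languages."""
    words = [lexicon.get(w, w) if rng.random() < p_swap else w
             for w in sentence.split()]
    return " ".join(words)

en_to_de = {"cat": "Katze", "dog": "Hund"}   # toy lexicon (assumed)
rng = random.Random(0)
mixed = dope_with_other_language("the cat chased the dog", en_to_de, rng)
```

Training on such mixed sentences is one way a universal model can be pushed to share vocabulary associations across languages.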
According to one or more embodiments of the present disclosure, [ example 8 ] there is provided an acquisition apparatus of a translation model, including:
the parallel corpus acquiring module is used for acquiring a multilingual parallel corpus pair set; each parallel corpus pair comprises paired source language corpora and target language corpora, the multilingual parallel corpus pair set comprises at least two parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different;
the initial training execution module is used for carrying out initial training on the universal translation model through the multilingual parallel corpus pair set so as to obtain a trained universal translation model;
the directional training execution module is used for acquiring a parallel corpus pair set of a target type, and directionally training the trained general translation model through the parallel corpus pair set of the target type to acquire a target translation model matched with the language of the target type; and the source languages of the parallel corpus pairs in the target type are the same, and the target languages are also the same.
According to one or more embodiments of the present disclosure, [ example 9 ] there is provided an electronic device comprising a memory, a processing apparatus, and a computer program stored on the memory and executable on the processing apparatus, the processing apparatus implementing the method of obtaining a translation model according to any one of examples 1-7 when executing the program.
According to one or more embodiments of the present disclosure, [ example 10 ] there is provided a storage medium containing computer-executable instructions for performing the method of obtaining a translation model according to any one of examples 1-7 when executed by a computer processor.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (9)

1. A method for acquiring a translation model is characterized by comprising the following steps:
acquiring a multilingual parallel corpus pair set; each parallel corpus pair comprises source language corpora and target language corpora in pairs, the multilingual parallel corpus pair set comprises at least three parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different; in the multilingual parallel corpus pair set, the source language corpus and/or the target language corpus of at least part of parallel corpus pairs are doped with vocabularies of other languages;
initially training a universal translation model through the multilingual parallel corpus pair set to obtain a trained universal translation model; wherein the training of the universal translation model is based on vocabulary associations and grammatical structures;
acquiring a parallel corpus pair set of a target type, and performing directional training on the trained general translation model through the parallel corpus pair set of the target type to acquire a target translation model matched with the language of the target type; and the source languages of the parallel corpus pairs in the target type are the same, and the target languages are also the same.
2. The method of claim 1, wherein the generic translation model comprises a sequence-to-sequence model.
3. The method of claim 1, wherein the parallel corpus pairs in the set of multilingual parallel corpus pairs comprise text parallel corpus pairs or speech-to-text corpus pairs.
4. The method according to claim 1, further comprising, after obtaining the set of parallel corpus pairs of the target category:
performing a noise adding operation on the set of parallel corpus pairs of the target category to generate parallel corpus pairs containing noise; wherein the noise adding operation comprises inserting words into the corpus, deleting words from the corpus, and/or perturbing the word order within the corpus.
5. The method according to claim 1, further comprising, after obtaining the set of parallel corpus pairs of the target category:
and performing a word order reversal operation on the source language corpora and the target language corpora, respectively, in the set of parallel corpus pairs of the target type to obtain new parallel corpus pairs.
6. The method of claim 1, wherein at least some of the parallel corpus pairs of the target category are subsets of the set of multilingual parallel corpus pairs.
7. An apparatus for obtaining a translation model, comprising:
the parallel corpus acquiring module is used for acquiring a multilingual parallel corpus pair set; each parallel corpus pair comprises source language corpora and target language corpora in pairs, the multilingual parallel corpus pair set comprises at least three parallel corpus pairs, and the source language and the target language in each parallel corpus pair are at least partially different; in the multilingual parallel corpus pair set, the source language corpus and/or the target language corpus of at least part of parallel corpus pairs are doped with vocabularies of other languages;
the initial training execution module is used for initially training the universal translation model through the multilingual parallel corpus pair set to obtain a trained universal translation model; wherein the training of the universal translation model is based on vocabulary associations and grammatical structures;
the directional training execution module is used for acquiring a parallel corpus pair set of a target type, and directionally training the trained general translation model through the parallel corpus pair set of the target type to acquire a target translation model matched with the language of the target type; and the source languages of the parallel corpus pairs in the target type are the same, and the target languages are also the same.
8. An electronic device comprising a memory, a processing means and a computer program stored on the memory and executable on the processing means, characterized in that the processing means, when executing the program, implements the method of obtaining a translation model according to any of claims 1-6.
9. A storage medium containing computer-executable instructions for performing the method of obtaining a translation model according to any of claims 1-6 when executed by a computer processor.
CN201911250280.0A 2019-12-09 2019-12-09 Method, device, equipment and storage medium for obtaining translation model Active CN111046677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911250280.0A CN111046677B (en) 2019-12-09 2019-12-09 Method, device, equipment and storage medium for obtaining translation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911250280.0A CN111046677B (en) 2019-12-09 2019-12-09 Method, device, equipment and storage medium for obtaining translation model

Publications (2)

Publication Number Publication Date
CN111046677A CN111046677A (en) 2020-04-21
CN111046677B true CN111046677B (en) 2021-07-20

Family

ID=70235131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911250280.0A Active CN111046677B (en) 2019-12-09 2019-12-09 Method, device, equipment and storage medium for obtaining translation model

Country Status (1)

Country Link
CN (1) CN111046677B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368566B (en) * 2020-03-19 2023-06-30 中国工商银行股份有限公司 Text processing method, text processing device, electronic equipment and readable storage medium
CN112633017B (en) * 2020-12-24 2023-07-25 北京百度网讯科技有限公司 Translation model training method, translation processing method, translation model training device, translation processing equipment and storage medium
CN112749570B (en) * 2021-01-31 2024-03-08 云知声智能科技股份有限公司 Data enhancement method and system based on multilingual machine translation
CN113139391B (en) * 2021-04-26 2023-06-06 北京有竹居网络技术有限公司 Translation model training method, device, equipment and storage medium
CN113204977B (en) * 2021-04-29 2023-09-26 北京有竹居网络技术有限公司 Information translation method, device, equipment and storage medium
CN113553866A (en) * 2021-07-14 2021-10-26 沈阳雅译网络技术有限公司 Method for realizing inter-translation among multiple languages by using single network model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016065327A1 (en) * 2014-10-24 2016-04-28 Google Inc. Neural machine translation systems with rare word processing
CN108829684A * 2018-05-07 2018-11-16 内蒙古工业大学 A Mongolian-Chinese neural machine translation method based on a transfer learning strategy
CN109271644A * 2018-08-16 2019-01-25 北京紫冬认知科技有限公司 A translation model training method and device
CN110309516A * 2019-05-30 2019-10-08 清华大学 Training method and device for a machine translation model, and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9235567B2 (en) * 2013-01-14 2016-01-12 Xerox Corporation Multi-domain machine translation model adaptation
CN109670190B (en) * 2018-12-25 2023-05-16 北京百度网讯科技有限公司 Translation model construction method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016065327A1 (en) * 2014-10-24 2016-04-28 Google Inc. Neural machine translation systems with rare word processing
CN108829684A * 2018-05-07 2018-11-16 内蒙古工业大学 A Mongolian-Chinese neural machine translation method based on a transfer learning strategy
CN109271644A * 2018-08-16 2019-01-25 北京紫冬认知科技有限公司 A translation model training method and device
CN110309516A * 2019-05-30 2019-10-08 清华大学 Training method and device for a machine translation model, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Tibetan-Chinese Neural Network Machine Translation; Li Yachao et al.; Journal of Chinese Information Processing; 2017-11-30; Vol. 31, No. 6; pp. 103-109 *

Also Published As

Publication number Publication date
CN111046677A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
CN111046677B (en) Method, device, equipment and storage medium for obtaining translation model
CN111008533B (en) Method, device, equipment and storage medium for obtaining translation model
CN111382261B (en) Abstract generation method and device, electronic equipment and storage medium
CN111368559A (en) Voice translation method and device, electronic equipment and storage medium
CN113139391B (en) Translation model training method, device, equipment and storage medium
CN111597825B (en) Voice translation method and device, readable medium and electronic equipment
WO2022228221A1 (en) Information translation method, apparatus and device, and storage medium
CN113378586B (en) Speech translation method, translation model training method, device, medium, and apparatus
CN111368560A (en) Text translation method and device, electronic equipment and storage medium
CN112270200B (en) Text information translation method and device, electronic equipment and storage medium
CN112380876B (en) Translation method, device, equipment and medium based on multilingual machine translation model
CN111339789A (en) Translation model training method and device, electronic equipment and storage medium
CN112487797A (en) Data generation method and device, readable medium and electronic equipment
CN115640815A (en) Translation method, translation device, readable medium and electronic equipment
CN112257459B (en) Language translation model training method, translation method, device and electronic equipment
CN114765025A (en) Method for generating and recognizing speech recognition model, device, medium and equipment
CN111400454A (en) Abstract generation method and device, electronic equipment and storage medium
WO2023138361A1 (en) Image processing method and apparatus, and readable storage medium and electronic device
WO2022121859A1 (en) Spoken language information processing method and apparatus, and electronic device
WO2022116819A1 (en) Model training method and apparatus, machine translation method and apparatus, and device and storage medium
CN112836476B (en) Summary generation method, device, equipment and medium
CN113591498A (en) Translation processing method, device, equipment and medium
CN112489652A (en) Text acquisition method and device for voice information and storage medium
CN112820280A (en) Generation method and device of regular language model
CN111737572A (en) Search statement generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant