CN114255738A - Speech synthesis method, apparatus, medium, and electronic device - Google Patents

Speech synthesis method, apparatus, medium, and electronic device

Info

Publication number
CN114255738A
CN114255738A (application CN202111653397.0A)
Authority
CN
China
Prior art keywords
sequence
text
target
audio information
laughter
Prior art date
Legal status
Pending
Application number
CN202111653397.0A
Other languages
Chinese (zh)
Inventor
何爽爽
吉伶俐
梅晓
马泽君
Current Assignee
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202111653397.0A
Publication of CN114255738A

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/06 Elementary speech units used in speech synthesisers; Concatenation rules
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L 13/10 Prosody rules derived from text; Stress or intonation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)

Abstract

The present disclosure relates to a speech synthesis method, apparatus, medium, and electronic device. The method comprises: determining, from a received text to be processed, a target text that represents the text for laughter synthesis; determining a target audio information sequence corresponding to the target text according to the target text and the text to be processed, wherein the target audio information sequence is determined from pre-labeled audio information sequences under multiple laughter types, and the audio information sequence under each laughter type comprises a target phoneme sequence, a target tone sequence, and a prosody sequence; and generating laughter audio information for the target text according to the target audio information sequence and a speech synthesis model, so as to obtain the audio information of the text to be processed. Speech synthesis can thus draw on a variety of vocalization manners and composition structures, reducing the gap between synthesized laughter and speech produced by a real person and improving the fidelity of the synthesized speech.

Description

Speech synthesis method, apparatus, medium, and electronic device
Technical Field
The present disclosure relates to the field of speech processing, and in particular, to a speech synthesis method, apparatus, medium, and electronic device.
Background
At present, speech synthesis technology brings great convenience to people's lives. For example, TTS (Text To Speech) can intelligently convert text into a natural speech stream through a neural network.
In the above text-to-speech conversion process, laughter is usually synthesized by labeling it with the phonetic phonemes of the text. In practical application scenarios, however, different kinds of text (such as laughter) may be vocalized with manners and composition structures that differ greatly from ordinary speech, so the speech synthesized by the related art still differs noticeably from speech read aloud by a real person.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a method of speech synthesis, the method comprising:
determining a target text in the text to be processed according to the received text to be processed, wherein the target text is used for representing the text for laughter synthesis;
determining a target audio information sequence corresponding to the target text according to the target text and the text to be processed, wherein the target audio information sequence is determined based on pre-labeled audio information sequences under multiple laughter types, and each audio information sequence under the laughter type comprises a target phoneme sequence, a target tone sequence and a prosody sequence;
and generating laughter audio information of the target text according to the target audio information sequence and the speech synthesis model so as to obtain the audio information of the text to be processed.
In a second aspect, the present disclosure provides a speech synthesis apparatus, the apparatus comprising:
the device comprises a first determining module, a second determining module and a processing module, wherein the first determining module is used for determining a target text in a text to be processed according to the received text to be processed, and the target text is used for representing the text for laughter synthesis;
a second determining module, configured to determine a target audio information sequence corresponding to the target text according to the target text and the text to be processed, where the target audio information sequence is determined based on pre-labeled audio information sequences in multiple laughter types, and each audio information sequence in the laughter type includes a target phoneme sequence, a target tone sequence, and a prosody sequence;
and the generating module is used for generating laughter audio information of the target text according to the target audio information sequence and the speech synthesis model so as to obtain the audio information of the text to be processed.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having one or more computer programs stored thereon;
one or more processing devices for executing the one or more computer programs in the storage device to implement the steps of the method of the first aspect.
In the above technical scheme, a target text representing laughter synthesis in the text to be processed is determined from the received text to be processed, so that a target audio information sequence corresponding to the target text can be determined from the target text and the text to be processed, and laughter audio information of the target text is generated from the target audio information sequence and a speech synthesis model to obtain the audio information of the text to be processed. The technical scheme therefore allows personalized speech synthesis for paralinguistic content such as laughter: when laughter is synthesized, a corresponding audio information sequence can be selected for the target text from multiple pre-labeled laughter types, so that a variety of vocalization manners and composition structures can be used, reducing the gap between the synthesized laughter and speech produced by a real person and effectively improving the fidelity of the synthesis. Pre-labeling multiple laughter types also increases the diversity of both the laughter labels and the synthesized laughter, improves the accuracy of the speech synthesis method, and better fits real application scenarios.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flow diagram of a method of speech synthesis provided in accordance with an embodiment of the present disclosure;
FIG. 2 is a flow diagram of an exemplary implementation of determining a target audio information sequence corresponding to a target text based on the target text and a text to be processed;
FIG. 3 is a diagram of a training audio sequence;
FIG. 4 is a block diagram of a speech synthesis apparatus provided in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart of a speech synthesis method according to an embodiment of the present disclosure, where as shown in fig. 1, the method includes:
in step 11, a target text in the text to be processed is determined according to the received text to be processed, wherein the target text is a text for representing laughter synthesis.
The text to be processed may be a text for speech synthesis, such as an article or a group of dialogues.
As an example, the target text in the text to be processed may be determined by keyword matching. In practical application scenarios, laughter in text is usually written as a run of repeated laughter characters such as "ha", "hei", "hee", and the like, so detection may be performed with these keywords, and once a run of consecutive keywords is identified it is determined to be the target text.
As another example, the target text in the text to be processed may be determined by a text prediction model. For example, texts may be labeled in advance, i.e., the parts of the text corresponding to laughter are labeled, and a neural network is trained on the labeled samples to obtain a trained text prediction model; the neural network may adopt a network model commonly used in the art, and the training procedure is not repeated here. The text prediction model can then predict the target text corresponding to the positions in the input text to be processed where laughter synthesis is needed.
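As a non-limiting sketch of the keyword-matching example above (the text-prediction-model route would replace the regular expression with a trained model), the snippet below detects runs of laughter keywords; the keyword set and function names are assumptions and are not taken from this disclosure.

```python
import re

# Assumed keyword set for romanized laughter characters; the disclosure does not fix a specific list.
LAUGH_RUN = re.compile(r"(?:hei|hee|ha|he|xi){2,}", re.IGNORECASE)

def find_laughter_targets(text_to_process: str):
    """Return (start, end, matched_text) spans of consecutive laughter keywords,
    which would be treated as the target text for laughter synthesis."""
    return [(m.start(), m.end(), m.group(0)) for m in LAUGH_RUN.finditer(text_to_process)]

print(find_laughter_targets("That is so funny hahaha, I cannot stop"))
# e.g. [(17, 23, 'hahaha')]
```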
In step 12, a target audio information sequence corresponding to the target text is determined according to the target text and the text to be processed, wherein the target audio information sequence is determined based on pre-labeled audio information sequences under multiple types of laughter, and each audio information sequence under the type of laughter includes a target phoneme sequence, a target tone sequence and a prosody sequence.
In this embodiment, multiple laughter types can be labeled in advance according to the ways laughter may be vocalized in real scenes; that is, the phoneme sequence, tone sequence, and prosody sequence under each of the multiple laughter types can be pre-labeled. When laughter is synthesized, a corresponding audio information sequence can then be selected from the multiple pre-labeled laughter types, which improves the diversity of the laughter labels as well as the diversity of the synthesized laughter and its match with practical application scenarios.
A phoneme is the smallest unit of speech, divided according to the natural attributes of speech and analyzed according to the articulatory actions within a syllable; one action forms one phoneme. Phonemes fall into two major categories, vowels and consonants. For Mandarin Chinese, the phonemes include initials (the consonants that precede the final and form a complete syllable together with it) and finals (i.e., vowels); for example, "hello" ("nihao") corresponds to the phonemes "n i h ao". The tone indicates the rise and fall of a sound. Prosody indicates where pauses should be made when the text is read aloud.
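Purely as an illustrative data layout (the field names and the "#1" pause mark are assumptions), one audio information sequence carrying the phoneme, tone, and prosody sequences might be held as follows, using the ">" / "-" / "<" tone notation introduced later in this description.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AudioInfoSequence:
    """One audio information sequence under a laughter type (illustrative layout only)."""
    phonemes: List[str]  # phoneme sequence, e.g. ["ha", "ha"] or initial/final units such as ["n", "i", "h", "ao"]
    tones: List[str]     # tone sequence; ">" descending, "-" flat, "<" ascending (notation used below)
    prosody: List[str]   # prosody sequence, e.g. pause marks such as ["#1"] (mark style assumed)

# A two-syllable descending-tone laugh, "ha>ha>":
laugh = AudioInfoSequence(phonemes=["ha", "ha"], tones=[">", ">"], prosody=["#1"])
```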
In step 13, laughter audio information of the target text is generated according to the target audio information sequence and the speech synthesis model to obtain audio information of the text to be processed.
Illustratively, for the target text in the text to be processed, a target audio information sequence can be obtained in the above manner so as to obtain the laughter audio information of the target text; for the other text in the text to be processed, speech synthesis can be performed with a related-art TTS technique to obtain the corresponding audio information, which is then combined with the laughter audio information to obtain the audio information of the whole text to be processed.
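A minimal sketch of stitching the laughter audio together with ordinary TTS output for the rest of the text; both synthesis functions are stand-ins (this disclosure does not prescribe such an interface), and the sample rate is assumed.

```python
import numpy as np

SAMPLE_RATE = 24_000  # assumed

def synthesize_plain_text(segment: str) -> np.ndarray:
    """Stand-in for a related-art TTS pipeline applied to non-laughter text."""
    return np.zeros(int(0.25 * SAMPLE_RATE) * max(len(segment), 1), dtype=np.float32)

def synthesize_laughter(target_text: str) -> np.ndarray:
    """Stand-in for the laughter path (target audio information sequence + speech synthesis model)."""
    return np.zeros(int(0.15 * SAMPLE_RATE) * max(len(target_text), 1), dtype=np.float32)

def synthesize_text_to_audio(segments):
    """segments: list of (text, is_laughter) pairs in reading order for the text to be processed."""
    chunks = [synthesize_laughter(t) if is_laugh else synthesize_plain_text(t)
              for t, is_laugh in segments]
    return np.concatenate(chunks)

audio = synthesize_text_to_audio([("That is so funny ", False), ("hahaha", True)])
```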
In this technical scheme, a target text representing laughter synthesis in the text to be processed is determined from the received text to be processed, so that a target audio information sequence corresponding to the target text can be determined from the target text and the text to be processed, and laughter audio information of the target text is generated from the target audio information sequence and a speech synthesis model to obtain the audio information of the text to be processed. The technical scheme therefore allows personalized speech synthesis for paralinguistic content such as laughter: when laughter is synthesized, a corresponding audio information sequence can be selected for the target text from multiple pre-labeled laughter types, so that a variety of vocalization manners and composition structures can be used, reducing the gap between the synthesized laughter and speech produced by a real person and effectively improving the fidelity of the synthesis. Pre-labeling multiple laughter types also increases the diversity of both the laughter labels and the synthesized laughter, improves the accuracy of the speech synthesis method, and better fits real application scenarios.
In a possible embodiment, an exemplary implementation manner of determining a target audio information sequence corresponding to the target text according to the target text and the text to be processed is as follows, as shown in fig. 2, and this step may include:
in step 21, a context text corresponding to the target text in the text to be processed is determined.
The context text corresponding to the target text may be the complete sentence to which the target text belongs. After the text to be processed is obtained, it usually needs to be split into sentences; once the target text is determined, the complete sentence to which it belongs can therefore be taken as the context text and used to analyze the scene type corresponding to the target text.
In step 22, the scene type corresponding to the target text is determined according to the context text, wherein each scene type corresponds to at least one candidate audio information sequence, and the candidate audio information sequences include audio information sequences under at least one laughter type.
The scene type may be used to characterize the language scene in which the speech is synthesized; for example, the scene types may include, but are not limited to, embarrassed, reluctant, indifferent, no emotional tendency, generally happy, very happy, and proud, and they may be set according to the actual application scene, which is not limited by this disclosure. It should be noted that each scene type is associated with candidate audio information sequences that represent the audio features used for laughter synthesis under that scene type; for example, the candidate audio information sequences under a scene type may be obtained by combining one or more of the pre-labeled audio information sequences under the multiple laughter types.
As an example, a scene type may be labeled in advance for a plurality of sentences, and the classification model is trained by using the labeled sentences as training samples, so as to obtain a trained scene classification model. The classification model may be a multi-classification model commonly used in the art, and the multi-classification model may be trained by using a sentence in a training sample as an input and using a label of a scene type corresponding to the sentence as a target output, and a specific training process is not described herein again. Thus, the contextual text may be input into the scene classification model, and the classification output by the scene classification model may be taken as the scene type.
As another example, determining the scene type corresponding to the target text according to the context text may be implemented as follows:
and determining the speaker role information corresponding to the context text. In an actual application scenario, the context including the synthesis of the smiling sound is generally a dialog of characters in text, and the text may be a novel text or a script word text, for example. Therefore, in this embodiment, the speaker role information corresponding to the context text, that is, the feature information of the person who uttered the context text, may be determined, where the feature information of each person in the text may be obtained through a person profile, for example, through a person description in a novel text or a person introduction in a scenario, so that after the speaker corresponding to the context text is determined, the speaker role information may be obtained directly according to the feature information of the speaker, for example, the speaker role information may be used to characterize the speaker as being sent or sent back, and the like.
And then, carrying out scene classification according to the context text and the speaker role information, and determining the determined classification as the scene type.
Similarly, in this step, a plurality of sentences and character features corresponding to the sentences may be labeled in advance, and the classification model may be trained based on the labeled sentences as training samples to obtain a trained scene classification model. The classification model may be a multi-classification model commonly used in the art, and the sentence in the training sample and the character features corresponding to the sentence may be used as input, and the label of the scene type corresponding to the sentence is used as target output to train the multi-classification model, and the specific training process is not described herein again. Thus, the context text and speaker character information can be input to the scene classification model, and the classification output by the scene classification model can be used as the scene type.
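This disclosure leaves the classifier open ("a multi-classification model commonly used in the art"); the sketch below substitutes a TF-IDF plus logistic-regression pipeline and toy samples simply to show the input format (context text joined with speaker role information) and is not the claimed model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled samples: "context sentence | speaker role information" -> scene type.
samples = [
    ("Well... I suppose we could try it your way, haha. | reserved junior clerk", "reluctant"),
    ("This is the best news we have had all year, hahaha! | cheerful protagonist", "very happy"),
    ("Heh. You again. | cold rival", "indifferent"),
]
texts, labels = zip(*samples)

scene_classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scene_classifier.fit(texts, labels)

def classify_scene(context_text: str, speaker_role_info: str) -> str:
    # Context text and speaker role information are fed jointly, as in step 22.
    return scene_classifier.predict([f"{context_text} | {speaker_role_info}"])[0]
```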
Therefore, in this embodiment, the scene type corresponding to the target text is determined by combining the content of the context text with the speaker role information (i.e., character features) corresponding to it, which improves the accuracy of the determined scene type and provides accurate data support for synthesizing the laughter corresponding to the target text. It also improves, to a certain extent, how well the synthesized laughter matches the scene of the text content, further improving the fidelity of the speech synthesis.
In step 23, a target audio information sequence corresponding to the target text is generated according to the candidate audio information sequence under the scene type.
With this technical scheme, the scene to which the target text belongs can be predicted from the context information corresponding to the target text, so that the target audio information sequence corresponding to the target text can be generated from the candidate audio information sequences under the determined scene type. The target audio information sequence thus matches the scene of the target text, enabling personalized speech synthesis for the target text while further increasing the diversity of laughter synthesis, fitting the paralinguistic vocalizations of spoken dialogue in practical application scenarios, and improving how well the synthesized speech expresses the text content.
As an example, analyzing laughter audio occurring in a practical application scene, the applicant has found the following:
1. A laugh at the beginning or in the middle of a sentence will typically end with a recovery-type inhalation segment (the vocal cords may or may not vibrate), i.e., the offset in the table below, which lets the speaker continue speaking after catching a breath; a laugh at the end of a sentence will typically not have an inhalation segment.
2. The beginning of laughter may have a vowel start segment, onset in the table below.
3. The vowel segment of a preceding laughter unit is often influenced by the consonant segment of the following unit: its second half becomes aspirated and its formants shift. The onset of a laugh may also include a release phase similar to a plosive, producing a vertical burst bar on the spectrogram.
Therefore, in the embodiment of the present disclosure, the candidate audio information sequences under a plurality of scene types are preset based on these laughter features as follows:
type of scene Candidate audio information sequence Number of syllables of laughter
Embarrassment hn>、he>、ha> Single, double
Is barely about he-、ha> Single, double
All-grass of indifference hx Sheet
No emotional tendency hn>、he>、ha>、ha-、h Single, double
General care is taken hn>、hn-、he>、ha>、hx、hy> Single, double and multiple
Very happy hn>、hn<、he>、he<、ha>、hx、hy> Multiple purpose
To obtain an intention hy-、hy< Single, double and multiple
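The table above can be carried as a simple lookup structure; the dictionary below merely transcribes it (the key names and the syllable-count encoding are assumptions).

```python
# Candidate audio information sequences per scene type, transcribed from the table above.
# ">" marks a descending tone, "-" a flat tone, "<" an ascending tone.
CANDIDATES_BY_SCENE = {
    "embarrassed":           (["hn>", "he>", "ha>"],                            {"single", "double"}),
    "reluctant":             (["he-", "ha>"],                                   {"single", "double"}),
    "indifferent":           (["hx"],                                           {"single"}),
    "no emotional tendency": (["hn>", "he>", "ha>", "ha-", "h"],                {"single", "double"}),
    "generally happy":       (["hn>", "hn-", "he>", "ha>", "hx", "hy>"],        {"single", "double", "multiple"}),
    "very happy":            (["hn>", "hn<", "he>", "he<", "ha>", "hx", "hy>"], {"multiple"}),
    "proud":                 (["hy-", "hy<"],                                   {"single", "double", "multiple"}),
}

candidates, syllable_counts = CANDIDATES_BY_SCENE["reluctant"]
```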
Wherein, each phoneme sequence of the audio information sequence under the pre-labeled multiple laughter types is characterized as follows:
[Table not reproduced in the source text: it characterizes the phoneme sequences under each pre-labeled laughter type, including the inhalation-segment types labeled vd and uvd, the sounding types 1-8, and the variants 1 and 2 referenced below.]
as shown in the above table, the plurality of types of laughter include a type of laughter corresponding to an inhalation segment and a type of laughter corresponding to an exhalation segment,
wherein the laugh types of the inspiratory segment include a first laugh type (e.g., the type labeled vd in the above table) corresponding to vocal cord vibration and a second laugh type (e.g., the type labeled uvd in the above table) corresponding to vocal cord non-vibration;
the laughter type of the expiratory segment includes a voiced laughter type corresponding to the laughter utterance stage and an activated laughter type corresponding to the laughter activation stage, each syllable in the phone sequence under the voiced laughter type being constructed based on consonants and vowels, and each syllable in the phone sequence under the activated laughter type being constructed based on vowels. The sounding laughter type can be shown as types 1-8 in the above table, and the starting laughter type can be shown as variants 1 and 2 in the above table, that is, the laughter types corresponding to variants 1 and 2 are used to represent the vowel starting segment existing at the beginning of the laughter.
It should be noted that the division of the laugh types shown in the above table is an exemplary description, and the present disclosure is not limited thereto, and other laugh types and labels can be determined based on the actual application scene based on the above classification, so that the fineness and diversity of the laugh types are improved, thereby improving the diversity of laugh synthesis and the personification and reality of the synthesized laugh.
In the determined target audio information sequence, different phonemes corresponding to the same syllable have the same tone and prosody, and the tones include an ascending tone, a flat tone, and a descending tone. The symbol following a phoneme sequence in an audio information sequence represents its tone information, where ">" represents a descending tone, "-" a flat tone, and "<" an ascending tone. After the scene type is determined, the target audio information sequence corresponding to the target text can therefore be determined from the candidate audio information sequences under that scene type. Illustratively, if the determined scene type is "reluctant", the target audio information sequence can be determined from the candidate audio information sequences "he-" and "ha>" corresponding to "reluctant"; if the target text is "haha", then "ha>" may be selected from those candidates to determine the phoneme sequence and tone sequence, yielding the target phoneme sequence and target tone sequence "ha>ha>".
Meanwhile, the prosodic features are determined according to the corresponding number of laughter syllables ("single", "double", and so on), and can be generated according to the default setting of the scene type or according to the actual number of syllables. Laughter can thus be characterized by a variety of phoneme sequences and tone sequences, producing the laughter features appropriate to each scene, so the synthesized laughter better matches the way a real person vocalizes it and the accuracy of the speech synthesis is improved.
In a possible embodiment, an exemplary implementation manner of generating a target audio information sequence corresponding to a target text according to a candidate audio information sequence under a scene type is as follows, and the steps may include:
and determining a phoneme sequence and a tone sequence corresponding to each syllable contained in the target text according to the candidate audio information sequence corresponding to the scene type. The manner of determining the phoneme sequence and the tone sequence in this step is similar to that described above, and is not described herein again.
And determining a target audio information sequence corresponding to the target text according to the phoneme sequence and tone sequence corresponding to each syllable contained in the target text and the number of syllables in the target text.
As an example, when the number of syllables in the target text is less than the preset threshold, the target audio information sequence corresponding to the target text may be determined from the phoneme sequence and tone sequence corresponding to each syllable of the target text, as determined from the candidate audio information sequences. For example, suppose the phoneme sequence and tone sequence determined from the candidate audio information sequence for the syllables of the target text are "ha>" and "uvd". When the number of syllables is smaller than the threshold, the sequences corresponding to the syllables can be concatenated, and the resulting target phoneme sequence and target tone sequence for the target text can be represented as "ha>ha>uvd". The prosody sequence is determined in a manner similar to that described above and is not repeated here.
As another example, when the number of syllables in the target text is greater than or equal to the preset threshold, a selection can be made from the phoneme sequences and tone sequences corresponding to the syllables of the target text based on the number of syllables, and the target audio information sequence can be determined accordingly. For example, in a novel the text "hahahahahahahahahaha" may represent a laugh, but in actual speech synthesis it is not voiced with one syllable per written character. For such a long text, the selection can be made according to the number of syllables; for example, the preset threshold may be set to 6 in this scene, and a target phoneme sequence and target tone sequence containing 3, 4, or 5 syllables may be selected as the target audio information sequence, with the specific target phoneme sequence and target tone sequence determined from the phoneme sequence and tone corresponding to each of those syllables.
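A hedged sketch of the syllable-count handling just described: below the threshold every syllable of the target text is kept, while long runs such as "hahahaha..." are shortened to a smaller number of syllables (the 3-5 range follows the example above; the function name and interface are assumptions).

```python
import random

SYLLABLE_THRESHOLD = 6  # example threshold from the scenario described above

def build_target_sequence(syllable_unit, n_syllables, threshold=SYLLABLE_THRESHOLD):
    """syllable_unit: the (phoneme, tone) pair chosen from the candidate sequence, e.g. ("ha", ">").
    Returns (target_phoneme_sequence, target_tone_sequence)."""
    if n_syllables < threshold:
        count = n_syllables                   # keep one unit per syllable of the target text
    else:
        count = random.choice([3, 4, 5])      # long written laughs are not voiced syllable-for-syllable
    phoneme, tone = syllable_unit
    return [phoneme] * count, [tone] * count

# "haha" under the "reluctant" scene -> (["ha", "ha"], [">", ">"]), i.e. "ha>ha>"
print(build_target_sequence(("ha", ">"), n_syllables=2))
```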
With this technical scheme, when generating a diverse audio information sequence for the target text, the number of syllables contained in the target text can also be taken into account, which further improves the accuracy of the determined target audio information sequence and its suitability for the actual application scene, further improves the fidelity of the speech synthesis, and improves the user experience.
In one possible embodiment, the speech synthesis model is obtained by:
obtaining training samples, wherein each training sample comprises a training audio sequence and a labeling sequence of the training audio sequence, and the labeling sequence comprises a training phoneme sequence, a training tone sequence and a training prosody sequence corresponding to the training audio sequence.
The training phoneme sequence, the training tone sequence and the training prosody sequence can be obtained by performing artificial labeling based on the laughter feature and the phoneme sequence.
As an example, the labeling sequence of the training audio sequence in each training sample is determined based on a spectrogram of the training audio and an audio information sequence under the pre-labeled multiple types of laughter, wherein the types of laughter include a type of laughter corresponding to an inspiratory segment and a type of laughter corresponding to an expiratory segment, the audio information sequence under the type of laughter corresponding to the inspiratory segment is labeled in the inspiratory segment determined based on the spectrogram, and the audio information sequence under the type of laughter corresponding to the expiratory segment is labeled in the expiratory segment determined based on the spectrogram. The pre-labeled laugh types are described in detail above, and it should be noted that the corresponding laugh types in the table are exemplary and do not limit the number of laugh types and the like in the present disclosure.
For example, FIG. 3 illustrates a spectrogram of a training audio sequence. The example in FIG. 3 contains three audio waveforms, each corresponding to one syllable. From this syllable sequence the training text can be labeled to obtain the training phoneme sequence, training tone sequence, and training prosody sequence "ha>ha>uvd" together with the corresponding prosodic phrase pauses, where the multiple pre-labeled laughter types in the table can be labeled manually to obtain the labeling information: in the first layer, the exhalation segments are labeled with the approximate Chinese characters for "haha" and the inhalation segment is marked sp to indicate a short in-sentence pause; in the second layer, the exhalation segments are labeled with the training phoneme sequence and training tone sequence, and the inhalation segment is labeled uvd to indicate that the vocal cords do not vibrate.
For each training audio sequence, the vectors corresponding to its labeling sequences are spliced to obtain a spliced vector, and the spliced vector is input into the encoder of a preset model to obtain the feature vector corresponding to the training audio sequence. The labeling sequences can be converted into vectors by vectorizing (embedding) the training phoneme sequence, training tone sequence, and training prosody sequence corresponding to the training audio sequence. The splicing may, for example, be performed sequentially, such as with a concat() function, to obtain the spliced vector. The preset model may be a Tacotron model, an end-to-end speech synthesis framework that includes an encoder, an attention module, and a decoder.
The feature vector is input into the attention module of the preset model to obtain a context vector corresponding to the feature vector; the context vector is input into the decoder of the preset model to obtain the synthetic audio information corresponding to the training audio sequence, which may be, for example, a sequence of mel-spectrum (Mel) frames.
Inputting the feature vector into the attention module for calculation allows laughter-related features to receive more attention during speech synthesis, so the accuracy of the synthesized audio information can be improved to a certain extent when decoding is performed based on the context vector. The specific processing of the attention module and the encoder may adopt calculations commonly used in Tacotron models in the art and is not repeated here.
And determining the target loss of the preset model according to the synthetic audio information and the target audio information extracted from the training audio sequence, and training the preset model according to the target loss to obtain the voice synthesis model.
For example, feature extraction may be performed on the training audio sequence, for instance taking the mel spectrum extracted from the training audio sequence as the target audio information, so that an MSE (mean squared error) can be computed between the synthetic audio information and the target audio information to obtain the target loss of the preset model. As an example, when the target loss is greater than a loss threshold, the parameters of the preset model may be updated by an Adam optimizer according to the target loss so as to train the preset model.
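A compressed PyTorch sketch of the training step just described. It is not the model of this disclosure: the encoder, attention module, and decoder of the Tacotron-style preset model are replaced by minimal stand-ins (a GRU encoder, self-attention, a GRU decoder) so that the data flow (embed and splice the labeling sequences, encode, attend, decode to mel frames, compute the MSE target loss, update with Adam) remains visible; all sizes are assumptions.

```python
import torch
import torch.nn as nn

N_PHONEMES, N_TONES, N_PROSODY = 64, 8, 8   # vocabulary sizes are assumptions
EMB, HIDDEN, N_MELS = 32, 128, 80

class LaughterTTS(nn.Module):
    def __init__(self):
        super().__init__()
        self.phoneme_emb = nn.Embedding(N_PHONEMES, EMB)
        self.tone_emb = nn.Embedding(N_TONES, EMB)
        self.prosody_emb = nn.Embedding(N_PROSODY, EMB)
        self.encoder = nn.GRU(3 * EMB, HIDDEN, batch_first=True)           # stands in for the Tacotron encoder
        self.attention = nn.MultiheadAttention(HIDDEN, 4, batch_first=True)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)            # stands in for the Tacotron decoder
        self.mel_proj = nn.Linear(HIDDEN, N_MELS)

    def forward(self, phonemes, tones, prosody):
        # Vectorize each labeling sequence and splice the vectors into one spliced vector.
        x = torch.cat([self.phoneme_emb(phonemes),
                       self.tone_emb(tones),
                       self.prosody_emb(prosody)], dim=-1)
        feats, _ = self.encoder(x)                        # feature vector for the training sequence
        context, _ = self.attention(feats, feats, feats)  # context vector from the attention module
        out, _ = self.decoder(context)
        return self.mel_proj(out)                         # predicted mel-spectrum frames

model = LaughterTTS()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# One toy training step on random tensors standing in for a labeled training sample.
phonemes = torch.randint(0, N_PHONEMES, (1, 6))
tones = torch.randint(0, N_TONES, (1, 6))
prosody = torch.randint(0, N_PROSODY, (1, 6))
target_mel = torch.randn(1, 6, N_MELS)                   # mel spectrum extracted from the training audio

pred_mel = model(phonemes, tones, prosody)
loss = criterion(pred_mel, target_mel)                   # target loss between synthetic and target audio information
optimizer.zero_grad()
loss.backward()
optimizer.step()
```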
Therefore, with this technical scheme, the phoneme sequence, tone sequence, and prosody sequence corresponding to the training audio sequence are combined for speech synthesis, which improves the accuracy of the speech synthesis to a certain extent, widens the application range of the speech synthesis model, and enables it to adapt to and support the multiple vocalization modes of laughter.
Accordingly, when the laughter audio information of the target text is generated from the target audio information sequence and the speech synthesis model, the target audio information sequence may be input into the speech synthesis model, which processes it and outputs the audio information.
In one possible embodiment, the method may further comprise:
and inputting the audio information of the text to be processed into a vocoder (vocoder) to obtain the voice information corresponding to the text to be processed, and outputting the voice information. Illustratively, the vocoder may be configured to generate a time domain waveform, i.e., speech information, from a sequence of predicted mel-frequency spectrum frames. Therefore, the voice information can be further generated to be output to the user, reading convenience is provided for the user, the output voice information fits a real reading mode of the user, and the user experience is improved.
Based on the same inventive concept, the present disclosure also provides a speech synthesis apparatus, as shown in fig. 4, the apparatus 10 includes:
a first determining module 100, configured to determine a target text in a to-be-processed text according to the received to-be-processed text, where the target text is a text used for representing laughter synthesis;
a second determining module 200, configured to determine, according to the target text and the text to be processed, a target audio information sequence corresponding to the target text, where the target audio information sequence is determined based on pre-labeled audio information sequences in multiple types of laughter, and each audio information sequence in the type of laughter includes a target phoneme sequence, a target intonation sequence, and a prosody sequence;
a generating module 300, configured to generate laughter audio information of the target text according to the target audio information sequence and the speech synthesis model, so as to obtain audio information of the text to be processed.
Optionally, the speech synthesis model is obtained by:
acquiring training samples, wherein each training sample comprises a training audio sequence and a labeling sequence of the training audio sequence, and the labeling sequence comprises a training phoneme sequence, a training tone sequence and a training prosody sequence corresponding to the training audio sequence;
for each training audio sequence, splicing vectors corresponding to all labeling sequences of the training audio sequence to obtain a spliced vector, and inputting the spliced vector into an encoder of a preset model to obtain a feature vector corresponding to the training audio sequence;
inputting the feature vector into an attention module of the preset model to obtain a context vector corresponding to the feature vector;
inputting the context vector into a decoder of a preset model to obtain synthetic audio information corresponding to the training audio sequence;
and determining the target loss of the preset model according to the synthetic audio information and the target audio information extracted from the training audio sequence, and training the preset model according to the target loss to obtain the voice synthesis model.
Optionally, the labeled sequence of the training audio sequence in each training sample is determined based on a spectrogram of the training audio and an audio information sequence under multiple pre-labeled laughing sound types, wherein the laughing sound types include a laughing sound type corresponding to an inspiratory segment and a laughing sound type corresponding to an expiratory segment, the audio information sequence under the laughing sound type corresponding to the inspiratory segment is labeled in the inspiratory segment determined based on the spectrogram, and the audio information sequence under the laughing sound type corresponding to the expiratory segment is labeled in the expiratory segment determined based on the spectrogram.
Optionally, in the target audio information sequence, different phones corresponding to the same syllable have the same tone and prosody, and the tone includes an ascending tone, a flat tone, and a descending tone.
Optionally, the second determining module includes:
the first determining submodule is used for determining a context text corresponding to the target text in the text to be processed;
a second determining submodule, configured to determine, according to the context text, scene types corresponding to the target text, where at least one candidate audio information sequence corresponds to each scene type, and the candidate audio information sequences include audio information sequences in at least one laughter type;
and the generating submodule is used for generating a target audio information sequence corresponding to the target text according to the candidate audio information sequence under the scene type.
Optionally, the second determining sub-module includes:
a third determining submodule, configured to determine speaker role information corresponding to the context text;
and the fourth determining submodule is used for carrying out scene classification according to the context text and the speaker role information and determining the determined classification as the scene type.
Optionally, the generating sub-module includes:
a fifth determining submodule, configured to determine, according to the candidate audio information sequence corresponding to the scene type, a phoneme sequence and a tone sequence corresponding to each syllable included in the target text;
and the sixth determining submodule is used for determining a target audio information sequence corresponding to the target text according to the phoneme sequence and the tone sequence corresponding to each syllable contained in the target text and the number of syllables in the target text.
Optionally, the plurality of types of laughter includes a type of laughter corresponding to an inhalation segment and a type of laughter corresponding to an exhalation segment,
wherein the laugh types of the inspiratory segment include a first laugh type corresponding to vocal cord vibration and a second laugh type corresponding to vocal cord non-vibration;
the laughter type of the expiratory segment includes a voiced laughter type corresponding to the laughter utterance stage and an activated laughter type corresponding to the laughter activation stage, each syllable in the phone sequence under the voiced laughter type being constructed based on consonants and vowels, and each syllable in the phone sequence under the activated laughter type being constructed based on vowels.
Referring now to FIG. 5, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a target text in the text to be processed according to the received text to be processed, wherein the target text is used for representing the text for laughter synthesis; determining a target audio information sequence corresponding to the target text according to the target text and the text to be processed, wherein the target audio information sequence is determined based on pre-labeled audio information sequences under multiple laughter types, and each audio information sequence under the laughter type comprises a target phoneme sequence, a target tone sequence and a prosody sequence; and generating laughter audio information of the target text according to the target audio information sequence and the speech synthesis model so as to obtain the audio information of the text to be processed.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the module does not in some cases constitute a limitation on the module itself, for example, the first determining module may also be described as a "module for determining a target text in a received text to be processed according to the received text to be processed".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides a speech synthesis method according to one or more embodiments of the present disclosure, wherein the method includes:
determining a target text in the text to be processed according to the received text to be processed, wherein the target text is used for representing the text for laughter synthesis;
determining a target audio information sequence corresponding to the target text according to the target text and the text to be processed, wherein the target audio information sequence is determined based on pre-labeled audio information sequences under multiple laughter types, and the audio information sequence under each laughter type comprises a target phoneme sequence, a target tone sequence and a prosody sequence;
and generating laughter audio information of the target text according to the target audio information sequence and the speech synthesis model so as to obtain the audio information of the text to be processed.
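For orientation only, the following Python sketch illustrates the three steps of Example 1 under stated assumptions: every identifier (AudioInfoSequence, detect_laughter_spans, select_audio_info_sequence, model.generate) is a hypothetical stand-in rather than a name from this disclosure, and the laughter detection and sequence selection are reduced to trivial placeholders.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AudioInfoSequence:
        phonemes: List[str]   # target phoneme sequence
        tones: List[str]      # target tone sequence (rising / level / falling)
        prosody: List[str]    # prosody sequence, e.g. break indices

    def detect_laughter_spans(text: str) -> List[str]:
        # Placeholder for step 1: treat onomatopoeic tokens such as "haha..." as target text.
        return [tok for tok in text.split() if tok.lower().startswith("haha")]

    def select_audio_info_sequence(target: str, full_text: str) -> AudioInfoSequence:
        # Placeholder for step 2: return one pre-labeled sequence; a real system would
        # choose among sequences labeled under multiple laughter types.
        return AudioInfoSequence(phonemes=["h", "a", "h", "a"],
                                 tones=["level", "level", "falling", "falling"],
                                 prosody=["#1", "#1", "#2", "#2"])

    def synthesize(text_to_process: str, model) -> list:
        chunks = []
        for target in detect_laughter_spans(text_to_process):            # step 1
            seq = select_audio_info_sequence(target, text_to_process)    # step 2
            chunks.append(model.generate(seq.phonemes, seq.tones, seq.prosody))  # step 3
        return chunks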
Example 2 provides the method of example 1, wherein the speech synthesis model is obtained by:
acquiring training samples, wherein each training sample comprises a training audio sequence and a labeling sequence of the training audio sequence, and the labeling sequence comprises a training phoneme sequence, a training tone sequence and a training prosody sequence corresponding to the training audio sequence;
for each training audio sequence, splicing vectors corresponding to all labeling sequences of the training audio sequence to obtain a spliced vector, and inputting the spliced vector into an encoder of a preset model to obtain a feature vector corresponding to the training audio sequence;
inputting the feature vector into an attention module of the preset model to obtain a context vector corresponding to the feature vector;
inputting the context vector into a decoder of the preset model to obtain synthetic audio information corresponding to the training audio sequence;
and determining the target loss of the preset model according to the synthetic audio information and the target audio information extracted from the training audio sequence, and training the preset model according to the target loss to obtain the speech synthesis model.
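A minimal training sketch of the encoder-attention-decoder setup described in Example 2, assuming PyTorch and dummy data. The module sizes, the self-attention-plus-interpolation stand-in for a learned alignment, and the mel-spectrogram target are illustrative assumptions, not the disclosure's actual preset model.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LaughterTTS(nn.Module):
        def __init__(self, n_phonemes=64, n_tones=8, n_prosody=8, dim=128, n_mels=80):
            super().__init__()
            self.phoneme_emb = nn.Embedding(n_phonemes, dim)
            self.tone_emb = nn.Embedding(n_tones, dim)
            self.prosody_emb = nn.Embedding(n_prosody, dim)
            self.encoder = nn.GRU(3 * dim, dim, batch_first=True)
            self.attention = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
            self.decoder = nn.GRU(dim, dim, batch_first=True)
            self.to_mel = nn.Linear(dim, n_mels)

        def forward(self, phonemes, tones, prosody, n_frames):
            # Splice the vectors of the three labeling sequences into one spliced vector.
            x = torch.cat([self.phoneme_emb(phonemes),
                           self.tone_emb(tones),
                           self.prosody_emb(prosody)], dim=-1)
            feats, _ = self.encoder(x)                    # feature vectors
            ctx, _ = self.attention(feats, feats, feats)  # context vectors (self-attention)
            # Crude length regulation in place of a learned attention alignment.
            ctx = F.interpolate(ctx.transpose(1, 2), size=n_frames).transpose(1, 2)
            out, _ = self.decoder(ctx)
            return self.to_mel(out)                       # synthetic audio information (mel frames)

    model = LaughterTTS()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    phonemes = torch.randint(0, 64, (2, 12))   # dummy labeling sequences (batch of 2)
    tones = torch.randint(0, 8, (2, 12))
    prosody = torch.randint(0, 8, (2, 12))
    target_mel = torch.randn(2, 100, 80)       # target audio information extracted from training audio
    loss = F.mse_loss(model(phonemes, tones, prosody, n_frames=100), target_mel)  # target loss
    loss.backward()
    optimizer.step()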
Example 3 provides the method of example 2, wherein the labeling sequence of the training audio sequence in each of the training samples is determined based on a spectrogram of the training audio and the pre-labeled audio information sequences under the multiple laughter types, the laughter types including a laughter type corresponding to an inspiratory segment and a laughter type corresponding to an expiratory segment, wherein the audio information sequence under the laughter type corresponding to the inspiratory segment is labeled in the inspiratory segment determined based on the spectrogram, and the audio information sequence under the laughter type corresponding to the expiratory segment is labeled in the expiratory segment determined based on the spectrogram.
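The annotation scheme of Example 3 could be represented roughly as follows; the segment fields and the laughter-type names are hypothetical illustrations, not terms defined by the disclosure.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LabeledLaughterSegment:
        start_s: float        # segment boundaries read off the spectrogram
        end_s: float
        breath: str           # "inspiratory" or "expiratory"
        laughter_type: str    # one of the types allowed for this breath direction
        phonemes: List[str] = field(default_factory=list)
        tones: List[str] = field(default_factory=list)
        prosody: List[str] = field(default_factory=list)

    def allowed_laughter_types(breath: str) -> List[str]:
        # Hypothetical type names; see Example 8 for the distinctions they stand for.
        if breath == "inspiratory":
            return ["inhale_voiced", "inhale_unvoiced"]
        return ["exhale_voiced", "exhale_activated"]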
Example 4 provides the method of example 1, wherein, in the target audio information sequence, different phonemes corresponding to the same syllable have the same tone and prosody, the tone including a rising tone, a level tone, and a falling tone.
Example 5 provides the method of example 1, wherein,
determining a target audio information sequence corresponding to the target text according to the target text and the text to be processed, including:
determining a context text corresponding to the target text in the text to be processed;
determining a scene type corresponding to the target text according to the context text, wherein each scene type corresponds to at least one candidate audio information sequence, and the candidate audio information sequence comprises audio information sequences under at least one laughter type;
and generating a target audio information sequence corresponding to the target text according to the candidate audio information sequence under the scene type.
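A sketch of Example 5 under illustrative assumptions: the scene labels, the per-syllable candidate sequences, and the 50-character context window are invented for the example; a real system would use its own scene inventory and pre-labeled candidates.

    # Each candidate stores, per syllable, a phoneme list plus one tone and one prosody mark.
    SCENE_CANDIDATES = {
        "polite": [dict(phonemes=[["h", "e"], ["h", "e"]],
                        tones=["level", "falling"],
                        prosody=["#1", "#2"])],
        "amused": [dict(phonemes=[["h", "a"], ["h", "a"], ["h", "a"]],
                        tones=["rising", "level", "falling"],
                        prosody=["#1", "#1", "#2"])],
    }

    def candidates_for(target: str, text: str, classify_scene) -> list:
        # classify_scene is supplied by the caller (see the sketch after Example 6).
        i = max(text.find(target), 0)
        context = text[max(0, i - 50): i + len(target) + 50]   # crude context window
        scene = classify_scene(context)
        return SCENE_CANDIDATES.get(scene, SCENE_CANDIDATES["amused"])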
Example 6 provides the method of example 5 according to one or more embodiments of the present disclosure, wherein the determining the scene type corresponding to the target text according to the context text includes:
determining speaker role information corresponding to the context text;
and performing scene classification according to the context text and the speaker role information, and taking the resulting classification as the scene type.
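A deliberately naive stand-in for the scene classification of Example 6; the keyword rules and role heuristic below are placeholders for whatever classifier an implementation actually uses.

    def infer_speaker_role(context: str) -> str:
        # Placeholder: a real system might read script role tags or use a dialogue model.
        return "character" if '"' in context or ":" in context else "narrator"

    def classify_scene(context: str) -> str:
        role = infer_speaker_role(context)
        lowered = context.lower()
        if role == "character" and any(w in lowered for w in ("sorry", "excuse me", "thank")):
            return "polite"
        return "amused"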
Example 7 provides the method of example 5, wherein the generating a target audio information sequence corresponding to the target text according to the candidate audio information sequence in the scene type includes:
determining a phoneme sequence and a tone sequence corresponding to each syllable contained in the target text according to the candidate audio information sequence corresponding to the scene type;
and determining a target audio information sequence corresponding to the target text according to the phoneme sequence and tone sequence corresponding to each syllable contained in the target text and the number of syllables in the target text.
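A sketch of Example 7, reusing the candidate format from the Example 5 sketch: each syllable of the target text is expanded from a per-syllable phoneme/tone/prosody pattern, and, in line with Example 4, all phonemes of one syllable share the same tone and prosody. The rule for targets longer than the candidate pattern is an assumption.

    def build_target_sequence(candidate: dict, n_syllables: int) -> dict:
        phonemes, tones, prosody = [], [], []
        for i in range(n_syllables):
            j = min(i, len(candidate["tones"]) - 1)   # reuse the last syllable pattern if the target is longer
            syl = candidate["phonemes"][j]
            phonemes.extend(syl)
            # All phonemes of one syllable share the same tone and prosody (Example 4).
            tones.extend([candidate["tones"][j]] * len(syl))
            prosody.extend([candidate["prosody"][j]] * len(syl))
        return dict(phonemes=phonemes, tones=tones, prosody=prosody)

    # e.g. build_target_sequence(SCENE_CANDIDATES["amused"][0], n_syllables=4)
    # expands a three-syllable "ha ha ha" pattern to four syllables of "ha".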
Example 8 provides the method of example 1, wherein the multiple laughter types include a laughter type corresponding to an inspiratory segment and a laughter type corresponding to an expiratory segment,
wherein the laughter types of the inspiratory segment include a first laughter type corresponding to vocal cord vibration and a second laughter type corresponding to vocal cord non-vibration;
and the laughter types of the expiratory segment include a voiced laughter type corresponding to the laughter utterance stage and an activated laughter type corresponding to the laughter activation stage, each syllable in the phoneme sequence under the voiced laughter type being constructed based on consonants and vowels, and each syllable in the phoneme sequence under the activated laughter type being constructed based on vowels.
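The taxonomy of Example 8 could be encoded as follows; the type names and the vowel set are illustrative assumptions.

    LAUGHTER_TYPES = {
        "inspiratory": {
            "inhale_voiced":   dict(vocal_cords_vibrate=True),
            "inhale_unvoiced": dict(vocal_cords_vibrate=False),
        },
        "expiratory": {
            "exhale_voiced":    dict(syllable_shape="consonant+vowel"),  # e.g. "ha", "he"
            "exhale_activated": dict(syllable_shape="vowel"),            # e.g. "a"
        },
    }

    VOWELS = set("aeiou")

    def syllable_fits_type(laughter_type: str, syllable: list) -> bool:
        # Enforce the syllable construction rules of the two expiratory types.
        if laughter_type == "exhale_voiced":
            return len(syllable) >= 2 and syllable[0] not in VOWELS and syllable[-1] in VOWELS
        if laughter_type == "exhale_activated":
            return all(p in VOWELS for p in syllable)
        return True   # no structural constraint stated for the inspiratory types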
Example 9 provides a speech synthesis apparatus according to one or more embodiments of the present disclosure, wherein the apparatus includes:
a first determining module, configured to determine a target text in the text to be processed according to the received text to be processed, wherein the target text represents the text for which laughter is to be synthesized;
a second determining module, configured to determine a target audio information sequence corresponding to the target text according to the target text and the text to be processed, wherein the target audio information sequence is determined based on pre-labeled audio information sequences under multiple laughter types, and the audio information sequence under each laughter type comprises a target phoneme sequence, a target tone sequence, and a prosody sequence;
and a generating module, configured to generate laughter audio information of the target text according to the target audio information sequence and the speech synthesis model so as to obtain the audio information of the text to be processed.
Example 10 provides a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processing device, implements the steps of the method of any of examples 1-8, in accordance with one or more embodiments of the present disclosure.
Example 11 provides, in accordance with one or more embodiments of the present disclosure, an electronic device, comprising:
a storage device having one or more computer programs stored thereon;
one or more processing devices for executing the one or more computer programs in the storage device to implement the steps of the method of any of examples 1-8.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, technical solutions formed by mutually replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure are also covered.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Claims (11)

1. A method of speech synthesis, the method comprising:
determining a target text in the text to be processed according to the received text to be processed, wherein the target text represents the text for which laughter is to be synthesized;
determining a target audio information sequence corresponding to the target text according to the target text and the text to be processed, wherein the target audio information sequence is determined based on pre-labeled audio information sequences under multiple laughter types, and the audio information sequence under each laughter type comprises a target phoneme sequence, a target tone sequence and a prosody sequence;
and generating laughter audio information of the target text according to the target audio information sequence and the speech synthesis model so as to obtain the audio information of the text to be processed.
2. The method of claim 1, wherein the speech synthesis model is obtained by:
acquiring training samples, wherein each training sample comprises a training audio sequence and a labeling sequence of the training audio sequence, and the labeling sequence comprises a training phoneme sequence, a training tone sequence and a training prosody sequence corresponding to the training audio sequence;
for each training audio sequence, splicing vectors corresponding to all labeling sequences of the training audio sequence to obtain a spliced vector, and inputting the spliced vector into an encoder of a preset model to obtain a feature vector corresponding to the training audio sequence;
inputting the feature vector into an attention module of the preset model to obtain a context vector corresponding to the feature vector;
inputting the context vector into a decoder of the preset model to obtain synthetic audio information corresponding to the training audio sequence;
and determining the target loss of the preset model according to the synthetic audio information and the target audio information extracted from the training audio sequence, and training the preset model according to the target loss to obtain the speech synthesis model.
3. The method of claim 2, wherein the labeling sequence of the training audio sequence in each of the training samples is determined based on a spectrogram of the training audio and the pre-labeled audio information sequences under the multiple laughter types, the laughter types comprising a laughter type corresponding to an inspiratory segment and a laughter type corresponding to an expiratory segment, wherein the audio information sequence under the laughter type corresponding to the inspiratory segment is labeled in the inspiratory segment determined based on the spectrogram, and the audio information sequence under the laughter type corresponding to the expiratory segment is labeled in the expiratory segment determined based on the spectrogram.
4. The method of claim 1, wherein, in the target audio information sequence, different phonemes corresponding to the same syllable have the same tone and prosody, and wherein the tone comprises a rising tone, a level tone, and a falling tone.
5. The method according to claim 1, wherein the determining a target audio information sequence corresponding to the target text according to the target text and the text to be processed comprises:
determining a context text corresponding to the target text in the text to be processed;
determining a scene type corresponding to the target text according to the context text, wherein each scene type corresponds to at least one candidate audio information sequence, and the candidate audio information sequence comprises audio information sequences under at least one laughter type;
and generating a target audio information sequence corresponding to the target text according to the candidate audio information sequence under the scene type.
6. The method of claim 5, wherein the determining a scene type corresponding to the target text according to the context text comprises:
determining speaker role information corresponding to the context text;
and performing scene classification according to the context text and the speaker role information, and taking the resulting classification as the scene type.
7. The method according to claim 5, wherein the generating a target audio information sequence corresponding to the target text according to the candidate audio information sequence under the scene type comprises:
determining a phoneme sequence and a tone sequence corresponding to each syllable contained in the target text according to the candidate audio information sequence corresponding to the scene type;
and determining a target audio information sequence corresponding to the target text according to the phoneme sequence and tone sequence corresponding to each syllable contained in the target text and the number of syllables in the target text.
8. The method of claim 1, wherein the multiple laughter types include a laughter type corresponding to an inspiratory segment and a laughter type corresponding to an expiratory segment,
wherein the laughter types of the inspiratory segment include a first laughter type corresponding to vocal cord vibration and a second laughter type corresponding to vocal cord non-vibration;
and the laughter types of the expiratory segment include a voiced laughter type corresponding to the laughter utterance stage and an activated laughter type corresponding to the laughter activation stage, each syllable in the phoneme sequence under the voiced laughter type being constructed based on consonants and vowels, and each syllable in the phoneme sequence under the activated laughter type being constructed based on vowels.
9. A speech synthesis apparatus, characterized in that the apparatus comprises:
a first determining module, configured to determine a target text in a text to be processed according to the received text to be processed, wherein the target text represents the text for which laughter is to be synthesized;
a second determining module, configured to determine a target audio information sequence corresponding to the target text according to the target text and the text to be processed, wherein the target audio information sequence is determined based on pre-labeled audio information sequences under multiple laughter types, and the audio information sequence under each laughter type comprises a target phoneme sequence, a target tone sequence, and a prosody sequence;
and a generating module, configured to generate laughter audio information of the target text according to the target audio information sequence and the speech synthesis model so as to obtain the audio information of the text to be processed.
10. A computer-readable medium, on which a computer program is stored, characterized in that the program, when executed by a processing device, implements the steps of the method of any one of claims 1 to 8.
11. An electronic device, comprising:
a storage device having one or more computer programs stored thereon;
one or more processing devices for executing the one or more computer programs in the storage device to implement the steps of the method of any one of claims 1-8.
CN202111653397.0A 2021-12-30 2021-12-30 Speech synthesis method, apparatus, medium, and electronic device Pending CN114255738A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111653397.0A CN114255738A (en) 2021-12-30 2021-12-30 Speech synthesis method, apparatus, medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111653397.0A CN114255738A (en) 2021-12-30 2021-12-30 Speech synthesis method, apparatus, medium, and electronic device

Publications (1)

Publication Number Publication Date
CN114255738A true CN114255738A (en) 2022-03-29

Family

ID=80798926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111653397.0A Pending CN114255738A (en) 2021-12-30 2021-12-30 Speech synthesis method, apparatus, medium, and electronic device

Country Status (1)

Country Link
CN (1) CN114255738A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114937104A (en) * 2022-06-24 2022-08-23 北京有竹居网络技术有限公司 Virtual object face information generation method and device and electronic equipment
CN117894294A (en) * 2024-03-14 2024-04-16 暗物智能科技(广州)有限公司 Personification auxiliary language voice synthesis method and system

Similar Documents

Publication Publication Date Title
CN111899719B (en) Method, apparatus, device and medium for generating audio
CN112309366B (en) Speech synthesis method, speech synthesis device, storage medium and electronic equipment
CN111489734B (en) Model training method and device based on multiple speakers
CN108899009B (en) Chinese speech synthesis system based on phoneme
CN111369967B (en) Virtual character-based voice synthesis method, device, medium and equipment
CN111292720A (en) Speech synthesis method, speech synthesis device, computer readable medium and electronic equipment
US11361753B2 (en) System and method for cross-speaker style transfer in text-to-speech and training data generation
CN111369971B (en) Speech synthesis method, device, storage medium and electronic equipment
CN112786011B (en) Speech synthesis method, synthesis model training method, device, medium and equipment
TWI721268B (en) System and method for speech synthesis
CN112331176B (en) Speech synthesis method, speech synthesis device, storage medium and electronic equipment
JP7228998B2 (en) speech synthesizer and program
CN111292719A (en) Speech synthesis method, speech synthesis device, computer readable medium and electronic equipment
CN112309367B (en) Speech synthesis method, speech synthesis device, storage medium and electronic equipment
CN114255738A (en) Speech synthesis method, apparatus, medium, and electronic device
WO2023035261A1 (en) An end-to-end neural system for multi-speaker and multi-lingual speech synthesis
CN113808571B (en) Speech synthesis method, speech synthesis device, electronic device and storage medium
WO2023160553A1 (en) Speech synthesis method and apparatus, and computer-readable medium and electronic device
CN113421550A (en) Speech synthesis method, device, readable medium and electronic equipment
CN115101046A (en) Method and device for synthesizing voice of specific speaker
Li et al. End-to-end Mongolian text-to-speech system
CN112802447A (en) Voice synthesis broadcasting method and device
US20070055524A1 (en) Speech dialog method and device
CN116129859A (en) Prosody labeling method, acoustic model training method, voice synthesis method and voice synthesis device
CN114242035A (en) Speech synthesis method, apparatus, medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination