WO2019218481A1 - Speech synthesis method, system and terminal device - Google Patents

Speech synthesis method, system and terminal device

Info

Publication number
WO2019218481A1
Authority
WO
WIPO (PCT)
Prior art keywords
sentence
voice data
feature
speech
data
Prior art date
2018-05-14
Application number
PCT/CN2018/097560
Other languages
English (en)
French (fr)
Inventor
朱坤
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2018-07-27
Publication date
2019-11-21
Application filed by 平安科技(深圳)有限公司
Publication of WO2019218481A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L13/10 Prosody rules derived from text; Stress or intonation
    • G10L2013/105 Duration
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques using neural networks
    • G10L25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques for estimating an emotional state

Definitions

  • The present application belongs to the field of data processing technologies, and in particular relates to a speech synthesis method, system, and terminal device.
  • An audiobook is a work recorded by one or more people from a manuscript, using different vocal expressions and recording formats. At present, audiobooks on the market are manually recorded and stored in advance, then played back directly when used. Such advance recording, however, consumes considerable human resources. To save labor costs, voice data can instead be synthesized by speech synthesis technology.
  • Speech synthesis refers to the technique of producing artificial speech by mechanical or electronic means, converting text information generated by the computer itself or input from outside into intelligible audible speech for output.
  • When synthesizing speech, current speech synthesis technology first analyzes the text data to obtain the characters and words it contains, then retrieves the basic speech data corresponding to those characters and words from a speech library, and finally concatenates the retrieved basic speech data in order to obtain the final speech data. The resulting speech, however, is not very lifelike and therefore of low quality.
  • In summary, existing speech synthesis technology has the problem that the synthesized speech data is of low quality.
  • In view of this, the embodiments of the present application provide a speech synthesis method, system, and terminal device to solve this problem.
  • A first aspect of the present application provides a speech synthesis method, including: acquiring text data, extracting mood feature words sentence by sentence, and analyzing the emotional attribute of each sentence according to the mood feature words; synthesizing the basic speech data of each sentence according to its emotional attribute, based on a preset speech database and a preset speech pronunciation model; and performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
  • A second aspect of the present application provides a speech synthesis system, comprising:
  • a sentiment analysis module configured to acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words;
  • a speech synthesis module configured to synthesize the basic speech data of each sentence according to its emotional attribute, based on a preset speech database and a preset speech pronunciation model; and
  • a voice adjustment module configured to perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
  • A third aspect of the present application provides a terminal device including a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, where the processor, when executing the computer readable instructions, implements the steps of the above speech synthesis method.
  • A fourth aspect of the present application provides a computer readable storage medium storing computer readable instructions that, when executed by a processor, implement the steps of the above speech synthesis method.
  • The speech synthesis method, system, and terminal device provided by the present application analyze the emotional attribute of each sentence by extracting its mood feature words from the text data, obtain basic speech data adjusted through a preset speech pronunciation model in combination with the sentence's emotional attribute, and then perform prosodic feature adjustment on the basic speech data to obtain highly lifelike target speech data. When pronouncing emotionally charged words, the result is more expressive and better matches the characteristics of actual user pronunciation, effectively improving the quality of the synthesized speech data and solving the low-quality problem of existing speech synthesis technology.
  • FIG. 1 is a schematic flowchart of the implementation of a speech synthesis method provided by Embodiment 1 of the present application;
  • FIG. 2 is a schematic flowchart of the implementation of step S101 of Embodiment 1, provided by Embodiment 2 of the present application;
  • FIG. 3 is a schematic flowchart of the implementation of step S102 of Embodiment 1, provided by Embodiment 3 of the present application;
  • FIG. 4 is a schematic flowchart of the implementation of step S103 of Embodiment 1, provided by Embodiment 4 of the present application;
  • FIG. 5 is a schematic structural diagram of a speech synthesis system provided by Embodiment 5 of the present application;
  • FIG. 6 is a schematic structural diagram of the sentiment analysis module 101 of Embodiment 5, provided by Embodiment 6 of the present application;
  • FIG. 7 is a schematic structural diagram of the speech synthesis module 102 of Embodiment 5, provided by Embodiment 7 of the present application;
  • FIG. 8 is a schematic structural diagram of the voice adjustment module 103 of Embodiment 5, provided by Embodiment 8 of the present application;
  • FIG. 9 is a schematic diagram of the terminal device provided by Embodiment 9 of the present application.
  • To solve the problem that speech data synthesized by existing speech synthesis technology is of low quality, the present application provides a speech synthesis method, system, and terminal device that analyze the emotional attribute of each sentence by extracting its mood feature words from the text data, obtain basic speech data adjusted through a preset speech pronunciation model in combination with the sentence's emotional attribute, and then perform prosodic feature adjustment on the basic speech data to obtain highly lifelike target speech data. When pronouncing emotionally charged words, the result is more expressive and better matches actual user pronunciation, effectively improving the quality of the synthesized speech data.
  • Embodiment 1:
  • As shown in FIG. 1, this embodiment provides a speech synthesis method, which specifically includes:
  • Step S101: Acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words.
  • In a specific application, text data carrying textual information is acquired through a terminal. The format of the text data may be plain text (TXT), Rich Text Format (RTF), a document (DOC), or the like; it may also be a file containing textual information such as a Portable Document Format (PDF) file or an image, in which case the PDF or image is first converted into a file from which the text data can be read directly. No limitation is imposed here.
  • In a specific application, after the text data is acquired, the mood feature words in each sentence are extracted in units of sentences. Mood feature words are emotionally colored words, symbols, or word combinations, such as "happy", "oh my god", "great", "okay", "?", and other words and symbols that express emotion and tone. Because mood feature words reflect the user's emotional tendency, they carry different prosodic features when pronounced. Therefore, the mood feature words in each sentence are extracted, and the emotional attribute of each sentence is analyzed according to them.
  • In a specific application, a mood feature word database is preset, and the mood feature words in each sentence that match entries in the database are extracted. When expressing emotion, users often use combinations of words. To enrich the mood feature word database and accurately extract the mood feature words of each sentence, word combination rules are defined according to grammar rules, and words satisfying a combination rule are extracted together. The combination rules include, but are not limited to: A: degree adverb + sentiment word (e.g. "very + good"); B: negation word + sentiment word (e.g. "not + good"); C: negation word + degree adverb + sentiment word (e.g. "not + too + good"); D: degree adverb + negation word + sentiment word (e.g. "really + not + good").
  • In a specific application, to guarantee the speech synthesis effect of every sentence, this embodiment extracts mood feature words in units of sentences and analyzes each sentence's emotional attribute from the mood feature words of that sentence.
  • In a specific application, each sentence of the text data is first split into a combination of multiple words, and the split words are divided into neutral words and mood feature words, where the mood feature words include positive words and negative words. The emotional attribute of the sentence can then be obtained by analyzing the proportions of neutral, positive, and negative words in the sentence.
  • Step S102: Synthesize the basic speech data of each sentence according to its emotional attribute, based on the preset speech database and the preset speech pronunciation model.
  • In a specific application, for a sentence that has already been split into a combination of words, the voice data of each word is retrieved from the preset speech database word by word, and the voice data of the individual words is synthesized into the voice data of the whole sentence.
  • In a specific application, after the voice data of the whole sentence is obtained, its acoustic features are adjusted through the preset speech pronunciation model according to the sentence's emotional attribute, so as to obtain basic speech data corresponding to that emotional attribute and bring the pronunciation closer to that of an actual user.
  • In a specific application, the above acoustic features include characteristics such as sound intensity, speech rate, and pitch.
  • In a specific application, the preset speech pronunciation model is constructed by collecting a large amount of actual users' voice data as training samples, labeling each sentence in the voice data with its emotional attribute, and training a neural network to obtain the acoustic features of pronunciation corresponding to each emotional attribute.
  • Step S103: Perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain the target speech data.
  • In a specific application, the basic speech data is based on the emotional attribute of the whole sentence. To further match the pronunciation characteristics of an actual user expressing the corresponding emotion, prosodic feature adjustment targeted at the mood feature words is then performed on the basic speech data of the whole sentence.
  • In a specific application, the above prosodic features include sound intensity, pitch, and duration. Sound intensity covers the stress variations between accented and soft syllables; pitch covers the tone and intonation of the speech; duration covers the rhythm and tempo of the speech.
  • In a specific application, each mood feature word corresponds to one (or one class of) prosodic features, since mood feature words of different emotional tendencies express different user emotions and the corresponding prosody differs markedly; the pitch when happy, for example, is clearly higher than when sad. Therefore, the prosodic features corresponding to a mood feature word are obtained first, and the mood feature words in the basic speech data are prosodically adjusted according to them; if a sentence contains multiple mood feature words, prosodic feature adjustment is applied to all of them, yielding speech data that better matches actual user pronunciation.
  • In a specific application, the prosodic features may be adjusted using preset prosodic feature parameters for each class of mood feature words, for example setting the parameters of a happy mood feature word to sound intensity 1, pitch 1, and duration 1, and those of a sad mood feature word to sound intensity 2, pitch 2, and duration 2. Alternatively, the adjustment may be made as a percentage of the prosodic feature parameters of the basic speech data; for example, for a happy mood feature word, the pitch of that word is raised by 10% relative to the basic speech data and its duration shortened by 15%.
  • The speech synthesis method provided by this embodiment analyzes the emotional attribute of each sentence by extracting its mood feature words from the text data, obtains anthropomorphic basic speech data through the preset speech pronunciation model in combination with the sentence's emotional attribute, and then performs prosodic feature adjustment on the basic speech data to obtain target speech data closer to actual user pronunciation. When pronouncing emotionally charged words, the result is more expressive and better matches actual user pronunciation, effectively improving the quality of the synthesized speech data and solving the low-quality problem of existing speech synthesis technology.
  • Embodiment 2:
  • As shown in FIG. 2, in this embodiment, step S101 of Embodiment 1 specifically includes:
  • Step S201: Acquire sentiment analysis parameters of the sentence over multiple preset dimensions according to the mood feature words.
  • In a specific application, after the sentence is split, each word is scored over three preset dimensions [positive, neutral, negative], the sentence's aggregate scores over the three dimensions are obtained, and the proportion of each dimension's score is computed separately. The words of a split sentence can be divided into neutral words and mood feature words, and the mood feature words into positive words and negative words. When classifying the mood feature words, each mood feature word is graded and assigned the grade score of its level; for example, "happy" may be set as a positive word with a grade score of +2, "great" as a positive word with +5, "not good" as a negative word with -2, and "very bad" as a negative word with -5.
  • Step S202: Determine the emotional attribute of the sentence from the proportion of each preset dimension's sentiment analysis parameter relative to the total over all dimensions.
  • In a specific application, the score of each preset dimension is computed from the grade scores of the mood feature words, and the proportion of each preset dimension's score relative to the total over all dimensions is then calculated; for example, if the scores of a sentence over the three preset dimensions are [+10, 4, -6], the proportions of the three dimensions' scores are [+0.5, 0.2, -0.3].
  • In a specific application, a text sentiment classification model based on a support vector machine computes on the proportion values of the three preset dimensions for each sentence and thereby determines the sentence's emotional attribute.
  • The text sentiment classification model is trained in advance on a large amount of varied text data to obtain a text sentiment analysis model satisfying the support vector mechanism.
  • Here, the emotional attribute can be divided into a number of different states so as to quantize the user's emotion, and the quantized value obtained for each sentence represents that sentence's emotional attribute.
  • The above quantized emotional attributes include, but are not limited to, happiness, sadness, anger, fear, doubt, and neutral.
  • Embodiment 3:
  • As shown in FIG. 3, in this embodiment, step S102 of Embodiment 1 specifically includes:
  • Step S301: Acquire the voice data corresponding to each word of the sentence from the preset speech database.
  • In a specific application, each sentence is split into a combination of multiple words, and the voice data of each word is retrieved from the preset speech database word by word.
  • Step S302: Synthesize the voice data to obtain the electronic voice data of the sentence.
  • In a specific application, the voice data of the individual words is synthesized into the voice data of the whole sentence, which constitutes the electronic voice data of the sentence.
  • Step S303: Adjust the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
  • In a specific application, after the electronic voice data is obtained, its pitch, sound intensity, and speech rate are adjusted through the preset speech pronunciation model according to the sentence's emotional attribute, so as to obtain basic speech data corresponding to that emotional attribute and bring the pronunciation closer to that of an actual user.
  • Embodiment 4:
  • As shown in FIG. 4, in this embodiment, step S103 of Embodiment 1 specifically includes:
  • Step S401: Acquire the prosodic feature adjustment rules of the mood feature words in each sentence, including adjustment rules for prosodic feature parameters such as pitch, sound intensity, and duration.
  • In a specific application, the mood feature words are classified, and grades are set for mood feature words of different classes; the corresponding prosodic feature adjustment rule is obtained from a mood feature word's class and grade, specifically the adjustment rules for that word's prosodic feature parameters such as pitch, sound intensity, and duration.
  • In a specific application, the prosodic feature parameters for mood feature words of different classes and grades are preset; each mood feature word is then assigned a class and grade, from which its prosodic feature parameters, and thus its prosodic feature adjustment rule, are obtained.
  • Step S402: Adjust the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words, to obtain the target speech data.
  • In a specific application, after the prosodic feature adjustment rule corresponding to each mood feature word is acquired, the pitch, sound intensity, and duration of each mood feature word in the basic speech data are adjusted according to that rule; after adjustment, target speech data closer to actual user pronunciation is obtained.
  • In a specific application, the adjustment may proceed by computing the prosodic feature parameters of a mood feature word from its prosodic feature adjustment rule and then setting the prosodic features of the corresponding mood feature word in the basic speech data to those parameters; the basic speech data may also be adjusted in percentage form according to the prosodic feature adjustment rule. No limitation is imposed here.
  • In one embodiment, after step S402, the following steps are further included: acquiring the prosodic feature parameters of the target speech data; computing the average of each sentence's prosodic feature parameters from the prosodic feature parameters of the target speech data; and adjusting the pitch, sound intensity, and duration of the words of the sentence according to the average, to obtain smoothly transitioning speech data.
  • In a specific application, because the above prosodic feature adjustment targets only the mood feature words, abrupt changes may occur, making the pronunciation of a mood feature word clash with that of the words adjoining it. To avoid this, the prosodic feature parameters of the prosody-adjusted target speech data can be adjusted once more in units of whole sentences so that each sentence transitions smoothly.
  • Specifically, the average of each sentence's prosodic feature parameters is obtained from the prosodic feature parameters of the target speech data, and the pitch, sound intensity, and duration of a word adjoining a mood feature word are adjusted using this average.
  • In a specific application, when several mood feature words are adjacent, only the words adjoining the first mood feature word and the last mood feature word need to be adjusted.
  • For example, in the sentence "Shall we go play at the amusement park this afternoon!", the mood feature phrase at the end is raised in pitch and tone while the adjoining word "play" is not, which can make the transition sound abrupt; computing the average of the whole sentence's prosodic feature parameters and setting the parameters of "play" to that average effectively reduces the gap in sound intensity and tone between "play" and the mood feature phrase.
  • Embodiment 5:
  • As shown in FIG. 5, this embodiment provides a speech synthesis system 100 for performing the method steps of Embodiment 1, which includes a sentiment analysis module 101, a speech synthesis module 102, and a voice adjustment module 103.
  • The sentiment analysis module 101 is configured to acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words;
  • the speech synthesis module 102 is configured to synthesize the basic speech data of each sentence according to its emotional attribute, based on the preset speech database and the preset speech pronunciation model;
  • the voice adjustment module 103 is configured to perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain the target speech data.
  • It should be noted that, since the speech synthesis system provided by this embodiment is based on the same concept as the method embodiment shown in FIG. 1, its technical effects are the same as those of that method embodiment; details are not repeated here.
  • Therefore, the speech synthesis system can likewise analyze the emotional attribute of each sentence by extracting its mood feature words from the text data, obtain anthropomorphic basic speech data through the preset speech pronunciation model in combination with the sentence's emotional attribute, and then perform prosodic feature adjustment on the basic speech data to obtain target speech data closer to actual user pronunciation. When pronouncing emotionally charged words, the result is more expressive and better matches actual user pronunciation, effectively improving the quality of the synthesized speech data and solving the low-quality problem of existing speech synthesis technology.
  • Embodiment 6: As shown in FIG. 6, the sentiment analysis module 101 of Embodiment 5 includes structures for performing the method steps of the embodiment corresponding to FIG. 2, namely a parameter acquisition unit 201 and a sentiment analysis unit 202.
  • The parameter acquisition unit 201 is configured to acquire sentiment analysis parameters of the sentence over multiple preset dimensions according to the mood feature words.
  • The sentiment analysis unit 202 is configured to determine the emotional attribute of the sentence from the proportion of each preset dimension's sentiment analysis parameter relative to the total over all dimensions.
  • Embodiment 7: As shown in FIG. 7, the speech synthesis module 102 of Embodiment 5 includes structures for performing the method steps of the embodiment corresponding to FIG. 3, namely a voice data acquisition unit 301, a voice data synthesis unit 302, and an acoustic feature adjustment unit 303.
  • The voice data acquisition unit 301 is configured to acquire the voice data corresponding to each word of the sentence from the preset speech database.
  • The voice data synthesis unit 302 is configured to synthesize the voice data to obtain the electronic voice data of the sentence.
  • The acoustic feature adjustment unit 303 is configured to adjust the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
  • Embodiment 8: As shown in FIG. 8, the voice adjustment module 103 of Embodiment 5 includes structures for performing the method steps of the embodiment corresponding to FIG. 4, namely a prosodic feature adjustment rule acquisition unit 401 and a prosodic feature adjustment unit 402.
  • The prosodic feature adjustment rule acquisition unit 401 is configured to acquire the prosodic feature adjustment rules of the mood feature words in each sentence, including adjustment rules for prosodic feature parameters such as pitch, sound intensity, and duration.
  • The prosodic feature adjustment unit 402 is configured to adjust the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words to obtain the target speech data.
  • In one embodiment, the voice adjustment module 103 further includes a feature parameter acquisition unit, a calculation unit, and a smooth transition adjustment unit.
  • The feature parameter acquisition unit is configured to acquire the prosodic feature parameters of the target speech data.
  • The calculation unit is configured to compute the average of each sentence's prosodic feature parameters from the prosodic feature parameters of the target speech data.
  • The smooth transition adjustment unit is configured to adjust the pitch, sound intensity, and duration of the words of the sentence according to the average, to obtain smoothly transitioning speech data.
  • Embodiment 9: FIG. 9 is a schematic diagram of the terminal device provided by Embodiment 9 of the present application.
  • As shown in FIG. 9, the terminal device 9 of this embodiment includes a processor 90, a memory 91, and computer readable instructions 92 (for example, a program) stored in the memory 91 and executable on the processor 90.
  • When the processor 90 executes the computer readable instructions 92, the steps of the above speech synthesis method embodiments are implemented, such as steps S101 to S103 shown in FIG. 1.
  • Alternatively, when the processor 90 executes the computer readable instructions 92, the functions of the modules/units of the above system embodiments are implemented, such as the functions of modules 101 to 103 shown in FIG. 5.
  • The computer readable instructions 92 may be divided into one or more modules/units, which are stored in the memory 91 and executed by the processor 90 to complete the present application.
  • The one or more modules/units may be a series of instruction segments of computer readable instructions capable of completing particular functions, the instruction segments describing the execution of the computer readable instructions 92 in the terminal device 9.
  • For example, the computer readable instructions 92 may be divided into a sentiment analysis module, a speech synthesis module, and a voice adjustment module, whose specific functions are as follows:
  • the sentiment analysis module is configured to acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words;
  • the speech synthesis module is configured to synthesize the basic speech data of each sentence according to its emotional attribute, based on the preset speech database and the preset speech pronunciation model;
  • the voice adjustment module is configured to perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain the target speech data.
  • The terminal device 9 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud management server.
  • The terminal device may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art will understand that FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, combine certain components, or use different components.
  • For example, the terminal device may further include input/output devices, network access devices, buses, and the like.
  • The so-called processor 90 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • The general-purpose processor may be a microprocessor, or any conventional processor or the like.
  • The memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9.
  • The memory 91 may also be an external storage device of the terminal device 9, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 9.
  • Further, the memory 91 may include both an internal storage unit of the terminal device 9 and an external storage device.
  • The memory 91 is configured to store the computer readable instructions and the other programs and data required by the terminal device.
  • The memory 91 may also be used to temporarily store data that has been output or is about to be output.
  • The functional units and modules of the above terminal may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • The specific names of the functional units and modules are only for ease of mutual distinction and are not intended to limit the protection scope of the present application.
  • The disclosed system/terminal device and method may be implemented in other ways.
  • The system/terminal device embodiments described above are merely illustrative.
  • The division into modules or units is only a division by logical function; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • The mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, systems, or units, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • The integrated module/unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer readable storage medium.
  • The present application may implement all or part of the flows of the above method embodiments through computer readable instructions, which may be stored in a computer readable storage medium.
  • The computer readable instructions, when executed by a processor, implement the steps of each of the above method embodiments.
  • The computer readable instructions comprise computer readable instruction code, which may be in source code form, object code form, an executable file, some intermediate form, or the like.
  • The computer readable medium may include any entity or apparatus capable of carrying the computer readable instruction code: a recording medium, a USB flash drive, a removable hard drive, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content of the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)

Abstract

A speech synthesis method, system, and terminal device. The method comprises: acquiring text data, extracting mood feature words sentence by sentence, and analyzing the emotional attribute of each sentence according to the mood feature words (S101); synthesizing the basic speech data of each sentence according to its emotional attribute, based on a preset speech database and a preset speech pronunciation model (S102); and performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data (S103). The emotional attribute of each sentence is analyzed by extracting its mood feature words from the text data; the basic speech data, adjusted through the preset speech pronunciation model in combination with the sentence's emotional attribute, then undergoes prosodic feature adjustment, yielding highly lifelike target speech data. When pronouncing emotionally charged words, the result is more expressive, better matches the characteristics of actual user pronunciation, and effectively improves the quality of the synthesized speech data.

Description

Speech synthesis method, system and terminal device
This application claims priority to the Chinese patent application No. 201810456213.3, filed on May 14, 2018 and entitled "Task allocation method, system and terminal device", the entire content of which is incorporated herein by reference.
Technical Field
The present application belongs to the field of data processing technologies, and in particular relates to a speech synthesis method, system, and terminal device.
Background
An audiobook is a work recorded by one or more people from a manuscript, using different vocal expressions and recording formats. At present, audiobooks on the market are manually recorded and stored in advance, then played back directly when used. Such advance recording, however, consumes considerable human resources. To save labor costs, voice data can instead be synthesized by speech synthesis technology. Speech synthesis refers to the technique of producing artificial speech by mechanical or electronic means, converting text information generated by the computer itself or input from outside into intelligible speech for output. When synthesizing speech, current speech synthesis technology first analyzes the text data to obtain the characters and words it contains, then retrieves the basic speech data corresponding to those characters and words from a speech library, and finally concatenates the retrieved basic speech data in order to obtain the final speech data. The resulting speech, however, is not very lifelike, so it suffers from low quality.
In summary, existing speech synthesis technology has the problem that the synthesized speech data is of low quality.
Technical Problem
In view of this, embodiments of the present application provide a speech synthesis method, system, and terminal device to solve the problem that speech data synthesized by existing speech synthesis technology is of low quality.
Technical Solution
A first aspect of the present application provides a speech synthesis method, including:
acquiring text data, extracting mood feature words sentence by sentence, and analyzing the emotional attribute of each sentence according to the mood feature words;
synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence, based on a preset speech database and a preset speech pronunciation model; and
performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
A second aspect of the present application provides a speech synthesis system, including:
a sentiment analysis module configured to acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words;
a speech synthesis module configured to synthesize the basic speech data of each sentence according to the emotional attribute of each sentence, based on a preset speech database and a preset speech pronunciation model; and
a voice adjustment module configured to perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
A third aspect of the present application provides a terminal device, including a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, where the processor, when executing the computer readable instructions, implements the following steps:
acquiring text data, extracting mood feature words sentence by sentence, and analyzing the emotional attribute of each sentence according to the mood feature words;
synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence, based on a preset speech database and a preset speech pronunciation model; and
performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
A fourth aspect of the present application provides a computer readable storage medium storing computer readable instructions that, when executed by a processor, implement the following steps:
acquiring text data, extracting mood feature words sentence by sentence, and analyzing the emotional attribute of each sentence according to the mood feature words;
synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence, based on a preset speech database and a preset speech pronunciation model; and
performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
Beneficial Effects
The speech synthesis method, system, and terminal device provided by the present application analyze the emotional attribute of each sentence by extracting its mood feature words from the text data, obtain basic speech data adjusted through a preset speech pronunciation model in combination with the sentence's emotional attribute, and then perform prosodic feature adjustment on the basic speech data to obtain highly lifelike target speech data. When pronouncing emotionally charged words, the result is more expressive and better matches the characteristics of actual user pronunciation, effectively improving the quality of the synthesized speech data and solving the problem that speech data synthesized by existing speech synthesis technology is of low quality.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the implementation of a speech synthesis method provided by Embodiment 1 of the present application;
FIG. 2 is a schematic flowchart of the implementation of step S101 of Embodiment 1, provided by Embodiment 2 of the present application;
FIG. 3 is a schematic flowchart of the implementation of step S102 of Embodiment 1, provided by Embodiment 3 of the present application;
FIG. 4 is a schematic flowchart of the implementation of step S103 of Embodiment 1, provided by Embodiment 4 of the present application;
FIG. 5 is a schematic structural diagram of a speech synthesis system provided by Embodiment 5 of the present application;
FIG. 6 is a schematic structural diagram of the sentiment analysis module 101 of Embodiment 5, provided by Embodiment 6 of the present application;
FIG. 7 is a schematic structural diagram of the speech synthesis module 102 of Embodiment 5, provided by Embodiment 7 of the present application;
FIG. 8 is a schematic structural diagram of the voice adjustment module 103 of Embodiment 5, provided by Embodiment 8 of the present application;
FIG. 9 is a schematic diagram of the terminal device provided by Embodiment 9 of the present application.
Embodiments of the Invention
In the following description, specific details such as particular system architectures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present application.
To solve the problem that speech data synthesized by existing speech synthesis technology is of low quality, the embodiments of the present application provide a speech synthesis method, system, and terminal device that analyze the emotional attribute of each sentence by extracting its mood feature words from the text data, obtain basic speech data adjusted through a preset speech pronunciation model in combination with the sentence's emotional attribute, and then perform prosodic feature adjustment on the basic speech data to obtain highly lifelike target speech data. When pronouncing emotionally charged words, the result is more expressive and better matches the characteristics of actual user pronunciation, effectively improving the quality of the synthesized speech data and solving the problem that speech data synthesized by existing speech synthesis technology is of low quality.
To illustrate the technical solutions described in the present application, specific embodiments are described below.
Embodiment 1:
As shown in FIG. 1, this embodiment provides a speech synthesis method, which specifically includes:
Step S101: Acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words.
In a specific application, text data carrying textual information is acquired through a terminal. The format of the text data may be plain text (TXT), Rich Text Format (RTF), a document (DOC), or the like; it may also be a file containing textual information such as a Portable Document Format (PDF) file or an image, in which case the PDF or image is simply converted into a file from which the text data can be read directly. No limitation is imposed here.
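As a purely illustrative sketch (the patent does not prescribe an implementation), this format-conversion step might look as follows in Python, assuming the third-party packages pdfminer.six, Pillow, and pytesseract are available; the input file name is a hypothetical placeholder:

    from pdfminer.high_level import extract_text   # pdfminer.six: PDF -> text
    from PIL import Image                          # Pillow: image loading
    import pytesseract                             # OCR wrapper around Tesseract

    def load_text(path: str) -> str:
        """Return the raw text contained in a TXT, PDF, or image file."""
        lower = path.lower()
        if lower.endswith(".pdf"):
            return extract_text(path)
        if lower.endswith((".png", ".jpg", ".jpeg")):
            # "chi_sim" assumes the Simplified Chinese language pack is installed.
            return pytesseract.image_to_string(Image.open(path), lang="chi_sim")
        with open(path, encoding="utf-8") as f:
            return f.read()

    text_data = load_text("novel_chapter.pdf")     # hypothetical input file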
In a specific application, after the text data is acquired, the mood feature words in each sentence are extracted in units of sentences. Mood feature words are emotionally colored words, symbols, or word combinations, such as "开心" ("happy"), "天呐" ("oh my god"), "太棒了" ("great"), "好吧" ("okay"), "?", and other words and symbols that express emotion and tone. Because mood feature words reflect the user's emotional tendency, they carry different prosodic features when pronounced. Therefore, the mood feature words in each sentence are extracted, and the emotional attribute of each sentence is analyzed according to them.
In a specific application, a mood feature word database is preset, and the mood feature words in each sentence that match entries in the database are extracted. When expressing emotion, users often use combinations of words. To enrich the mood feature word database and accurately extract the mood feature words of each sentence, word combination rules are defined according to grammar rules, and words satisfying a combination rule are extracted together. Illustratively, the combination rules include but are not limited to the following (a small implementation sketch follows the list):
A: degree adverb + sentiment word, e.g. "较+好" ("rather good"), "很+好" ("very good"), "特别+好" ("especially good");
B: negation word + sentiment word, e.g. "不+好" ("not good"), "不+坏" ("not bad");
C: negation word + degree adverb + sentiment word, e.g. "不+太+好" ("not too good"), "不+太+坏" ("not too bad");
D: degree adverb + negation word + sentiment word, e.g. "很+不+好" ("really not good"), "还+不+坏" ("still not bad").
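As a hedged illustration only, rules A to D can be realized as greedy pattern matching over a tokenized sentence; the tiny word lists below are hypothetical placeholders for the preset mood feature word database:

    # Minimal sketch of combination-rule extraction (rules A-D) over a
    # pre-tokenized sentence; the lexicons are hypothetical examples.
    DEGREE_ADVERBS = {"较", "很", "特别", "太", "还"}
    NEGATIONS = {"不"}
    SENTIMENT = {"好", "坏", "开心"}

    def extract_mood_features(tokens):
        """Match rules C/D (3 tokens), then A/B (2 tokens), then bare words."""
        found, i = [], 0
        while i < len(tokens):
            tri, bi = tokens[i:i + 3], tokens[i:i + 2]
            if len(tri) == 3 and tri[2] in SENTIMENT and (
                    (tri[0] in NEGATIONS and tri[1] in DEGREE_ADVERBS)       # rule C
                    or (tri[0] in DEGREE_ADVERBS and tri[1] in NEGATIONS)):  # rule D
                found.append("".join(tri)); i += 3
            elif len(bi) == 2 and bi[1] in SENTIMENT and (
                    bi[0] in DEGREE_ADVERBS or bi[0] in NEGATIONS):          # rules A/B
                found.append("".join(bi)); i += 2
            elif tokens[i] in SENTIMENT:                                     # bare word
                found.append(tokens[i]); i += 1
            else:
                i += 1
        return found

    print(extract_mood_features(["我们", "很", "不", "好"]))  # -> ['很不好'] (rule D)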
In a specific application, to guarantee the speech synthesis effect of every sentence, this embodiment extracts mood feature words in units of sentences and analyzes each sentence's emotional attribute from the mood feature words of that sentence.
In a specific application, each sentence of the text data is first split into a combination of multiple words, and the split words are divided into neutral words and mood feature words, where the mood feature words include positive words and negative words. The emotional attribute of the sentence can then be obtained by analyzing the proportions of neutral, positive, and negative words in the sentence.
Step S102: Synthesize the basic speech data of each sentence according to its emotional attribute, based on the preset speech database and the preset speech pronunciation model.
In a specific application, for a sentence that has already been split into a combination of words, the voice data of each word is retrieved from the preset speech database word by word, and the voice data of the individual words is synthesized into the voice data of the whole sentence.
In a specific application, after the voice data of the whole sentence is obtained, its acoustic features are adjusted through the preset speech pronunciation model according to the sentence's emotional attribute, so as to obtain basic speech data corresponding to that emotional attribute and bring the pronunciation closer to that of an actual user. In a specific application, the above acoustic features include characteristics such as sound intensity, speech rate, and pitch.
In a specific application, the preset speech pronunciation model is constructed as follows: a large amount of actual users' voice data is collected as training samples, each sentence in the voice data is labeled with its emotional attribute, and a neural network is trained to obtain the acoustic features of pronunciation corresponding to each emotional attribute.
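As a hedged sketch only (the patent specifies neither a network architecture nor a library), a small regression model mapping emotion labels to target acoustic features could be trained along the following lines; the feature vectors are fabricated placeholders, and scikit-learn is an assumed dependency:

    # Sketch: learn per-emotion acoustic targets [intensity, speech_rate, pitch]
    # from emotion-labeled samples with a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    EMOTIONS = ["happy", "sad", "angry", "fear", "doubt", "neutral"]

    def one_hot(label):
        v = np.zeros(len(EMOTIONS))
        v[EMOTIONS.index(label)] = 1.0
        return v

    # Hypothetical training data: (emotion label, measured acoustic features).
    samples = [("happy", [0.8, 1.2, 220.0]), ("sad", [0.4, 0.8, 160.0]),
               ("happy", [0.9, 1.3, 230.0]), ("neutral", [0.6, 1.0, 190.0])]

    X = np.array([one_hot(label) for label, _ in samples])
    y = np.array([feats for _, feats in samples])

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)

    # Acoustic targets the synthesizer should aim for when the sentence is "happy".
    print(model.predict(one_hot("happy").reshape(1, -1)))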
Step S103: Perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain the target speech data.
In a specific application, the basic speech data is based on the emotional attribute of the whole sentence. To further match the pronunciation characteristics of an actual user expressing the corresponding emotion, prosodic feature adjustment targeted at the mood feature words is then performed on the basic speech data of the whole sentence.
In a specific application, the above prosodic features include sound intensity, pitch, and duration. Sound intensity covers the stress variations between accented and soft syllables; pitch covers the tone and intonation of the speech; duration covers the rhythm and tempo of the speech.
In a specific application, mood feature words of different emotional tendencies express different user emotions, and the prosodic features of the corresponding speech differ considerably between emotions; for example, the pitch when happy is clearly higher than when sad. That is, each mood feature word corresponds to one (or one class of) prosodic features. Therefore, the prosodic features corresponding to a mood feature word are obtained first, and the mood feature words in the basic speech data are prosodically adjusted according to them; if a sentence contains multiple mood feature words, prosodic feature adjustment is applied to all of them, yielding speech data that better matches actual user pronunciation.
In a specific application, the prosodic features may be adjusted using preset prosodic feature parameters for each class of mood feature words, for example setting the parameters of a happy mood feature word to sound intensity 1, pitch 1, and duration 1, and those of a sad mood feature word to sound intensity 2, pitch 2, and duration 2. Alternatively, the adjustment may be made as a percentage of the prosodic feature parameters of the basic speech data; for example, for a happy mood feature word, the pitch of that word is raised by 10% relative to the basic speech data and its duration shortened by 15%.
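The percentage-style rule just described can be sketched as follows; the rule table and the prosody record are illustrative assumptions, not values taken from the patent:

    # Sketch of rule-based, percentage-style prosody adjustment for one word.
    from dataclasses import dataclass

    @dataclass
    class Prosody:
        intensity: float  # relative sound intensity
        pitch: float      # fundamental frequency in Hz
        duration: float   # word duration in seconds

    # Hypothetical percentage rules keyed by the mood feature word's emotion class.
    RULES = {"happy": {"pitch": +0.10, "duration": -0.15},
             "sad": {"pitch": -0.10, "duration": +0.20}}

    def adjust_word(p: Prosody, emotion: str) -> Prosody:
        """Apply the percentage rule for `emotion` to one word's prosody."""
        r = RULES.get(emotion, {})
        return Prosody(
            intensity=p.intensity * (1 + r.get("intensity", 0.0)),
            pitch=p.pitch * (1 + r.get("pitch", 0.0)),
            duration=p.duration * (1 + r.get("duration", 0.0)),
        )

    # "happy": pitch raised by 10%, duration shortened by 15%.
    print(adjust_word(Prosody(0.7, 200.0, 0.30), "happy"))  # pitch ~220, duration ~0.255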
The speech synthesis method provided by this embodiment analyzes the emotional attribute of each sentence by extracting its mood feature words from the text data, obtains anthropomorphic basic speech data through the preset speech pronunciation model adjusted in combination with the sentence's emotional attribute, and then performs prosodic feature adjustment on the basic speech data to obtain target speech data closer to actual user pronunciation. When pronouncing emotionally charged words, the result is more expressive and better matches the characteristics of actual user pronunciation, effectively improving the quality of the synthesized speech data and solving the problem that speech data synthesized by existing speech synthesis technology is of low quality.
Embodiment 2:
As shown in FIG. 2, in this embodiment, step S101 of Embodiment 1 specifically includes:
Step S201: Acquire sentiment analysis parameters of the sentence over multiple preset dimensions according to the mood feature words.
In a specific application, after the sentence is split, each word is scored over the three preset dimensions [positive, neutral, negative], the sentence's aggregate scores over these three preset dimensions are obtained, and the proportion of each of the three preset dimensions' scores is then computed separately. Illustratively, the words of a split sentence can be divided into neutral words and mood feature words, and the mood feature words into positive words and negative words. When classifying the mood feature words, each mood feature word is graded and assigned the grade score of its level.
For example, "开心" ("happy") is set as a positive word with a grade score of +2;
"太棒了" ("great") is set as a positive word with a grade score of +5;
"不好" ("not good") is set as a negative word with a grade score of -2;
"很不好" ("very bad") is set as a negative word with a grade score of -5. It should be noted that the above classification, grading, and scoring of mood feature words can be implemented by a classification and scoring model built on a neural network; the specific implementation is not detailed here.
Step S202: Determine the emotional attribute of the sentence from the proportion of each preset dimension's sentiment analysis parameter relative to the total over all dimensions.
In a specific application, the score of each preset dimension is computed from the grade scores of the mood feature words, and the proportion of each preset dimension's score relative to the total over all dimensions is then calculated. For example, if the scores of a sentence over the three preset dimensions are [+10, 4, -6], the proportions of the three dimensions' scores are [+0.5, 0.2, -0.3].
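The proportion computation can be sketched as follows; the graded lexicon is hypothetical, and treating the neutral dimension's score as a count of neutral words is an assumption made for illustration:

    # Sketch: aggregate graded word scores into [positive, neutral, negative]
    # dimension scores and their signed proportions of the total.
    GRADED_LEXICON = {"开心": +2, "太棒了": +5, "不好": -2, "很不好": -5}

    def dimension_proportions(words):
        scores = [GRADED_LEXICON.get(w, 0) for w in words]
        pos = sum(s for s in scores if s > 0)   # positive dimension score
        neg = sum(s for s in scores if s < 0)   # negative dimension score (signed)
        neu = sum(1 for s in scores if s == 0)  # neutral dimension: neutral-word count
        total = pos + neu + abs(neg)
        if total == 0:
            return [0.0, 0.0, 0.0]
        return [round(pos / total, 2), round(neu / total, 2), round(neg / total, 2)]

    # Mirrors the example in the text: scores [+10, 4, -6] -> [+0.5, 0.2, -0.3].
    words = ["太棒了", "太棒了", "w1", "w2", "w3", "w4", "不好", "不好", "不好"]
    print(dimension_proportions(words))  # [0.5, 0.2, -0.3]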
In a specific application, a text sentiment classification model based on a support vector machine computes on the proportion values of the three preset dimensions corresponding to each sentence and thereby determines the sentence's emotional attribute. The text sentiment classification model is trained in advance on a large amount of varied text data to obtain a text sentiment analysis model satisfying the support vector mechanism. Here, the emotional attribute can be divided into a number of different states so as to quantize the user's emotion, and the quantized value obtained for each sentence represents that sentence's emotional attribute. Illustratively, the quantized emotional attributes include but are not limited to: happiness, sadness, anger, fear, doubt, and neutral.
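A hedged sketch of such a classifier, using scikit-learn's support vector machine on the proportion vectors (the training data is fabricated for illustration):

    # Sketch: an SVM mapping [positive, neutral, negative] proportion vectors
    # to quantized emotion labels.
    from sklearn.svm import SVC

    X_train = [[+0.5, 0.2, -0.3],   # mostly positive   -> happy
               [+0.1, 0.2, -0.7],   # strongly negative -> sad
               [+0.0, 1.0, 0.0],    # all neutral       -> neutral
               [+0.8, 0.1, -0.1]]   # strongly positive -> happy
    y_train = ["happy", "sad", "neutral", "happy"]

    clf = SVC(kernel="rbf")
    clf.fit(X_train, y_train)
    print(clf.predict([[+0.5, 0.2, -0.3]]))  # expected: ['happy']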
Embodiment 3:
As shown in FIG. 3, in this embodiment, step S102 of Embodiment 1 specifically includes:
Step S301: Acquire the voice data corresponding to each word of the sentence from the preset speech database.
In a specific application, each sentence is split into a combination of multiple words, and the voice data of each word is retrieved from the preset speech database word by word.
Step S302: Synthesize the voice data to obtain the electronic voice data of the sentence.
In a specific application, the voice data of the individual words is synthesized into the voice data of the whole sentence, which constitutes the electronic voice data of the sentence.
Step S303: Adjust the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
In a specific application, after the electronic voice data is obtained, its pitch, sound intensity, and speech rate are adjusted through the preset speech pronunciation model according to the sentence's emotional attribute, so as to obtain basic speech data corresponding to that emotional attribute and bring the pronunciation closer to that of an actual user.
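At the waveform level, steps S301 to S303 can be caricatured as below with NumPy; a real system would use a vocoder or proper DSP, and the in-memory database, sample rate, and adjustment factors are all illustrative assumptions:

    # Sketch: concatenate per-word audio (S301/S302), then apply naive
    # intensity/rate changes (S303).
    import numpy as np

    SR = 16000  # sample rate in Hz (assumed)

    # Hypothetical preset speech database: word -> waveform (float32 array).
    speech_db = {"我们": np.zeros(4000, dtype=np.float32),
                 "开心": np.zeros(6000, dtype=np.float32)}

    def synthesize_sentence(words):
        """Look up each word's voice data and concatenate it in order."""
        return np.concatenate([speech_db[w] for w in words])

    def apply_emotion(wave, gain=1.0, rate=1.0):
        """Scale intensity by `gain` and change speed by `rate` via linear
        resampling (a simplification that also shifts the pitch)."""
        n_out = int(len(wave) / rate)
        resampled = np.interp(np.linspace(0, len(wave) - 1, n_out),
                              np.arange(len(wave)), wave)
        return (resampled * gain).astype(np.float32)

    electronic = synthesize_sentence(["我们", "开心"])
    basic = apply_emotion(electronic, gain=1.2, rate=1.1)  # e.g. "happy"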
Embodiment 4:
As shown in FIG. 4, in this embodiment, step S103 of Embodiment 1 specifically includes:
Step S401: Acquire the prosodic feature adjustment rules of the mood feature words in each sentence, including adjustment rules for prosodic feature parameters such as pitch, sound intensity, and duration.
In a specific application, the mood feature words are classified, and grades are set for mood feature words of different classes; the corresponding prosodic feature adjustment rule is obtained from a mood feature word's class and grade, specifically the adjustment rules for that word's prosodic feature parameters such as pitch, sound intensity, and duration.
In a specific application, the prosodic feature parameters for mood feature words of different classes and grades are preset; each mood feature word is then assigned a class and grade, from which its prosodic feature parameters, and thus its prosodic feature adjustment rule, are obtained.
Step S402: Adjust the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words, to obtain the target speech data.
In a specific application, after the prosodic feature adjustment rule corresponding to each mood feature word is acquired, the pitch, sound intensity, and duration of each mood feature word in the basic speech data are adjusted according to that rule; after adjustment, target speech data closer to actual user pronunciation is obtained.
In a specific application, the adjustment may proceed by computing the prosodic feature parameters of a mood feature word from its prosodic feature adjustment rule and then setting the prosodic features of the corresponding mood feature word in the basic speech data to those parameters; the basic speech data may also be adjusted in percentage form according to the prosodic feature adjustment rule. No limitation is imposed here.
In one embodiment, after step S402, the following steps are further included:
acquiring the prosodic feature parameters of the target speech data;
computing the average of each sentence's prosodic feature parameters from the prosodic feature parameters of the target speech data; and
adjusting the pitch, sound intensity, and duration of the words of the sentence according to the average, to obtain smoothly transitioning speech data.
In a specific application, because the above prosodic feature adjustment targets only the mood feature words, it may cause abrupt changes in the speech, making the pronunciation of a mood feature word clash awkwardly with the pronunciation of the words immediately before and after it. To avoid this, the prosodic feature parameters of the prosody-adjusted target speech data can be adjusted once more in units of whole sentences so that each sentence transitions smoothly. Specifically, the average of each sentence's prosodic feature parameters is obtained from the prosodic feature parameters of the target speech data, and for a word adjoining a mood feature word, its pitch, sound intensity, and duration are adjusted using this average. In a specific application, when several mood feature words are adjacent, only the words adjoining the first mood feature word and the last mood feature word need to have their pitch, sound intensity, and duration adjusted.
Illustratively, in "我们下午去游乐场玩好不好!" ("Shall we go play at the amusement park this afternoon!"), "好不好" ("shall we") is a mood feature phrase whose pitch and tone are raised, while the pitch and tone of the adjoining word "玩" ("play") are not, so the tone and pitch may jump abruptly from "玩" to "好不好", making the transition sound unnatural. Computing the average of the whole sentence's prosodic feature parameters and adjusting the parameters of "玩" to that average effectively reduces the gap in sound intensity and tone between "玩" and "好不好", achieving a smooth transition.
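This smoothing step can be sketched as follows; the per-word data layout is a hypothetical choice, not mandated by the patent:

    # Sketch: pull the words adjoining the first/last mood feature word toward
    # the sentence-average prosody.
    import numpy as np

    def smooth_sentence(prosody, mood_indices):
        """prosody: (n_words, 3) array of [pitch, intensity, duration] per word;
        mood_indices: sorted word positions of the mood feature words."""
        avg = prosody.mean(axis=0)          # sentence-average prosodic parameters
        out = prosody.copy()
        if mood_indices:
            first, last = mood_indices[0], mood_indices[-1]
            if first - 1 >= 0:
                out[first - 1] = avg        # word before the first mood word
            if last + 1 < len(prosody):
                out[last + 1] = avg         # word after the last mood word
        return out

    # Toy sentence "...玩 好不好": index 2 is the raised mood phrase "好不好".
    p = np.array([[190.0, 0.6, 0.25],       # some earlier word
                  [195.0, 0.6, 0.25],       # "玩", adjoining the mood phrase
                  [240.0, 0.9, 0.30]])      # "好不好", prosody already raised
    print(smooth_sentence(p, [2]))          # row for "玩" becomes the average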
Embodiment 5:
As shown in FIG. 5, this embodiment provides a speech synthesis system 100 for performing the method steps of Embodiment 1, which includes a sentiment analysis module 101, a speech synthesis module 102, and a voice adjustment module 103.
The sentiment analysis module 101 is configured to acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words;
the speech synthesis module 102 is configured to synthesize the basic speech data of each sentence according to its emotional attribute, based on the preset speech database and the preset speech pronunciation model;
the voice adjustment module 103 is configured to perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain the target speech data.
It should be noted that, since the speech synthesis system provided by this embodiment of the present application is based on the same concept as the method embodiment shown in FIG. 1, its technical effects are the same as those of that method embodiment; for details, refer to the description in the method embodiment shown in FIG. 1, which is not repeated here.
Therefore, the speech synthesis system provided by this embodiment can likewise analyze the emotional attribute of each sentence by extracting its mood feature words from the text data, obtain anthropomorphic basic speech data through the preset speech pronunciation model in combination with the sentence's emotional attribute, and then perform prosodic feature adjustment on the basic speech data to obtain target speech data closer to actual user pronunciation. When pronouncing emotionally charged words, the result is more expressive and better matches the characteristics of actual user pronunciation, effectively improving the quality of the synthesized speech data and solving the problem that speech data synthesized by existing speech synthesis technology is of low quality.
Embodiment 6:
As shown in FIG. 6, in this embodiment, the sentiment analysis module 101 of Embodiment 5 includes structures for performing the method steps of the embodiment corresponding to FIG. 2, namely a parameter acquisition unit 201 and a sentiment analysis unit 202.
The parameter acquisition unit 201 is configured to acquire sentiment analysis parameters of the sentence over multiple preset dimensions according to the mood feature words.
The sentiment analysis unit 202 is configured to determine the emotional attribute of the sentence from the proportion of each preset dimension's sentiment analysis parameter relative to the total over all dimensions.
Embodiment 7:
As shown in FIG. 7, in this embodiment, the speech synthesis module 102 of Embodiment 5 includes structures for performing the method steps of the embodiment corresponding to FIG. 3, namely a voice data acquisition unit 301, a voice data synthesis unit 302, and an acoustic feature adjustment unit 303.
The voice data acquisition unit 301 is configured to acquire the voice data corresponding to each word of the sentence from the preset speech database.
The voice data synthesis unit 302 is configured to synthesize the voice data to obtain the electronic voice data of the sentence.
The acoustic feature adjustment unit 303 is configured to adjust the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
Embodiment 8:
As shown in FIG. 8, in this embodiment, the voice adjustment module 103 of Embodiment 5 includes structures for performing the method steps of the embodiment corresponding to FIG. 4, namely a prosodic feature adjustment rule acquisition unit 401 and a prosodic feature adjustment unit 402.
The prosodic feature adjustment rule acquisition unit 401 is configured to acquire the prosodic feature adjustment rules of the mood feature words in each sentence, including adjustment rules for prosodic feature parameters such as pitch, sound intensity, and duration.
The prosodic feature adjustment unit 402 is configured to adjust the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words to obtain the target speech data.
In one embodiment, the voice adjustment module 103 further includes a feature parameter acquisition unit, a calculation unit, and a smooth transition adjustment unit.
The feature parameter acquisition unit is configured to acquire the prosodic feature parameters of the target speech data.
The calculation unit is configured to compute the average of each sentence's prosodic feature parameters from the prosodic feature parameters of the target speech data.
The smooth transition adjustment unit is configured to adjust the pitch, sound intensity, and duration of the words of the sentence according to the average, to obtain smoothly transitioning speech data.
Embodiment 9:
FIG. 9 is a schematic diagram of the terminal device provided by Embodiment 9 of the present application. As shown in FIG. 9, the terminal device 9 of this embodiment includes a processor 90, a memory 91, and computer readable instructions 92 (for example, a program) stored in the memory 91 and executable on the processor 90. When the processor 90 executes the computer readable instructions 92, the steps of each of the above speech synthesis method embodiments are implemented, such as steps S101 to S103 shown in FIG. 1. Alternatively, when the processor 90 executes the computer readable instructions 92, the functions of the modules/units of the above system embodiments are implemented, such as the functions of modules 101 to 103 shown in FIG. 5.
Illustratively, the computer readable instructions 92 may be divided into one or more modules/units, which are stored in the memory 91 and executed by the processor 90 to complete the present application. The one or more modules/units may be a series of instruction segments of computer readable instructions capable of completing particular functions, the instruction segments describing the execution of the computer readable instructions 92 in the terminal device 9. For example, the computer readable instructions 92 may be divided into a sentiment analysis module, a speech synthesis module, and a voice adjustment module, whose specific functions are as follows:
the sentiment analysis module is configured to acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words;
the speech synthesis module is configured to synthesize the basic speech data of each sentence according to its emotional attribute, based on the preset speech database and the preset speech pronunciation model;
the voice adjustment module is configured to perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain the target speech data.
The terminal device 9 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud management server. The terminal device may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art will understand that FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, combine certain components, or use different components; for example, it may further include input/output devices, network access devices, buses, and the like.
The so-called processor 90 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor, or the like.
The memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 9. Further, the memory 91 may include both an internal storage unit of the terminal device 9 and an external storage device. The memory 91 is used to store the computer readable instructions and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is about to be output.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is used only as an example; in practical applications, the above functions may be assigned to different functional units and modules as needed, that is, the internal structure of the system may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules of the embodiments may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above terminal, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or described in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed system/terminal device and method may be implemented in other ways. For example, the system/terminal device embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, systems, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated module/unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flows of the above method embodiments by instructing the relevant hardware through computer readable instructions, which may be stored in a computer readable storage medium; when executed by a processor, the computer readable instructions implement the steps of each of the above method embodiments. The computer readable instructions comprise computer readable instruction code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or apparatus capable of carrying the computer readable instruction code, a recording medium, a USB flash drive, a removable hard drive, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content of the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or substitute equivalents for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (20)

  1. A speech synthesis method, comprising:
    acquiring text data, extracting mood feature words sentence by sentence, and analyzing the emotional attribute of each sentence according to the mood feature words;
    synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence, based on a preset speech database and a preset speech pronunciation model; and
    performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
  2. The speech synthesis method according to claim 1, wherein analyzing the emotional attribute of each sentence according to the mood feature words comprises:
    acquiring sentiment analysis parameters of the sentence over multiple preset dimensions according to the mood feature words; and
    determining the emotional attribute of the sentence from the proportion of each preset dimension's sentiment analysis parameter relative to the total over all dimensions.
  3. The speech synthesis method according to claim 1, wherein synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence based on a preset speech database and a preset speech pronunciation model comprises:
    acquiring, from the preset speech database, the voice data corresponding to each word of the sentence;
    synthesizing the voice data to obtain the electronic voice data of the sentence; and
    adjusting the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
  4. The speech synthesis method according to claim 1, wherein performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data comprises:
    acquiring the prosodic feature adjustment rules of the mood feature words in each sentence, including adjustment rules for prosodic feature parameters such as pitch, sound intensity, and duration; and
    adjusting the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words, to obtain the target speech data.
  5. The speech synthesis method according to claim 4, further comprising, after adjusting the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words to obtain the target speech data:
    acquiring the prosodic feature parameters of the target speech data;
    computing the average of each sentence's prosodic feature parameters from the prosodic feature parameters of the target speech data; and
    adjusting the pitch, sound intensity, and duration of the words of the sentence according to the average, to obtain smoothly transitioning speech data.
  6. A speech synthesis system, comprising:
    a sentiment analysis module configured to acquire text data, extract mood feature words sentence by sentence, and analyze the emotional attribute of each sentence according to the mood feature words;
    a speech synthesis module configured to synthesize the basic speech data of each sentence according to the emotional attribute of each sentence, based on a preset speech database and a preset speech pronunciation model; and
    a voice adjustment module configured to perform prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
  7. The speech synthesis system according to claim 6, wherein the sentiment analysis module comprises:
    a parameter acquisition unit configured to acquire sentiment analysis parameters of the sentence over multiple preset dimensions according to the mood feature words; and
    a sentiment analysis unit configured to determine the emotional attribute of the sentence from the proportion of each preset dimension's sentiment analysis parameter relative to the total over all dimensions.
  8. The speech synthesis system according to claim 6, wherein the speech synthesis module comprises:
    a voice data acquisition unit configured to acquire, from the preset speech database, the voice data corresponding to each word of the sentence;
    a voice data synthesis unit configured to synthesize the voice data to obtain the electronic voice data of the sentence; and
    an acoustic feature adjustment unit configured to adjust the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
  9. The speech synthesis system according to claim 6, wherein the voice adjustment module comprises:
    a prosodic feature adjustment rule acquisition unit configured to acquire the prosodic feature adjustment rules of the mood feature words in each sentence, including adjustment rules for prosodic feature parameters such as pitch, sound intensity, and duration; and
    a prosodic feature adjustment unit configured to adjust the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words, to obtain the target speech data.
  10. The speech synthesis system according to claim 9, wherein the voice adjustment module further comprises:
    a feature parameter acquisition unit configured to acquire the prosodic feature parameters of the target speech data;
    a calculation unit configured to compute the average of each sentence's prosodic feature parameters from the prosodic feature parameters of the target speech data; and
    a smooth transition adjustment unit configured to adjust the pitch, sound intensity, and duration of the words of the sentence according to the average, to obtain smoothly transitioning speech data.
  11. A terminal device, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the following steps:
    acquiring text data, extracting mood feature words sentence by sentence, and analyzing the emotional attribute of each sentence according to the mood feature words;
    synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence, based on a preset speech database and a preset speech pronunciation model; and
    performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
  12. The terminal device according to claim 11, wherein analyzing the emotional attribute of each sentence according to the mood feature words comprises:
    acquiring sentiment analysis parameters of the sentence over multiple preset dimensions according to the mood feature words; and
    determining the emotional attribute of the sentence from the proportion of each preset dimension's sentiment analysis parameter relative to the total over all dimensions.
  13. The terminal device according to claim 12, wherein synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence based on a preset speech database and a preset speech pronunciation model comprises:
    acquiring, from the preset speech database, the voice data corresponding to each word of the sentence;
    synthesizing the voice data to obtain the electronic voice data of the sentence; and
    adjusting the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
  14. The terminal device according to claim 11, wherein performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data comprises:
    acquiring the prosodic feature adjustment rules of the mood feature words in each sentence, including adjustment rules for prosodic feature parameters such as pitch, sound intensity, and duration; and
    adjusting the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words, to obtain the target speech data.
  15. The terminal device according to claim 14, wherein, after adjusting the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words to obtain the target speech data, the processor, when executing the computer readable instructions, further implements the following steps:
    acquiring the prosodic feature parameters of the target speech data;
    computing the average of each sentence's prosodic feature parameters from the prosodic feature parameters of the target speech data; and
    adjusting the pitch, sound intensity, and duration of the words of the sentence according to the average, to obtain smoothly transitioning speech data.
  16. A computer readable storage medium storing computer readable instructions, wherein the computer readable instructions, when executed by a processor, implement the following steps:
    acquiring text data, extracting mood feature words sentence by sentence, and analyzing the emotional attribute of each sentence according to the mood feature words;
    synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence, based on a preset speech database and a preset speech pronunciation model; and
    performing prosodic feature adjustment on the basic speech data of each sentence according to the mood feature words to obtain target speech data.
  17. The computer readable storage medium according to claim 16, wherein analyzing the emotional attribute of each sentence according to the mood feature words comprises:
    acquiring sentiment analysis parameters of the sentence over multiple preset dimensions according to the mood feature words; and
    determining the emotional attribute of the sentence from the proportion of each preset dimension's sentiment analysis parameter relative to the total over all dimensions.
  18. The computer readable storage medium according to claim 17, wherein synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence based on a preset speech database and a preset speech pronunciation model comprises:
    acquiring, from the preset speech database, the voice data corresponding to each word of the sentence;
    synthesizing the voice data to obtain the electronic voice data of the sentence; and
    adjusting the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
  19. The computer readable storage medium according to claim 16, wherein synthesizing the basic speech data of each sentence according to the emotional attribute of each sentence based on a preset speech database and a preset speech pronunciation model comprises:
    acquiring, from the preset speech database, the voice data corresponding to each word of the sentence;
    synthesizing the voice data to obtain the electronic voice data of the sentence; and
    adjusting the pitch, sound intensity, and speech rate of the electronic voice data through the speech pronunciation model according to the emotional attribute of the sentence, to obtain the basic speech data of the sentence.
  20. The computer readable storage medium according to claim 16, wherein, after adjusting the pitch, sound intensity, and duration of the basic speech data according to the prosodic feature adjustment rules of the mood feature words to obtain the target speech data, the computer readable instructions, when executed by the processor, further implement the following steps:
    acquiring the prosodic feature parameters of the target speech data;
    computing the average of each sentence's prosodic feature parameters from the prosodic feature parameters of the target speech data; and
    adjusting the pitch, sound intensity, and duration of the words of the sentence according to the average, to obtain smoothly transitioning speech data.
PCT/CN2018/097560 2018-05-14 2018-07-27 Speech synthesis method, system and terminal device WO2019218481A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810456213.3A CN108615524A (zh) 2018-05-14 2018-05-14 Speech synthesis method, system and terminal device
CN201810456213.3 2018-05-14

Publications (1)

Publication Number Publication Date
WO2019218481A1 (zh) 2019-11-21

Family

ID=63663006

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097560 WO2019218481A1 (zh) 2018-05-14 2018-07-27 Speech synthesis method, system and terminal device

Country Status (2)

Country Link
CN (1) CN108615524A (zh)
WO (1) WO2019218481A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7125608B2 (ja) * 2018-10-05 2022-08-25 日本電信電話株式会社 Acoustic model learning device, speech synthesis device, and program
CN109461435B (zh) * 2018-11-19 2022-07-01 北京光年无限科技有限公司 Speech synthesis method and device for intelligent robots
CN109599094A (zh) * 2018-12-17 2019-04-09 海南大学 Method for voice beautification and emotion modification
CN109545245A (zh) * 2018-12-21 2019-03-29 斑马网络技术有限公司 Speech processing method and device
CN109710748B (zh) * 2019-01-17 2021-04-27 北京光年无限科技有限公司 Picture-book reading interaction method and system for intelligent robots
CN110379409B (zh) * 2019-06-14 2024-04-16 平安科技(深圳)有限公司 Speech synthesis method, system, terminal device, and readable storage medium
CN111031386B (zh) * 2019-12-17 2021-07-30 腾讯科技(深圳)有限公司 Video dubbing method and apparatus based on speech synthesis, computer device, and medium
CN111091810A (zh) * 2019-12-19 2020-05-01 佛山科学技术学院 Method for controlling VR game character expressions based on voice information, and storage medium
WO2021127979A1 (zh) * 2019-12-24 2021-07-01 深圳市优必选科技股份有限公司 Speech synthesis method and apparatus, computer device, and computer readable storage medium
CN111128118B (zh) * 2019-12-30 2024-02-13 科大讯飞股份有限公司 Speech synthesis method, related device, and readable storage medium
CN113539230A (zh) * 2020-03-31 2021-10-22 北京奔影网络科技有限公司 Speech synthesis method and apparatus
CN112349272A (zh) * 2020-10-15 2021-02-09 北京捷通华声科技股份有限公司 Speech synthesis method and apparatus, storage medium, and electronic apparatus
CN113990286A (zh) * 2021-10-29 2022-01-28 北京大学深圳研究院 Speech synthesis method, apparatus, device, and storage medium
CN114783402B (zh) * 2022-06-22 2022-09-13 广东电网有限责任公司佛山供电局 Variation method and apparatus for synthesized speech, electronic device, and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101000765A (zh) * 2007-01-09 2007-07-18 黑龙江大学 Speech synthesis method based on prosodic features
CN101064103A (zh) * 2006-04-24 2007-10-31 中国科学院自动化研究所 Chinese speech synthesis method and system based on syllable prosodic constraint relations
CN101176146A (zh) * 2005-05-18 2008-05-07 松下电器产业株式会社 Speech synthesizer
KR20080060909A (ko) * 2006-12-27 2008-07-02 엘지전자 주식회사 Method for synthesizing and outputting speech according to sentence state, and speech synthesizer using the same
CN101452699A (zh) * 2007-12-04 2009-06-10 株式会社东芝 Method and apparatus for prosody adaptation and speech synthesis
CN102103856A (zh) * 2009-12-21 2011-06-22 盛大计算机(上海)有限公司 Speech synthesis method and system
KR20120117041A (ko) * 2011-04-14 2012-10-24 한국과학기술원 Method, apparatus, and recording medium for synthesizing emotional speech based on a personal prosody model
CN103198827A (zh) * 2013-03-26 2013-07-10 合肥工业大学 Speech emotion correction method based on the correlation between prosodic feature parameters and emotion parameters
CN103366731A (zh) * 2012-03-31 2013-10-23 盛乐信息技术(上海)有限公司 Speech synthesis method and system
US20150046164A1 (en) * 2013-08-07 2015-02-12 Samsung Electronics Co., Ltd. Method, apparatus, and recording medium for text-to-speech conversion
CN105355193A (zh) * 2015-10-30 2016-02-24 百度在线网络技术(北京)有限公司 Speech synthesis method and device
CN106688034A (zh) * 2014-09-11 2017-05-17 微软技术许可有限责任公司 Text-to-speech conversion with emotional content

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003271172A (ja) * 2002-03-15 2003-09-25 Sony Corp Speech synthesis method, speech synthesis device, program and recording medium, and robot device
US20050187772A1 (en) * 2004-02-25 2005-08-25 Fuji Xerox Co., Ltd. Systems and methods for synthesizing speech using discourse function level prosodic features
KR101160193B1 (ko) * 2010-10-28 2012-06-26 (주)엠씨에스로직 Emotional speech synthesis device and method


Also Published As

Publication number Publication date
CN108615524A (zh) 2018-10-02

Similar Documents

Publication Publication Date Title
WO2019218481A1 (zh) Speech synthesis method, system and terminal device
US10789290B2 (en) Audio data processing method and apparatus, and computer storage medium
CN108806656B Automatic generation of songs
Goudbeek et al. Beyond arousal: Valence and potency/control cues in the vocal expression of emotion
WO2020177190A1 (zh) Processing method, apparatus, and device
Tran et al. Improvement to a NAM-captured whisper-to-speech system
Weirich et al. Gender identity is indexed and perceived in speech
CN110852075B Speech transcription method and apparatus with automatic punctuation, and readable storage medium
CN109147831A Voice concatenation playback method, terminal device, and computer readable storage medium
Puchtler et al. Hui-audio-corpus-german: A high quality tts dataset
Pravena et al. Development of simulated emotion speech database for excitation source analysis
Proutskova et al. Breathy, resonant, pressed–automatic detection of phonation mode from audio recordings of singing
de Mareüil et al. A diachronic study of initial stress and other prosodic features in the French news announcer style: corpus-based measurements and perceptual experiments
De Boer et al. Application of linear discriminant analysis to the long-term averaged spectra of simulated disorders of oral-nasal balance
CN114927126A Scheme output method, apparatus, device, and storage medium based on semantic analysis
CN115796653A Interview speech evaluation method and system
Kanwal et al. Identifying the evidence of speech emotional dialects using artificial intelligence: A cross-cultural study
Lee et al. Acoustic voice variation in spontaneous speech
Green et al. Range in the Use and Realization of BIN in African American English
CN112185341A Dubbing method, apparatus, device, and storage medium based on speech synthesis
Hou et al. Code-switching automatic speech recognition for nursing record documentation: system development and evaluation
JP2007133052A Learning device and program therefor
KR102484006B1 Voice self-training method for patients with voice disorders, and user terminal device
CN114254649A Language model training method, apparatus, storage medium, and device
Chen et al. Voice-Cloning Artificial-Intelligence Speakers Can Also Mimic Human-Specific Vocal Expression

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18918843

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.02.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18918843

Country of ref document: EP

Kind code of ref document: A1