EP1789953B1 - Method and device for selecting acoustic units and a voice synthesis device - Google Patents


Info

Publication number
EP1789953B1
Authority
EP
European Patent Office
Prior art keywords
acoustic
sequence
models
units
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP05798354A
Other languages
German (de)
French (fr)
Other versions
EP1789953A1 (en)
Inventor
Olivier Rosec
Soufiane Rouibia
Thierry Moudenc
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA
Publication of EP1789953A1
Application granted
Publication of EP1789953B1
Status: Active

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L13/06 - Elementary speech units used in speech synthesisers; Concatenation rules
    • G10L13/07 - Concatenation rules

Definitions

  • the present invention relates to a method for selecting acoustic units corresponding to acoustic embodiments of symbolic units.
  • These acoustic units contain natural speech signals and each comprise a plurality of symbolic parameters representing acoustic characteristics.
  • Such selection methods are used, for example, in the context of speech synthesis.
  • Each symbolic unit may be associated with a subset of natural speech segments, or acoustic units, such as phones, diphones or the like; representing variations of pronunciation of the symbolic unit.
  • a so-called corpus approach makes it possible to define, for the same symbolic unit, a corpus of acoustic units of variable size and parameters, recorded in different linguistic contexts and according to different prosodic variants.
  • in order to allow automatic processing of these acoustic units, each comprises a plurality of symbolic parameters representing acoustic characteristics allowing its representation in mathematical form.
  • This type of method generally requires a preliminary phase of learning or determination of contextual acoustic models, including the determination of probabilistic models, for example, of the type called hidden Markov models or HMM, then their classification according to their symbolic parameters which take into account their phonetic context. Contextual acoustic models are thus determined in the form of mathematical laws.
  • the classification is used to perform a preselection of acoustic units according to their symbolic parameters.
  • Final selection typically involves cost functions based on a cost attributed to each concatenation between two acoustic units as well as a cost attributed to the use of each unit.
  • the object of the present invention is to solve this problem by defining a high-performance method of selecting acoustic units using a finite set of contextual acoustic models.
  • the method of the invention makes it possible to take into account spectrum, energy and duration information at the time of selection, thus allowing a reliable and good quality selection.
  • the invention also relates to a device for selecting acoustic units, as defined in claim 20, corresponding to acoustic embodiments of symbolic units of a phonological nature, this device comprising means adapted to the implementation of a selection method as defined above; and a device for synthesizing a speech signal, remarkable in that it includes means adapted to the implementation of such a selection method.
  • the present invention also relates to a computer program on an information carrier as defined by claim 22, this program comprising instructions adapted to the implementation of a method for selecting acoustic units according to the invention, when the program is loaded and executed in a computer system.
  • the figure 1 represents a general process flow diagram of the invention implemented as part of a speech synthesis method.
  • the steps of the acoustic unit selection method according to the invention are determined by the instructions of a computer program used for example in a voice synthesis device.
  • the method according to the invention is then implemented when the aforementioned program is loaded into computer means incorporated in the device in question, and whose operation is then controlled by the execution of the program.
  • computer program herein refers to one or more computer programs forming a set (software) whose purpose is the implementation of the invention when it is executed by an appropriate computer system.
  • the invention also relates to such a computer program, particularly in the form of software stored on an information carrier.
  • an information carrier may be constituted by any entity or device capable of storing a program according to the invention.
  • the medium in question may comprise a hardware storage means, such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or a magnetic recording means, for example a hard disk.
  • the information carrier may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the method in question.
  • the information medium can also be a transmissible immaterial medium, such as an electrical or optical signal that can be conveyed via an electrical or optical cable, by radio or by other means.
  • a program according to the invention can in particular be downloaded to an Internet type network.
  • a computer program according to the invention can use any programming language and be in the form of source code, object code, or intermediate code between source code and object code (for example, a partially compiled form), or in any other form desirable for implementing a method according to the invention.
  • the selection method firstly comprises a preliminary step 2 for determining contextual acoustic models, implemented from a given set of acoustic units contained in a database 3.
  • This determination step 2 is also called learning and makes it possible to define mathematical laws representing the acoustic units, which each contain a natural speech signal and symbolic parameters representing their acoustic characteristics.
  • following step 2 of determining contextual acoustic models, the method comprises a step 4 of determining at least one target sequence of symbolic units of a phonological nature.
  • this target sequence is unique and corresponds to a text to be synthesized.
  • the method then comprises a step 5 of determining a sequence of contextual acoustic models, as obtained from the previous step 2, and corresponding to the target sequence.
  • the method further comprises a step 6 of determining an acoustic mask from said sequence of contextual acoustic models. This mask corresponds to the most likely spectrum and energy parameters given the sequence of contextual acoustic models previously determined.
  • Step 6 of determining an acoustic mask is followed by a step 7 of selection of acoustic units according to this acoustic mask applied to the target sequence of symbolic units.
  • the acoustic units selected come from a given set of acoustic units for speech synthesis, formed of a database 8 identical to or different from the database 3.
  • the method comprises a step 9 for synthesizing a voice signal from the selected acoustic units and the database 8, so as to reconstruct a voice signal from each natural speech signal contained in the selected acoustic units.
  • the method makes it possible, in particular by virtue of the determination and use of the acoustic mask, to have optimum control of the acoustic parameters of the signal generated by reference to the template.
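The chain of steps described above can be summarized as a structural sketch. Every function below is a toy stub; names, signatures and placeholder behaviors are illustrative assumptions, not taken from the patent.

```python
# Structural sketch of the pipeline of figure 1 (steps 4 to 9).
# All stage functions are toy stubs with illustrative behavior.

def determine_target_sequence(text):
    # step 4: text -> target sequence of symbolic units
    return list(text)  # toy: one "phoneme" per character

def determine_model_sequence(target, models):
    # step 5: one contextual acoustic model per symbolic unit
    return [models.get(u, "default-model") for u in target]

def determine_acoustic_mask(model_seq, frames_per_model=3):
    # step 6: most likely spectrum/energy frames given each model
    return [(m, i) for m in model_seq for i in range(frames_per_model)]

def select_units(mask, target, unit_db):
    # step 7: pick one acoustic unit per symbolic unit (stubbed lookup)
    return [unit_db.get(u, u) for u in target]

def synthesize(text, models, unit_db):
    target = determine_target_sequence(text)
    model_seq = determine_model_sequence(target, models)
    mask = determine_acoustic_mask(model_seq)
    return select_units(mask, target, unit_db)  # step 9 would concatenate the signals

print(synthesize("ab", {"a": "A-model"}, {"a": "a.wav"}))
```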
  • Step 2 of determining the acoustic models is conventional. It is implemented from the database 3 containing a finite number of symbolic units of a phonological nature as well as the associated speech signals and phonetic transcriptions. The set of acoustic units is divided into subsets, each comprising all the acoustic units corresponding to the different realizations of the same symbolic unit.
  • Step 2 begins with a substep 22 for determining, for each symbolic unit, a probabilistic model which, in the embodiment described, is a hidden Markov model with discrete states, commonly referred to as HMM (Hidden Markov Model).
  • These models have three states and are defined, for each state, by a Gaussian law of mean μ and covariance Σ which models the distribution of observations, and by probabilities of state retention and transition to the other states of the model.
  • the parameters constituting an HMM model are therefore the parameters of mean and covariance of the Gaussian laws of the different states and the transition matrix grouping the different transition probabilities between the states.
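The parameter set named above (per-state Gaussian means and covariances plus a transition matrix) can be held in a small container. This is a hedged sketch with illustrative dimensions (12 MFCC plus energy, with first and second derivatives, giving 39), not the patent's implementation.

```python
import numpy as np

# Sketch of a three-state left-to-right HMM parameter container.
# Dimensions and initial values are illustrative.

class ThreeStateHMM:
    def __init__(self, dim, n_states=3):
        self.means = np.zeros((n_states, dim))          # Gaussian means
        self.covs = np.stack([np.eye(dim)] * n_states)  # Gaussian covariances
        self.trans = np.zeros((n_states, n_states))     # transition matrix
        for i in range(n_states):
            if i + 1 < n_states:
                self.trans[i, i] = 0.5      # state retention
                self.trans[i, i + 1] = 0.5  # transition to the next state
            else:
                self.trans[i, i] = 1.0      # final state absorbs

hmm = ThreeStateHMM(dim=39)
print(hmm.means.shape, hmm.trans.sum(axis=1))
```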
  • these probabilistic models are derived from a finite alphabet of models comprising, for example, 36 different models that describe the probability of acoustic realization of symbolic units of a phonological nature.
  • the discrete models each comprise an observable random process corresponding to the acoustic realization of symbolic units and an unobservable random process designated Q and having known probabilistic properties called "Markov properties" according to which the realization of the future state of a random process depends only on the present state of this process.
  • each natural speech signal contained in an acoustic unit is analyzed asynchronously, for example with a fixed step of 5 milliseconds and a window of 10 milliseconds.
  • For each window centered on an analysis instant t, twelve Mel-frequency cepstral coefficients (MFCC) as well as the energy, and their first and second derivatives, are obtained.
  • The vector c t, comprising the cepstral coefficient and energy values, is called the spectrum and energy vector; the vector o t comprises c t and its first and second derivatives.
  • the vector o t is called the acoustic vector of the instant t and comprises the spectrum and energy information of the natural speech signal analyzed.
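A minimal sketch of this analysis, assuming a 16 kHz signal: the 5 ms step / 10 ms window framing and the stacking of the static vector c t with its first and second differences follow the text, but real MFCC extraction is replaced here by a toy log-energy feature.

```python
import numpy as np

# Sketch of fixed-step frame analysis and delta stacking.
# The "c_t" feature below is a toy log-energy, not real MFCC.

def frame_signal(x, rate, step_ms=5, win_ms=10):
    step = int(rate * step_ms / 1000)
    win = int(rate * win_ms / 1000)
    return np.array([x[i:i + win] for i in range(0, len(x) - win + 1, step)])

def observation_vectors(c):
    # c: (T, D) static vectors; derivatives approximated by finite differences
    d1 = np.gradient(c, axis=0)
    d2 = np.gradient(d1, axis=0)
    return np.hstack([c, d1, d2])  # o_t = [c_t, delta c_t, delta-delta c_t]

rate = 16000
x = np.sin(2 * np.pi * 440 * np.arange(rate // 10) / rate)  # 100 ms test tone
frames = frame_signal(x, rate)
c = np.log(np.sum(frames ** 2, axis=1, keepdims=True) + 1e-9)  # toy "c_t"
o = observation_vectors(c)
print(frames.shape, o.shape)
```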
  • each symbolic unit or phoneme is associated with an HMM model, namely a left-right three-state model that models the distribution of observations.
  • step 2 also comprises a substep 24 of determining probabilistic models adapted to the phonetic context.
  • this substep 24 corresponds to the learning of HMM models of the so-called triphone type.
  • the phoneme represents in phonology the division of words into linguistic subunits.
  • a phone refers to an acoustic realization of a phoneme. Acoustic realizations of phonemes are different according to the speech context. For example, depending on the phonetic context, coarticulation phenomena are observed to a greater or lesser extent. Similarly, depending on the prosodic context, differences in acoustic realization can appear.
  • a classical method of adaptation to the phonetic context takes into account the left and right contexts, which resulted in so-called triphone modeling.
  • During the learning of HMM models, for each triphone present in the database, the parameters of the Gaussian laws relating to each state are re-estimated from the representatives of this triphone.
  • Step 2 then comprises a substep 26 of classification of the probabilistic models according to their symbolic parameters in order to group within the same class, the models having acoustic similarities.
  • Such a classification can be obtained for example by the construction of decision trees.
  • a decision tree is constructed for each state of each HMM model.
  • the construction is performed by repeated divisions of the natural speech segments of the acoustic units of the set concerned, these divisions being operated on the symbolic parameters.
  • a criterion relating to the symbolic parameters is applied to separate the different acoustic units corresponding to the acoustic realizations of the same phoneme. Subsequently, a calculation of the likelihood variation between the parent node and the child node is performed, this calculation being made from the parameters of the previously determined triphone models, in order to take into account the phonetic context.
  • the separation criterion leading to the maximum likelihood increase is retained and separation is effectively accepted if this likelihood increase exceeds a fixed threshold and the number of representatives present in each of the child nodes is sufficient.
  • This operation is repeated on each branch until a stop criterion stops the classification giving rise to the generation of a leaf of the tree or a class.
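The split-acceptance rule described above (keep the question with the largest likelihood gain, subject to a gain threshold and a minimum number of representatives per child node) can be sketched as follows, using a single diagonal-covariance Gaussian per node for simplicity; the threshold and count values are illustrative.

```python
import numpy as np

# Sketch of one decision-tree split decision: accept a candidate
# question only if the Gaussian log-likelihood gain exceeds a
# threshold and both child nodes keep enough representatives.

def node_loglik(x):
    mu = x.mean(axis=0)
    var = x.var(axis=0) + 1e-6  # variance floor to avoid division by zero
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)))

def accept_split(x, answers, threshold=1.0, min_count=2):
    left, right = x[answers], x[~answers]
    if len(left) < min_count or len(right) < min_count:
        return False
    gain = node_loglik(left) + node_loglik(right) - node_loglik(x)
    return gain > threshold

data = np.array([[0.0], [0.1], [10.0], [10.1]])
print(accept_split(data, np.array([True, True, False, False])))  # cleanly separates the clusters
```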
  • Each of the leaves of the tree of a model state is associated with a single Gaussian law of mean μ and covariance Σ, which characterizes the representatives of this leaf and which forms the parameters of this state for a contextual acoustic model.
  • a contextual acoustic model can therefore be defined for each HMM model by traversing, for each state of the HMM model, the associated decision tree in order to assign a class to this state and to modify the parameters of mean and covariance of its Gaussian law for contextual adaptation.
  • the different symbolic units corresponding to the different realizations of the same phoneme are thus represented by the same HMM model and by different contextual acoustic models.
  • a contextual acoustic model is defined as being an HMM model whose non-observable process has as transition matrix that of the phoneme model resulting from step 22, and in which, for each state, the mean and the covariance matrix of the observable process are those of the class obtained by traversing the decision tree corresponding to this state of this phoneme.
  • step 4 of determining a target sequence of symbolic units is performed.
  • This step 4 comprises firstly a substep 42 of acquiring a symbolic representation of a given text to be synthesized, such as a graphical or orthographic representation.
  • this graphical representation is a text written using the Latin alphabet designated by the reference TXT on the figure 3 .
  • the method then comprises a substep 44 for determining a sequence of symbolic units of a phonological nature from the graphical representation.
  • This sequence of symbolic units identified by the UP reference on the figure 3 is, for example, composed of phonemes extracted from a phonetic alphabet.
  • This substep 44 is performed automatically by means of conventional techniques of the state of the art such as phonetization or other.
  • this sub-step 44 implements an automatic phonetization system using databases and making it possible to decompose any text on a finite symbolic alphabet.
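A toy sketch of this decomposition onto a finite symbolic alphabet via a lookup table. A real phonetization system relies on pronunciation databases and rules; the mini-lexicon and its phoneme symbols here are purely illustrative.

```python
# Toy sketch of sub-step 44: text -> sequence of symbolic units.
# LEXICON is a hypothetical one-entry pronunciation table.

LEXICON = {"bonjour": ["b", "on", "j", "ou", "r"]}

def phonetize(text):
    phonemes = []
    for word in text.lower().split():
        # fall back to letter-by-letter decomposition for unknown words
        phonemes.extend(LEXICON.get(word, list(word)))
    return phonemes

print(phonetize("Bonjour"))
```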
  • the method comprises step 5 of determining a sequence of contextual acoustic models corresponding to the target sequence.
  • This step first comprises a substep 52 of modeling the target sequence by its decomposition on the basis of probabilistic models, more precisely the probabilistic hidden Markov models designated HMM determined during step 2.
  • the sequence of probabilistic models thus obtained is referenced H 1 M and comprises the models H 1 to H M selected from 36 models of the finite alphabet and corresponds to the target sequence UP.
  • the method then comprises a sub-step 54 for forming contextual acoustic models by modifying parameters of the models of the sequence of the H 1 M models to form an A 1 M sequence of contextual acoustic models.
  • This training is performed by traversing, for each state of each model of the H 1 M sequence, the decision trees.
  • Each state of each model is modified and takes the mean and covariance values of the leaf whose symbolic parameters correspond to those of the target.
  • the A 1 M sequence of contextual acoustic models is therefore a sequence of hidden Markov models whose average and covariance parameters have been adapted to the phonetic context.
  • the method then comprises step 6 of determining an acoustic mask.
  • This step 6 comprises a substep 62 for determining the temporal importance of each contextual acoustic model, by allocating, for each contextual acoustic model, a corresponding number of temporal units, a substep 64 of determining a temporal sequence of models, and a substep 66 of determining a corresponding sequence of acoustic frames forming the acoustic mask.
  • the sub-step 62 for determining the temporal importance of each contextual acoustic model includes predicting the duration of each state of the contextual acoustic models.
  • This substep 62 receives as input the A 1 M sequence of acoustic models, including mean, covariance, and Gaussian density information for each state and transition matrices, as well as a duration value for each model state.
  • an average duration is defined for each class and the classification of a state in a class results in the allocation of this average duration to this state.
  • a duration prediction model such as exists in the state of the art, in particular for assigning each phoneme a desired duration value, is used to assign a duration to the different states of the sequence A 1 M of contextual acoustic models.
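Sub-step 62 can be sketched as a conversion of predicted per-state durations into counts of 5 ms analysis frames, the step used in the example above; the duration values below are illustrative.

```python
# Sketch of sub-step 62: per-state duration -> number of 5 ms frames.

FRAME_STEP_MS = 5

def frames_per_state(state_durations_ms):
    # at least one frame per state, otherwise the nearest whole frame count
    return [max(1, round(d / FRAME_STEP_MS)) for d in state_durations_ms]

print(frames_per_state([40, 62, 23]))  # -> [8, 12, 5]
```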
  • N is the total number of frames to be synthesized;
  • A = [ a 1 , a 2 , ..., a N ] is the sequence of the contextual acoustic models; and
  • Q = [ q 1 , q 2 , ..., q N ] is the corresponding sequence of states.
  • the sequence A is a temporal sequence of models, formed of the contextual acoustic models of the sequence A 1 M , each duplicated several times according to its temporal importance as represented on the figure 3 .
  • the sequence of observations o t is completely defined by its static part c t, formed of the spectrum and energy vector, the dynamic part being directly deduced therefrom.
  • the acoustic mask therefore corresponds to the most likely sequence of spectrum and energy vectors given the sequence of contextual acoustic models.
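Because the most likely observation of a Gaussian state is its mean, the mask can be sketched as each state's mean spectrum-and-energy vector repeated over the frames allotted to that state; shapes and values below are illustrative.

```python
import numpy as np

# Sketch of the acoustic mask: repeat each state's mean
# spectrum-and-energy vector over its allotted frames.

def acoustic_mask(state_means, state_frame_counts):
    # state_means: (S, D) mean vectors; one repeat count per state
    return np.repeat(np.asarray(state_means), state_frame_counts, axis=0)

means = np.array([[1.0, 0.0], [2.0, 0.5]])
mask = acoustic_mask(means, [2, 3])
print(mask.shape)  # -> (5, 2)
```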
  • the method then goes to step 7 of selecting a sequence of acoustic units.
  • Step 7 begins with a sub-step 72 for determining a reference sequence of symbolic units, noted U.
  • This reference sequence U is formed from the target sequence UP and consists of symbolic units used for the synthesis, which may be different from those forming the target sequence UP.
  • the reference sequence U is formed of phonemes, diphonemes or others.
  • Each symbolic unit of the reference sequence U is associated with a finite set of acoustic units corresponding to different acoustic embodiments.
  • the method comprises a substep 74 of segmentation of the acoustic mask according to the reference sequence U.
  • the method of the invention is applicable to any type of acoustic units, the substep 74 of segmentation making it possible to adapt the acoustic mask to different types of units.
  • This segmentation is a decomposition of the acoustic mask on a basis of time units corresponding to the types of acoustic units used.
  • this segmentation corresponds to the grouping of the frames of the acoustic template C in segments of a duration close to that of the units of the reference sequence U, which correspond to the acoustic units used for the synthesis.
  • These segments are noted s i on the figure 3 .
  • the selection step 7 comprises a preselection sub-step 76 making it possible to define, for each symbolic unit U i of the reference sequence U, a subset E i of candidate acoustic units, as represented on the figure 3.
  • This preselection is carried out conventionally, for example according to the symbolic parameters of the acoustic units.
  • the method further comprises a sub-step 78 of aligning the acoustic mask with each possible sequence of acoustic units from the preselected candidate units to make the final selection.
  • each candidate acoustic unit is compared to the segments of the corresponding template by means of an alignment algorithm, such as for example a so-called DTW (Dynamic Time Warping) algorithm.
  • This DTW algorithm aligns each acoustic unit with the corresponding template segment to compute an overall distance between them, equal to the sum of the local distances on the alignment path, divided by the number of frames of the shortest segment.
  • the overall distance thus defined makes it possible to determine a relative distance of duration between the compared signals.
  • the local distance used is the Euclidean distance between the spectrum and energy vectors comprising the MFCC coefficients and the energy information.
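The alignment of sub-step 78 can be sketched with a standard DTW: a Euclidean local distance between spectrum-and-energy vectors, and a global distance normalized by the number of frames of the shortest segment, as described above. The example sequences are illustrative.

```python
import numpy as np

# Sketch of sub-step 78: DTW between a candidate unit and the
# corresponding mask segment, with Euclidean local distance and
# normalization by the shortest segment length.

def dtw_distance(a, b):
    # a: (n, D) and b: (m, D) frame sequences
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local Euclidean distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / min(n, m)  # normalize by the shortest segment

seg = np.array([[0.0], [1.0], [2.0]])
print(dtw_distance(seg, seg))  # identical sequences -> 0.0
```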
  • the method of the invention makes it possible to obtain a sequence of acoustic units that are optimally selected, thanks to the use of the acoustic mask.
  • the selection step 7 is followed by a synthesis step 9, which comprises a substep 92 of recovery, for each selected acoustic unit, of a natural speech signal in the database 8, a substep 94 of signal smoothing and a sub-step 96 of concatenation of the different natural speech signals to deliver the final synthesized signal.
  • a prosodic modification algorithm, such as for example the algorithm known as TD-PSOLA, is used during the synthesis step, in a sub-step of prosodic modification.
  • the hidden Markov models are models whose unobservable processes take discrete values.
  • the method can also be realized with models whose unobservable processes take continuous values.
  • this technique relies on the use of language models intended to weight the various hypotheses by their probability of appearing in the symbolic universe.
  • the MFCC spectral parameters used in the example described can be replaced by other types of parameters, such as so-called Line Spectral Frequencies (LSF) parameters, Linear Prediction Coefficients (LPC) parameters or parameters related to formants.
  • the method may also use other characteristic information of the voice signals, such as fundamental frequency information or voice quality information, especially during the steps of determining the contextual acoustic models, template determination and selection.


Abstract

A method for selecting acoustic units each of which contains a natural speech signal and symbolic parameters involves a stage ( 4 ) for determining at least one target symbolic unit sequence; a stage ( 5 ) for determining a contextual acoustic model sequence corresponding to the target sequence; a stage ( 6 ) for determining an acoustic template on the basis of the contextual acoustic model sequence and a stage ( 7 ) for selecting the acoustic unit sequence according to the acoustic template applied to the target symbolic unit sequence. The invention is used for voice synthesis.

Description

The present invention relates to a method for selecting acoustic units corresponding to acoustic embodiments of symbolic units. These acoustic units contain natural speech signals and each comprise a plurality of symbolic parameters representing acoustic characteristics.

Such selection methods are used, for example, in the context of speech synthesis.

In general, it is possible to decompose a spoken language on a finite basis of symbolic units of a phonological nature, such as phonemes or others, allowing the vocalization of any textual utterance.

Each symbolic unit may be associated with a subset of natural speech segments, or acoustic units, such as phones, diphones or the like, representing variations of pronunciation of the symbolic unit.

Indeed, a so-called corpus approach makes it possible to define, for the same symbolic unit, a corpus of acoustic units of variable size and parameters, recorded in different linguistic contexts and according to different prosodic variants.

There is then a problem of selecting these units according to the context of use, in order to minimize discontinuities at the instants of concatenation and to limit the use of prosodic modification algorithms.

In order to allow automatic processing of these acoustic units, each comprises a plurality of symbolic parameters representing acoustic characteristics allowing its representation in mathematical form.

There are methods for selecting acoustic units, particularly in the context of speech synthesis methods, which use a finite number of contextual acoustic models to model a target sequence of symbolic units and make a selection.

An example of such a synthesis method is described in particular in the documents entitled "The IBM Trainable Speech Synthesis System" published by Donovan R.E. and Eide E.M., Proc. ICSLP, Sydney, 1998, "Automatically Clustering Similar Units for Unit Selection in Speech Synthesis" published by Black A.W. and Taylor P., Proc. Eurospeech, pp. 601-604, 1997, or the patents US 5970453 and GB 2313530.

This type of method generally requires a preliminary phase of learning or determination of contextual acoustic models, including the determination of probabilistic models, for example of the type called hidden Markov models or HMM, then their classification according to their symbolic parameters, which possibly take into account their phonetic context. Contextual acoustic models are thus determined in the form of mathematical laws.

The classification is used to perform a preselection of acoustic units according to their symbolic parameters.

Final selection typically involves cost functions based on a cost attributed to each concatenation between two acoustic units as well as a cost attributed to the use of each unit.

However, the determination and prioritization of these costs are done in an approximate way and require the intervention of a human expert.

As a result, the selection made is not optimal and there is little control over the quality of the synthesized signal, making it impossible to evaluate its quality a priori.

The object of the present invention is to solve this problem by defining a high-performance method of selecting acoustic units using a finite set of contextual acoustic models.

A cet effet, la présente invention telle que définie par la revendication 1 a pour objet un procédé de sélection d'unités acoustiques correspondant à des réalisations acoustiques d'unités symboliques de nature phonologique, lesdites unités acoustiques contenant chacune un signal de parole naturelle et des paramètres symboliques représentant leurs caractéristiques acoustiques, ledit procédé comportant :

  • une étape de détermination d'au moins une séquence cible d'unités symboliques ; et
  • une étape de détermination d'une séquence de modèles acoustiques contextuels correspondant à ladite séquence cible,
caractérisé en ce qu'il comporte en outre :
  • une étape de détermination d'un gabarit acoustique à partir de ladite séquence de modèles acoustiques contextuels ; et
  • une étape de sélection d'une séquence d'unités acoustiques en fonction dudit gabarit acoustique appliqué à ladite séquence cible d'unités symboliques.
For this purpose, the present invention as defined by claim 1 relates to a method of selecting acoustic units corresponding to acoustic realizations of symbolic units of a phonological nature, said acoustic units each containing a natural speech signal and symbolic parameters representing their acoustic characteristics, said method comprising:
  • a step of determining at least one target sequence of symbolic units; and
  • a step of determining a sequence of contextual acoustic models corresponding to said target sequence,
characterized in that it further comprises:
  • a step of determining an acoustic template from said sequence of contextual acoustic models; and
  • a step of selecting a sequence of acoustic units according to said acoustic template applied to said target sequence of symbolic units.

Grâce à l'utilisation d'un gabarit acoustique, le procédé de l'invention permet de prendre en compte des informations de spectre, d'énergie et de durée au moment de la sélection, permettant ainsi une sélection fiable et de bonne qualité.Thanks to the use of an acoustic template, the method of the invention makes it possible to take spectrum, energy and duration information into account at the time of selection, thus allowing a reliable, good-quality selection.

Suivant d'autres caractéristiques de l'invention :

  • Le procédé comporte une étape préalable de détermination de modèles acoustiques contextuels, mise en oeuvre à partir d'un ensemble donné d'unités acoustiques ;
  • ladite étape de détermination de modèles acoustiques contextuels comprend :
    • une sous-étape de détermination, pour chaque unité acoustique, d'un modèle probabiliste issu d'un répertoire fini de modèles comportant chacun un processus aléatoire observable correspondant à la réalisation acoustique d'unités symboliques, et un processus aléatoire non observable possédant des propriétés probabilistes connues dites « propriétés de Markov » ;
    • une sous-étape de classification desdits modèles probabilistes en fonction de leurs paramètres symboliques,
    les processus aléatoires observables et non observables des modèles de chaque classe formant lesdits modèles acoustiques contextuels ;
  • ladite étape de détermination des modèles acoustiques contextuels comprend en outre une sous-étape de détermination de modèles probabilistes adaptés au contexte phonétique dont les paramètres sont utilisés au cours de ladite sous-étape de classification ;
  • ladite sous-étape de classification comporte une classification par arbres de décision, les paramètres desdits modèles probabilistes étant modifiés par le parcours desdits arbres de décision pour former lesdits modèles acoustiques contextuels ;
  • ladite étape de détermination d'au moins une séquence cible d'unités symboliques comprend :
    • une sous-étape d'acquisition d'une représentation symbolique d'un texte ; et
    • une sous-étape de détermination d'au moins une séquence d'unités symboliques à partir de ladite représentation symbolique ;
  • ladite étape de détermination d'une séquence de modèles acoustiques contextuels, comprend :
    • une sous-étape de modélisation de ladite séquence cible par sa décomposition sur une base de modèles probabilistes afin de délivrer une séquence de modèles probabilistes correspondant à ladite séquence cible ; et
    • une sous-étape de formation des modèles acoustiques contextuels par modification de paramètre desdits modèles probabilistes pour former ladite séquence de modèles acoustiques contextuels ;
  • ladite étape de détermination d'un gabarit acoustique comprend :
    • une sous-étape de détermination de l'importance temporelle de chaque modèle acoustique contextuel ;
    • une sous-étape de détermination, d'une séquence temporelle de modèles; et
    • une sous-étape de détermination d'une séquence de trames acoustiques correspondantes formant ledit gabarit acoustique ;
  • ladite sous-étape de détermination de l'importance temporelle de chaque modèle acoustique contextuel comprend la prédiction de sa durée ;
  • ladite étape de sélection d'une séquence d'unités acoustiques comprend :
    • une sous-étape de détermination d'une séquence référence d'unités symboliques à partir de ladite séquence cible, chaque unité symbolique de la séquence référence étant associée à un ensemble d'unités acoustiques ; et
    • une sous-étape d'alignement entre les unités acoustiques associées à ladite séquence référence et ledit gabarit acoustique ;
  • ladite étape de sélection comprend en outre une sous-étape de segmentation dudit gabarit acoustique en fonction de ladite séquence référence ;
  • ladite sous-étape de segmentation comprend une décomposition dudit gabarit acoustique sur une base d'unités temporelles ;
  • ledit gabarit étant segmenté chaque segment correspond à une unité symbolique de la séquence référence et ladite sous-étape d'alignement comporte l'alignement de chaque segment du gabarit avec chacune des unités acoustiques associées à l'unité symbolique correspondante issue de la séquence référence ;
  • ladite sous-étape d'alignement comprend la détermination d'un alignement optimal tel que déterminé par un algorithme dit "DTW" ;
  • ladite étape de sélection comprend en outre une sous-étape de présélection permettant de déterminer, pour chaque unité symbolique de la séquence référence, des unités acoustiques candidates ladite sous-étape d'alignement formant une sous-étape de sélection finale parmi ces unités candidates ;
  • lesdits modèles acoustiques contextuels sont des modèles probabilistes à processus observables à valeurs continues et à processus non observables à valeurs discrètes formant les états de ce processus ; et
  • lesdits modèles acoustiques contextuels sont des modèles probabilistes à processus non observables à valeurs continues.
According to other features of the invention:
  • The method comprises a preliminary step of determining contextual acoustic models, implemented from a given set of acoustic units;
  • said step of determining contextual acoustic models comprises:
    • a substep of determining, for each acoustic unit, a probabilistic model drawn from a finite repertoire of models, each comprising an observable random process corresponding to the acoustic realization of symbolic units, and an unobservable random process having known probabilistic properties called "Markov properties";
    • a substep of classification of said probabilistic models according to their symbolic parameters,
    the observable and unobservable random processes of the models of each class forming said contextual acoustic models;
  • said step of determining the contextual acoustic models further comprises a substep of determining probabilistic models adapted to the phonetic context whose parameters are used during said substep of classification;
  • said substep of classification includes a classification by decision trees, the parameters of said probabilistic models being modified by the course of said decision trees to form said contextual acoustic models;
  • said step of determining at least one target sequence of symbolic units comprises:
    • a substep of acquiring a symbolic representation of a text; and
    • a substep of determining at least one sequence of symbolic units from said symbolic representation;
  • said step of determining a sequence of contextual acoustic models, comprises:
    • a sub-step of modeling said target sequence by decomposing it on the basis of probabilistic models in order to deliver a sequence of probabilistic models corresponding to said target sequence; and
    • a sub-step of forming contextual acoustic models by parameter modification of said probabilistic models to form said sequence of contextual acoustic models;
  • said step of determining an acoustic template comprises:
    • a substep of determining the temporal importance of each contextual acoustic model;
    • a substep of determining a temporal sequence of models; and
    • a substep of determining a corresponding sequence of acoustic frames forming said acoustic template;
  • said substep of determining the temporal importance of each contextual acoustic model includes predicting its duration;
  • said step of selecting a sequence of acoustic units comprises:
    • a substep of determining a reference sequence of symbolic units from said target sequence, each symbolic unit of the reference sequence being associated with a set of acoustic units; and
    • a substep of alignment between the acoustic units associated with said reference sequence and said acoustic template;
  • said selecting step further comprises a substep of segmenting said acoustic template according to said reference sequence;
  • said segmentation substep comprises decomposing said acoustic template on a basis of temporal units;
  • said template being segmented, each segment corresponds to a symbolic unit of the reference sequence, and said alignment substep comprises aligning each segment of the template with each of the acoustic units associated with the corresponding symbolic unit from the reference sequence;
  • said alignment substep comprises determining an optimal alignment as computed by a so-called "DTW" algorithm;
  • said selecting step further comprises a preselection substep for determining, for each symbolic unit of the reference sequence, candidate acoustic units, said alignment substep forming a final selection substep among these candidate units;
  • said contextual acoustic models are probabilistic models with observable processes with continuous values and unobservable processes with discrete values forming the states of this process; and
  • said contextual acoustic models are probabilistic models with unobservable processes having continuous values.
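The "DTW" algorithm invoked in the alignment substep above is classic dynamic time warping; the following is a minimal sketch of its recurrence, with a squared-difference local cost assumed purely for illustration (the patent does not specify the local cost):

```python
def dtw(a, b):
    """Minimal dynamic time warping between two 1-D sequences.

    Returns the accumulated cost of the optimal (monotonic) alignment,
    using a squared difference as the local cost.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # Classic recurrence: insertion, deletion or match step.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-warped copy aligns perfectly (zero cost):
print(dtw([1.0, 2.0, 3.0], [1.0, 2.0, 2.0, 3.0]))  # → 0.0
```

In the method described above, such a distance would be evaluated between each segment of the acoustic template and each candidate unit, the unit minimizing the alignment cost being selected.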

L'invention concerne également un procédé de synthèse d'un signal de parole, caractérisé en ce qu'il comporte un procédé de sélection tel que décrit précédemment, ladite séquence cible correspondant à un texte à synthétiser et le procédé comportant en outre une étape de synthèse d'une séquence vocale à partir de ladite séquence d'unités acoustiques sélectionnées. Selon d'autres caractéristiques, ladite étape de synthèse comporte :

  • une sous-étape de récupération, pour chaque unité acoustique sélectionnée, d'un signal de parole naturelle ;
  • une sous-étape de lissage des signaux de parole ; et
  • une sous-étape de concaténation des différents signaux de parole naturelle.
The invention also relates to a method for synthesizing a speech signal, characterized in that it comprises a selection method as described above, said target sequence corresponding to a text to be synthesized and the method further comprising a step of synthesizing a voice sequence from said selected acoustic unit sequence. According to other features, said synthesis step comprises:
  • a substep of recovery, for each selected acoustic unit, of a natural speech signal;
  • a substep of smoothing the speech signals; and
  • a sub-step of concatenation of the different natural speech signals.
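These three substeps can be sketched as follows. Signals are represented as plain lists of samples, and the linear crossfade standing in for the smoothing substep is an assumption made for the sketch (real systems typically use pitch-synchronous smoothing):

```python
def synthesize(units, overlap=2):
    """Concatenate the natural speech signals of the selected units,
    smoothing each join with a short linear crossfade (illustrative)."""
    out = list(units[0])
    for sig in units[1:]:
        for k in range(overlap):                 # crossfade over the join
            w = (k + 1) / (overlap + 1.0)
            out[-overlap + k] = (1.0 - w) * out[-overlap + k] + w * sig[k]
        out.extend(sig[overlap:])
    return out

# Samples ramp smoothly from one unit into the next across the join:
print(synthesize([[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]))
```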

Corrélativement, l'invention concerne aussi un dispositif tel que défini dans la revendication 20 de sélection d'unités acoustiques correspondant à des réalisations acoustiques d'unités symboliques de nature phonologique, ce dispositif comportant des moyens adaptés à la mise en oeuvre d'un procédé de sélection tel que défini supra ; ainsi qu'un dispositif de synthèse d'un signal de parole, remarquable en ce qu'il inclut des moyens adaptés à la mise en oeuvre d'un tel procédé de sélection.Correlatively, the invention also relates to a device, as defined in claim 20, for selecting acoustic units corresponding to acoustic realizations of symbolic units of a phonological nature, this device comprising means adapted to the implementation of a selection method as defined above; as well as a device for synthesizing a speech signal, remarkable in that it includes means adapted to the implementation of such a selection method.

La présente invention concerne aussi un programme d'ordinateur sur un support d'informations tel que défini par la revendication 22, ce programme comportant des instructions adaptées à la mise en oeuvre d'un procédé de sélection d'unités acoustiques selon l'invention, lorsque le programme est chargé et exécuté dans un système informatique.The present invention also relates to a computer program on an information carrier as defined by claim 22, this program comprising instructions adapted to the implementation of a method for selecting acoustic units according to the invention, when the program is loaded and executed in a computer system.

Les avantages de ces dispositifs et programme d'ordinateur sont identiques à ceux mentionnés plus haut en relation avec le procédé de sélection d'unités acoustiques de l'invention.The advantages of these devices and computer program are identical to those mentioned above in connection with the method of selecting acoustic units of the invention.

L'invention sera mieux comprise à la lecture de la description qui va suivre, donnée uniquement à titre d'exemple et faite en se référant aux dessins annexés, sur lesquels :

  • la Fig.1 représente un organigramme général d'un procédé de synthèse vocale mettant en oeuvre un procédé de sélection selon l'invention ;
  • la Fig.2 représente un organigramme détaillé du procédé de la Fig.1 ; et
  • la Fig.3 représente le détail de signaux spécifiques au cours du procédé décrit en référence à la Fig.2.
The invention will be better understood on reading the description which follows, given solely by way of example and with reference to the appended drawings, in which:
  • the Fig.1 represents a general flowchart of a speech synthesis method implementing a selection method according to the invention;
  • the Fig.2 represents a detailed flowchart of the process of Fig.1 ; and
  • the Fig.3 represents the detail of specific signals during the process described with reference to the Fig.2 .

La figure 1 représente un organigramme général de procédé de l'invention mis en oeuvre dans le cadre d'un procédé de synthèse vocale.The figure 1 represents a general process flow diagram of the invention implemented as part of a speech synthesis method.

Selon une implémentation préférée, les étapes du procédé de sélection d'unités acoustiques selon l'invention sont déterminées par les instructions d'un programme d'ordinateur utilisé par exemple dans un dispositif de synthèse vocale.According to a preferred implementation, the steps of the acoustic unit selection method according to the invention are determined by the instructions of a computer program used for example in a voice synthesis device.

Le procédé selon l'invention est alors mis en oeuvre lorsque le programme précité est chargé dans des moyens informatiques incorporés dans le dispositif en question, et dont le fonctionnement est alors commandé par l'exécution du programme.The method according to the invention is then implemented when the aforementioned program is loaded into computer means incorporated in the device in question, and whose operation is then controlled by the execution of the program.

On entend ici par "programme d'ordinateur" un ou plusieurs programmes d'ordinateur formant un ensemble (logiciel) dont la finalité est la mise en oeuvre de l'invention lorsqu'il est exécuté par un système informatique approprié.The term "computer program" herein refers to one or more computer programs forming a set (software) whose purpose is the implementation of the invention when it is executed by an appropriate computer system.

En conséquence, l'invention a également pour objet un tel programme d'ordinateur, en particulier sous la forme d'un logiciel stocké sur un support d'informations. Un tel support d'informations peut être constitué par n'importe quelle entité ou dispositif capable de stocker un programme selon l'invention.Accordingly, the invention also relates to such a computer program, particularly in the form of software stored on an information carrier. Such an information carrier may be constituted by any entity or device capable of storing a program according to the invention.

Par exemple, le support en question peut comporter un moyen de stockage matériel, tel qu'une ROM, par exemple un CD ROM ou une ROM de circuit microélectronique, ou encore un moyen d'enregistrement magnétique, par exemple un disque dur. En variante, le support d'informations peut être un circuit intégré dans lequel le programme est incorporé, le circuit étant adapté pour exécuter ou pour être utilisé dans l'exécution du procédé en question.For example, the medium in question may comprise a hardware storage means, such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or a magnetic recording means, for example a hard disk. As a variant, the information carrier may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the method in question.

D'autre part, le support d'informations peut être aussi un support immatériel transmissible, tel qu'un signal électrique ou optique pouvant être acheminé via un câble électrique ou optique, par radio ou par d'autres moyens. Un programme selon l'invention peut être en particulier téléchargé sur un réseau de type Internet.On the other hand, the information medium can also be a transmissible immaterial medium, such as an electrical or optical signal that can be conveyed via an electrical or optical cable, by radio or by other means. A program according to the invention can in particular be downloaded to an Internet type network.

D'un point de vue conception, un programme d'ordinateur selon l'invention peut utiliser n'importe quel langage de programmation et être sous la forme de code source, code objet, ou de code intermédiaire entre code source et code objet (par ex., une forme partiellement compilée), ou dans n'importe quelle autre forme souhaitable pour implémenter un procédé selon l'invention.From a design point of view, a computer program according to the invention can use any programming language and be in the form of source code, object code, or intermediate code between source code and object code (for example, a partially compiled form), or in any other form desirable for implementing a method according to the invention.

De retour à la figure 1, le procédé de sélection selon l'invention comporte tout d'abord une étape 2 préalable de détermination de modèles acoustiques contextuels, mise en oeuvre à partir d'un ensemble donné d'unités acoustiques contenues dans une base de données 3.Returning to Figure 1, the selection method according to the invention firstly comprises a preliminary step 2 of determining contextual acoustic models, implemented from a given set of acoustic units contained in a database 3.

Cette étape 2 de détermination est également appelée apprentissage et permet de définir des lois mathématiques représentant les unités acoustiques qui contiennent chacune un signal de parole naturelle et des paramètres symboliques représentant leurs caractéristiques acoustiques.This determination step 2 is also called training and makes it possible to define mathematical laws representing the acoustic units, which each contain a natural speech signal and symbolic parameters representing their acoustic characteristics.

Le procédé comprend, suite à l'étape 2 de détermination de modèles acoustiques contextuels, une étape 4 de détermination d'au moins une séquence cible d'unités symboliques de nature phonologique. Dans le mode de réalisation décrit cette séquence cible est unique et correspond à un texte à synthétiser.Following step 2 of determining contextual acoustic models, the method comprises a step 4 of determining at least one target sequence of symbolic units of a phonological nature. In the embodiment described, this target sequence is unique and corresponds to a text to be synthesized.

Le procédé comporte ensuite une étape 5 de détermination d'une séquence de modèles acoustiques contextuels, tels qu'issus de l'étape 2 préalable, et correspondant à la séquence cible.The method then comprises a step 5 of determining a sequence of contextual acoustic models, as obtained from the previous step 2, and corresponding to the target sequence.

Le procédé comporte en outre une étape 6 de détermination d'un gabarit acoustique à partir de ladite séquence de modèles acoustiques contextuels. Ce gabarit correspond aux paramètres de spectre et d'énergie les plus probables étant donné la séquence de modèles acoustiques contextuels déterminée précédemment.The method further comprises a step 6 of determining an acoustic template from said sequence of contextual acoustic models. This template corresponds to the most likely spectrum and energy parameters given the sequence of contextual acoustic models determined previously.

L'étape 6 de détermination d'un gabarit acoustique est suivie d'une étape 7 de sélection d'unités acoustiques en fonction de ce gabarit acoustique appliqué à la séquence cible d'unités symboliques.Step 6 of determining an acoustic template is followed by a step 7 of selecting acoustic units according to this acoustic template applied to the target sequence of symbolic units.

Les unités acoustiques sélectionnées sont issues d'un ensemble donné d'unités acoustiques pour la synthèse vocale, formé d'une base de données 8 identique ou différente de la base de données 3.The acoustic units selected come from a given set of acoustic units for speech synthesis, formed of a database 8 identical to or different from the database 3.

Enfin, le procédé comporte une étape 9 de synthèse d'un signal vocal à partir des unités acoustiques sélectionnées et de la base de données 8, de manière à reconstituer un signal vocal à partir de chaque signal de parole naturelle contenu dans les unités acoustiques sélectionnées.Finally, the method comprises a step 9 of synthesizing a voice signal from the selected acoustic units and the database 8, so as to reconstruct a voice signal from each natural speech signal contained in the selected acoustic units.

Ainsi, le procédé permet, notamment grâce à la détermination et à l'utilisation du gabarit acoustique, d'avoir un contrôle optimum des paramètres acoustiques du signal généré par référence au gabarit.Thus, the method makes it possible, in particular by virtue of the determination and use of the acoustic template, to have optimum control of the acoustic parameters of the generated signal by reference to the template.
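The flow of steps 4 to 7 above can be reduced to a toy sketch. Everything below (the phonetization, the per-phoneme model parameters, the frame representation) is an invented stand-in for illustration, not the patent's actual implementation:

```python
# Step 4: target sequence of symbolic units (phonemes) for the text.
def phonetize(text):
    return list(text.replace(" ", ""))   # crude stand-in for phonetization

# Step 5: one "contextual acoustic model" per phoneme; here each model is
# reduced to an expected duration (in frames) and an expected energy value.
MODELS = {ch: {"mean_dur": 3, "mean_energy": 0.5}
          for ch in "abcdefghijklmnopqrstuvwxyz"}

# Step 6: acoustic template = the most likely frame sequence given the models.
def build_template(model_seq):
    frames = []
    for m in model_seq:
        frames += [m["mean_energy"]] * m["mean_dur"]
    return frames

# Step 7: per phoneme, select the database unit closest to its template segment.
def select_units(target, template, unit_db):
    selected, pos = [], 0
    for ph in target:
        dur = MODELS[ph]["mean_dur"]
        segment = template[pos:pos + dur]
        pos += dur
        best = min(unit_db[ph],
                   key=lambda u: sum((a - b) ** 2 for a, b in zip(u, segment)))
        selected.append(best)
    return selected

target = phonetize("ab")
template = build_template([MODELS[ph] for ph in target])
unit_db = {"a": [[0.4, 0.5, 0.6], [0.9, 0.9, 0.9]], "b": [[0.5, 0.5, 0.5]]}
print(select_units(target, template, unit_db))  # → [[0.4, 0.5, 0.6], [0.5, 0.5, 0.5]]
```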

On va maintenant décrire en détail le procédé de l'invention en référence aux figures 2 et 3.The method of the invention will now be described in detail with reference to the figures 2 and 3 .

L'étape 2 de détermination des modèles acoustiques est classique. Elle est mise en oeuvre à partir de la base de données 3 contenant un nombre fini d'unités symboliques de nature phonologique ainsi que les signaux vocaux et transcriptions phonétiques associés. Cet ensemble d'unités symboliques est découpé en ensembles, chacun comprenant toutes les unités acoustiques correspondant aux différentes réalisations d'une même unité symbolique.Step 2 of determining the acoustic models is conventional. It is implemented from the database 3, which contains a finite number of symbolic units of a phonological nature as well as the associated speech signals and phonetic transcriptions. This set of symbolic units is divided into sets, each comprising all the acoustic units corresponding to the different realizations of the same symbolic unit.

L'étape 2 débute par une sous-étape 22 de détermination, pour chaque unité symbolique, d'un modèle probabiliste qui, dans le mode de réalisation décrit, est un modèle de Markov caché à états discrets, couramment désigné HMM (Hidden Markov Model).Step 2 begins with a substep 22 of determining, for each symbolic unit, a probabilistic model which, in the embodiment described, is a hidden Markov model with discrete states, commonly referred to as an HMM (Hidden Markov Model).

Ces modèles comportent trois états et sont définis, pour chaque état, par une loi gaussienne de moyenne µ et de covariance Σ qui modélise la distribution des observations et par des probabilités de maintien dans l'état et de transition vers les autres états du modèle. Les paramètres constituant un modèle HMM sont donc les paramètres de moyenne et de covariance des lois gaussiennes des différents états et la matrice de transition regroupant les différentes probabilités de transition entre les états.These models have three states and are defined, for each state, by a Gaussian law of mean μ and covariance Σ which models the distribution of observations and by probabilities of state retention and transition to the other states of the model. The parameters constituting an HMM model are therefore the parameters of mean and covariance of the Gaussian laws of the different states and the transition matrix grouping the different transition probabilities between the states.
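The parameter set described above (one Gaussian law per state plus a transition matrix) can be laid out as follows. All numerical values are invented for the sketch, and the zeros below the diagonal encode the left-right topology mentioned later in the text:

```python
# Illustrative parameter set for a 3-state left-right HMM: one Gaussian law
# (mean mu, diagonal covariance sigma, an assumption of this sketch) per
# state, plus the transition matrix grouping the self-loop and forward
# transition probabilities. All values are invented.
hmm = {
    "states": [
        {"mu": [0.0, 0.1], "sigma": [1.0, 1.0]},
        {"mu": [0.5, 0.4], "sigma": [0.8, 1.2]},
        {"mu": [0.9, 0.7], "sigma": [1.1, 0.9]},
    ],
    "transitions": [          # left-right topology: no backward transitions
        [0.6, 0.4, 0.0],
        [0.0, 0.7, 0.3],
        [0.0, 0.0, 1.0],
    ],
}
for row in hmm["transitions"]:   # each row must be a probability distribution
    assert abs(sum(row) - 1.0) < 1e-9
print(len(hmm["states"]))  # → 3
```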

De manière classique, ces modèles probabilistes sont issus d'un alphabet fini de modèles comportant par exemple 36 modèles différents qui décrivent la probabilité de réalisation acoustique d'unités symboliques de nature phonologique.Typically, these probabilistic models are derived from a finite alphabet of models comprising, for example, 36 different models that describe the probability of acoustic realization of symbolic units of a phonological nature.

Par ailleurs, les modèles discrets comportent chacun un processus aléatoire observable correspondant à la réalisation acoustique d'unités symboliques et un processus aléatoire non observable désigné Q et possédant des propriétés probabilistes connues dites « propriétés de Markov » selon lesquelles la réalisation de l'état futur d'un processus aléatoire ne dépend que de l'état présent de ce processus.Moreover, the discrete models each comprise an observable random process corresponding to the acoustic realization of symbolic units and an unobservable random process designated Q and having known probabilistic properties called "Markov properties" according to which the realization of the future state of a random process depends only on the present state of this process.

Au cours de la sous-étape 22, chaque signal de parole naturelle contenu dans une unité acoustique est analysé de manière asynchrone avec, par exemple, un pas fixe de 5 millisecondes et une fenêtre de 10 millisecondes. Pour chaque fenêtre centrée sur un instant d'analyse t, douze coefficients cepstraux ou coefficients MFCC (Mel Frequency Cepstral Coefficient) et l'énergie ainsi que leurs dérivées premières et secondes, sont obtenus.During substep 22, each natural speech signal contained in an acoustic unit is analyzed asynchronously with, for example, a fixed step of 5 milliseconds and a window of 10 milliseconds. For each window centered on an analysis instant t, twelve cepstral coefficients, or MFCCs (Mel Frequency Cepstral Coefficients), and the energy, as well as their first and second derivatives, are obtained.

On appelle ct un vecteur de spectre et d'énergie comprenant les coefficients cepstraux ainsi que les valeurs d'énergie, et ot un vecteur comprenant ct et ses dérivées premières et secondes. Le vecteur ot est appelé vecteur acoustique de l'instant t et comprend les informations de spectre et d'énergie du signal de parole naturelle analysé.We call ct a spectrum and energy vector comprising the cepstral coefficients as well as the energy values, and ot a vector comprising ct and its first and second derivatives. The vector ot is called the acoustic vector at instant t and comprises the spectrum and energy information of the analyzed natural speech signal.
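The construction of the acoustic vector ot from ct can be sketched as follows; the central-difference scheme used for the first and second derivatives is one common choice, assumed here for illustration:

```python
def delta(frames):
    """Central-difference derivative of a sequence of equal-length vectors."""
    n = len(frames)
    return [[(frames[min(t + 1, n - 1)][k] - frames[max(t - 1, 0)][k]) / 2.0
             for k in range(len(frames[0]))] for t in range(n)]

def acoustic_vectors(cepstra):
    """cepstra: per-frame ct = 12 MFCCs + energy (13 values each).
    Returns ot = ct concatenated with its first and second derivatives (39 values)."""
    d = delta(cepstra)        # first derivatives
    dd = delta(d)             # second derivatives
    return [c + d1 + d2 for c, d1, d2 in zip(cepstra, d, dd)]

frames = [[float(t)] * 13 for t in range(5)]   # toy linear ramp of 5 frames
o = acoustic_vectors(frames)
print(len(o[2]))  # → 39
```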

Grâce à cette analyse, chaque unité symbolique ou phonème est associée à un modèle HMM, dit modèle gauche droite à trois états qui modélise la distribution des observations.Through this analysis, each symbolic unit, or phoneme, is associated with an HMM, a so-called left-right three-state model, which models the distribution of observations.

L'apprentissage de chacun de ces modèles HMM est réalisé de manière classique à l'aide, par exemple, d'un algorithme dit de Baum-Welch.The training of each of these HMM models is carried out in a conventional manner using, for example, the so-called Baum-Welch algorithm.

En particulier, les propriétés mathématiques connues des modèles de Markov permettent de déterminer la probabilité conditionnelle d'observation de la réalisation acoustique désignée ot, étant donné l'état qt du processus non observable Q, dite probabilité de modèle, notée Pm, et correspondant à :

Pm = P(ot | qt)

In particular, the known mathematical properties of Markov models make it possible to determine the conditional probability of observing the acoustic realization designated ot, given the state qt of the unobservable process Q, the so-called model probability, denoted Pm, and corresponding to:

Pm = P(ot | qt)
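For a state whose observation law is Gaussian, this model probability can be evaluated as follows; the diagonal covariance is an assumption made for the sketch (the text only specifies a Gaussian law of mean µ and covariance Σ):

```python
import math

def log_emission(o, mu, var):
    """log P(o | q) for a state with a diagonal-covariance Gaussian (mu, var)."""
    return sum(-0.5 * (math.log(2.0 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(o, mu, var))

# A standard normal evaluated at its mean: log(1 / sqrt(2*pi))
print(round(log_emission([0.0], [0.0], [1.0]), 4))  # → -0.9189
```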

Avantageusement, l'étape 2 comporte également une sous-étape 24 de détermination de modèles probabilistes adaptés au contexte phonétique.Advantageously, step 2 also comprises a substep 24 of determining probabilistic models adapted to the phonetic context.

Plus précisément, cette sous-étape 24 correspond à l'apprentissage des modèles HMM de type dit triphone.More precisely, this substep 24 corresponds to the learning of HMM models of the so-called triphone type.

En effet, le phonème représente en phonologie le découpage des mots en sous unités linguistiques.Indeed, the phoneme represents in phonology the division of words into linguistic subunits.

Un phone désigne quant à lui une réalisation acoustique d'un phonème. Les réalisations acoustiques des phonèmes sont différentes suivant le contexte d'élocution. Par exemple, en fonction du contexte phonétique, des phénomènes de coarticulation sont observés de manière plus ou moins importante. De même, en fonction du contexte prosodique, des différences de réalisation acoustique peuvent apparaître.A phone refers to an acoustic realization of a phoneme. Acoustic realizations of phonemes are different according to the speech context. For example, depending on the phonetic context, coarticulation phenomena are observed to a greater or lesser extent. Similarly, depending on the prosodic context, differences in acoustic realization can appear.

Une méthode classique d'adaptation au contexte phonétique tient compte des contextes gauche et droit, ce qui aboutit à la modélisation dite par triphone. Lors de l'apprentissage de modèles HMM, pour chaque triphone présent dans la base, les paramètres des lois gaussiennes relatives à chaque état sont réestimés à partir des représentants de ce triphone.A classical method of adaptation to the phonetic context takes the left and right contexts into account, which results in so-called triphone modeling. During the training of HMM models, for each triphone present in the database, the parameters of the Gaussian laws relating to each state are re-estimated from the representatives of this triphone.

Les probabilités de transition entre chaque état des modèles restent cependant inchangées.The probabilities of transition between each state of the models remain however unchanged.

Lorsque le nombre de représentants d'un triphone dans le corpus acoustique est insuffisant, les paramètres du modèle de ce triphone risquent d'être mal estimés. Il est cependant possible de regrouper les phonèmes des contextes gauche et droit en classes pour obtenir des modèles plus génériques dépendants du contexte.When the number of representatives of a triphone in the acoustic corpus is insufficient, the parameters of this triphone's model may be poorly estimated. It is however possible to group the phonemes of the left and right contexts into classes so as to obtain more generic, context-dependent models.

By way of example, different categories of contexts are distinguished, such as plosive, fricative, voiced or unvoiced.
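By way of illustration only, such a grouping of context phonemes into broad classes can be sketched as follows. This is hypothetical Python: the class names, phoneme symbols and label format are invented for the example and do not appear in the description.

```python
# Illustrative fallback from rare triphones to broader context classes.
# Phoneme sets and class names are examples only.
CONTEXT_CLASSES = {
    "plosive": {"p", "t", "k", "b", "d", "g"},
    "fricative": {"f", "s", "S", "v", "z", "Z"},
    "voiced_vowel": {"a", "e", "i", "o", "u"},
}

def context_class(phoneme):
    """Return the broad class of a context phoneme, or the phoneme itself
    if it belongs to no listed class."""
    for name, members in CONTEXT_CLASSES.items():
        if phoneme in members:
            return name
    return phoneme

def generic_triphone(left, center, right):
    """Generic context-dependent label, e.g. 'plosive-a+fricative'."""
    return f"{context_class(left)}-{center}+{context_class(right)}"
```

A triphone with too few instances in the corpus, such as t-a+s, can then be modeled under the more generic label produced by `generic_triphone`.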

Step 2 then comprises a sub-step 26 of classifying the probabilistic models according to their symbolic parameters, in order to group together, within a single class, the models exhibiting acoustic similarities.

Such a classification can be obtained, for example, by constructing decision trees.

A decision tree is constructed for each state of each HMM. The construction is performed by repeatedly dividing the natural-speech segments of the acoustic units of the set concerned, these divisions being based on the symbolic parameters.

At each node of the tree, a criterion relating to the symbolic parameters is applied to separate the different acoustic units corresponding to the acoustic realizations of the same phoneme. A computation of the likelihood variation between the parent node and the child nodes is then performed, this computation being based on the parameters of the previously determined triphone models, in order to take the phonetic context into account. The separation criterion leading to the maximum likelihood increase is retained, and the separation is actually accepted if this likelihood increase exceeds a fixed threshold and if the number of instances present in each of the child nodes is sufficient.

This operation is repeated on each branch until a stopping criterion halts the classification, giving rise to the generation of a leaf of the tree, i.e. a class.

Each leaf of the tree of a state of the model is associated with a single Gaussian distribution with mean μ and covariance Σ, which characterizes the instances of this leaf and which forms the parameters of this state for a contextual acoustic model.
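The split-selection rule described above (retain the question maximizing the likelihood increase; accept it only if the gain exceeds a fixed threshold and both child nodes are sufficiently populated) can be sketched as follows. This is an illustrative Python sketch under the simplifying assumption of a single diagonal-covariance Gaussian per node; the names `node_loglik`, `best_split` and the candidate-question dictionary are introduced here and are not part of the patent.

```python
import numpy as np

def node_loglik(x):
    """Log-likelihood of samples under a single diagonal Gaussian fitted
    to them by maximum likelihood; x has shape (n_samples, dim)."""
    n, d = x.shape
    var = x.var(axis=0) + 1e-8          # diagonal covariance (floored)
    return -0.5 * n * (d * np.log(2 * np.pi) + np.log(var).sum() + d)

def best_split(x, answers, min_count=2, threshold=0.0):
    """Greedy node split: among candidate yes/no questions on the symbolic
    parameters (answers[q] is a boolean mask over the samples), keep the
    question maximizing the likelihood gain, provided the gain exceeds the
    threshold and both child nodes hold at least min_count samples."""
    parent = node_loglik(x)
    best = None
    for question, mask in answers.items():
        if mask.sum() < min_count or (~mask).sum() < min_count:
            continue                    # a child node would be too small
        gain = node_loglik(x[mask]) + node_loglik(x[~mask]) - parent
        if best is None or gain > best[1]:
            best = (question, gain)
    if best is not None and best[1] > threshold:
        return best                     # (question, likelihood gain)
    return None                         # no acceptable split: leaf node
```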

A contextual acoustic model can therefore be defined for each HMM by running through, for each state of the HMM, the associated decision tree, in order to assign a class to this state and to modify the mean and covariance parameters of its Gaussian distribution for adaptation to the context. The different symbolic units corresponding to the different realizations of the same phoneme are thus represented by a single HMM and by different contextual acoustic models.

Thus, for each phoneme characterized by a set of symbolic parameters, a contextual acoustic model is defined as an HMM whose non-observable process has as its transition matrix that of the phoneme model resulting from step 22, and in which, for each state, the mean and covariance matrix of the observable process are those of the class obtained by running through the decision tree corresponding to this state of this phoneme.

Once the contextual acoustic models have been determined, step 4 of determining a target sequence of symbolic units is carried out.

This step 4 first comprises a sub-step 42 of acquiring a symbolic representation of a given text to be synthesized, such as a graphemic or orthographic representation.

For example, this graphemic representation is a text written in the Latin alphabet, designated by the reference TXT in Figure 3.

The method then comprises a sub-step 44 of determining a sequence of symbolic units of a phonological nature from the graphemic representation.

This sequence of symbolic units, identified by the reference UP in Figure 3, is, for example, composed of phonemes taken from a phonetic alphabet.

This sub-step 44 is performed automatically by means of conventional state-of-the-art techniques such as phonetization.

In particular, this sub-step 44 implements an automatic phonetization system using databases, making it possible to decompose any text over a finite symbolic alphabet.
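As a minimal illustration of such a database-driven phonetization, a dictionary lookup mapping words to a finite symbolic alphabet might look as follows. This is toy Python: the lexicon entries and phoneme symbols are invented for the example, and a real system would also handle out-of-vocabulary words by rules.

```python
# Toy lexicon: orthographic words mapped to sequences of symbolic units
# over a finite phonetic alphabet (entries are illustrative only).
LEXICON = {
    "bonjour": ["b", "on", "j", "ou", "r"],
    "monde": ["m", "on", "d"],
}

def phonetize(text):
    """Decompose a graphemic text into a sequence of symbolic units by
    dictionary lookup; raises on out-of-vocabulary words."""
    phonemes = []
    for word in text.lower().split():
        if word not in LEXICON:
            raise KeyError(f"out-of-vocabulary word: {word}")
        phonemes.extend(LEXICON[word])
    return phonemes
```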

The method then comprises step 5 of determining a sequence of contextual acoustic models corresponding to the target sequence. This step first comprises a sub-step 52 of modeling the target sequence by decomposing it over a base of probabilistic models, and more precisely over the base of probabilistic hidden Markov models (HMM) determined during step 2.

The sequence of probabilistic models thus obtained, referenced H_1^M, comprises the models H_1 to H_M selected from among the 36 models of the finite alphabet, and corresponds to the target sequence UP.

The method then comprises a sub-step 54 of forming contextual acoustic models by modifying parameters of the models of the sequence H_1^M, so as to form a sequence A_1^M of contextual acoustic models. This formation is carried out by running through the decision trees for each state of each model of the sequence H_1^M. Each state of each model is modified and takes the mean and covariance values of the leaf whose symbolic parameters correspond to those of the target.

The sequence A_1^M of contextual acoustic models is therefore a sequence of hidden Markov models whose mean and covariance parameters have been adapted to the phonetic context.

The method then comprises step 6 of determining an acoustic template. This step 6 comprises a sub-step 62 of determining the temporal importance of each contextual acoustic model, by allocating to each contextual acoustic model a corresponding number of time units, a sub-step 64 of determining a temporal sequence of models, and a sub-step 66 of determining a corresponding sequence of acoustic frames forming the acoustic template.

More particularly, the sub-step 62 of determining the temporal importance of each contextual acoustic model comprises predicting the duration of each state of the contextual acoustic models. This sub-step 62 receives as input the sequence A_1^M of acoustic models, comprising mean, covariance and Gaussian-density information for each state as well as the transition matrices, together with a duration value for each model state.

Thus, for each contextual acoustic model, it is possible to take the average duration of each of the states of the model.

As a variant, an average duration is defined for each class, and the classification of a state into a class results in this average duration being assigned to that state.

Advantageously, a duration-prediction model such as exists in the state of the art, in particular for assigning a desired duration value to each phoneme, is used to assign a duration to the different states of the sequence A_1^M of contextual acoustic models.

From each phonemic duration instruction d, durations must be determined for each state of a phoneme. To do so, it is necessary to compute, for each contextual acoustic model λ, the relative duration of each state i, denoted $\alpha_i^\lambda$ and given by the following relation:

$$\alpha_i^\lambda = \frac{\tilde d_i^\lambda}{\sum_{j=1}^{J_\lambda} \tilde d_j^\lambda}, \quad \text{with } \tilde d_i^\lambda = \frac{1}{1 - a_{ii}^\lambda},$$

where $a_{ii}^\lambda$ is the a priori probability of remaining in state i, $\tilde d_i^\lambda$ is the average duration of state i of the model λ, and $J_\lambda$ is the number of states of the model λ. The duration of state i of the model λ under consideration is then $d_i^\lambda = \alpha_i^\lambda d$.

Knowing this value $d_i^\lambda$, it is then possible to determine the number of frames of state i for the contextual acoustic model λ under consideration, which corresponds to its temporal importance. The total number of frames to be synthesized follows directly from the temporal importance of each model.

Having determined a sequence of acoustic models and the relative temporal importance of each model, it is possible to generate a temporal sequence of models during sub-step 64. Let N be the total number of frames to be synthesized; Λ = [λ_1, λ_2, ..., λ_N] denotes the sequence of contextual acoustic models and Q = [q_1, q_2, ..., q_N] the corresponding sequence of states.

The sequence Λ is a temporal sequence of models, formed from the contextual acoustic models of the sequence A_1^M, each duplicated several times according to its temporal importance, as represented in Figure 3.
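The duplication of each model according to its temporal importance can be sketched as follows (illustrative Python; `temporal_sequence` is a name introduced for this sketch).

```python
def temporal_sequence(models, n_frames):
    """Duplicate each contextual model according to its temporal
    importance (its number of frames), yielding the frame-level
    sequence Lambda = [lambda_1, ..., lambda_N]."""
    seq = []
    for model, n in zip(models, n_frames):
        seq.extend([model] * n)
    return seq
```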

The template is determined during sub-step 66 by determining the sequence of observations $O = [o_1^T, o_2^T, \ldots, o_N^T]^T$ maximizing $P[O \mid Q, \Lambda]$. In these equations, T denotes the transposition operator.

As indicated above, the observation vector $o_t$ of frame t consists of a static part $c_t = [c_t(1), c_t(2), \ldots, c_t(P)]^T$, P being the number of MFCC coefficients, and a dynamic part $(\Delta c_t, \Delta^2 c_t)$ made up of the first and second derivatives of the MFCC coefficients, whence

$$o_t = \left[ c_t^T, \Delta c_t^T, \Delta^2 c_t^T \right]^T, \quad \text{with } \Delta c_t = \sum_{i=-L_1}^{L_1} w_1(i)\, c_{t+i} \text{ and } \Delta^2 c_t = \sum_{i=-L_2}^{L_2} w_2(i)\, c_{t+i}.$$

Thus, the sequence of observations $o_t$ is completely defined by its static part $c_t$, formed of the spectrum and energy vector, the dynamic part being deduced directly from it.
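The computation of the dynamic part from the static coefficients can be sketched as follows (illustrative Python with NumPy; replicating edge frames is one possible convention, chosen here for the sketch and not specified by the patent).

```python
import numpy as np

def deltas(c, weights):
    """Dynamic coefficients: delta_c_t = sum_{i=-L..L} w(i) c_{t+i},
    computed per coefficient with edge frames replicated.
    `weights` holds w(-L) ... w(L); c has shape (N, P)."""
    L = len(weights) // 2
    padded = np.pad(c, ((L, L), (0, 0)), mode="edge")
    out = np.zeros_like(c, dtype=float)
    for k, w in enumerate(weights):
        out += w * padded[k : k + len(c)]   # contribution of c_{t+k-L}
    return out

def observation_vectors(c, w1, w2):
    """Stack o_t = [c_t, delta c_t, delta^2 c_t] for every frame,
    giving an array of shape (N, 3P)."""
    return np.hstack([c, deltas(c, w1), deltas(c, w2)])
```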

The observation sequence can also be written in matrix form as follows:

$$O = W C,$$

with $C = [c_1, c_2, \ldots, c_N]^T$, $W = [w_1, w_2, \ldots, w_N]^T$, $w_t = [w_t^{(0)}, w_t^{(1)}, w_t^{(2)}]$ and

$$w_t^{(n)} = \left[ 0_{P \times P}, \ldots, 0_{P \times P}, w_n(-L_n) I_{P \times P}, \ldots, w_n(0) I_{P \times P}, \ldots, w_n(L_n) I_{P \times P}, 0_{P \times P}, \ldots, 0_{P \times P} \right]^T, \quad n = 0, 1, 2.$$

Maximizing $P[O \mid Q, \Lambda]$ with respect to O amounts to solving

$$\frac{\partial \log P[O \mid Q, \Lambda]}{\partial C} = 0,$$

with

$$\log P[O \mid Q, \Lambda] = -\tfrac{1}{2}\, O^T U^{-1} O + O^T U^{-1} M + K,$$

$$U^{-1} = \operatorname{diag}\left[ U_{q_1}^{-1}, U_{q_2}^{-1}, \ldots, U_{q_N}^{-1} \right], \quad M = \left[ \mu_{q_1}^T, \mu_{q_2}^T, \ldots, \mu_{q_N}^T \right]^T,$$

where $\mu_{q_t}$ is the vector of means and $U_{q_t}$ the covariance matrix of the state $q_t$, K being a constant independent of the observation vector O. Equation (11) then becomes

$$R C = r, \quad \text{with } R = W^T U^{-1} W \text{ and } r = W^T U^{-1} M.$$

Since R is a matrix of (NP × NP) elements, directly solving the equation RC = r requires of the order of N³P³ operations. Alternatively, to reduce the complexity of the algorithm, a known iterative smoothing procedure may be employed during sub-step 66.

Solving these equations therefore makes it possible to obtain the acoustic template, denoted C, formed of frames, i.e. vectors comprising spectrum and energy information.

The acoustic template therefore corresponds to the most probable sequence of spectrum and energy vectors given the sequence of contextual acoustic models.
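A minimal sketch of this template computation might read as follows. It is illustrative Python with NumPy, assuming P = 1 for brevity and a stream-ordered stacking of O (a row permutation of the frame-ordered stacking above, which leaves the solution C unchanged); the edge handling and the function name are choices of this sketch, and the direct solve stands in for the iterative smoothing procedure mentioned above.

```python
import numpy as np

def generate_template(mu, var, w1, w2):
    """Solve R C = r with R = W^T U^-1 W and r = W^T U^-1 M for the most
    probable static coefficient track C (here P = 1).
    mu, var: per-frame means/variances of the (static, delta, delta^2)
    streams, each of shape (N, 3). w1, w2: delta windows w(-L)..w(L)."""
    N = mu.shape[0]
    # W maps the N static coefficients C to the stacked observation
    # O = [c; delta c; delta^2 c] (3N values, stream-ordered).
    W = np.zeros((3 * N, N))
    W[0:N, 0:N] = np.eye(N)                        # static part
    for row, w in ((1, w1), (2, w2)):              # delta and delta^2 parts
        L = len(w) // 2
        for t in range(N):
            for k, wk in enumerate(w):
                j = min(max(t + k - L, 0), N - 1)  # replicate edges
                W[row * N + t, j] += wk
    U_inv = np.diag(1.0 / var.T.ravel())           # diag[U_q1^-1 ... U_qN^-1]
    M = mu.T.ravel()                               # stacked state means
    R = W.T @ U_inv @ W
    r = W.T @ U_inv @ M
    return np.linalg.solve(R, r)
```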

The method then proceeds to step 7 of selecting a sequence of acoustic units.

Step 7 begins with a sub-step 72 of determining a reference sequence of symbolic units, denoted U. This reference sequence U is formed from the target sequence UP and consists of the symbolic units used for the synthesis, which may be different from those forming the target sequence UP. For example, the reference sequence U is formed of phonemes, diphones or other units.

In the case where the symbolic units used for the synthesis are the same as those used to define the target sequence UP, this sequence is identical to the reference sequence U, so that sub-step 72 is not performed.

Each symbolic unit of the reference sequence U is associated with a finite set of acoustic units corresponding to its different acoustic realizations.

Then, in the embodiment described, the method comprises a sub-step 74 of segmenting the acoustic template according to the reference sequence U.

Indeed, in order to be able to use the acoustic template, it is preferable to segment this template according to the type of acoustic units to be selected.

It should moreover be noted that the method of the invention is applicable to any type of acoustic units, the segmentation sub-step 74 making it possible to adapt the acoustic template to the different types of units.

This segmentation is a decomposition of the acoustic template over a base of time units corresponding to the types of acoustic units used. Thus, this segmentation amounts to grouping the frames of the acoustic template C into segments whose duration is close to that of the units of the reference sequence U, which correspond to the acoustic units used for the synthesis. These segments are denoted s_i in Figure 3.

Advantageously, the selection step 7 comprises a preselection sub-step 76 making it possible to define, for each symbolic unit U_i of the reference sequence U, a subset E_i of candidate acoustic units, as represented in Figure 3.

This preselection is carried out conventionally, for example according to the symbolic parameters of the acoustic units.

The method further comprises a sub-step 78 of aligning the acoustic template with each possible sequence of acoustic units formed from the preselected candidate units, in order to perform the final selection.

More precisely, the parameters of each candidate acoustic unit are compared with the corresponding segments of the template by means of an alignment algorithm, such as, for example, the algorithm known as DTW (Dynamic Time Warping).

This DTW algorithm aligns each acoustic unit with the corresponding template segment in order to compute a global distance between them, equal to the sum of the local distances along the alignment path, divided by the number of frames of the shorter segment. The global distance thus defined makes it possible to determine a distance between the compared signals that is relative to their duration.

In the embodiment described, the local distance used is the Euclidean distance between the spectrum and energy vectors comprising the MFCC coefficients and the energy information.
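The DTW comparison described above (global distance equal to the sum of the local Euclidean distances along the alignment path, divided by the number of frames of the shorter segment) can be sketched as follows in illustrative Python with NumPy.

```python
import numpy as np

def dtw_distance(a, b):
    """Align two sequences of spectrum+energy vectors, of shapes (n, P)
    and (m, P), with dynamic time warping. Returns the sum of local
    Euclidean distances along the best alignment path, divided by the
    number of frames of the shorter segment."""
    n, m = len(a), len(b)
    # Local Euclidean distance between every pair of frames.
    local = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = local[i - 1, j - 1] + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]
            )
    return D[n, m] / min(n, m)
```

The unit whose segments yield the smallest such distance to the template segments is retained for the synthesis.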

Thus, the method of the invention makes it possible to obtain an optimally selected sequence of acoustic units, thanks to the use of the acoustic template.

Finally, in the context of a synthesis method, the selection step 7 is followed by a synthesis step 9, which comprises a sub-step 92 of retrieving, for each selected acoustic unit, a natural speech signal from the database 8, a sub-step 94 of smoothing the signals, and a sub-step 96 of concatenating the different natural speech signals in order to deliver the final synthesized signal.

As a variant, when prosodic instructions for fundamental frequency, duration and energy are provided, a prosodic-modification algorithm, such as, for example, the algorithm known as TD-PSOLA, is used by the synthesis module during a prosodic-modification sub-step.

Finally, in the example described, the hidden Markov models are models whose non-observable processes take discrete values.

However, the method can also be carried out with models whose non-observable processes take continuous values.

It is also possible to use, for each graphemic representation, several sequences of symbolic units, the handling of several symbolic sequences being known from the state of the art.

In general, this technique relies on the use of language models intended to weight the different hypotheses by their probability of occurrence in the symbolic universe.

Moreover, the MFCC spectral parameters used in the example described can be replaced by other types of parameters, such as the parameters known as LSF (Linear Spectral Frequencies), LPC (Linear Prediction Coefficients) parameters, or parameters related to the formants.

The method may also use other information characteristic of speech signals, such as fundamental-frequency or voice-quality information, in particular during the steps of determining the contextual acoustic models, determining the template, and selecting the units.

Claims (22)

  1. Method for selecting acoustic units corresponding to acoustic productions of symbolic units of a phonological nature, said acoustic units each containing a natural speech signal and symbolic parameters representing their acoustic characteristics, said method comprising:
    - a step (4) of determining at least one target sequence (UP) of symbolic units as a function of a text to be synthesized; and
    - a step (5) of determining a sequence (Λ1 M) of contextual acoustic models corresponding to said target sequence (UP),
    characterized in that it also comprises:
    - a step (6) of determining an acoustic template (C) based on said sequence (Λ1 M) of contextual acoustic models, the template corresponding to the sequence of spectral and energy vectors that is most probable given the sequence of contextual acoustic models; and
    - a step (7) of selecting a sequence of acoustic units as a function of said acoustic template applied to said target sequence (UP) of symbolic units.
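The four steps of claim 1 can be pictured with a deliberately tiny sketch (all data structures here are illustrative stand-ins, not the patent's): models are reduced to one mean vector per phoneme, the template is the per-model most probable vector, and selection picks, for each target unit, the corpus unit closest to the template frame.

```python
import numpy as np

# Contextual acoustic models reduced to one mean "spectral" vector per phoneme.
models = {"b": np.array([1.0, 0.0]), "o": np.array([0.0, 1.0])}
# Candidate acoustic units in the corpus: (phoneme, observed vector).
corpus = [("b", np.array([0.9, 0.1])), ("b", np.array([0.2, 0.8])),
          ("o", np.array([0.1, 0.9])), ("o", np.array([0.7, 0.3]))]

target = ["b", "o"]                     # step (4): target symbolic sequence
sequence = [models[u] for u in target]  # step (5): contextual model sequence
template = sequence                     # step (6): most probable vector per model

# Step (7): for each target unit, keep the corpus unit of the right phoneme
# whose observed vector is closest to the corresponding template frame.
selected = []
for phon, frame in zip(target, template):
    candidates = [(np.linalg.norm(vec - frame), i)
                  for i, (p, vec) in enumerate(corpus) if p == phon]
    selected.append(min(candidates)[1])
print(selected)  # [0, 2]: indices of the chosen units
```

The point of the sketch is the data flow: the template computed from the models, not the symbolic units alone, drives the final acoustic-unit choice.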
  2. Method according to Claim 1, characterized in that the method comprises a previous step (2) of determining contextual acoustic models, implemented from a given set of acoustic units.
  3. Method according to Claim 2, characterized in that said step (2) of determining contextual acoustic models comprises:
    - a sub-step (22) of determining, for each acoustic unit, a probabilistic model originating from a finite list of models each comprising an observable random process corresponding to the acoustic production of symbolic units, and a non-observable random process having known probabilistic properties called "Markov properties";
    - a sub-step (26) of classifying said probabilistic models as a function of their symbolic parameters,
    the observable and non-observable random processes of the models of each class forming said contextual acoustic models.
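Claim 3's models pair an observable process (the acoustic observations) with a non-observable Markov chain, i.e. hidden Markov models. The standard forward recursion computes the probability of an observation sequence under such a model; the sketch below uses a generic discrete-output HMM for brevity, whereas the patent's models emit continuous-valued vectors.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: P(observation sequence | HMM) for a discrete-output HMM.

    pi: initial state distribution; A: state-transition matrix;
    B: emission probabilities (states x symbols); obs: observed symbol indices.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by the emission
    return alpha.sum()

pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3], [0.0, 1.0]])   # left-to-right topology, common in speech
B = np.array([[0.9, 0.1], [0.2, 0.8]])
p = forward(pi, A, B, [0, 0, 1])
print(p)  # 0.21897
```

The left-to-right transition matrix (no return to earlier states) mirrors the sequential nature of acoustic productions.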
  4. Method according to Claim 3, characterized in that said step (2) of determining contextual acoustic models also comprises a sub-step (24) of determining probabilistic models adapted to the phonetic context, the parameters of which are used during said classification sub-step (26).
  5. Method according to either one of Claims 3 and 4,
    characterized in that said classification sub-step (26) comprises a classification by decision trees, the parameters of said probabilistic models being modified by running through said decision trees in order to form said contextual acoustic models.
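The decision-tree classification of claim 5 can be pictured as routing each context-dependent model through yes/no questions about its phonetic context until it reaches a leaf; models falling in the same leaf share (tie) their parameters. The questions and tree below are hand-built for illustration; real systems grow the tree automatically, typically by likelihood gain.

```python
# Hypothetical context questions and a tiny hand-built tree (illustration only).
VOWELS = set("aeiou")

def is_vowel(ph):
    return ph in VOWELS

def leaf_for(triphone):
    """Route a (left, centre, right) phonetic context to a leaf label."""
    left, centre, right = triphone
    if is_vowel(centre):                       # question 1: is the centre a vowel?
        return "vowel_after_nasal" if left in ("m", "n") else "vowel_other"
    return "cons_before_vowel" if is_vowel(right) else "cons_other"

# Models reaching the same leaf form one contextual acoustic model.
clusters = {}
for tri in [("m", "a", "t"), ("b", "a", "t"), ("a", "t", "a"), ("a", "t", "s")]:
    clusters.setdefault(leaf_for(tri), []).append(tri)
print(sorted(clusters))
```

Tying through such a tree also covers contexts never seen in training: any triphone can be routed to a leaf and thus to a parameter set.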
  6. Method according to any one of Claims 1 to 5,
    characterized in that said step (4) of determining at least one target sequence (UP) of symbolic units comprises:
    - a sub-step (42) of acquiring a symbolic representation of a text; and
    - a sub-step (44) of determining at least one sequence (UP) of symbolic units based on said symbolic representation.
  7. Method according to any one of Claims 1 to 6,
    characterized in that said step (5) of determining a sequence (Λ1 M) of contextual acoustic models comprises:
    - a sub-step (52) of modelling said target sequence (UP) by its breakdown on a basis of probabilistic models in order to deliver a sequence (H1 M) of probabilistic models corresponding to said target sequence (UP); and
    - a sub-step (54) of forming contextual acoustic models by modifying parameters of said probabilistic models in order to form said sequence (Λ1 M) of contextual acoustic models.
  8. Method according to any one of Claims 1 to 7,
    characterized in that said step (6) of determining an acoustic template (C) comprises:
    - a sub-step (62) of determining the temporal importance of each contextual acoustic model;
    - a sub-step (64) of determining a temporal sequence (A) of models; and
    - a sub-step (66) of determining a sequence of corresponding acoustic frames forming said acoustic template (C).
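The template construction of claim 8 can be sketched as: predict a duration (frame count) for each contextual model, lay the models out in time, then emit each model's most probable observation vector for each of its frames. The two-coefficient "spectral" vectors below are placeholders for illustration.

```python
import numpy as np

# Illustrative models: (predicted duration in frames, most probable frame vector).
sequence = [(3, np.array([1.0, 0.0])),   # e.g. model for the first symbolic unit
            (2, np.array([0.0, 1.0]))]   # e.g. model for the second

# Sub-steps (62)/(64): the durations fix the temporal layout of the model sequence.
# Sub-step (66): repeat each model's vector over its duration to form the template.
template = np.vstack([np.tile(vec, (dur, 1)) for dur, vec in sequence])
print(template.shape)  # (5, 2): five frames of two coefficients
```

In a real system the per-frame vectors would also reflect within-model dynamics (e.g. one distribution per HMM state), but the frame-by-frame stacking is the same idea.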
  9. Method according to Claim 8, characterized in that said sub-step (62) of determining the temporal importance of each contextual acoustic model comprises the prediction of its duration.
  10. Method according to any one of Claims 1 to 9,
    characterized in that said step (7) of selecting a sequence of acoustic units comprises:
    - a sub-step (72) of determining a reference sequence (U) of symbolic units based on said target sequence (UP), each symbolic unit of the reference sequence (U) being associated with a set of acoustic units; and
    - a sub-step (78) of alignment between the acoustic units associated with said reference sequence (U) and said acoustic template (C).
  11. Method according to any one of Claims 1 to 10,
    characterized in that said selection step (7) also comprises a sub-step (74) of segmenting said acoustic template (C) as a function of said reference sequence (U).
  12. Method according to Claim 11, characterized in that said segmentation sub-step (74) comprises a breakdown of said acoustic template (C) on a basis of temporal units.
  13. Method according to Claims 10 and 11 taken together, characterized in that, said template being segmented, each segment corresponds to a symbolic unit of the reference sequence (U) and said alignment sub-step (78) comprises the alignment of each segment of the template (C) with each of the acoustic units associated with the corresponding symbolic unit originating from the reference sequence (U).
  14. Method according to any one of Claims 10 to 13,
    characterized in that said alignment sub-step (78) comprises the determination of an optimal alignment as determined by a "DTW" algorithm.
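The "DTW" (dynamic time warping) alignment named in claim 14 is a standard dynamic-programming technique; a textbook implementation of the optimal-alignment cost between two frame sequences:

```python
import numpy as np

def dtw(x, y):
    """Cost of the optimal monotonic alignment between frame sequences x and y."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

a = np.array([[0.0], [1.0], [2.0]])
b = np.array([[0.0], [1.0], [1.0], [2.0]])
print(dtw(a, b))  # 0.0: b is a time-stretched copy of a
```

Because DTW absorbs timing differences, a candidate acoustic unit can match a template segment well even when their durations differ, which is precisely why it suits the alignment sub-step (78).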
  15. Method according to any one of Claims 10 to 14,
    characterized in that said selection step (7) also comprises a preselection sub-step (76) for determining, for each symbolic unit of the reference sequence (U), candidate acoustic units, said alignment sub-step (78) forming a final selection sub-step amongst these candidate units.
  16. Method according to any one of Claims 1 to 15,
    characterized in that said contextual acoustic models are probabilistic models with observable processes with continuous values and with non-observable processes with discrete values forming the states of this process.
  17. Method according to any one of Claims 1 to 15,
    characterized in that said contextual acoustic models are probabilistic models with non-observable processes with continuous values.
  18. Method for synthesizing a speech signal,
    characterized in that it comprises a selection method according to any one of Claims 1 to 17, said target sequence corresponding to a text to be synthesized and the method also comprising a step (9) of synthesizing a voice sequence based on said sequence of selected acoustic units.
  19. Method according to Claim 18, characterized in that said synthesizing step comprises:
    - a sub-step (92) of retrieving, for each selected acoustic unit, a natural speech signal;
    - a sub-step (94) of smoothing the speech signals; and
    - a sub-step (96) of concatenating the various natural speech signals.
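The smoothing and concatenation sub-steps of claim 19 can be illustrated by a linear crossfade over a short overlap between consecutive unit waveforms. This is only a sketch: real unit-selection synthesizers typically smooth in a parametric domain and align pitch marks at the joins.

```python
import numpy as np

def concatenate(units, overlap):
    """Join unit waveforms with a linear crossfade of `overlap` samples."""
    out = units[0].astype(float)
    fade = np.linspace(0.0, 1.0, overlap)
    for u in units[1:]:
        u = u.astype(float)
        # Smooth the joint: fade the previous unit out while fading the next in.
        mixed = out[-overlap:] * (1.0 - fade) + u[:overlap] * fade
        out = np.concatenate([out[:-overlap], mixed, u[overlap:]])
    return out

a = np.ones(100)    # stand-in for the first retrieved natural speech signal
b = np.zeros(100)   # stand-in for the second
signal = concatenate([a, b], overlap=20)
print(len(signal))  # 180 samples: two 100-sample units minus one 20-sample overlap
```

The crossfade removes the discontinuity a plain splice would leave at the unit boundary, which is the purpose of the smoothing sub-step (94) before concatenation (96).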
  20. Device for selecting acoustic units corresponding to acoustic productions of symbolic units of a phonological nature, characterized in that it comprises means adapted to the application of a selection method according to any one of Claims 1 to 17.
  21. Device for synthesizing a speech signal,
    characterized in that it includes means adapted to the application of a selection method according to any one of Claims 1 to 17.
  22. Computer program on a data medium, characterized in that it comprises instructions adapted to implement a selection method according to any one of Claims 1 to 17, when the program is loaded into and executed in a computer system.
EP05798354A 2004-09-16 2005-08-30 Method and device for selecting acoustic units and a voice synthesis device Active EP1789953B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0409822 2004-09-16
PCT/FR2005/002166 WO2006032744A1 (en) 2004-09-16 2005-08-30 Method and device for selecting acoustic units and a voice synthesis device

Publications (2)

Publication Number Publication Date
EP1789953A1 EP1789953A1 (en) 2007-05-30
EP1789953B1 true EP1789953B1 (en) 2010-01-20

Family

ID=34949650

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05798354A Active EP1789953B1 (en) 2004-09-16 2005-08-30 Method and device for selecting acoustic units and a voice synthesis device

Country Status (5)

Country Link
US (1) US20070276666A1 (en)
EP (1) EP1789953B1 (en)
AT (1) ATE456125T1 (en)
DE (1) DE602005019070D1 (en)
WO (1) WO2006032744A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1953052B (en) * 2005-10-20 2010-09-08 株式会社东芝 Method and device of voice synthesis, duration prediction and duration prediction model of training
JP5238205B2 (en) * 2007-09-07 2013-07-17 ニュアンス コミュニケーションズ,インコーポレイテッド Speech synthesis system, program and method
JP4528839B2 (en) * 2008-02-29 2010-08-25 株式会社東芝 Phoneme model clustering apparatus, method, and program
ATE449400T1 (en) * 2008-09-03 2009-12-15 Svox Ag SPEECH SYNTHESIS WITH DYNAMIC CONSTRAINTS
US8315871B2 (en) * 2009-06-04 2012-11-20 Microsoft Corporation Hidden Markov model based text to speech systems employing rope-jumping algorithm
US8340965B2 (en) * 2009-09-02 2012-12-25 Microsoft Corporation Rich context modeling for text-to-speech engines
US8805687B2 (en) * 2009-09-21 2014-08-12 At&T Intellectual Property I, L.P. System and method for generalized preselection for unit selection synthesis
US8594993B2 (en) 2011-04-04 2013-11-26 Microsoft Corporation Frame mapping approach for cross-lingual voice transformation
CN102270449A (en) * 2011-08-10 2011-12-07 歌尔声学股份有限公司 Method and system for synthesising parameter speech
US9570066B2 (en) * 2012-07-16 2017-02-14 General Motors Llc Sender-responsive text-to-speech processing
US9489965B2 (en) * 2013-03-15 2016-11-08 Sri International Method and apparatus for acoustic signal characterization
WO2015092936A1 (en) * 2013-12-20 2015-06-25 株式会社東芝 Speech synthesizer, speech synthesizing method and program
US10902841B2 (en) 2019-02-15 2021-01-26 International Business Machines Corporation Personalized custom synthetic speech

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2296846A (en) * 1995-01-07 1996-07-10 Ibm Synthesising speech from text
GB2313530B (en) * 1996-05-15 1998-03-25 Atr Interpreting Telecommunica Speech synthesizer apparatus
US5950162A (en) * 1996-10-30 1999-09-07 Motorola, Inc. Method, device and system for generating segment durations in a text-to-speech system
US6163769A (en) * 1997-10-02 2000-12-19 Microsoft Corporation Text-to-speech using clustered context-dependent phoneme-based units
US6505158B1 (en) * 2000-07-05 2003-01-07 At&T Corp. Synthesis-based pre-selection of suitable units for concatenative speech

Also Published As

Publication number Publication date
US20070276666A1 (en) 2007-11-29
ATE456125T1 (en) 2010-02-15
EP1789953A1 (en) 2007-05-30
WO2006032744A1 (en) 2006-03-30
DE602005019070D1 (en) 2010-03-11


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070227

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20090205

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602005019070

Country of ref document: DE

Date of ref document: 20100311

Kind code of ref document: P

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20100120

LTIE Lt: invalidation of european patent or patent extension

Effective date: 20100120

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100501

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100520

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100520

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100421

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100420

26N No opposition filed

Effective date: 20101021

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

BERE Be: lapsed

Owner name: FRANCE TELECOM

Effective date: 20100831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100831

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100831

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100831

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602005019070

Country of ref document: DE

Effective date: 20110301

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110301

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100721

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100830

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100120

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230720

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230720

Year of fee payment: 19