CN102439660A - Voice-tag method and apparatus based on confidence score - Google Patents


Info

Publication number
CN102439660A
CN102439660A
Authority
CN
China
Prior art keywords
pronunciation
tag
voice
confidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010800015191A
Other languages
Chinese (zh)
Inventor
何磊
赵蕤
Current Assignee
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date
Filing date
Publication date
Application filed by Toshiba Corp
Publication of CN102439660A

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/06: Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L15/183: Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/187: Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams

Landscapes

  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides a voice-tag method and apparatus based on confidence score. The voice-tag method based on confidence score comprises: performing phoneme recognition on a registration speech to obtain a plurality of pronunciation tags of the registration speech; calculating a confidence score for each of the plurality of pronunciation tags; selecting at least one best pronunciation tag from the plurality of pronunciation tags based on the confidence score of each of the plurality of pronunciation tags; and creating a voice-tag entry corresponding to the registration speech based on the selected at least one best pronunciation tag, to be added into a recognition network. The present invention optimizes voice tags based on confidence scores so as to reduce, in voice-tag technology based on multi-pronunciation registration, the confusability of the recognition network consisting of voice tags.

Description

Voice-tag method and apparatus based on confidence score
Technical field
The present invention relates to information processing technology, and in particular to a voice-tag method and apparatus based on confidence score.
Background art
Voice-tag technology is an application of speech recognition technology, adopted especially widely in embedded speech recognition systems.
A system based on voice-tag technology works as follows. First, a voice registration process is carried out: the user inputs registration speech to the system, the system converts the registration speech into a tag representing its pronunciation, and from this pronunciation tag forms a voice-tag entry corresponding to the registration speech, which is added into the system's recognition network. Then, a speech recognition process is carried out: when the user inputs test speech, the system recognizes the test speech based on the recognition network containing the voice-tag entries, in order to determine its content. Usually, the recognition network of a voice-tag system contains not only the voice-tag entries of registration speech, but also entries whose pronunciations are provided by a dictionary or a grapheme-to-phoneme conversion module, referred to herein as dictionary entries.
Early voice-tag technology was usually implemented by template matching: in the registration process, one or more templates are extracted from the registration speech as its tags; in the recognition process, the test speech is matched against the template tags using the dynamic time warping (DTW) algorithm. In recent years, with the widespread use of phoneme-based hidden Markov models (HMM) in speech recognition, using a phoneme sequence as the pronunciation tag of registration speech has become the mainstream voice-tag method. It should be noted that, depending on the language, phonemes may be replaced by other speech units; for Chinese, for example, initial/final (shengmu/yunmu) sequences can be adopted as voice tags.
In the method that uses a phoneme sequence as the pronunciation tag of registration speech, the phoneme sequence is obtained by performing phoneme recognition on the registration speech. The advantages of phoneme-sequence tags are: first, a phoneme-sequence tag occupies less memory than a template tag; second, phoneme-sequence tag entries combine more easily with dictionary entries to form new entries. These advantages help to expand the number of entries the recognition network can provide.
However, phoneme-sequence tags also have certain shortcomings. First, at the present level of phoneme recognition, phoneme recognition errors are generally unavoidable, so the phoneme-sequence tag may fail to represent the pronunciation of the registration speech with complete accuracy, which causes recognition errors. Second, there may be a mismatch between the registration speech and the test speech, which can also cause recognition errors.
Specifically, suppose the registration speech is "Wang Ming (wang ming)". The correct initial/final sequence corresponding to this registration speech should be: w ang m ing. However, due to the limited recognition level, the speech recognition system may give an incorrect recognition result for this registration speech, for example the initial/final sequence "w an m ing", and this incorrect sequence "w an m ing" will then be added into the recognition network as the pronunciation tag of the registration speech "Wang Ming". In this case, even when the test speech is also "Wang Ming", the recognition result will be correct only if the system judges the sequence "w an m ing" in the recognition network to be the closest match; since the system may instead judge some other sequence in the recognition network to be closest to this test speech, a wrong recognition result can be obtained.
Therefore, reducing the recognition errors caused by the above reasons has become a current research focus in voice-tag technology based on phoneme-sequence tags.
To overcome the above shortcomings of phoneme-sequence tag methods, researchers have proposed multi-pronunciation registration schemes: for one registration speech, multiple pronunciation tags based on different phoneme sequences are used to constitute one voice-tag entry corresponding to that registration speech. Specifically, when phoneme recognition is performed on the registration speech, the N best phoneme-sequence recognition results, or the phoneme-lattice recognition result, are taken as the pronunciation tags of the registration speech.
Specifically, still taking the registration speech "Wang Ming" as an example, suppose the speech recognition system recognizes this registration speech and gives the three best initial/final sequences, arranged in descending order of acoustic score:
1. "w an m ing";
2. "w an m in";
3. "w ang m ing".
In multi-pronunciation registration, these three sequences are combined into one voice-tag entry corresponding to the registration speech "Wang Ming" and added into the recognition network. In recognition, as long as the recognition network judges the test speech to be closest to any one of these three sequences, it can match the test speech to the registration speech "Wang Ming". The recognition rate can thus be improved.
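The registration-and-matching mechanism described above can be illustrated with a minimal Python sketch. The dictionary layout and the function name `matches_entry` are illustrative assumptions for this document, not the patent's implementation:

```python
# A recognition network sketched as a mapping from voice-tag entry name to
# the set of pronunciation sequences registered for it (illustrative only).
recognition_network = {
    # Multi-pronunciation entry for the registration speech "Wang Ming":
    "Wang Ming": {"w an m ing", "w an m in", "w ang m ing"},
}

def matches_entry(network, entry, recognized_seq):
    """A test utterance matches an entry if its recognized pronunciation
    sequence is any of the sequences registered for that entry."""
    return recognized_seq in network[entry]

# Even the mis-recognized sequence still matches the intended entry:
print(matches_entry(recognition_network, "Wang Ming", "w an m in"))  # True
```

This is why multi-pronunciation registration raises the recognition rate: the entry accepts any of its registered sequences, at the cost of a larger, more confusable network.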
By adopting multi-pronunciation registration, the negative effect of phoneme recognition errors can be clearly reduced, as can the degradation of recognition performance caused by mismatches between the registration speech and the test speech.
However, whereas single-pronunciation registration adds one phoneme sequence to the recognition network for each registration speech, multi-pronunciation registration adds several, so it increases the scale of the recognition network. Moreover, constituting one voice-tag entry from multiple pronunciation sequences increases the confusability of the recognition network, and in particular can noticeably reduce the recognition performance of the dictionary entries in the voice-tag system.
Summary of the invention
The present invention is proposed in view of the above problems in the prior art. Its object is to provide a voice-tag method and apparatus based on confidence score, so that in voice-tag technology based on multi-pronunciation registration the voice tags are optimized based on confidence scores, thereby reducing the confusability of the recognition network that comprises the voice tags.
According to one aspect of the present invention, a voice-tag method based on confidence score is provided, comprising: performing phoneme recognition on a registration speech to obtain a plurality of pronunciation tags of the registration speech; calculating a confidence score for each of the plurality of pronunciation tags; selecting at least one best pronunciation tag from the plurality of pronunciation tags based on the confidence score of each of them; and creating, based on the selected at least one best pronunciation tag, a voice-tag entry corresponding to the registration speech, to be added into a recognition network.
According to another aspect of the present invention, a voice-tag method based on confidence score is provided, comprising: performing phoneme recognition on a registration speech to obtain a plurality of pronunciation tags of the registration speech; determining a confidence-score-based weight for each of the plurality of pronunciation tags; creating, based on the plurality of pronunciation tags, a voice-tag entry corresponding to the registration speech, to be added into a recognition network, and correspondingly recording the confidence-score-based weight of each of the plurality of pronunciation tags; and, when a test speech is recognized with the recognition network, merging those recognition-result candidates that belong to the same voice-tag entry according to the confidence-score-based weights of their respective corresponding pronunciation tags.
According to a further aspect of the invention, a voice-tag apparatus based on confidence score is provided, comprising: a phoneme recognition unit that performs phoneme recognition on a registration speech to obtain a plurality of pronunciation tags of the registration speech; a confidence-score calculating unit that calculates a confidence score for each of the plurality of pronunciation tags; a pronunciation-tag selecting unit that selects at least one best pronunciation tag from the plurality of pronunciation tags based on the confidence score of each of them; and a voice-tag creating unit that creates, based on the selected at least one best pronunciation tag, a voice-tag entry corresponding to the registration speech, to be added into a recognition network.
According to a further aspect of the invention, a voice-tag apparatus based on confidence score is provided, comprising: a phoneme recognition unit that performs phoneme recognition on a registration speech to obtain a plurality of pronunciation tags of the registration speech; a confidence-weight determining unit that determines a confidence-score-based weight for each of the plurality of pronunciation tags; a voice-tag creating unit that creates, based on the plurality of pronunciation tags, a voice-tag entry corresponding to the registration speech, to be added into a recognition network, and correspondingly records the confidence-score-based weight of each of the plurality of pronunciation tags; and a recognition-result merging unit that, when a test speech is recognized with the recognition network, merges those recognition-result candidates that belong to the same voice-tag entry according to the confidence-score-based weights of their respective corresponding pronunciation tags.
Brief description of the drawings
It is believed that the above features, advantages and objects of the present invention will be better understood through the following description of embodiments of the invention in conjunction with the accompanying drawings.
Fig. 1 is a flowchart of the voice-tag method based on confidence score according to Embodiment 1 of the invention;
Fig. 2 is a diagram showing an example of the phoneme lattice of a registration speech;
Fig. 3 is a flowchart of the voice-tag method based on confidence score according to Embodiment 2 of the invention;
Fig. 4 is a block diagram of the voice-tag apparatus based on confidence score according to Embodiment 3 of the invention; and
Fig. 5 is a block diagram of the voice-tag apparatus based on confidence score according to Embodiment 4 of the invention.
Embodiment
Preferred embodiments of the present invention are described in detail below in conjunction with the accompanying drawings.
(Embodiment 1)
First, Embodiment 1 of the invention is described with reference to Figs. 1-2. Fig. 1 is a flowchart of the voice-tag method based on confidence score according to Embodiment 1. In this embodiment, the pronunciation tags of the registration speech are selected based on confidence scores.
Specifically, as shown in Fig. 1, in step 105 the method first performs phoneme recognition on the registration speech input by the user, to obtain a plurality of pronunciation tags of the registration speech. Specifically, the plurality of pronunciation tags may be the several best phoneme sequences of the registration speech, or the phoneme lattice of the registration speech. A phoneme lattice is a multi-pronunciation representation generated by merging the identical parts of the multiple phoneme sequences that represent the pronunciation of the speech.
In this step, phoneme recognition is performed on the registration speech input by the user, for example with a speech recognition system widely used in the art at present that uses hidden Markov models as the acoustic model and decodes with a Viterbi search, so as to obtain the best phoneme sequences of the registration speech, arranged by acoustic score, or the phoneme lattice of the registration speech.
However, those skilled in the art will appreciate that, as long as a plurality of pronunciation tags of the registration speech can be obtained in this step, the method is not limited to the above widely used speech recognition system with hidden Markov models as the acoustic model and Viterbi-search decoding; any speech recognition system or method known now or in the future can be adopted, and the present invention imposes no particular restriction on this.
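As a rough illustration of this step, the following Python sketch ranks candidate phoneme sequences by acoustic score to produce an N-best list. The name `recognize_nbest` and the toy hypothesis scores are assumptions for illustration, standing in for a real HMM/Viterbi decoder:

```python
def recognize_nbest(scored_hypotheses, n=3):
    """Return the n best phoneme sequences, in descending acoustic score.
    In a real system the scores would come from Viterbi decoding over
    acoustic features; here they are given directly."""
    ranked = sorted(scored_hypotheses.items(), key=lambda kv: -kv[1])
    return [seq for seq, _ in ranked[:n]]

# Toy hypothesis scores for the registration speech "Wang Ming":
hyps = {"w an m ing": 95.0, "w an m in": 93.0,
        "w ang m ing": 92.0, "w u m ing": 60.0}
print(recognize_nbest(hyps))  # ['w an m ing', 'w an m in', 'w ang m ing']
```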
In step 110, a confidence score is calculated for each of the plurality of pronunciation tags of the registration speech.
Specifically, in the case where the plurality of pronunciation tags are the several best phoneme sequences of the registration speech, a confidence score is calculated for each phoneme sequence.
Here the registration speech "Wang Ming (wang ming)" from above is again taken as an example. Suppose that after the user has input the registration speech "Wang Ming (wang ming)", the following three initial/final sequences, arranged in descending order of acoustic score, are obtained through recognition:
1. "w an m ing";
2. "w an m in";
3. "w ang m ing".
Then in this step a confidence score is calculated for each of the above three sequences; suppose the confidence scores obtained are as follows:
1. "w an m ing", confidence score: 70;
2. "w an m in", confidence score: 60;
3. "w ang m ing", confidence score: 75.
On the other hand, in the case where the plurality of pronunciation tags of the registration speech are a phoneme lattice, a confidence score is calculated for the single phoneme on each arc of the phoneme lattice.
For example, suppose that after the above registration speech "Wang Ming (wang ming)" is recognized, what is obtained is the multi-pronunciation representation in the other form, corresponding to the above initial/final sequences 1-3, namely the initial/final lattice shown in Fig. 2, which is generated by merging the identical parts of these three sequences 1-3. In this case, in this step, a confidence score is calculated for each unit (initial or final) on an arc of the lattice: "w", "an", "ang", "m", "in", "ing".
Those skilled in the art will appreciate that any method known now or in the future for calculating confidence scores for phoneme sequences or single phonemes may be adopted in this step, for example confidence-score calculation methods based on posterior probability, or based on an anti-model, etc.
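As one hedged example of a posterior-probability-style confidence measure (a simplification chosen for this document; the patent does not prescribe a specific formula), raw acoustic log-scores can be normalized with a softmax so that each hypothesis's confidence is its share of the total probability mass:

```python
import math

def posterior_confidence(scored_hyps):
    """Map raw acoustic log-scores to confidences on a 0-100 scale via a
    softmax: confidence_i = 100 * exp(score_i) / sum_j exp(score_j).
    Subtracting the max score keeps the exponentials numerically stable."""
    m = max(scored_hyps.values())
    exps = {seq: math.exp(s - m) for seq, s in scored_hyps.items()}
    z = sum(exps.values())
    return {seq: 100.0 * e / z for seq, e in exps.items()}
```

Higher-scored hypotheses keep higher confidences, and the values sum to 100, mirroring the 0-100 confidence scores used in the running example.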
Then, in step 115, at least one best pronunciation tag is selected from the plurality of pronunciation tags based on the confidence score of each of them.
In one embodiment, in this step the pronunciation tag with the highest confidence score is selected from the plurality of pronunciation tags, as the above at least one best pronunciation tag.
In this case, where the plurality of pronunciation tags are the several best phoneme sequences of the registration speech, the phoneme sequence with the highest confidence score is selected from them, based on the confidence score of each phoneme sequence, as the best pronunciation tag of the registration speech. On the other hand, where the plurality of pronunciation tags are the phoneme lattice of the registration speech, based on the confidence scores of the phonemes on the arcs, the path through the lattice whose arc phonemes have the highest confidence scores is kept, the other arcs outside this path are removed, and the path of retained arcs constitutes the best pronunciation tag of the registration speech.
In addition, in another embodiment, in this step the pronunciation tags whose confidence scores are higher than a preset confidence threshold are selected from the plurality of pronunciation tags, as the above at least one best pronunciation tag.
In this case, where the plurality of pronunciation tags are the several best phoneme sequences of the registration speech, the phoneme sequences whose confidence scores are higher than the preset confidence threshold are selected based on the confidence score of each phoneme sequence. For example, for the above three sequences 1-3 of the registration speech "Wang Ming (wang ming)", if the confidence threshold is set to 65, then sequences 1 and 3, whose confidence scores are higher than this threshold, will be selected as the best pronunciation tags of the registration speech "Wang Ming (wang ming)".
On the other hand, where the plurality of pronunciation tags are the phoneme lattice of the registration speech, based on the confidence scores of the phonemes on the arcs, the arcs whose phoneme confidence scores are lower than the preset confidence threshold are removed from the lattice, and the remaining arcs constitute the best pronunciation tags of the registration speech.
Here, the above confidence threshold can be set empirically by the developer. Specifically, for example, a large amount of test data is prepared in advance, phoneme recognition is performed on the test data with the speech recognition system used in step 105, confidence scores are then calculated for the phoneme recognition results, and a suitable confidence threshold is set from the confidence-score statistics of the higher-quality recognition results, so that the threshold can be used to ensure that recognition results of better quality are selected.
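The two selection variants above (highest score, or all scores above a threshold) can be sketched as follows. `select_best_tags` is a hypothetical helper; the threshold 65 and the scores are the values from the running example:

```python
def select_best_tags(tag_scores, threshold=None):
    """Select best pronunciation tags by confidence score.
    With a threshold: keep every tag scoring above it (falling back to the
    single best tag if none qualifies). Without one: keep only the best tag."""
    if threshold is not None:
        chosen = [t for t, s in tag_scores.items() if s > threshold]
        if chosen:
            return chosen
    return [max(tag_scores, key=tag_scores.get)]

tags = {"w an m ing": 70, "w an m in": 60, "w ang m ing": 75}
print(select_best_tags(tags, threshold=65))  # ['w an m ing', 'w ang m ing']
print(select_best_tags(tags))                # ['w ang m ing']
```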
In step 120, a voice-tag entry corresponding to the registration speech is created based on the selected at least one best pronunciation tag, and added into the recognition network. Thus, when the user inputs test speech, the test speech can be recognized based on this recognition network. Since the creation and addition of voice-tag entries is existing knowledge in the art, a detailed description is omitted here.
The above is a detailed description of the voice-tag method based on confidence score of this embodiment. In this embodiment, at least one best pronunciation tag is selected from the plurality of pronunciation tags of the registration speech based on confidence scores, and the voice-tag entry of the registration speech is created from it. This optimizes the voice tags and reduces the negative effects produced by multi-pronunciation registration in voice-tag applications. Specifically, it reduces the scale of the recognition network comprising the voice tags and reduces its confusability, thereby helping to improve the recognition performance of the voice tags and, in particular, of the dictionary entries. At the same time, the method of this embodiment retains, to a certain extent, the advantages of multi-pronunciation registration: it can reduce the negative effect of phoneme recognition errors and reduce the recognition errors caused by mismatches between the registration speech and the test speech.
(Embodiment 2)
The voice-tag method based on confidence score of Embodiment 2 of the invention is described below in conjunction with Fig. 3. In this embodiment, the plurality of pronunciation tags of the registration speech are merged based on confidence scores.
Specifically, as shown in Fig. 3, in step 305 the method first performs phoneme recognition on the registration speech input by the user, to obtain a plurality of pronunciation tags of the registration speech. Since this step is identical to step 105 of Fig. 1 above, a detailed explanation is omitted.
In step 310, a confidence score is calculated for each of the plurality of pronunciation tags of the registration speech. Since this step is identical to step 110 of Fig. 1 above, a detailed explanation is omitted.
Then, in step 315, a confidence-score-based weight is determined for each of the plurality of pronunciation tags of the registration speech, such that the higher the confidence score of a pronunciation tag, the higher its weight.
In one embodiment, in this step the confidence-score-based weight of each of the plurality of pronunciation tags is calculated according to the following formula (1):
weight_i = confidence score_i / (confidence score_1 + confidence score_2 + ... + confidence score_n)    (1)
where weight_i denotes the weight of the i-th pronunciation tag; confidence score_1 denotes the confidence score of the 1st pronunciation tag, confidence score_2 that of the 2nd pronunciation tag, and so on up to confidence score_n for the n-th pronunciation tag; and n denotes the number of pronunciation tags. That is, according to formula (1), the confidence-score-based weight of each pronunciation tag is the ratio of that tag's confidence score to the sum of the confidence scores of all the pronunciation tags.
Describe with object lesson below.Still the registration voice " Wang Ming (wangming) " with the front are example, are assumed to be among resulting recognition result of these registration voice and degree of confidence score result of calculation and the front embodiment 1 identically, are:
1. " w an m ing ", degree of confidence score: 70;
2. " w an m in ", degree of confidence score: 60;
3. " w ang m ing ", degree of confidence score: 75.
In the case, in this step, calculate based on the weight of degree of confidence score following respectively for above-mentioned sequence 1~3 according to following formula (1):
1. " w an m ing ", the degree of confidence score: 70, weight=70/ (70+60+75)=0.34;
2. " w an m in ", the degree of confidence score: 60, weight=60/ (70+60+75)=0.29;
3. " w ang m ing ", the degree of confidence score: 75, weight=75/ (70+60+75)=0.37.
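Formula (1) and the numbers above can be reproduced with a short sketch (`confidence_weights` is an illustrative name, not from the patent):

```python
def confidence_weights(scores):
    """Formula (1): weight_i = score_i / (score_1 + ... + score_n)."""
    total = sum(scores.values())
    return {tag: s / total for tag, s in scores.items()}

scores = {"w an m ing": 70, "w an m in": 60, "w ang m ing": 75}
weights = confidence_weights(scores)
# Rounded to two decimals this gives 0.34, 0.29 and 0.37, as in the example.
```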
That is to say, in this embodiment the plurality of pronunciation tags of the registration speech are each defined, by means of the confidence-score-based weights, as a component of the voice tag of the registration speech.
Then, in step 320, a voice-tag entry corresponding to the registration speech is created based on the plurality of pronunciation tags of the registration speech and added into the recognition network, and the confidence-score-based weight of each of the plurality of pronunciation tags is recorded correspondingly.
In this step, the voice-tag entry corresponding to the registration speech may be created directly from the plurality of pronunciation tags obtained for the registration speech in step 305; alternatively, at least one best pronunciation tag may first be selected from the plurality of pronunciation tags based on the confidence score of each, as in step 115 of Embodiment 1 above, and the voice-tag entry corresponding to the registration speech then created from the selected at least one best pronunciation tag. For the selection method, refer to the specific description of step 115 above; a detailed explanation is omitted here.
Then, in step 325, when the user inputs test speech, the test speech is recognized based on the above recognition network, to obtain the several best recognition-result candidates for the test speech.
Specifically, in this step, when the test speech is recognized based on the recognition network, all pronunciation sequences (i.e. pronunciation tags) close to the test speech are obtained by matching against the recognition network, as the several best recognition-result candidates for the test speech.
For instance, suppose the user has input the test speech "Wu Ming (wu ming)". All sequences close to it are obtained from the recognition network by matching; for example, the closest pronunciation sequence "w u m ing" may be matched, together with similar sequences and the three sequences in the voice-tag entry corresponding to the registration speech "Wang Ming", so that the following recognition results, arranged in descending order of acoustic score, are finally obtained for the test speech "Wu Ming (wu ming)":
1. w an m in, acoustic score: 90;
2. w u m ing, acoustic score: 89;
3. w u n ing, acoustic score: 87;
4. w an m ing, acoustic score: 80;
5. w ang m ing, acoustic score: 70.
In step 330, among the several best recognition-result candidates for the test speech, those candidates that belong to the same voice-tag entry are merged according to the confidence-score-based weights of their respective corresponding pronunciation tags.
Specifically, in this step, the recognition-result candidates that belong to the same voice-tag entry among the several best candidates for the test speech are merged into one candidate, and the weighted sum of the acoustic scores of these candidates, weighted by the confidence-score-based weights of their respective corresponding pronunciation tags, is taken as the acoustic score of the merged candidate.
A concrete example follows. Continuing with the test speech "Wu Ming" and its recognition result candidates 1-5 above, suppose that from the recognition network it is known that candidates 1, 4 and 5 belong to the same voice-tag entry, namely the one corresponding to the enrollment speech "Wang Ming", while candidates 2 and 3 belong to different voice-tag entries. In this step, candidates 1, 4 and 5 are merged into a single candidate: based on the confidence-based weights of the pronunciation tags corresponding to candidates 1, 4 and 5, namely 0.29, 0.34 and 0.37, the weighted sum of their acoustic scores is taken as the acoustic score of the merged candidate. The merged candidate list thus becomes:
1,4,5. w an m in (w an m ing, w ang m ing), merged acoustic score: 90*0.29 + 80*0.34 + 70*0.37 = 79.2;
2. w u m ing, acoustic score: 89;
3. w u n ing, acoustic score: 87.
In this way, recognition result candidates 1, 4 and 5 are merged into a single candidate corresponding to the voice-tag entry of the enrollment speech "Wang Ming".
It should be noted that although candidates 1, 4 and 5 are merged into one candidate, they already belonged to a single voice-tag entry before the merge, the one corresponding to the enrollment speech "Wang Ming"; the merged candidate therefore still corresponds to the enrollment speech "Wang Ming".
In step 335, from the recognition result candidates formed by merging the plurality of best recognition result candidates, the candidate with the highest acoustic score is selected as the final recognition result.
In the above example, after the weight-based merging of candidates, recognition result 2, "w u m ing", becomes the candidate with the highest acoustic score and is therefore selected as the final recognition result; the correct recognition result is thus obtained.
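The merging of step 330 and the selection of step 335 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the entry identifiers and function name are invented, and candidates that form their own entry are given weight 1.0 so their scores pass through unchanged.

```python
from collections import defaultdict

def merge_and_select(candidates):
    """candidates: list of (entry_id, acoustic_score, weight) tuples.

    Candidates sharing a voice-tag entry are merged by taking the
    weighted sum of their acoustic scores (step 330); the entry with
    the highest resulting score is the final result (step 335).
    """
    groups = defaultdict(list)
    for entry_id, score, weight in candidates:
        groups[entry_id].append((score, weight))
    merged = {
        entry_id: sum(s * w for s, w in group)
        for entry_id, group in groups.items()
    }
    best_entry = max(merged, key=merged.get)
    return best_entry, merged

# The "Wu Ming" example: candidates 1, 4 and 5 share the "Wang Ming"
# entry with tag weights 0.29, 0.34 and 0.37; candidates 2 and 3 each
# form their own entry (weight 1.0).
cands = [
    ("wang ming", 90, 0.29),  # 1. w an m in
    ("wu ming",   89, 1.0),   # 2. w u m ing
    ("wu ning",   87, 1.0),   # 3. w u n ing
    ("wang ming", 80, 0.34),  # 4. w an m ing
    ("wang ming", 70, 0.37),  # 5. w ang m ing
]
best, scores = merge_and_select(cands)
print(best)                           # wu ming
print(round(scores["wang ming"], 1))  # 79.2
```

As in the example, the "Wang Ming" entry merges to 79.2 and "w u m ing" (score 89) wins.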
In addition, if candidates 2 and 3 of the test speech "Wu Ming" also belonged to the same voice-tag entry, they too would be merged using the confidence-based weights. If the acoustic score of the merged result were still the highest, that merged result would be selected, so the voice-tag entry shared by candidates 2 and 3 would become the entry matching the test speech "Wu Ming", and the correct content of the test speech "Wu Ming" could then be identified from that entry.
The above is a detailed description of the confidence-score-based voice-tag method of the present embodiment. In this embodiment, merging the recognition result candidates that belong to the same voice-tag entry by means of confidence-based weights reduces the negative effect that multi-pronunciation enrollment produces in voice-tag applications. Specifically, it reduces the confusability of the recognition network containing the voice tags, which in turn helps improve the recognition performance of the voice tags, in particular of the dictionary entries. At the same time, the method of this embodiment retains the advantage of multi-pronunciation enrollment: it reduces the negative effect of phoneme recognition errors and the recognition errors caused by mismatch between the enrollment speech and the test speech.
(Embodiment 3)
Under the same inventive concept, the present invention provides a voice-tag apparatus based on confidence scores, described in detail below with reference to the accompanying drawings.
Fig. 4 is a block diagram of the confidence-score-based voice-tag apparatus according to embodiment 3 of the present invention. As shown in Fig. 4, the voice-tag apparatus 40 of the present embodiment comprises: a phoneme recognition unit 41, a confidence score calculation unit 42, a pronunciation tag selection unit 43, a voice-tag generation unit 44, a test speech recognition unit 45 and a recognition network 46.
Specifically, the phoneme recognition unit 41 performs phoneme recognition on the enrollment speech to obtain a plurality of pronunciation tags for the enrollment speech. The pronunciation tags may be a plurality of best phoneme sequences of the enrollment speech, or a phoneme lattice of the enrollment speech.
In one embodiment, the phoneme recognition unit 41 is implemented based on a speech recognition system now widely used in the art, which uses hidden Markov models as acoustic models and decodes with a Viterbi search. It performs phoneme recognition on the enrollment speech input by the user, to obtain the plurality of best phoneme sequences of the enrollment speech, ordered by acoustic score, or the phoneme lattice of the enrollment speech.
Of course, the implementation is not limited to this: the phoneme recognition unit 41 may be realized with any speech recognition system or method known now or knowable in the future; the present invention places no special restriction on this.
The confidence score calculation unit 42 calculates a confidence score for each of the above pronunciation tags.
Specifically, when the pronunciation tags of the enrollment speech are a plurality of best phoneme sequences, the confidence score calculation unit 42 calculates a confidence score for each phoneme sequence. When the pronunciation tags of the enrollment speech are a phoneme lattice, the confidence score calculation unit 42 calculates a confidence score for the single phoneme on each arc of the lattice.
The confidence score calculation unit 42 may be realized with any method, known now or knowable in the future, for calculating confidence scores for phoneme sequences or single phonemes, for example confidence score calculation methods based on posterior probabilities, on anti-models, and so on.
The pronunciation tag selection unit 43 selects at least one best pronunciation tag from the plurality of pronunciation tags, based on the confidence score of each pronunciation tag.
In one embodiment, the pronunciation tag selection unit 43 selects from the plurality of pronunciation tags the one with the highest confidence score, as the at least one best pronunciation tag.
In another embodiment, the pronunciation tag selection unit 43 selects from the plurality of pronunciation tags those whose confidence scores exceed a preset confidence threshold, as the at least one best pronunciation tag. As mentioned above, the confidence threshold may be set based on pre-prepared test data and the developer's experience.
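The two selection strategies can be sketched as follows. This is an illustrative sketch only; the function names, tag values and confidence scores are invented, and the threshold 0.5 is an assumed example value.

```python
def select_top1(tags):
    """Return the single pronunciation tag with the highest confidence score."""
    return [max(tags, key=lambda t: t[1])]

def select_above_threshold(tags, threshold):
    """Return every pronunciation tag whose confidence score exceeds the
    preset confidence threshold."""
    return [t for t in tags if t[1] > threshold]

# Invented (tag, confidence) pairs, purely for illustration.
tags = [("w ang m ing", 0.62), ("w an m ing", 0.55), ("w an m in", 0.41)]
print(select_top1(tags))                  # [('w ang m ing', 0.62)]
print(select_above_threshold(tags, 0.5))  # [('w ang m ing', 0.62), ('w an m ing', 0.55)]
```

With the threshold strategy, several pronunciation tags of one enrollment speech may survive, which is what makes multi-pronunciation enrollment possible.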
The voice-tag generation unit 44 generates, based on the selected at least one best pronunciation tag, a voice-tag entry corresponding to the enrollment speech, to be added to the recognition network 46.
When the user inputs test speech, the test speech recognition unit 45 recognizes the test speech based on the recognition network 46, to identify the content of the test speech.
The above is a detailed description of the confidence-score-based voice-tag apparatus of the present embodiment. In operation, the voice-tag apparatus 40 of the present embodiment can carry out the confidence-score-based voice-tag method of embodiment 1 above.
In addition, although in the above embodiment the recognition network 46 is depicted as being contained inside the voice-tag apparatus 40, this is not limiting: the recognition network 46 may also be located outside the voice-tag apparatus 40.
(Embodiment 4)
The confidence-score-based voice-tag apparatus of embodiment 4 of the present invention is described below with reference to Fig. 5.
As shown in Fig. 5, the voice-tag apparatus 50 of the present embodiment comprises: a phoneme recognition unit 51, a confidence score calculation unit 52, a confidence weight determination unit 53, a voice-tag generation unit 54, a test speech recognition unit 55, a recognition result merging unit 56 and a recognition network 57.
Specifically, the phoneme recognition unit 51 performs phoneme recognition on the enrollment speech to obtain a plurality of pronunciation tags for the enrollment speech.
The confidence score calculation unit 52 calculates a confidence score for each of the pronunciation tags of the enrollment speech.
The confidence weight determination unit 53 determines a confidence-based weight for each of the pronunciation tags, where the higher a pronunciation tag's confidence score, the higher its weight.
In one embodiment, for each of the pronunciation tags, the confidence weight determination unit 53 calculates the ratio of that tag's confidence score to the sum of the confidence scores of all the pronunciation tags, as that tag's confidence-based weight.
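This ratio-based weighting can be sketched as follows. The sketch is illustrative only; the input confidence scores are invented values, chosen so that the resulting weights match the 0.29, 0.34 and 0.37 used in the merging example earlier in the description.

```python
def confidence_weights(scores):
    """Each pronunciation tag's weight is its confidence score divided by
    the sum of the confidence scores of all tags, so the weights sum to 1
    and a higher confidence score yields a higher weight."""
    total = sum(scores)
    return [s / total for s in scores]

# Invented confidence scores for three pronunciation tags of one
# enrollment speech.
weights = confidence_weights([2.9, 3.4, 3.7])
print([round(w, 2) for w in weights])  # [0.29, 0.34, 0.37]
```

Normalizing by the sum guarantees that the weighted sum of acoustic scores in the later merging step stays on the same scale as a single candidate's score.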
The voice-tag generation unit 54 generates, based on the plurality of pronunciation tags, a voice-tag entry corresponding to the enrollment speech, to be added to the recognition network 57, and correspondingly records the confidence-based weight of each of the pronunciation tags.
In one embodiment, the voice-tag generation unit 54 selects at least one best pronunciation tag from the plurality of pronunciation tags based on the confidence score of each tag, and generates the voice-tag entry corresponding to the enrollment speech from the selected at least one best pronunciation tag.
When the user inputs test speech, the test speech recognition unit 55 recognizes the test speech based on the recognition network 57, to obtain a plurality of best recognition result candidates for the test speech.
Among the plurality of best recognition result candidates obtained by the test speech recognition unit 55, the recognition result merging unit 56 merges the candidates that belong to the same voice-tag entry according to the confidence-based weights of their respective pronunciation tags.
In one embodiment, for the candidates among the recognition result candidates that belong to the same voice-tag entry, the recognition result merging unit 56 performs the following process: it merges these candidates into a single recognition result candidate, and, according to the confidence-based weights of the candidates' respective pronunciation tags, takes the weighted sum of the candidates' acoustic scores as the acoustic score of the merged candidate.
Then, among the merged recognition result candidates, the recognition result merging unit 56 selects the best candidate, for example the one with the highest acoustic score, as the final recognition result.
The above is a detailed description of the confidence-score-based voice-tag apparatus of the present embodiment. In operation, the voice-tag apparatus 50 of the present embodiment can carry out the confidence-score-based voice-tag method of embodiment 2 above.
In addition, although in the above embodiment the recognition network 57 is depicted as being contained inside the voice-tag apparatus 50, this is not limiting: the recognition network 57 may also be located outside the voice-tag apparatus 50.
In addition, those skilled in the art will appreciate that the confidence-score-based voice-tag apparatuses 40 and 50 of embodiments 3 and 4 above, and their components, may be constituted by dedicated circuits or chips, or realized by a computer (processor) executing corresponding programs.
Although the confidence-score-based voice-tag method and apparatus of the present invention have been described in detail above through exemplary embodiments, these embodiments are not exhaustive, and those skilled in the art may make various variations and modifications within the spirit and scope of the present invention. Therefore, the present invention is not limited to these embodiments; the scope of the present invention is defined solely by the appended claims.

Claims (10)

1. A voice-tag method based on confidence scores, comprising:
performing phoneme recognition on enrollment speech, to obtain a plurality of pronunciation tags of the enrollment speech;
calculating a confidence score for each of the plurality of pronunciation tags;
selecting, based on the confidence score of each of the plurality of pronunciation tags, at least one best pronunciation tag from the plurality of pronunciation tags; and
generating, based on the selected at least one best pronunciation tag, a voice-tag entry corresponding to the enrollment speech, to be added to a recognition network.
2. A voice-tag method based on confidence scores, comprising:
performing phoneme recognition on enrollment speech, to obtain a plurality of pronunciation tags of the enrollment speech;
determining a confidence-based weight for each of the plurality of pronunciation tags;
generating, based on the plurality of pronunciation tags, a voice-tag entry corresponding to the enrollment speech, to be added to a recognition network, and correspondingly recording the confidence-based weight of each of the plurality of pronunciation tags; and
when recognizing test speech with the recognition network, merging, among the recognition result candidates, the candidates that belong to the same voice-tag entry, according to the confidence-based weights of their respective pronunciation tags.
3. The method according to claim 2, wherein the step of determining a confidence-based weight for each of the plurality of pronunciation tags further comprises:
calculating a confidence score for each of the plurality of pronunciation tags; and
determining a confidence-based weight for each of the plurality of pronunciation tags, wherein the higher a pronunciation tag's confidence score, the higher its weight.
4. The method according to claim 2, wherein:
the confidence-based weight of each of the plurality of pronunciation tags is the ratio of that pronunciation tag's confidence score to the sum of the confidence scores of the plurality of pronunciation tags.
5. The method according to claim 2, wherein the step of generating, based on the plurality of pronunciation tags, a voice-tag entry corresponding to the enrollment speech further comprises:
selecting, based on the confidence score of each of the plurality of pronunciation tags, at least one best pronunciation tag from the plurality of pronunciation tags; and
generating, based on the selected at least one best pronunciation tag, the voice-tag entry corresponding to the enrollment speech.
6. The method according to claim 1 or 5, wherein the step of selecting at least one best pronunciation tag further comprises:
selecting, from the plurality of pronunciation tags, the pronunciation tag with the highest confidence score, as the at least one best pronunciation tag of the enrollment speech.
7. The method according to claim 1 or 5, wherein the step of selecting at least one best pronunciation tag further comprises:
selecting, from the plurality of pronunciation tags, the pronunciation tags whose confidence scores exceed a preset confidence threshold, as the at least one best pronunciation tag of the enrollment speech.
8. The method according to claim 2, wherein the merging step further comprises:
for the candidates among the recognition result candidates that belong to the same voice-tag entry:
merging these candidates into a single recognition result candidate; and
taking, according to the confidence-based weights of these candidates' respective pronunciation tags, the weighted sum of these candidates' acoustic scores as the acoustic score of the merged recognition result candidate.
9. A voice-tag apparatus based on confidence scores, comprising:
a phoneme recognition unit that performs phoneme recognition on enrollment speech, to obtain a plurality of pronunciation tags of the enrollment speech;
a confidence score calculation unit that calculates a confidence score for each of the plurality of pronunciation tags;
a pronunciation tag selection unit that selects, based on the confidence score of each of the plurality of pronunciation tags, at least one best pronunciation tag from the plurality of pronunciation tags; and
a voice-tag generation unit that generates, based on the selected at least one best pronunciation tag, a voice-tag entry corresponding to the enrollment speech, to be added to a recognition network.
10. A voice-tag apparatus based on confidence scores, comprising:
a phoneme recognition unit that performs phoneme recognition on enrollment speech, to obtain a plurality of pronunciation tags of the enrollment speech;
a confidence weight determination unit that determines a confidence-based weight for each of the plurality of pronunciation tags;
a voice-tag generation unit that generates, based on the plurality of pronunciation tags, a voice-tag entry corresponding to the enrollment speech, to be added to a recognition network, and correspondingly records the confidence-based weight of each of the plurality of pronunciation tags; and
a recognition result merging unit that, when the recognition network is used to recognize test speech, merges, among the recognition result candidates, the candidates that belong to the same voice-tag entry, according to the confidence-based weights of their respective pronunciation tags.
CN2010800015191A 2010-06-29 2010-06-29 Voice-tag method and apparatus based on confidence score Pending CN102439660A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2010/052954 WO2012001458A1 (en) 2010-06-29 2010-06-29 Voice-tag method and apparatus based on confidence score

Publications (1)

Publication Number Publication Date
CN102439660A true CN102439660A (en) 2012-05-02

Family

ID=45401457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010800015191A Pending CN102439660A (en) 2010-06-29 2010-06-29 Voice-tag method and apparatus based on confidence score

Country Status (2)

Country Link
CN (1) CN102439660A (en)
WO (1) WO2012001458A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500579A (en) * 2013-10-10 2014-01-08 中国联合网络通信集团有限公司 Voice recognition method, device and system
CN103559881A (en) * 2013-11-08 2014-02-05 安徽科大讯飞信息科技股份有限公司 Language-irrelevant key word recognition method and system
CN104282305A (en) * 2013-07-12 2015-01-14 通用汽车环球科技运作有限责任公司 Result arbitrating system and method for speech system
CN105074822A (en) * 2013-03-26 2015-11-18 杜比实验室特许公司 Device and method for audio classification and audio processing
CN106157969A (en) * 2015-03-24 2016-11-23 阿里巴巴集团控股有限公司 The screening technique of a kind of voice identification result and device
CN106340297A (en) * 2016-09-21 2017-01-18 广东工业大学 Speech recognition method and system based on cloud computing and confidence calculation
US9715878B2 (en) 2013-07-12 2017-07-25 GM Global Technology Operations LLC Systems and methods for result arbitration in spoken dialog systems
CN107808662A (en) * 2016-09-07 2018-03-16 阿里巴巴集团控股有限公司 Update the method and device in the syntax rule storehouse of speech recognition
CN110264996A (en) * 2019-04-17 2019-09-20 北京爱数智慧科技有限公司 Voice annotation quality determination method, device, equipment and computer-readable medium
CN110364146A (en) * 2019-08-23 2019-10-22 腾讯科技(深圳)有限公司 Audio recognition method, device, speech recognition apparatus and storage medium
CN111048098A (en) * 2018-10-12 2020-04-21 广达电脑股份有限公司 Voice correction system and voice correction method
CN112447173A (en) * 2019-08-16 2021-03-05 阿里巴巴集团控股有限公司 Voice interaction method and device and computer storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1165590A (en) * 1997-08-25 1999-03-09 Nec Corp Voice recognition dialing device
US20040148173A1 (en) * 2003-01-23 2004-07-29 Gansha Wu Registering an utterance and an associated destination anchor with a speech recognition engine
CN1615508A (en) * 2001-12-17 2005-05-11 旭化成株式会社 Speech recognition method, remote controller, information terminal, telephone communication terminal and speech recognizer
CN1753083A (en) * 2004-09-24 2006-03-29 中国科学院声学研究所 Phonetic symbol method, system reach audio recognition method and system based on phonetic symbol


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAN MING CHENG ET.AL: "VOICE-TO-PHONEME CONVERSION ALGORITHMS FOR SPEAKER-INDEPENDENT VOICE-TAG APPLICATIONS IN EMBEDDED PLATFORMS", 《WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING, 2005 IEEE》, 27 November 2005 (2005-11-27) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105074822A (en) * 2013-03-26 2015-11-18 杜比实验室特许公司 Device and method for audio classification and audio processing
US10803879B2 (en) 2013-03-26 2020-10-13 Dolby Laboratories Licensing Corporation Apparatuses and methods for audio classifying and processing
CN104282305B (en) * 2013-07-12 2018-04-24 通用汽车环球科技运作有限责任公司 It is used for the system and method for result arbitration in speech dialogue system
CN104282305A (en) * 2013-07-12 2015-01-14 通用汽车环球科技运作有限责任公司 Result arbitrating system and method for speech system
US9715878B2 (en) 2013-07-12 2017-07-25 GM Global Technology Operations LLC Systems and methods for result arbitration in spoken dialog systems
CN103500579B (en) * 2013-10-10 2015-12-23 中国联合网络通信集团有限公司 Audio recognition method, Apparatus and system
CN103500579A (en) * 2013-10-10 2014-01-08 中国联合网络通信集团有限公司 Voice recognition method, device and system
CN103559881B (en) * 2013-11-08 2016-08-31 科大讯飞股份有限公司 Keyword recognition method that languages are unrelated and system
CN103559881A (en) * 2013-11-08 2014-02-05 安徽科大讯飞信息科技股份有限公司 Language-irrelevant key word recognition method and system
CN106157969B (en) * 2015-03-24 2020-04-03 阿里巴巴集团控股有限公司 Method and device for screening voice recognition results
CN106157969A (en) * 2015-03-24 2016-11-23 阿里巴巴集团控股有限公司 The screening technique of a kind of voice identification result and device
CN107808662A (en) * 2016-09-07 2018-03-16 阿里巴巴集团控股有限公司 Update the method and device in the syntax rule storehouse of speech recognition
CN107808662B (en) * 2016-09-07 2021-06-22 斑马智行网络(香港)有限公司 Method and device for updating grammar rule base for speech recognition
CN106340297A (en) * 2016-09-21 2017-01-18 广东工业大学 Speech recognition method and system based on cloud computing and confidence calculation
CN111048098A (en) * 2018-10-12 2020-04-21 广达电脑股份有限公司 Voice correction system and voice correction method
CN110264996A (en) * 2019-04-17 2019-09-20 北京爱数智慧科技有限公司 Voice annotation quality determination method, device, equipment and computer-readable medium
CN110264996B (en) * 2019-04-17 2021-12-17 北京爱数智慧科技有限公司 Method, device and equipment for determining voice labeling quality and computer readable medium
CN112447173A (en) * 2019-08-16 2021-03-05 阿里巴巴集团控股有限公司 Voice interaction method and device and computer storage medium
CN110364146A (en) * 2019-08-23 2019-10-22 腾讯科技(深圳)有限公司 Audio recognition method, device, speech recognition apparatus and storage medium
CN110364146B (en) * 2019-08-23 2021-07-27 腾讯科技(深圳)有限公司 Speech recognition method, speech recognition device, speech recognition apparatus, and storage medium

Also Published As

Publication number Publication date
WO2012001458A1 (en) 2012-01-05

Similar Documents

Publication Publication Date Title
CN102439660A (en) Voice-tag method and apparatus based on confidence score
CN107016994B (en) Voice recognition method and device
CN109036391B (en) Voice recognition method, device and system
CN110675855B (en) Voice recognition method, electronic equipment and computer readable storage medium
CN103714048B (en) Method and system for correcting text
CN101785051B (en) Voice recognition device and voice recognition method
CN111402862B (en) Speech recognition method, device, storage medium and equipment
CN105654940B (en) Speech synthesis method and device
JP6284462B2 (en) Speech recognition method and speech recognition apparatus
JP2008216756A (en) Technique for acquiring character string or the like to be newly recognized as phrase
CN107093422B (en) Voice recognition method and voice recognition system
Qian et al. A two-pass framework of mispronunciation detection and diagnosis for computer-aided pronunciation training
JP2014219557A (en) Voice processing device, voice processing method, and program
KR102199246B1 (en) Method And Apparatus for Learning Acoustic Model Considering Reliability Score
CN101515456A (en) Speech recognition interface unit and speed recognition method thereof
CN110415725B (en) Method and system for evaluating pronunciation quality of second language using first language data
CN106653002A (en) Literal live broadcasting method and platform
CN111599339B (en) Speech splicing synthesis method, system, equipment and medium with high naturalness
Tong et al. Goodness of tone (GOT) for non-native Mandarin tone recognition.
US8219386B2 (en) Arabic poetry meter identification system and method
CN117099157A (en) Multitasking learning for end-to-end automatic speech recognition confidence and erasure estimation
CN111508497B (en) Speech recognition method, device, electronic equipment and storage medium
CN102970618A (en) Video on demand method based on syllable identification
Zhang et al. Wake-up-word spotting using end-to-end deep neural network system
JP2014164261A (en) Information processor and information processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120502