WO2016088241A1 - Speech processing system and speech processing method - Google Patents

Speech processing system and speech processing method

Info

Publication number
WO2016088241A1
WO2016088241A1 (PCT/JP2014/082198)
Authority
WO
WIPO (PCT)
Prior art keywords
phoneme
speech
unit
phonemes
created
Prior art date
Application number
PCT/JP2014/082198
Other languages
English (en)
Japanese (ja)
Inventor
Ryo Iwamiya (岩宮 亮)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to PCT/JP2014/082198 (published as WO2016088241A1)
Publication of WO2016088241A1

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00: Speech synthesis; Text to speech systems
    • G10L13/08: Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/06: Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice

Definitions

  • the present invention relates to a voice processing system and a voice processing method.
  • in-vehicle devices that use distributed content information for speech recognition and speech synthesis have been proposed. With such a device, for example, it is possible to switch channels by performing speech recognition using a broadcast station name included in a satellite radio broadcast wave, or to read out the recognized broadcast station name.
  • phonemes corresponding to phonetic symbols of language are used for the above-mentioned speech recognition and speech synthesis.
  • a speech recognition dictionary is generated from phonemes, or speech is synthesized from phonemes.
  • Such phonemes fall into two types: phonemes created in advance outside the vehicle-mounted device and distributed within the content information together with text data (hereinafter referred to as "offline-created phonemes"), and phonemes generated on the vehicle-mounted device (online) from the text data contained in the distributed content information (hereinafter referred to as "online-generated phonemes").
  • An off-line created phoneme is disclosed in, for example, Patent Document 1
  • an on-line generated phoneme is disclosed in, for example, Patent Document 2.
  • phoneme formats and systems supported by speech recognition engines and speech synthesis engines differ from engine to engine. For example, even for engines of the same language, phoneme formats may differ from manufacturer to manufacturer.
  • some phonemes are supported in common by both English and French engines, while other phonemes are supported only by an English engine or only by a French engine.
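The engine-dependent phoneme support described above can be pictured with set operations. The phoneme inventories below are hypothetical, chosen only to illustrate common versus engine-specific phonemes; real engines define their own symbol sets and formats.

```python
# Hypothetical phoneme inventories for two engines (illustration only;
# real engines define their own symbol sets and formats).
english_engine = {"p", "t", "k", "s", "th", "ng"}
french_engine = {"p", "t", "k", "s", "r_uvular", "on_nasal"}

common = english_engine & french_engine        # supported by both engines
english_only = english_engine - french_engine  # supported only by the English engine
french_only = french_engine - english_engine   # supported only by the French engine

print(sorted(common))        # ['k', 'p', 's', 't']
print(sorted(english_only))  # ['ng', 'th']
print(sorted(french_only))   # ['on_nasal', 'r_uvular']
```

An offline-created phoneme sequence drawn only from `common` would work with either engine; one containing `th` would work only with the English engine.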
  • an advantage of offline-created phonemes is that humans can tune and create them based on knowledge of the correct reading of the text data, so they can be more accurate than online-generated phonemes, which can only be generated mechanically.
  • a disadvantage of offline-created phonemes is that they cannot be used unless they are stored in advance in the content information, so they are less readily available than online-generated phonemes.
  • the present invention has been made in view of the above problems, and an object of the present invention is to provide a technology capable of suppressing the disadvantages of offline created phonemes and online generated phonemes.
  • a speech processing system includes an information acquisition unit that acquires, from outside, content information including text data and offline-created phonemes corresponding to the reading of the text data; an extraction unit that extracts the text data and the offline-created phonemes from the content information acquired by the information acquisition unit; and a phoneme generation unit that generates online-generated phonemes based on the text data extracted by the extraction unit.
  • the speech processing system further includes a phoneme selection unit that determines whether to use the offline-created phoneme extracted by the extraction unit, selects the offline-created phoneme when it determines to use it, and, when it determines not to use it, causes the phoneme generation unit to generate an online-generated phoneme and selects that online-generated phoneme.
  • the speech processing method acquires, from outside, content information including text data and offline-created phonemes corresponding to the reading of the text data; extracts the text data and the offline-created phonemes from the acquired content information; determines whether to use the extracted offline-created phonemes; selects the offline-created phonemes when it is determined to use them; and, when it is determined not to use them, generates online-generated phonemes based on the text data and selects the online-generated phonemes.
  • FIG. 1 is a block diagram showing a configuration of a speech processing apparatus according to Embodiment 1.
  • FIG. 2 is a flowchart showing an operation of the speech processing apparatus according to Embodiment 1.
  • FIG. 3 is a block diagram showing a configuration of a speech processing apparatus according to Embodiment 2.
  • FIG. 4 is a flowchart showing an operation of the speech processing apparatus according to Embodiment 2.
  • FIG. 5 is a block diagram showing a configuration of a speech processing apparatus according to Embodiment 3.
  • FIG. 6 is a flowchart showing an operation of the speech processing apparatus according to Embodiment 3.
  • FIG. 7 is a block diagram showing a configuration of a speech processing apparatus according to a modification.
  • FIG. 1 is a block diagram showing a configuration of a speech processing apparatus 1 according to Embodiment 1 of the present invention.
  • the speech processing apparatus 1 includes a content information acquisition unit (information acquisition unit) 11, a text and phoneme extraction unit (extraction unit) 12, a phoneme generation unit 13, and a used phoneme selection unit (phoneme selection unit) 14.
  • the content information acquisition unit 11 includes, for example, a communication device that can communicate with the outside 81 (such as a satellite radio broadcast station) of the audio processing device 1 or an input device that can be connected to the communication device.
  • the text and phoneme extraction unit 12, the phoneme generation unit 13, and the used phoneme selection unit 14 are implemented, for example, as functions of a CPU (Central Processing Unit) (not shown) of the speech processing apparatus 1 by the CPU executing a program stored in a storage device of the speech processing apparatus 1, such as an HDD (Hard Disk Drive) (not shown) or a semiconductor memory.
  • the content information acquisition unit 11 acquires content information 82 including text data 82a and offline created phonemes 82b corresponding to reading of the text data 82a from the outside 81.
  • the off-line created phoneme 82b is created, for example, by a person tuning based on knowledge of correct reading of the text data 82a.
  • the text and phoneme extraction unit 12 extracts the text data 82a and the offline created phoneme 82b from the content information 82 acquired by the content information acquisition unit 11.
  • the phoneme generation unit 13 generates an online-generated phoneme based on the text data 82a extracted by the text and phoneme extraction unit 12.
  • the online-generated phoneme is generated mechanically from the text data 82a. Note that, as described next, whether the phoneme generation unit 13 generates an online-generated phoneme depends on the determination result of the used phoneme selection unit 14.
  • the used phoneme selection unit 14 determines whether to use the offline-created phoneme 82b extracted by the text and phoneme extraction unit 12; when it determines to use it, it does not cause the phoneme generation unit 13 to generate an online-generated phoneme and instead selects the offline-created phoneme 82b.
  • when it determines not to use it, the used phoneme selection unit 14 causes the phoneme generation unit 13 to generate an online-generated phoneme and selects the online-generated phoneme.
  • FIG. 2 is a flowchart showing the operation of the speech processing apparatus 1 according to the first embodiment.
  • in step S1, the content information acquisition unit 11 acquires the content information 82 from the outside 81.
  • in step S2, the text and phoneme extraction unit 12 extracts the text data 82a and the offline-created phoneme 82b from the content information 82 acquired by the content information acquisition unit 11.
  • in step S3, the used phoneme selection unit 14 determines whether to use the offline-created phoneme 82b extracted by the text and phoneme extraction unit 12. It proceeds to step S4 when it determines that the offline-created phoneme 82b is to be used, and to step S5 otherwise.
  • in step S4, the used phoneme selection unit 14 selects the offline-created phoneme 82b. Thereafter, the operation of FIG. 2 ends.
  • in step S5, the used phoneme selection unit 14 causes the phoneme generation unit 13 to generate an online-generated phoneme based on the extracted text data 82a.
  • in step S6, the used phoneme selection unit 14 selects the generated online-generated phoneme. Thereafter, the operation of FIG. 2 ends.
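The flow of steps S1 through S6 can be sketched as follows. The function names, the content dictionary layout, and the pluggable `use_offline` predicate and `generate_online` generator are hypothetical stand-ins for the units of FIG. 1, not the patent's actual implementation.

```python
def select_phoneme(content, use_offline, generate_online):
    """Sketch of the FIG. 2 flow (steps S2-S6)."""
    # S2: extract the text data and the offline-created phoneme
    text_data = content["text"]
    offline_phoneme = content.get("offline_phoneme")

    # S3/S4: select the offline-created phoneme when it is judged usable
    if offline_phoneme is not None and use_offline(offline_phoneme):
        return offline_phoneme

    # S5/S6: otherwise generate and select an online-generated phoneme
    return generate_online(text_data)


# Hypothetical usage: content without an offline phoneme falls back to
# mechanical grapheme-to-phoneme generation.
print(select_phoneme({"text": "radio one", "offline_phoneme": None},
                     use_offline=lambda p: True,
                     generate_online=lambda t: "g2p:" + t))  # g2p:radio one
print(select_phoneme({"text": "radio one", "offline_phoneme": "OFFLINE"},
                     use_offline=lambda p: True,
                     generate_online=lambda t: "g2p:" + t))  # OFFLINE
```

The key design point of the claim is that the online generation step runs only when the offline-created phoneme is rejected, which the early return above mirrors.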
  • <Summary of Embodiment 1> According to the speech processing apparatus 1 of Embodiment 1 described above, the offline-created phoneme 82b is selected when it is determined to be used, and an online-generated phoneme is selected when it is determined not to be used. Since the phoneme to be used can be selected dynamically in this way, the disadvantages of both offline-created phonemes and online-generated phonemes can be suppressed. As a result, both the accuracy of the phonemes used and the likelihood that a usable phoneme is available can be increased.
  • FIG. 3 is a block diagram showing the configuration of the speech processing apparatus 1 according to Embodiment 2 of the present invention.
  • the same or similar components as those described above are denoted by the same reference numerals, and different portions are mainly described.
  • the speech processing apparatus 1 has the function of a speech recognition apparatus and, in addition to the configuration of FIG. 1, includes a speech input unit 21, a speech recognition dictionary generation unit (dictionary generation unit) 22, a speech recognition dictionary storage unit 23, and a speech recognition unit 24.
  • the voice input unit 21 is configured by a voice input device such as a microphone
  • the voice recognition dictionary storage unit 23 is configured by a storage device such as an HDD or a semiconductor memory.
  • the voice recognition dictionary generation unit 22 and the voice recognition unit 24 are realized as functions of a CPU (not shown) of the voice processing device 1, for example.
  • the voice input unit 21 receives voice from outside (for example, a user).
  • the speech recognition dictionary generation unit 22 generates a speech recognition dictionary based on the phonemes selected by the used phoneme selection unit 14.
  • the voice recognition dictionary generated by the voice recognition dictionary generation unit 22 is stored in the voice recognition dictionary storage unit 23.
  • the used phoneme selection unit 14 determines that the offline-created phoneme 82b is to be used when the offline-created phoneme 82b extracted by the text and phoneme extraction unit 12 contains only predetermined phonemes to be used for generating the speech recognition dictionary.
  • conversely, the used phoneme selection unit 14 determines that the offline-created phoneme 82b is not to be used when the extracted offline-created phoneme 82b contains phonemes other than the predetermined phonemes.
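The determination just described amounts to a subset check: the offline-created phoneme is used only if every phoneme it contains appears in the engine's predetermined set. A minimal sketch, with hypothetical phoneme symbols:

```python
def offline_phoneme_usable(offline_phonemes, supported_phonemes):
    """True when the offline-created phoneme sequence contains only
    phonemes that the dictionary generator is predetermined to accept."""
    return set(offline_phonemes) <= set(supported_phonemes)


# Hypothetical predetermined phoneme set for a recognition engine.
supported = {"a", "i", "u", "e", "o", "k", "s", "t", "n"}
print(offline_phoneme_usable(["t", "o", "n", "a"], supported))  # True
print(offline_phoneme_usable(["t", "o", "x"], supported))       # False: "x" unsupported
```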
  • the speech recognition unit 24 performs speech recognition of the speech to be recognized (the speech received by the speech input unit 21) using the speech recognition dictionary generated by the speech recognition dictionary generation unit 22 and stored in the speech recognition dictionary storage unit 23.
  • FIG. 4 is a flowchart showing the operation of the speech processing apparatus 1 according to the second embodiment.
  • in steps S11 and S12, operations similar to those in steps S1 and S2 of FIG. 2 are performed.
  • in step S13, the used phoneme selection unit 14 determines whether the offline-created phoneme 82b extracted by the text and phoneme extraction unit 12 contains only predetermined phonemes to be used for generating the speech recognition dictionary.
  • in step S14, if the used phoneme selection unit 14 determines that the extracted offline-created phoneme 82b contains only the predetermined phonemes, it determines that the offline-created phoneme 82b is to be used and proceeds to step S15; otherwise, it determines that the offline-created phoneme 82b is not to be used and proceeds to step S16.
  • in step S15, the used phoneme selection unit 14 selects the offline-created phoneme 82b, and the speech recognition dictionary generation unit 22 generates the speech recognition dictionary based on the offline-created phoneme 82b (the phoneme selected by the used phoneme selection unit 14). Thereafter, the process proceeds to step S19.
  • in step S16, the used phoneme selection unit 14 causes the phoneme generation unit 13 to generate an online-generated phoneme based on the extracted text data 82a.
  • in step S17, the used phoneme selection unit 14 selects the generated online-generated phoneme, and the speech recognition dictionary generation unit 22 generates the speech recognition dictionary based on the online-generated phoneme (the phoneme selected by the used phoneme selection unit 14). Thereafter, the process proceeds to step S19.
  • in step S18, in parallel with steps S11 to S17, the speech input unit 21 receives speech from the outside. Thereafter, the process proceeds to step S19.
  • in step S19, the speech recognition unit 24 performs speech recognition of the speech to be recognized (the speech received by the speech input unit 21) using the speech recognition dictionary generated by the speech recognition dictionary generation unit 22. Thereafter, the operation of FIG. 4 ends.
  • <Summary of Embodiment 2> According to the speech processing apparatus 1 of Embodiment 2 described above, the disadvantages of offline-created phonemes and of online-generated phonemes can be suppressed in the generation of the speech recognition dictionary, as in Embodiment 1. As a result, the accuracy of the phonemes used and the likelihood that a usable phoneme is available can be increased in generating the speech recognition dictionary. Therefore, a speech recognition dictionary can be generated appropriately in regions, such as Europe, where multiple languages with different phonemes are used.
  • the used phoneme selection unit 14 may perform the above selection for each source 81: offline-created phonemes 82b may be selected (used) for some sources 81, while online-generated phonemes are selected (used) for the remaining sources 81.
  • alternatively, the above selection may be performed for each piece of text data 82a and each offline-created phoneme 82b.
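Performing the selection independently per source, as described above, might be tracked with a simple mapping from source name to decision. The source identifiers and the `usable` predicate below are hypothetical illustrations:

```python
def select_per_source(sources, usable):
    """Decide offline vs. online phoneme use independently per source."""
    return {name: "offline" if usable(phonemes) else "online"
            for name, phonemes in sources.items()}


decisions = select_per_source(
    {"station_A": ["k", "a"], "station_B": ["zh"]},  # hypothetical sources
    usable=lambda p: set(p) <= {"a", "i", "k", "s"},
)
print(decisions)  # {'station_A': 'offline', 'station_B': 'online'}
```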
  • FIG. 5 is a block diagram showing the configuration of the speech processing apparatus 1 according to Embodiment 3 of the present invention.
  • the same or similar components as those described above are denoted by the same reference numerals, and different portions will be mainly described.
  • the speech processing apparatus 1 has the function of a speech synthesizer and, in addition to the configuration of FIG. 1, includes a speech synthesis unit 31 and a speech output unit 32.
  • the voice synthesizer 31 is realized as a function of a CPU (not shown) of the voice processing device 1, for example.
  • the audio output unit 32 includes, for example, an audio output device such as a speaker.
  • the speech synthesizer 31 synthesizes the speech output from the speech output unit 32 using the phonemes selected by the used phoneme selector 14.
  • the used phoneme selection unit 14 determines that the offline-created phoneme 82b is to be used when the offline-created phoneme 82b extracted by the text and phoneme extraction unit 12 contains only predetermined phonemes to be used for synthesizing the speech output from the speech output unit 32. Conversely, it determines that the offline-created phoneme 82b is not to be used when the extracted offline-created phoneme 82b contains phonemes other than the predetermined phonemes.
  • the voice output unit 32 outputs the voice synthesized by the voice synthesis unit 31 to the outside.
  • FIG. 6 is a flowchart showing the operation of the speech processing apparatus 1 according to the third embodiment.
  • in steps S21 and S22, operations similar to those in steps S1 and S2 of FIG. 2 are performed.
  • in step S23, the used phoneme selection unit 14 determines whether the offline-created phoneme 82b extracted by the text and phoneme extraction unit 12 contains only predetermined phonemes to be used for synthesizing the speech output from the speech output unit 32.
  • in step S24, if the used phoneme selection unit 14 determines that the extracted offline-created phoneme 82b contains only the predetermined phonemes, it determines that the offline-created phoneme 82b is to be used and proceeds to step S25; otherwise, it determines that the offline-created phoneme 82b is not to be used and proceeds to step S26.
  • in step S25, the used phoneme selection unit 14 selects the offline-created phoneme 82b, and the speech synthesis unit 31 synthesizes the speech to be output from the speech output unit 32 based on the offline-created phoneme 82b (the phoneme selected by the used phoneme selection unit 14). Thereafter, the process proceeds to step S28.
  • in step S26, the used phoneme selection unit 14 causes the phoneme generation unit 13 to generate an online-generated phoneme based on the extracted text data 82a.
  • in step S27, the used phoneme selection unit 14 selects the generated online-generated phoneme, and the speech synthesis unit 31 synthesizes the speech to be output from the speech output unit 32 based on the online-generated phoneme (the phoneme selected by the used phoneme selection unit 14). Thereafter, the process proceeds to step S28.
  • in step S28, the speech output unit 32 outputs the speech synthesized by the speech synthesis unit 31 to the outside. Thereafter, the operation of FIG. 6 ends.
  • the voice processing device 1 may be a combination of the configuration of the second embodiment and the configuration of the third embodiment.
  • FIG. 7 is a block diagram showing a configuration of the speech processing apparatus 1 according to this modification.
  • for generating the speech recognition dictionary, the used phoneme selection unit 14 determines that the offline-created phoneme 82b is to be used when the offline-created phoneme 82b extracted by the text and phoneme extraction unit 12 contains only predetermined phonemes to be used for generating the speech recognition dictionary; otherwise, it determines that the offline-created phoneme 82b is not to be used.
  • likewise, for speech synthesis, the used phoneme selection unit 14 determines that the offline-created phoneme 82b is to be used when the extracted offline-created phoneme 82b contains only predetermined phonemes to be used for synthesizing the speech output from the speech output unit 32; otherwise, it determines that the offline-created phoneme 82b is not to be used.
  • the present invention can also be applied to a voice processing system constructed by appropriately combining, as a system, servers and the functions of navigation devices mountable on a vehicle, portable navigation devices, communication terminals (for example, portable terminals such as mobile phones, smartphones, and tablets), and applications installed on these devices, in place of the voice processing device 1 described above. In this case, each function or each component of the voice processing device 1 may be distributed among the devices constructing the system, or may be concentrated in any one device.
  • 1 speech processing device, 11 content information acquisition unit, 12 text and phoneme extraction unit, 13 phoneme generation unit, 14 used phoneme selection unit, 22 speech recognition dictionary generation unit, 24 speech recognition unit, 31 speech synthesis unit, 81 outside (content source), 82 content information, 82a text data, 82b offline-created phoneme.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Machine Translation (AREA)

Abstract

An object of the present invention is to provide a technique capable of minimizing the respective disadvantages of an offline-prepared phoneme and an online-generated phoneme. A speech processing device (1) comprises a text and phoneme extraction unit (12) for extracting, from content information (82), text data (82a) and an offline-prepared phoneme (82b), a phoneme generation unit (13), and a used phoneme selection unit (14). The used phoneme selection unit (14) determines whether or not to use the extracted offline-prepared phoneme (82b) and selects either the offline-prepared phoneme (82b), if it is determined that it is to be used, or an online-generated phoneme produced by the phoneme generation unit (13), if it is determined that the offline-prepared phoneme (82b) is not to be used.
PCT/JP2014/082198 2014-12-05 2014-12-05 Speech processing system and speech processing method WO2016088241A1

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/082198 WO2016088241A1 2014-12-05 2014-12-05 Speech processing system and speech processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/082198 WO2016088241A1 2014-12-05 2014-12-05 Speech processing system and speech processing method

Publications (1)

Publication Number Publication Date
WO2016088241A1 2016-06-09

Family

Family ID: 56091213

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/082198 WO2016088241A1 2014-12-05 2014-12-05 Speech processing system and speech processing method

Country Status (1)

Country Link
WO (1) WO2016088241A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005227545A * 2004-02-13 2005-08-25 Matsushita Electric Ind Co Ltd Dictionary creation device, program guide device, and dictionary creation method
WO2007069512A1 * 2005-12-15 2007-06-21 Sharp Kabushiki Kaisha Information processing device and program therefor
JP2011033874A * 2009-08-03 2011-02-17 Alpine Electronics Inc Multilingual speech recognition device and method for creating a multilingual speech recognition dictionary
JP2012058311A * 2010-09-06 2012-03-22 Alpine Electronics Inc Method and device for generating a dynamic speech recognition dictionary
WO2012172596A1 * 2011-06-14 2012-12-20 Mitsubishi Electric Corporation Pronunciation information generation device, in-vehicle information device, and database generation method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14907512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14907512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP