CN106875942B - Acoustic model self-adaption method based on accent bottleneck characteristics - Google Patents


Info

Publication number
CN106875942B
CN106875942B
Authority
CN
China
Prior art keywords
accent
deep
acoustic model
neural network
audio data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611232996.4A
Other languages
Chinese (zh)
Other versions
CN106875942A (en)
Inventor
陶建华
易江燕
温正棋
倪浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201611232996.4A
Publication of CN106875942A
Application granted
Publication of CN106875942B
Legal status: Active

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/16 - Speech classification or search using artificial neural networks
    • G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G10L15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 - Training
    • G10L15/065 - Adaptation
    • G10L17/00 - Speaker identification or verification techniques
    • G10L17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention belongs to the technical field of speech recognition, and in particular relates to an acoustic model self-adaption method based on accent bottleneck characteristics. To enable personalized customization of the acoustic model for users with different accents, the method provided by the invention comprises the following steps: S1, based on a first deep neural network, taking the voiceprint splicing features of a plurality of accent audio data as training samples to obtain a deep accent bottleneck network model; S2, acquiring accent splicing features of the accent audio data based on the deep accent bottleneck network; S3, based on a second deep neural network, taking the accent splicing features of the accent audio data as training samples to obtain an accent-independent baseline acoustic model; S4, adjusting parameters of the accent-independent baseline acoustic model by using the accent splicing features of audio data of a specific accent to generate an accent-dependent acoustic model. The method improves the accuracy of accented speech recognition.

Description

Acoustic model self-adaption method based on accent bottleneck characteristics
Technical Field
The invention belongs to the technical field of speech recognition, and in particular relates to an acoustic model self-adaption method based on accent bottleneck characteristics.
Background
To date, speech recognition technology has become an important portal for human-computer interaction, and the number of users of this technology keeps growing. Because these users come from all over the country and their accents differ greatly, a universal speech recognition acoustic model is difficult to apply to all users; acoustic models therefore need to be customized for users with different accents. At present, voiceprint feature extraction has been widely applied in the field of speaker recognition, and the voiceprint features of a speaker are closely related to the speaker's accent. Although a few researchers have extracted accent features by means of voiceprint feature extraction, such techniques cannot represent accent features at a high level, and a high-level representation of accent features is crucial for personalized customization of the acoustic model.
Therefore, there is a need in the art for a new method to solve the above problems.
Disclosure of Invention
In order to solve the above problems in the prior art, that is, to implement personalized customization of an acoustic model for users with different accents, the invention provides an acoustic model adaptive method based on accent bottleneck characteristics. The method comprises the following steps:
S1, based on a first deep neural network, taking the voiceprint splicing features of a plurality of accent audio data as training samples to obtain a deep accent bottleneck network model;
S2, acquiring accent splicing features of the accent audio data based on the deep accent bottleneck network;
S3, based on a second deep neural network, taking the accent splicing features of the accent audio data as training samples to obtain an accent-independent baseline acoustic model;
S4, adjusting parameters of the accent-independent baseline acoustic model by using the accent splicing features of audio data of a specific accent to generate an accent-dependent acoustic model.
Preferably, in step S1, the step of acquiring the voiceprint splicing feature includes:
S11, extracting acoustic features from the accent audio data;
S12, extracting a voiceprint feature vector of the speaker by using the acoustic features;
S13, fusing the voiceprint feature vector and the acoustic features to generate a voiceprint splicing feature.
Preferably, in step S1, the first deep neural network is a deep feedforward neural network model, and the deep feedforward neural network model is trained with the voiceprint splicing features of the plurality of accent audio data, so as to obtain the deep accent bottleneck network.
Preferably, the step S2 further includes:
S21, extracting accent bottleneck features of the accent audio data by using the deep accent bottleneck network model;
S22, fusing the accent bottleneck features and the acoustic features to obtain the accent splicing features of the accent audio data.
Preferably, the step S21 further includes: taking the voiceprint splicing features of the accent audio data as the input of the deep accent bottleneck network model, and obtaining the accent bottleneck features of the accent audio data by using a forward propagation algorithm.
Preferably, in step S3, the second deep neural network is a deep bidirectional long short-term memory (BLSTM) recurrent neural network, and the deep BLSTM recurrent neural network is trained with the accent splicing features, so as to obtain an accent-independent deep BLSTM acoustic model;
the accent-independent deep BLSTM acoustic model is taken as the accent-independent baseline acoustic model.
Preferably, in step S4, parameters of an output layer of the accent-independent baseline acoustic model are adjusted using the accent splicing features to produce the accent-dependent acoustic model.
Preferably, in step S4, parameters of the last output layer of the accent-independent baseline acoustic model are adjusted.
Preferably, parameters of an output layer of the accent-independent baseline acoustic model are adjusted using a back propagation algorithm.
The acoustic model self-adaption method based on accent bottleneck characteristics has the following beneficial effects:
(1) The accent splicing features extracted by the deep accent bottleneck network are more abstract and more general, and thus provide an accurate high-level representation of the accent.
(2) The output layer of the accent-independent baseline acoustic model is adapted with the accent splicing features, so that each accent has its own output layer while the hidden-layer parameters are shared, which reduces the storage space of the models.
(3) The method improves the accuracy of accented speech recognition.
Drawings
FIG. 1 is a flow chart of an acoustic model adaptation method based on accent bottleneck characteristics of the present invention;
FIG. 2 is an overall flow diagram of an embodiment of the present invention;
FIG. 3 is a flow diagram of generating a voiceprint splicing feature according to an embodiment of the invention;
FIG. 4 is a flow chart of generating an accent splicing feature according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
Referring to fig. 1, fig. 1 shows a flow chart of an acoustic model adaptation method based on an accent bottleneck characteristic of the present invention. The method of the invention comprises the following steps:
S1, based on a first neural network model, taking the voiceprint splicing features of a plurality of accent audio data as training samples to obtain a deep accent bottleneck network;
S2, acquiring accent splicing features of the accent audio data based on the deep accent bottleneck network;
S3, based on a second neural network model, taking the accent splicing features of the accent audio data as training samples to obtain an accent-independent baseline acoustic model;
S4, adjusting parameters of the accent-independent baseline acoustic model by using the accent splicing features of audio data of a specific accent to generate an accent-dependent acoustic model.
FIG. 2 shows the overall flow diagram of an embodiment of the invention. The method of the present invention is described in detail below with reference to FIG. 2.
In step S1, the step of obtaining the voiceprint splicing features includes:
and S11, extracting acoustic features from the accent audio data. Specifically, the step mainly adopts mel-frequency spectrum characteristics or mel-frequency cepstrum characteristics. Taking the mel-frequency cepstrum feature as an example, the static parameter of the mel-frequency cepstrum feature can be 13-dimensional, first-order difference and second-order difference are carried out on the static parameter, the dimension of the final parameter is 39-dimensional, and then the 39-dimensional feature is utilized for carrying out subsequent processing.
S12, extracting the voiceprint feature vector of the speaker by using the acoustic features. Specifically, a Gaussian mixture model–universal background model (GMM-UBM) is trained with the acoustic features, and the voiceprint feature vector of each speaker is then extracted from the acoustic features by using the GMM-UBM; the voiceprint feature vector is 80-dimensional.
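The patent does not specify the exact GMM-UBM voiceprint extractor, so the sketch below is only an illustrative stand-in: it approximates the 80-dimensional voiceprint vector by MAP-adapting the UBM means per speaker and projecting the resulting supervectors with PCA. The component count, relevance factor, scikit-learn, and the PCA step are all assumptions of this sketch.

```python
# Illustrative stand-in for S12 (not the patented extractor).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

def train_ubm(pooled_frames, n_components=64):
    """pooled_frames: (N, 39) MFCC frames pooled over all speakers."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag")
    ubm.fit(pooled_frames)
    return ubm

def mean_supervector(ubm, frames, relevance=16.0):
    """MAP-adapt the UBM means to one speaker and flatten them into a supervector."""
    post = ubm.predict_proba(frames)              # (T, C) responsibilities
    counts = post.sum(axis=0)                     # soft counts per component
    first_order = post.T @ frames                 # (C, 39) first-order statistics
    alpha = (counts / (counts + relevance))[:, None]
    adapted = alpha * (first_order / np.maximum(counts[:, None], 1e-8)) \
              + (1.0 - alpha) * ubm.means_
    return adapted.ravel()                        # (C * 39,)

def voiceprint_vectors(supervectors, dim=80):
    """Project one supervector per speaker down to 80 dimensions (needs >= 80 speakers)."""
    return PCA(n_components=dim).fit_transform(np.stack(supervectors))
```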
S13, fusing the voiceprint feature vector and the acoustic features to generate the voiceprint splicing features. As shown in FIG. 3, in the process of producing the voiceprint splicing features, the acoustic features extracted in S11 are fused with the voiceprint feature vector extracted in S12. Specifically, the voiceprint feature vector of each speaker is spliced onto the acoustic features of each frame, thereby generating the voiceprint splicing features.
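A minimal sketch of this frame-level splicing, assuming the 39-dimensional acoustic features and 80-dimensional voiceprint vector described above:

```python
# Sketch of S13: tile the speaker's voiceprint vector and append it to every frame.
import numpy as np

def splice_voiceprint(frames, voiceprint):
    """frames: (T, 39); voiceprint: (80,)  ->  voiceprint splicing features (T, 119)."""
    tiled = np.tile(voiceprint, (frames.shape[0], 1))
    return np.concatenate([frames, tiled], axis=1)
```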
In step S1, the first neural network may be a deep feedforward neural network model, and the deep feedforward neural network model is trained with the generated voiceprint splicing features to obtain the deep accent bottleneck network. In this embodiment, the last hidden layer of the deep accent bottleneck network has 60 nodes, fewer than the other hidden layers, which may have 1024 or 2048 nodes each. In this embodiment, the training criterion of the deep feedforward neural network model is cross entropy, and the training method is the back-propagation algorithm. The activation function of the deep feedforward neural network model may be a sigmoid activation function or a hyperbolic tangent activation function, and the loss function of the network is cross entropy; these belong to techniques known in the art and are not described in detail herein.
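A minimal PyTorch sketch of such a bottleneck network is shown below; the 1024-node hidden layers, 60-node bottleneck, sigmoid activation, and cross-entropy training follow the embodiment, while the input dimension (119), the number of hidden layers, and the accent-label inventory are assumptions of the sketch.

```python
# Sketch of the deep feedforward accent-bottleneck network of S1 (assumption: PyTorch).
import torch
import torch.nn as nn

class AccentBottleneckDNN(nn.Module):
    def __init__(self, input_dim=119, hidden_dim=1024, bottleneck_dim=60, num_accents=8):
        super().__init__()
        self.front = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
        )
        self.bottleneck = nn.Linear(hidden_dim, bottleneck_dim)   # last hidden layer: 60 nodes
        self.classifier = nn.Linear(bottleneck_dim, num_accents)  # accent labels

    def forward(self, x):
        return self.classifier(torch.sigmoid(self.bottleneck(self.front(x))))

    def extract_bottleneck(self, x):
        """Forward pass up to the 60-dim bottleneck layer (the accent bottleneck feature)."""
        with torch.no_grad():
            return self.bottleneck(self.front(x))

# Training with cross entropy and back-propagation, as stated in the embodiment:
# loss = nn.CrossEntropyLoss()(model(frames), accent_labels); loss.backward(); optimizer.step()
```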
In step S2, the step of acquiring the accent splicing features includes:
S21, extracting accent bottleneck features of the accent audio data by using the deep accent bottleneck network;
S22, fusing the accent bottleneck features and the acoustic features to obtain the accent splicing features of the accent audio data.
Specifically, the deep accent bottleneck network obtained in step S1 is regarded as a feature extractor: the voiceprint splicing features generated in step S13 are used as the input of the deep accent bottleneck network, and the accent bottleneck features of the accent audio data are obtained by a forward propagation algorithm. In this embodiment, the accent bottleneck feature is 60-dimensional. As shown in FIG. 4, in the process of producing the accent splicing features, the accent bottleneck features extracted in S21 and the acoustic features extracted in S11 are fused at the frame level, thereby generating the accent splicing features.
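Reusing the sketches above, extracting the bottleneck features and splicing them with the acoustic features could look as follows; the 99-dimensional result assumes the 39- and 60-dimensional features of this embodiment.

```python
# Sketch of S21/S22: the trained bottleneck network acts as a feature extractor.
import numpy as np
import torch

def accent_splice(model, voiceprint_spliced, acoustic_feats):
    """voiceprint_spliced: (T, 119); acoustic_feats: (T, 39)  ->  accent splicing features (T, 99)."""
    x = torch.from_numpy(voiceprint_spliced).float()
    bottleneck = model.extract_bottleneck(x).numpy()   # (T, 60), forward propagation only
    return np.concatenate([acoustic_feats, bottleneck], axis=1)
```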
In step S3, the second neural network may be a deep bidirectional long short-term memory (BLSTM) recurrent neural network. The deep BLSTM recurrent neural network is trained with the accent splicing features obtained in step S2; that is, the accent splicing features obtained in step S2 are the input of the deep BLSTM recurrent neural network, and the output-layer labels are initials and finals. This yields an accent-independent deep BLSTM acoustic model, which is taken as the accent-independent baseline acoustic model. In this embodiment, the training criterion of the deep BLSTM recurrent neural network is the connectionist temporal classification (CTC) function, and the training method is the back-propagation algorithm. The deep BLSTM recurrent neural network can not only memorize historical information of the input features but also exploit future context of the input features; it uses three control gates, namely an input gate, a forget gate, and an output gate, to realize this memorization and prediction. The deep BLSTM recurrent neural network belongs to techniques known in the art and is not described in detail herein.
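A minimal PyTorch sketch of such an accent-independent baseline is shown below; the bidirectional LSTM structure and CTC criterion follow the embodiment, while the number of layers, hidden size, and the size of the initial/final label set are assumptions of the sketch.

```python
# Sketch of the accent-independent deep BLSTM baseline acoustic model of S3 (assumption: PyTorch).
import torch
import torch.nn as nn

class BLSTMAcousticModel(nn.Module):
    def __init__(self, input_dim=99, hidden_dim=512, num_layers=3, num_labels=100):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=num_layers,
                            bidirectional=True, batch_first=True)
        self.output = nn.Linear(2 * hidden_dim, num_labels + 1)  # +1 for the CTC blank

    def forward(self, x):              # x: (batch, T, input_dim) accent splicing features
        h, _ = self.lstm(x)
        return self.output(h)          # per-frame scores over initial/final labels

# CTC training, as stated in the embodiment:
# ctc = nn.CTCLoss(blank=0)
# log_probs = model(feats).log_softmax(-1).transpose(0, 1)   # (T, batch, labels)
# loss = ctc(log_probs, targets, input_lengths, target_lengths); loss.backward()
```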
In step S4, the parameters of the output layer (typically the last output layer) of the accent-independent baseline acoustic model obtained in step S3 are fine-tuned using the accent splicing features obtained in step S2, to produce an accent-dependent acoustic model. Specifically, the accent splicing features corresponding to each accent are used as the input of the accent-independent baseline acoustic model; each accent then has its own accent-dependent output layer, while the hidden layers are shared by all accents. Further, the back-propagation algorithm is used to fine-tune the parameters of the accent-independent baseline acoustic model. Since the accent-independent baseline acoustic model is a deep BLSTM recurrent neural network model, the accent-dependent acoustic model generated from it is also a deep BLSTM recurrent neural network model whose output-layer labels are initials and finals; combined with a pronunciation dictionary and a language model, the text corresponding to the audio data can be recognized.
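The adaptation step can be sketched as follows, reusing the BLSTM class above: the shared LSTM layers are frozen and only a per-accent copy of the output layer is fine-tuned by back-propagation. The optimizer, learning rate, and number of epochs are assumptions of this sketch.

```python
# Sketch of S4: per-accent fine-tuning of the output layer of the baseline model.
import copy
import torch
import torch.nn as nn

def adapt_to_accent(baseline, accent_loader, epochs=3, lr=1e-4):
    model = copy.deepcopy(baseline)
    for p in model.lstm.parameters():              # hidden (LSTM) layers stay shared and frozen
        p.requires_grad = False
    optimizer = torch.optim.SGD(model.output.parameters(), lr=lr)
    ctc = nn.CTCLoss(blank=0)
    for _ in range(epochs):
        for feats, targets, in_lens, tgt_lens in accent_loader:
            log_probs = model(feats).log_softmax(-1).transpose(0, 1)
            loss = ctc(log_probs, targets, in_lens, tgt_lens)
            optimizer.zero_grad()
            loss.backward()                        # back-propagation, as in the embodiment
            optimizer.step()
    return model                                   # accent-dependent acoustic model
```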
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (6)

1. An acoustic model adaptive method based on accent bottleneck characteristics, characterized in that the method comprises the following steps:
S1, based on a first deep neural network, taking the voiceprint splicing features of a plurality of accent audio data as training samples to obtain a deep accent bottleneck network model;
S2, acquiring accent splicing features of the accent audio data based on the deep accent bottleneck network;
S3, based on a second deep neural network, taking the accent splicing features of the accent audio data as training samples to obtain an accent-independent baseline acoustic model;
S4, adjusting parameters of the accent-independent baseline acoustic model by using the accent splicing features of audio data of a specific accent to generate an accent-dependent acoustic model;
wherein the step S2 further includes:
S21, extracting accent bottleneck features of the accent audio data by using the deep accent bottleneck network model;
S22, fusing the accent bottleneck features and the acoustic features to obtain the accent splicing features of the accent audio data;
wherein the voiceprint splicing features of the accent audio data are taken as the input of the deep accent bottleneck network model, and the accent bottleneck features of the accent audio data are obtained by using a forward propagation algorithm;
wherein, in step S3, the second deep neural network is a deep bidirectional long short-term memory (BLSTM) recurrent neural network,
the deep BLSTM recurrent neural network is trained with the plurality of accent splicing features to obtain an accent-independent deep BLSTM acoustic model;
the accent-independent deep BLSTM acoustic model is taken as the accent-independent baseline acoustic model;
the labels of the output layer of the deep BLSTM recurrent neural network are initials and finals, and the training criterion of the deep BLSTM recurrent neural network is a connectionist temporal classification (CTC) function.
2. The method according to claim 1, wherein in step S1, the step of obtaining the voiceprint splicing features comprises:
S11, extracting acoustic features from the accent audio data;
S12, extracting a voiceprint feature vector of the speaker by using the acoustic features;
S13, fusing the voiceprint feature vector and the acoustic features to generate the voiceprint splicing features.
3. The method according to claim 2, wherein in step S1, the first deep neural network is a deep feedforward neural network, and the deep feedforward neural network is trained with the voiceprint splicing features of the plurality of accent audio data to obtain the deep accent bottleneck network.
4. The method according to claim 1, wherein in step S4, parameters of an output layer of the accent-independent baseline acoustic model are adjusted using the accent splicing features to generate the accent-dependent acoustic model.
5. The method of claim 4, wherein in step S4, parameters of a last output layer of the accent-independent baseline acoustic model are adjusted.
6. The method of claim 4 or 5, wherein parameters of an output layer of the accent-independent baseline acoustic model are adjusted using a back propagation algorithm.
CN201611232996.4A 2016-12-28 2016-12-28 Acoustic model self-adaption method based on accent bottleneck characteristics Active CN106875942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611232996.4A CN106875942B (en) 2016-12-28 2016-12-28 Acoustic model self-adaption method based on accent bottleneck characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611232996.4A CN106875942B (en) 2016-12-28 2016-12-28 Acoustic model self-adaption method based on accent bottleneck characteristics

Publications (2)

Publication Number Publication Date
CN106875942A CN106875942A (en) 2017-06-20
CN106875942B (en) 2021-01-22

Family

ID=59164199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611232996.4A Active CN106875942B (en) 2016-12-28 2016-12-28 Acoustic model self-adaption method based on accent bottleneck characteristics

Country Status (1)

Country Link
CN (1) CN106875942B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108074575A (en) * 2017-12-14 2018-05-25 广州势必可赢网络科技有限公司 Identity verification method and device based on recurrent neural network
CN108447490B (en) * 2018-02-12 2020-08-18 阿里巴巴集团控股有限公司 Voiceprint recognition method and device based on memorability bottleneck characteristics
CN108538285B (en) * 2018-03-05 2021-05-04 清华大学 Multi-instance keyword detection method based on multitask neural network
CN108682416B (en) * 2018-04-11 2021-01-01 深圳市卓翼科技股份有限公司 Local adaptive speech training method and system
CN108682417B (en) * 2018-05-14 2020-05-19 中国科学院自动化研究所 Small data voice acoustic modeling method in voice recognition
CN108922559A (en) * 2018-07-06 2018-11-30 华南理工大学 Recording terminal clustering method based on voice time-frequency conversion feature and integral linear programming
CN109147763B (en) * 2018-07-10 2020-08-11 深圳市感动智能科技有限公司 Audio and video keyword identification method and device based on neural network and inverse entropy weighting
CN109074804B (en) * 2018-07-18 2021-04-06 深圳魔耳智能声学科技有限公司 Accent-based speech recognition processing method, electronic device, and storage medium
CN110890085B (en) * 2018-09-10 2023-09-12 阿里巴巴集团控股有限公司 Voice recognition method and system
CN109887497B (en) * 2019-04-12 2021-01-29 北京百度网讯科技有限公司 Modeling method, device and equipment for speech recognition
CN111833847B (en) * 2019-04-15 2023-07-25 北京百度网讯科技有限公司 Voice processing model training method and device
CN110033760B (en) 2019-04-15 2021-01-29 北京百度网讯科技有限公司 Modeling method, device and equipment for speech recognition
CN110570858A (en) * 2019-09-19 2019-12-13 芋头科技(杭州)有限公司 Voice awakening method and device, intelligent sound box and computer readable storage medium
CN110930982A (en) * 2019-10-31 2020-03-27 国家计算机网络与信息安全管理中心 Multi-accent acoustic model and multi-accent voice recognition method
CN111370025A (en) * 2020-02-25 2020-07-03 广州酷狗计算机科技有限公司 Audio recognition method and device and computer storage medium
CN111508501B (en) * 2020-07-02 2020-09-29 成都晓多科技有限公司 Voice recognition method and system with accent for telephone robot
CN112992126B (en) * 2021-04-22 2022-02-25 北京远鉴信息技术有限公司 Voice authenticity verification method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN106875942A (en) 2017-06-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant