CN114565593B - Full-field digital image classification and detection method based on semi-supervision and attention - Google Patents
Full-field digital image classification and detection method based on semi-supervision and attention
- Publication number
- CN114565593B CN114565593B CN202210208369.6A CN202210208369A CN114565593B CN 114565593 B CN114565593 B CN 114565593B CN 202210208369 A CN202210208369 A CN 202210208369A CN 114565593 B CN114565593 B CN 114565593B
- Authority
- CN
- China
- Prior art keywords
- full
- digital image
- attention
- classification
- field digital
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
The invention discloses a full-field digital image classification and detection method based on semi-supervision and attention. A full-field digital image classification and detection framework is constructed that can directly output classification results and visually display the region of interest, assisting the user in accurately judging the image type while rapidly locating the region of interest. Compared with weakly supervised learning methods that do not label regions of interest, the method greatly improves the classification accuracy of full-field digital images and accurately detects the region of interest while labeling only a small number of regions of interest, and therefore has higher practicability.
Description
Technical Field
The invention relates to the technical field of full-field digital image processing, in particular to a full-field digital image classification and region-of-interest detection method based on semi-supervised learning and attention.
Background
A full-field digital image is an ultra-high-resolution image generated by scanning with a fully automatic microscope scanner followed by automatic computer processing, and can generally exceed ten gigapixels. A single full-field digital image contains a large amount of information, and a professional must spend a great deal of time searching it for regions of interest to label. Moreover, judging the image type and retrieving the region of interest depend on subjective human opinion and are limited by subjectivity, fatigue and differences in cognition; even for the best-presented full-field digital images in a sample set, it is difficult to obtain consistent results, and key problems such as false detection and missed detection easily occur.
In recent years, artificial intelligence technology has gradually been introduced into the field of full-field digital image classification, achieved excellent results, and received unprecedented attention. Convolutional neural networks do not depend on manually defined, selected and designed feature descriptors; they can automatically mine the deep information of a full-field digital image to extract image features and complete classification, and offer high efficiency, high stability and strong generalization.
The inventors found that current deep-learning-based full-field digital image classification methods require a professional to label the regions of interest of the full-field digital image; the regions of interest are then extracted and fed into a network for training to complete the classification task. This approach achieves high accuracy but requires a huge full-field digital image dataset with labeled regions of interest. Because labeling the regions of interest of full-field digital images costs a great deal of time and human resources, the approach is largely limited by the inability to construct large-scale full-field digital image sample datasets. Other researchers classify using datasets without region-of-interest labels, but because spatial, textural and other features of the region of interest cannot be effectively extracted, the accuracy of such classification models is low. In addition, both approaches only complete the classification task; the region of interest is not detected, so when judging the image type the user cannot quickly locate the region of interest and still needs a great deal of time.
Therefore, there is a need for a full-field digital image classification and region-of-interest detection method that does not require a large-scale dataset with labeled regions of interest yet achieves high classification accuracy.
Disclosure of Invention
The technical problem the invention aims to solve is that existing deep-learning-based full-field digital image classification methods are limited by the lack of large-scale datasets with labeled regions of interest. The invention provides a full-field digital image classification and region-of-interest detection method based on semi-supervised learning and attention that greatly improves classification accuracy while requiring only a small number of region-of-interest labels.
The method comprises the following steps:
step S1: full field digital images are collected and preprocessed.
Step S2: the pre-training feature extraction network Resnet18 is used for extracting the features of the full-field digital image, and specifically comprises the following steps:
step S21: selecting a part of full-view digital image and a standard control sample, framing a region of interest by using a marking frame, and framing the content part of the standard control sample by using the marking frame;
step S22: generating a mask with the same size and position as those of the region of interest on the preprocessed full-field digital image by using a labeling frame of the region of interest;
step S23: dividing the preprocessed full-field digital image into a plurality of n×n small image blocks using a sliding window, where n is the pixel width and pixel height of the small image blocks;
step S24: overlapping the mask and the preprocessed full-field digital image, removing small image blocks at non-overlapping positions, and reserving the small image blocks at the overlapping positions;
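Steps S23 and S24 amount to a windowed tile-and-filter pass over the image. A minimal sketch in NumPy (the function name, the 0.5 overlap threshold, and the array representation are illustrative assumptions, not specified by the patent):

```python
import numpy as np

def tile_and_filter(wsi, mask, n=256, min_overlap=0.5):
    """Tile a preprocessed full-field image into n x n patches (step S23)
    and keep only those whose overlap with the region-of-interest mask
    exceeds a threshold (step S24).
    wsi: H x W x 3 uint8 array; mask: H x W boolean array."""
    h, w = mask.shape
    kept = []
    for y in range(0, h - n + 1, n):          # non-overlapping sliding window
        for x in range(0, w - n + 1, n):
            # fraction of the patch covered by the mask
            if mask[y:y + n, x:x + n].mean() >= min_overlap:
                kept.append(((x, y), wsi[y:y + n, x:x + n]))
    return kept
```

The retained patches (with their coordinates) are what step S25 feeds into the Resnet18 network.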
step S25: and (3) sending the small image blocks stored in the step S24 into a Resnet18 network for training, and storing and outputting the trained network structure and parameters thereof.
Step S3: all full-field digital images are sent to the Resnet18 network pre-trained in the previous step to extract features, and the specific steps are as follows:
step S31: automatically divide all the full-field digital images using opencv, filter out blank background and artificially formed holes, divide the remaining tissue into n×n small image blocks, and store the coordinates of each image block.
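The patent performs this segmentation with opencv. As a dependency-free illustration of the same idea, blank background and holes are nearly white or grey (low color saturation), so a per-pixel saturation test separates tissue from background; the threshold value and function below are assumptions for illustration:

```python
import numpy as np

def tissue_mask(rgb, sat_thresh=0.05):
    """Approximate the tissue-vs-background segmentation of step S31:
    keep pixels whose HSV-style saturation exceeds a small threshold.
    rgb: H x W x 3 uint8 array. Returns an H x W boolean mask."""
    img = rgb.astype(np.float32) / 255.0
    mx = img.max(axis=2)
    mn = img.min(axis=2)
    # saturation = (max - min) / max, defined as 0 for black pixels
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return sat > sat_thresh
```

Patches falling entirely in the False region of the mask would be discarded before feature extraction.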
Step S32: the small image blocks are fed into a pre-trained Resnet18 network and converted into 512-dimensional feature vectors h at a fourth residual block k I.e. eachFeatures extracted from the small image blocks.
Step S4: and (3) sending the features extracted in the step (S3) to a depth gating channel attention module, comprehensively generating Slide-level features, and classifying the full-field digital image through a classification layer. The method comprises the following specific steps:
step S41: send the feature vector h_k into the depth gating channel attention module to obtain the attention score a_{k,n} corresponding to each small image block, where a_{k,n} denotes the attention score of the k-th small image block for the n-th class, P_{a,n} denotes the linear layer belonging to the n-th class, σ(·) denotes the sigmoid activation function, tanh(·) denotes the tanh activation function, V(·), W(·), G(·), J(·), L(·) denote different linear layers, and N is the total number of image blocks;
step S42: combine the feature vector and attention score corresponding to each small image block to generate the Slide-level feature h_{slide,n}, where h_{slide,n} denotes the feature of each full-field digital image in the n-th class;
step S43: send the Slide-level feature vector h_{slide,n} into the classification layer to obtain the classification result, realizing the classification of the full-field digital image;
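The patent's deep gating channel attention formula is not reproduced in this text, so the sketch below stands in for steps S41–S43 with the classic shallow gated-attention pooling (a tanh branch gated by a sigmoid branch, scored per class); the hidden size and the per-class diagonal classification are assumptions, not the patent's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttentionPool(nn.Module):
    """Gated-attention pooling over K patch features (512-dim each):
    per-patch, per-class scores a_{k,n} are softmax-normalised and used
    to mix the patch features into one Slide-level feature per class."""
    def __init__(self, dim=512, hidden=256, n_classes=2):
        super().__init__()
        self.V = nn.Linear(dim, hidden)        # tanh branch
        self.W = nn.Linear(dim, hidden)        # sigmoid gate branch
        self.P_a = nn.Linear(hidden, n_classes)  # per-class scoring layer
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, h):                      # h: (K, 512)
        a = self.P_a(torch.tanh(self.V(h)) * torch.sigmoid(self.W(h)))
        a = F.softmax(a, dim=0)                # normalise over the K patches
        h_slide = a.t() @ h                    # (n_classes, 512) slide features
        # logit for class n comes from its own slide-level feature h_slide[n]
        logits = self.classifier(h_slide).diagonal()
        return logits, a
```

The attention matrix `a` is also what step S5 reuses to draw the region-of-interest heat map.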
step S5: extract the attention scores, generated in step S4, of all the small image blocks for the model-predicted class; use matplotlib to generate color blocks of corresponding colors and overlay them at the corresponding positions on the original full-field digital image with a certain transparency; after blurring and smoothing operations, obtain the detection heat map of the region of interest.
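Step S5 can be sketched as a per-patch colormap overlay; the jet colormap, the patch-list format, and the alpha-blending scheme are illustrative assumptions, and the blurring/smoothing pass is omitted:

```python
import numpy as np
from matplotlib import cm

def attention_heatmap(wsi, patches, n=256, alpha=0.5):
    """Overlay per-patch attention scores on the original image with a
    fixed transparency (the description suggests 0.4 to 0.6).
    patches: list of ((x, y), score) with scores already scaled to [0, 1]."""
    overlay = wsi.astype(np.float32) / 255.0
    for (x, y), score in patches:
        color = np.array(cm.jet(float(score))[:3])   # RGB in [0, 1]
        region = overlay[y:y + n, x:x + n]
        # alpha-blend the colored block onto the image
        overlay[y:y + n, x:x + n] = (1 - alpha) * region + alpha * color
    return (overlay * 255).astype(np.uint8)
```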
Preferably, the preprocessing is to perform color normalization on the collected full-field digital image according to the input image template.
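The patent does not specify the normalization algorithm. Color normalization against a template can be illustrated by per-channel statistics matching; stain-normalization methods such as Reinhard's do this in LAB color space, while the RGB version below is only a dependency-free sketch:

```python
import numpy as np

def match_color_stats(image, template):
    """Shift and scale each channel of `image` so its mean and standard
    deviation match those of `template` (per-channel statistics matching
    in RGB; real stain normalization typically works in LAB space)."""
    img = image.astype(np.float32)
    tpl = template.astype(np.float32)
    out = np.empty_like(img)
    for c in range(3):
        mu_i, sd_i = img[..., c].mean(), img[..., c].std() + 1e-6
        mu_t, sd_t = tpl[..., c].mean(), tpl[..., c].std() + 1e-6
        out[..., c] = (img[..., c] - mu_i) / sd_i * sd_t + mu_t
    return np.clip(out, 0, 255).astype(np.uint8)
```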
Preferably, the transparency is 0.4 to 0.6.
Compared with the prior art, the invention has the beneficial effects that:
(1) The method can be popularized and applied to various full-field digital image classification and region-of-interest detection tasks, and has universality.
(2) The full-field digital image classification and region-of-interest detection method based on semi-supervised learning and attention uses only a small number of full-field digital images with region-of-interest labels when training the feature extraction model, and uses full-field digital images without region-of-interest labels when training the classification network. It greatly improves the accuracy of the classification network on the full-field digital image classification task while reducing dataset preparation work, combining simplicity with high accuracy.
(3) The invention separates the feature extraction module from the classification module, so attention modules can be freely added or replaced in between, giving strong adaptability. After such a change, the entire network need not be retrained; only the newly added attention module and classification layer must be retrained, greatly shortening training time.
(4) The depth gating channel attention network provided by the invention can capture channel information and uses deeper attention branches, strengthening the discrimination of the attention scores layer by layer, so that the attention scores of the small image blocks are more robust and accurate. This effectively improves full-field digital image classification accuracy, is easy to implement, and has high practicability.
(5) The invention constructs a full-field digital image classification and detection framework that can directly output classification results and visually display the region of interest, assisting the user in accurately judging the image type while quickly locating the region of interest.
Drawings
FIG. 1 is a flow chart of a full-field digital image classification and region of interest detection method based on semi-supervised learning and attention
FIG. 2 is a flow chart of a pre-training feature extraction network of the present invention
Detailed Description
The invention will be described in further detail with reference to specific examples and figures.
As shown in fig. 1, this example addresses the classification and detection of lung adenocarcinoma and lung squamous carcinoma. The collected data comprise 1724 lung adenocarcinoma samples, 1707 lung squamous carcinoma samples and 30 normal tissue samples; full-field digital pathology images with lesion-area labels account for only 1.75% of all samples. The feature extraction network uses Resnet18. The lung cancer pathological image classification and focus detection based on semi-supervised learning and attention comprises the following steps:
step S1: collect 3431 lung adenocarcinoma and lung squamous carcinoma full-field digital pathological images in total, plus 30 normal tissue samples. Read all pathological image information and perform color normalization on all pathological images to eliminate color differences caused by differing stain proportions, staining procedures and scanning factors.
Step S2: the pre-training feature extraction network Resnet18 is used for extracting features of all lung cancer pathological images, as shown in fig. 2, and specifically comprises the following steps:
step S21: select 30 lung adenocarcinoma, lung squamous carcinoma and normal tissue samples; a professional pathologist frames the lesion areas of the cancerous tissue samples and the tissue areas of the normal tissue samples with labeling boxes.
Step S22: and generating a mask with the same size as the original pathological size through a calibration frame marked by a doctor.
Step S23: the pathological section is segmented into a plurality of 256×256 small image blocks by using a sliding window.
Step S24: overlapping the mask with the original pathological image, removing small image blocks at non-overlapping positions, and storing the small image blocks at overlapping positions.
Step S25: and (3) sending the small image blocks stored in the step S24 into a Resnet18 network for training, and storing and outputting the trained network structure and parameters thereof.
Step S3: all lung adenocarcinoma and lung squamous carcinoma full-field digital pathological images are sent to the Resnet18 network pre-trained in the previous step to extract features, and the specific steps are as follows:
step S31: automatically segment the pathological images of all cancerous samples using opencv, filtering out the background and artificially formed holes so that only the tissue portions of the pathological images are retained. The tissue portion is divided into 256×256 small image blocks, which are saved together with their coordinates.
Step S32: the small image blocks are fed into a pre-trained Resnet18 network and converted into 512-dimensional feature vectors h at a fourth residual block k I.e. the features extracted per small image block.
Step S4: and (3) sending the features extracted in the step (S3) into a depth-gating channel attention module, comprehensively generating Slide-level features, and classifying lung cancer pathological images through a classification layer. The method comprises the following specific steps:
step S41: send the feature vector h_k into the depth gating channel attention module to obtain the attention score a_{k,n} corresponding to each small image block, where a_{k,n} denotes the attention score of the k-th small image block for the n-th class, P_{a,n} denotes the linear layer belonging to the n-th class, σ(·) denotes the sigmoid activation function, tanh(·) denotes the tanh activation function, V(·), W(·), G(·), J(·), L(·) denote different linear layers, and N is the total number of image blocks.
Step S42: comprehensively generating a Slide-level feature h by the feature vector and the attention score corresponding to each small image block slide,n :
Step S43: feature vector h of Slide level slide,n Entering classification layers of corresponding categoriesAnd obtaining a classification result, and realizing classification of lung cancer pathological images.
Step S5: and (3) extracting attention scores of all the small image blocks generated in the step (4) corresponding to the model prediction class, generating color blocks with corresponding colors by using matplotlib, covering the corresponding positions on the original full-view digital image with transparency of 0.5, and obtaining a detection heat map of the region of interest after fuzzy and smoothing operations.
The above embodiments do not limit the present invention; any implementation that satisfies the requirements of the present invention falls within its scope of protection.
What is not described in detail in the present specification belongs to the prior art known to those skilled in the art.
Claims (3)
1. The full-field digital image classification and detection method based on semi-supervision and attention is characterized by comprising the following steps of:
step S1: collecting a full-field digital image and preprocessing;
step S2: the pre-training feature extraction network Resnet18 is used for extracting the features of the full-field digital image, and specifically comprises the following steps:
step S21: selecting a part of full-view digital image and a standard control sample, framing a region of interest by using a marking frame, and framing the content part of the standard control sample by using the marking frame;
step S22: generating a mask with the same size and position as those of the region of interest on the preprocessed full-field digital image by using a labeling frame of the region of interest;
step S23: dividing the preprocessed full-field digital image into a plurality of n×n small image blocks using a sliding window, where n is the pixel width and pixel height of the small image blocks;
step S24: overlapping the mask and the preprocessed full-field digital image, removing small image blocks at non-overlapping positions, and reserving the small image blocks at the overlapping positions;
step S25: the small image block saved in the step S24 is sent to a Resnet18 network for training, and the network structure and parameters thereof after training are saved and output;
step S3: all full-field digital images are sent to the Resnet18 network pre-trained in the previous step to extract features, and the specific steps are as follows:
step S31: automatically dividing all the full-field digital images using opencv, filtering out blank background and artificially formed holes, dividing the remaining tissue into n×n small image blocks, and storing the coordinates of each image block;
step S32: the small image blocks are fed into the pre-trained Resnet18 network and converted into 512-dimensional feature vectors h_k at the fourth residual block, i.e. the features extracted from each small image block;
step S4: the features h_k extracted in step S3 are sent into the depth gating channel attention module, Slide-level features are comprehensively generated, and the full-field digital image is classified through the classification layer; the specific steps are as follows:
step S41: the feature vector h_k is sent into the depth gating channel attention module to obtain the attention score a_{k,n} corresponding to each small image block, where a_{k,n} denotes the attention score of the k-th small image block for the n-th class, P_{a,n} denotes the linear layer belonging to the n-th class, σ(·) denotes the sigmoid activation function, tanh(·) denotes the tanh activation function, V(·), W(·), G(·), J(·), L(·) denote different linear layers, and N is the total number of image blocks;
step S42: the feature vector and attention score corresponding to each small image block are combined to generate the Slide-level feature vector h_{slide,n}, where h_{slide,n} denotes the feature of each full-field digital image in the n-th class;
step S43: the Slide-level feature vector h_{slide,n} is sent into the classification layer to obtain a classification result, realizing the classification of the full-field digital image;
step S5: the attention scores, generated in step S4, of all the small image blocks for the model-predicted class are extracted; matplotlib is used to generate color blocks of corresponding colors, which are overlaid at the corresponding positions on the original full-field digital image with a certain transparency; after blurring and smoothing operations, the detection heat map of the region of interest is obtained.
2. The semi-supervised and attention based full field digital image classification and detection method as claimed in claim 1, wherein: the preprocessing is to perform color normalization on the collected full-field digital image according to an input image template.
3. The semi-supervised and attention based full field digital image classification and detection method as claimed in claim 1, wherein: the transparency is 0.4-0.7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210208369.6A CN114565593B (en) | 2022-03-04 | 2022-03-04 | Full-field digital image classification and detection method based on semi-supervision and attention |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114565593A CN114565593A (en) | 2022-05-31 |
CN114565593B true CN114565593B (en) | 2024-04-02 |
Family
ID=81717968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210208369.6A Active CN114565593B (en) | 2022-03-04 | 2022-03-04 | Full-field digital image classification and detection method based on semi-supervision and attention |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114565593B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115082743B (en) * | 2022-08-16 | 2022-12-06 | 之江实验室 | Full-field digital pathological image classification system considering tumor microenvironment and construction method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112329867A (en) * | 2020-11-10 | 2021-02-05 | 宁波大学 | MRI image classification method based on task-driven hierarchical attention network |
CN112529042A (en) * | 2020-11-18 | 2021-03-19 | 南京航空航天大学 | Medical image classification method based on dual-attention multi-instance deep learning |
-
2022
- 2022-03-04 CN CN202210208369.6A patent/CN114565593B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112329867A (en) * | 2020-11-10 | 2021-02-05 | 宁波大学 | MRI image classification method based on task-driven hierarchical attention network |
CN112529042A (en) * | 2020-11-18 | 2021-03-19 | 南京航空航天大学 | Medical image classification method based on dual-attention multi-instance deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN114565593A (en) | 2022-05-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||