CN113298772A - Nose wing blackhead image detection method based on deep learning and adaptive threshold method - Google Patents

Nose wing blackhead image detection method based on deep learning and adaptive threshold method

Info

Publication number
CN113298772A
Authority
CN
China
Prior art keywords
image
extracting
threshold
influence
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110552883.7A
Other languages
Chinese (zh)
Inventor
熊国虹
董璐
孙佳
王远大
Current Assignee
Nanjing Yunzhikong Industrial Technology Research Institute Co ltd
Original Assignee
Nanjing Yunzhikong Industrial Technology Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Yunzhikong Industrial Technology Research Institute Co ltd filed Critical Nanjing Yunzhikong Industrial Technology Research Institute Co ltd
Priority to CN202110552883.7A priority Critical patent/CN113298772A/en
Publication of CN113298772A publication Critical patent/CN113298772A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/11 — Region-based segmentation
    • G06T 7/136 — Segmentation; edge detection involving thresholding
    • G06T 7/194 — Segmentation involving foreground-background segmentation
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/047 — Probabilistic or stochastic networks
    • G06N 3/08 — Neural network learning methods
    • G06T 2207/20081 — Training; learning
    • G06T 2207/30201 — Subject of image: human face


Abstract

The invention discloses a method for detecting nose wing blackhead images based on deep learning and an adaptive threshold method, comprising the following steps: S1, collecting a facial feature segmentation data set, training a cascaded CNN, and saving the network parameters; S2, calling the trained cascaded CNN to perform facial feature segmentation and extract the nose image from the face image; S3, extracting the blue channel of the nose image, removing the influence of edges, light and shadows, and removing the nostril region; S4, extracting the blackheads with an adaptive threshold method; S5, building an interactive interface with tkinter. The method segments the facial features with a convolutional neural network, which avoids the influence of color differences between facial regions on the detection result; it extracts a single-channel image to reduce the influence of illumination and skin color, removes the influence of borders, nostril shadows and highlights with erosion and threshold segmentation, and uses an adaptive threshold to reduce the influence of factors such as illumination on blackhead detection.

Description

Nose wing blackhead image detection method based on deep learning and adaptive threshold method
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to a method for detecting a blackhead image of a nasal wing based on deep learning and an adaptive threshold method.
Background
Digital Image Processing refers to the methods and techniques by which a computer performs operations on images such as denoising, enhancement, restoration, segmentation, and feature extraction. The emergence and rapid development of digital image processing have been driven largely by three factors: first, the development of computers; second, the development of mathematics (in particular the establishment and refinement of discrete mathematics); and third, growing application demands from agriculture, animal husbandry, forestry, environmental science, the military, industry, medicine, and other fields.
Deep Learning (DL) is a newer research direction within Machine Learning (ML); it was introduced to bring machine learning closer to its original goal, Artificial Intelligence (AI). Deep learning learns the intrinsic regularities and representation levels of sample data, and the information obtained during learning is very helpful for interpreting data such as text, images, and sound. Its ultimate aim is to give machines an analytical and learning ability comparable to a human's, so that they can recognize text, images, sound, and other data. Deep learning is a complex family of machine learning algorithms whose results in speech and image recognition far exceed those of earlier related techniques. It has produced many achievements in search, data mining, machine learning, machine translation, natural language processing, multimedia learning, speech, recommendation and personalization technologies, and other related fields. Deep learning enables machines to imitate human activities such as seeing, hearing, and thinking, solves many complex pattern recognition problems, and has driven great progress in artificial intelligence.
Convolutional Neural Networks (CNN) are a class of feedforward neural networks that contain convolution computations and have a deep structure, and are one of the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, so they are also called "Shift-Invariant Artificial Neural Networks" (SIANN). The convolutional neural network is modeled on the visual perception mechanism of living organisms and can perform both supervised and unsupervised learning. Because convolution-kernel parameters are shared within the hidden layers and connections between layers are sparse, a convolutional neural network can learn grid-like topological features such as pixels and audio with a small amount of computation, with stable results and no additional feature-engineering requirements on the data.
In recent years, people have paid more and more attention to skin care, and a variety of skin detection products have appeared on the market. Blackheads are particularly difficult to detect because they occupy a small proportion of the photo, their color is not distinctive, and they are strongly affected by factors such as illumination, skin blemishes, and camera resolution.
Disclosure of Invention
To solve these problems, the invention discloses a nose wing blackhead image detection method based on deep learning and an adaptive threshold method. A convolutional neural network is used to segment the facial features, which avoids the influence of color differences between facial regions on the detection result; a single-channel image is extracted to reduce the influence of illumination and skin color; erosion and threshold segmentation are used to remove the influence of borders, nostril shadows, and highlights; and an adaptive threshold is used to reduce the influence of factors such as illumination on blackhead detection. Finally, a user interaction interface is built for use.
The above purpose is realized by the following technical scheme:
a nose wing blackhead image detection method based on deep learning and an adaptive threshold method comprises the following steps:
s1, collecting a facial feature segmentation data set, training a CNN cascade network and storing network parameters;
s2, calling the trained CNN cascade network to perform facial feature segmentation, and extracting a nose image in the face image;
s3, extracting a blue single channel from the nose image, removing the influence of edges, light rays and shadows, and removing the nostril part;
s4, extracting the blackheads by using a self-adaptive threshold method;
and S5, building an interactive interface with tkinter.
In the method for detecting nose wing blackhead images based on deep learning and the adaptive threshold method, in step S1 the cascaded CNN detects the facial landmarks using a cross entropy loss function. The 68 detected landmark points are encoded into 68 separate channels, each channel carrying a two-dimensional Gaussian distribution centered at its corresponding facial landmark position; the 68 channels are stacked together and passed, along with the original image, to the region segmentation network.

When training the cascaded CNN, each landmark is encoded as a two-dimensional Gaussian distribution at the provided facial landmark position, and each facial landmark is assigned a separate channel to keep it from overlapping other landmarks. In the cross entropy loss function, n indexes the labeled data; the labeled Gaussian distribution G_n and the predicted Gaussian distribution Ĝ_n have the same dimensions, N×W×H, where N is the number of channels, W the width, and H the height. The cross entropy loss function is defined as:

L = −Σ_{n=1..N} Σ_{i,j} [ G_n(i,j)·log Ĝ_n(i,j) + (1 − G_n(i,j))·log(1 − Ĝ_n(i,j)) ]

The face region segmentation network uses a softmax loss as its final loss function:

L = −Σ_{m=1..M} y_m·log p_m

where M is the number of outputs, y_m the label, and p_m the predicted softmax probability.
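The heatmap encoding described above — one channel per landmark, each carrying a two-dimensional Gaussian centered at the landmark position — can be sketched as follows. This is a minimal illustration, not the patent's implementation; the image size, landmark coordinates, and Gaussian width sigma are assumed for the example.

```python
import numpy as np

def encode_landmark_heatmaps(landmarks, width, height, sigma=2.0):
    """Encode facial landmark points as per-channel 2-D Gaussian heatmaps.

    landmarks: array of shape (N, 2) holding (x, y) pixel coordinates.
    Returns an array of shape (N, height, width), one channel per landmark,
    ready to be stacked with the original image for the segmentation network.
    """
    grid_x, grid_y = np.meshgrid(np.arange(width), np.arange(height))
    heatmaps = np.zeros((len(landmarks), height, width))
    for n, (cx, cy) in enumerate(landmarks):
        # Two-dimensional Gaussian centered at the landmark position.
        d2 = (grid_x - cx) ** 2 + (grid_y - cy) ** 2
        heatmaps[n] = np.exp(-d2 / (2.0 * sigma ** 2))
    return heatmaps

# Example: two hypothetical landmarks on a 64x64 image.
maps = encode_landmark_heatmaps(np.array([[20, 30], [40, 10]]), 64, 64)
```

Each channel peaks at exactly one landmark, which is what keeps the 68 points from interfering with one another during training.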
In the method for detecting nose wing blackhead images based on deep learning and the adaptive threshold method, the specific procedure for extracting the blackheads with the adaptive threshold method in step S4 is as follows:

Let the threshold be T, dividing the image into two parts, a target image C1 and a background image C2. The whole gray-scale interval is traversed to determine a suitable threshold T that maximizes the between-class variance of C1 and C2.

For a given image I(x, y), let T be the threshold separating the target image from the background image; let ω0 be the proportion of pixels belonging to the target image and μ0 their average gray level; let ω1 be the proportion of pixels belonging to the background image and μ1 their average gray level; let μ be the overall average gray level of the image and g the between-class variance. The size of I(x, y) is M×N; the number of pixels with gray value less than the threshold T is denoted N0, and the number of pixels with gray value greater than T is denoted N1. Then:

ω0 = N0 / (M×N)
ω1 = N1 / (M×N)
N0 + N1 = M×N
ω0 + ω1 = 1
μ = ω0·μ0 + ω1·μ1
g = ω0·(μ0 − μ)² + ω1·(μ1 − μ)²

which simplifies to:

g = ω0·ω1·(μ0 − μ1)²

The segmentation threshold that maximizes the between-class variance is obtained by traversing the entire interval.
The invention has the beneficial effects that:
the method uses the convolutional neural network to segment the facial features, thereby avoiding the influence of color difference of different parts on a detection result, extracts a single-channel image to reduce the influence of illumination and skin color, removes the influence of boundary, nostril shadow and highlight by adopting a corrosion and threshold segmentation method, and detects the influence of factors such as illumination reduction of a blackhead by adopting a self-adaptive threshold and the like. The invention can accurately and quickly detect the blackheads of the nose wings on the face image, provides skin care suggestions for a detector, and provides certain technical reference for the fields of medical cosmetology and the like.
Drawings
FIG. 1 is a face segmentation result;
FIG. 2 is a segmented nose image;
FIG. 3 is a graph showing the results of blackhead detection;
FIG. 4 is a user interface: selecting the picture address and the save address;
FIG. 5 is a user interface: detection results;
FIG. 6 is a user interface: detected picture display.
Detailed Description
The present invention will be further illustrated with reference to the accompanying drawings and specific embodiments, which should be understood as merely illustrating, not limiting, the scope of the invention.
The invention discloses a method for detecting a blackhead image of a nasal wing based on deep learning and an adaptive threshold method, which comprises the following steps:
s1, collecting a facial feature segmentation data set, training a CNN cascade network and storing network parameters;
the CNN cascade network utilizes a cross entropy loss function to detect the facial markers, encodes the detected 68 positioning points into 68 independent channels, the channels have two-dimensional Gaussian distribution at the corresponding facial marker positions, and the 68 channels are stacked together and transmitted to the region segmentation network together with the original image;
the training CNN cascade network encodes the marks into two-dimensional Gaussian distribution at the positions of the provided facial marks, each facial mark is allocated with a separate channel to prevent the marks from being overlapped with other facial marks, each point is easier to distinguish, in the cross entropy loss function, n is a label of the labeled data, and the Gaussian distribution of the labeled data
Figure BDA0003076106000000041
And predicted Gaussian distribution
Figure BDA0003076106000000042
Have the same dimensions: n W H, N being the number of channels, W being the width, H being the height, the cross entropy loss function defined is:
Figure BDA0003076106000000043
face region segmentation network, using softmax loss function as the final loss function:
Figure BDA0003076106000000044
where M is the number of outputs.
S2, calling the trained CNN cascade network to perform facial feature segmentation, and extracting a nose image in the face image;
And S3, the blue channel is extracted from the nose image, and erosion and threshold segmentation are used to remove the border and the nostril shadows. Since the nostril shadow is a continuous region with the largest gray values in the nose image, it can be removed by threshold segmentation;
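The erosion-and-threshold cleanup in this step might look like the following NumPy sketch — a hypothetical illustration, in which the brightness cutoff (220) and the 3×3 structuring element are assumed rather than taken from the patent:

```python
import numpy as np

def erode3x3(mask):
    """Binary erosion with a 3x3 structuring element (pure NumPy):
    a pixel survives only if its entire 3x3 neighborhood is True."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]  # AND over the 9 shifted views
    return out

def clean_nose_mask(blue_channel, bright_thresh=220):
    """Threshold out the brightest continuous region (per the text, the
    nostril shadow shows as the largest-gray area), then erode the mask
    border so edge pixels do not leak into blackhead detection."""
    valid = blue_channel < bright_thresh
    return erode3x3(valid)
```

Erosion shrinks the valid region by one pixel on every side, which is what removes the thin border artifacts the text mentions.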
s4, extracting the blackheads by using a self-adaptive threshold method;
the adaptive threshold method is an improvement on the variance method between the largest classes. The idea is not to calculate the threshold of the global image, but to calculate the local threshold according to the brightness distribution of different areas of the image, so that the image is not subjected toIn the same region, different thresholds can be calculated in a self-adaptive mode. Setting the threshold value as T, dividing the image into target images C1And a background image C2Two parts, traversing the whole gray scale interval, determining a proper threshold value T, and enabling C1、C2The difference between the gray value variance of the two parts is maximum;
for a given image I (x, y), let T be the threshold for segmenting the target image and the background image, and let ω be the ratio of the number of pixels belonging to the target image to the total image0Average gray of μ0(ii) a The proportion of the pixel points belonging to the background image to the whole image is omega1Average gray of μ1(ii) a The total average gray level of the image is mu, and the inter-class variance is g; the size of the image I (x, y) is M N, and the number of pixels in the image with a gray scale value less than the threshold value T is denoted as N0The number of pixels with gray values greater than the threshold T is denoted by N1Then, there are:
Figure BDA0003076106000000045
Figure BDA0003076106000000051
N0+N1=M×N
ω01=1
μ=ω0μ01μ1
g=ω00-μ)210-μ)2
thereby obtaining:
g=ω0ω101)2
the segmentation threshold that maximizes the inter-class variance is obtained by traversing the entire interval.
Finally, an interactive interface is built with the tkinter library; the blackhead ratio is calculated and skin care suggestions are provided to the user. The interactive interface provides functions such as selecting the picture address, selecting the save address, displaying the picture, starting detection, exiting the system, and prompting.
The technical means disclosed by the present invention are not limited to those disclosed in the above embodiments, but also include technical solutions formed by any combination of the above technical features.

Claims (3)

1. A nose wing blackhead image detection method based on deep learning and an adaptive threshold method is characterized by comprising the following steps:
s1, collecting a facial feature segmentation data set, training a CNN cascade network and storing network parameters;
s2, calling the trained CNN cascade network to perform facial feature segmentation, and extracting a nose image in the face image;
s3, extracting a blue single channel from the nose image, removing the influence of edges, light rays and shadows, and removing the nostril part;
s4, extracting the blackheads by using a self-adaptive threshold method;
and S5, building an interactive interface with the tkinter library.
2. The method for detecting nose wing blackhead images based on deep learning and the adaptive threshold method as claimed in claim 1, wherein in step S1 the cascaded CNN performs facial landmark detection using a cross entropy loss function, encodes the 68 detected landmark points into 68 separate channels, each channel carrying a two-dimensional Gaussian distribution centered at its corresponding facial landmark position, and the 68 channels are stacked together and passed, along with the original image, to the region segmentation network;

when training the cascaded CNN, each landmark is encoded as a two-dimensional Gaussian distribution at the provided facial landmark position, and each facial landmark is assigned a separate channel to keep it from overlapping other landmarks; in the cross entropy loss function, n indexes the labeled data, and the labeled Gaussian distribution G_n and the predicted Gaussian distribution Ĝ_n have the same dimensions, N×W×H, where N is the number of channels, W the width, and H the height; the cross entropy loss function is defined as:

L = −Σ_{n=1..N} Σ_{i,j} [ G_n(i,j)·log Ĝ_n(i,j) + (1 − G_n(i,j))·log(1 − Ĝ_n(i,j)) ]

the face region segmentation network uses a softmax loss as its final loss function:

L = −Σ_{m=1..M} y_m·log p_m

where M is the number of outputs, y_m the label, and p_m the predicted softmax probability.
3. The method for detecting nose wing blackhead images based on deep learning and the adaptive threshold method as claimed in claim 1, wherein the specific procedure for extracting the blackheads with the adaptive threshold method in step S4 is:

let the threshold be T, dividing the image into two parts, a target image C1 and a background image C2; the whole gray-scale interval is traversed to determine a suitable threshold T that maximizes the between-class variance of C1 and C2;

for a given image I(x, y), let T be the threshold separating the target image from the background image; let ω0 be the proportion of pixels belonging to the target image and μ0 their average gray level; let ω1 be the proportion of pixels belonging to the background image and μ1 their average gray level; let μ be the overall average gray level of the image and g the between-class variance; the size of I(x, y) is M×N, the number of pixels with gray value less than the threshold T is denoted N0, and the number of pixels with gray value greater than T is denoted N1; then:

ω0 = N0 / (M×N)
ω1 = N1 / (M×N)
N0 + N1 = M×N
ω0 + ω1 = 1
μ = ω0·μ0 + ω1·μ1
g = ω0·(μ0 − μ)² + ω1·(μ1 − μ)²

which simplifies to:

g = ω0·ω1·(μ0 − μ1)²

the segmentation threshold that maximizes the between-class variance is obtained by traversing the entire interval.
CN202110552883.7A 2021-05-20 2021-05-20 Nose wing blackhead image detection method based on deep learning and adaptive threshold method Pending CN113298772A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110552883.7A CN113298772A (en) 2021-05-20 2021-05-20 Nose wing blackhead image detection method based on deep learning and adaptive threshold method

Publications (1)

Publication Number Publication Date
CN113298772A true CN113298772A (en) 2021-08-24

Family

ID=77323232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110552883.7A Pending CN113298772A (en) 2021-05-20 2021-05-20 Nose wing blackhead image detection method based on deep learning and adaptive threshold method

Country Status (1)

Country Link
CN (1) CN113298772A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778473A (en) * 2014-01-09 2015-07-15 深圳市中瀛鑫科技股份有限公司 Image binarization method, device and video analysis system
CN109344802A (en) * 2018-10-29 2019-02-15 重庆邮电大学 A kind of human-body fatigue detection method based on improved concatenated convolutional nerve net
CN110287759A (en) * 2019-03-25 2019-09-27 广东工业大学 A kind of eye strain detection method based on simplified input convolutional neural networks O-CNN
CN110287895A (en) * 2019-04-17 2019-09-27 北京阳光易德科技股份有限公司 A method of emotional measurement is carried out based on convolutional neural networks
CN110503097A (en) * 2019-08-27 2019-11-26 腾讯科技(深圳)有限公司 Training method, device and the storage medium of image processing model
CN110533648A (en) * 2019-08-28 2019-12-03 上海复硕正态企业管理咨询有限公司 A kind of blackhead identifying processing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘禾 (Liu He), "Digital Image Processing and Applications" (数字图像处理及应用), China Electric Power Press, January 2006, pages 129-131 *
殷耀文 (Yin Yaowen), "Understanding XGBoost in Depth: Efficient Machine Learning Algorithms and Advanced Topics" (深入理解XGBoost:高效机器学习算法与进阶), Beijing Institute of Technology Press, May 2020, pages 164-165 *

Similar Documents

Publication Publication Date Title
Yan et al. Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement
US11681418B2 (en) Multi-sample whole slide image processing in digital pathology via multi-resolution registration and machine learning
Jain et al. Hybrid deep neural networks for face emotion recognition
Wu et al. FAT-Net: Feature adaptive transformers for automated skin lesion segmentation
Li et al. Multitask semantic boundary awareness network for remote sensing image segmentation
Yuan et al. A wave-shaped deep neural network for smoke density estimation
US20210118144A1 (en) Image processing method, electronic device, and storage medium
Chai et al. Aerial image semantic segmentation using DCNN predicted distance maps
Xiao et al. Scene classification with improved AlexNet model
Lim et al. Block-based histogram of optical flow for isolated sign language recognition
Zhou et al. Embedding topological features into convolutional neural network salient object detection
Jin et al. Real-time action detection in video surveillance using sub-action descriptor with multi-cnn
Xue et al. Automatic salient object extraction with contextual cue and its applications to recognition and alpha matting
Khan et al. A deep survey on supervised learning based human detection and activity classification methods
Shan et al. Recognizing facial expressions automatically from video
Chen et al. A self-attention based faster R-CNN for polyp detection from colonoscopy images
Sheng et al. Adaptive semantic-spatio-temporal graph convolutional network for lip reading
Yang et al. End-to-end background subtraction via a multi-scale spatio-temporal model
Ren et al. Multi-scale deep encoder-decoder network for salient object detection
Huang et al. Robust skin detection in real-world images
Lu et al. FCN based preprocessing for exemplar-based face sketch synthesis
Jiang et al. Robust visual tracking via laplacian regularized random walk ranking
Nguyen et al. Salient object detection with semantic priors
Zhong et al. Background subtraction driven seeds selection for moving objects segmentation and matting
Pang et al. Dance video motion recognition based on computer vision and image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination