CN112053435A - Self-adaptive real-time human body three-dimensional reconstruction method - Google Patents

Self-adaptive real-time human body three-dimensional reconstruction method

Info

Publication number
CN112053435A
Authority
CN
China
Prior art keywords
human body
robot
module
tsdf
probability
Prior art date
Legal status
Pending
Application number
CN202011083230.0A
Other languages
Chinese (zh)
Inventor
李娟
占永刚
曹宇
李军
熊竹青
刘建晓
Current Assignee
Wuhan Edgar Beauty Rehabilitation Equipment Co ltd
Original Assignee
Wuhan Edgar Beauty Rehabilitation Equipment Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Edgar Beauty Rehabilitation Equipment Co ltd filed Critical Wuhan Edgar Beauty Rehabilitation Equipment Co ltd
Priority to CN202011083230.0A priority Critical patent/CN112053435A/en
Publication of CN112053435A publication Critical patent/CN112053435A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general, involving 3D image data


Abstract

The invention discloses a self-adaptive real-time human body three-dimensional reconstruction method, relating to the technical field of human body three-dimensional reconstruction. The method comprises the following steps: S1, a human body reconstruction module; S2, an active view planning module; S3, a flying robot module. The method provides a self-adaptive three-dimensional reconstruction system in which the flying robot module, the human body reconstruction module and the active view planning module cooperate with and reinforce one another, finally achieving self-adaptive, real-time and accurate reconstruction while improving the efficiency of the robot. The method further innovatively proposes a probability TSDF that unifies the three-dimensional reconstruction problem and the robot viewpoint planning problem in a common space; while the system reconstructs, it performs spatial classification, probability model estimation and information-entropy reduction in that same space so as to optimize the robot's next optimal observation viewpoint.

Description

Self-adaptive real-time human body three-dimensional reconstruction method
Technical Field
The invention relates to the technical field of human body three-dimensional reconstruction, in particular to a self-adaptive real-time human body three-dimensional reconstruction method.
Background
Human behavior perception and understanding is the basis for a large number of intelligent scene applications, such as human-computer interaction, human action analysis and recognition, immersive gaming, and other virtual reality applications. One of the most important problems is how to reconstruct a realistic three-dimensional model of the human body. With the advent and development of depth cameras, it has become possible to capture depth and color information at video rates; consumer-grade depth cameras in particular are favored for their low cost and portability. However, because the field of view of such a camera is limited, reconstructing a complete human body requires placing the camera more than 2 m from the target, which greatly limits reconstruction accuracy. Fusing multi-frame depth information captured at close range can increase accuracy, but the results are still not ideal, and such methods are generally limited to a static camera array or require a person to scan the target manually with a hand-held camera. Introducing a small robot can largely address these problems and makes it possible to complete the scanning task automatically; in the traditional approach, however, human body reconstruction and robot trajectory planning are treated as separate problems, which increases computational complexity.
Based on this background, the patent provides a self-adaptive real-time human body three-dimensional reconstruction method that intelligently reconstructs a target using a small unmanned aerial vehicle fitted with a depth camera.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a self-adaptive real-time human body three-dimensional reconstruction method.
In order to achieve the purpose, the invention is realized by the following technical scheme: a self-adaptive real-time human body three-dimensional reconstruction method comprises the following steps:
s1, a human body reconstruction module: the flying robot module captures depth information of the reconstructed target and transmits the depth data stream to the human body reconstruction module, which converts the input depth data into a three-dimensional probability TSDF and performs fusion reconstruction on it, outputting a high-precision model in real time;
s2, an active view planning module: through information-gain optimization of the occupancy model, the probability TSDF is used directly for viewpoint generation in the active view planning module;
s3, a flying robot module: the next optimal observation viewpoint of the robot generated by the view planning module is fed back to the flying robot, and the flying robot plans and executes its own trajectory.
Optionally, in the step S1, in the human body reconstruction module, the probability TSDF model S_k(p) is represented by the following formula, wherein F_k(p) and W_k(p) are the components of the traditional TSDF, storing respectively the truncation distance of the voxel along the camera ray projection direction and the corresponding weight; the updating iteration of the weight and truncation distance of the probability TSDF is similar to that of the traditional TSDF. The probability TSDF additionally contains L_k(p), characterizing the probability that a voxel is occupied:
[Formula image in original: definition of the probability TSDF model S_k(p)]
optionally, in the step S1, in the human body reconstruction module, the voxels of the probability TSDF may further be classified into the following three voxel types according to their weight and truncated distance value, so as to distinguish whether a voxel belongs to the object itself and with what probability; the classification criteria are as follows:
[Formula image in original: voxel classification criteria]
optionally, in the step S2, in the active view planning module, voxels within a certain range around a leading edge voxel often contain information important for human body reconstruction and guide the robot's next observation, so that a complete three-dimensional human body model can be obtained more quickly; the information gain of a voxel p in the neighborhood of a leading edge voxel is defined as:
[Formula image in original: information gain of a voxel p near a leading edge voxel]
wherein the information gain decays with distance according to a Gaussian (normal) distribution.
Optionally, in the step S2, in the active view planning module, based on the defined information gain and jointly considering the robot motion cost C_v, the consistency of the robot motion trajectory S_v and the information gain of the view angle I_v, the following optimization is proposed to select the optimal next observation viewpoint;
[Formula image in original: next-best-viewpoint optimization objective]
the calculation and optimization of the algorithm are all processed on parallel processors, and the core of each individual stream processor is responsible for storing and calculating the probability of a single voxel TSDF and the information gain.
Optionally, in the step S3, in the flying robot module, an open-source path planning algorithm is adopted; given the position, acceleration, angular acceleration and destination of the robot at the current time, the path planning algorithm provides real-time velocity output guidance to the robot, which is then transmitted to the robot controller for execution through the robot application program interface.
The invention provides a self-adaptive real-time human body three-dimensional reconstruction method, which has the following beneficial effects:
1. the self-adaptive real-time human body three-dimensional reconstruction method provides a self-adaptive three-dimensional reconstruction system comprising a flying robot module, a human body reconstruction module and an active view planning module; the three modules cooperate with and reinforce one another, finally achieving self-adaptive, real-time and accurate reconstruction while improving the efficiency of the robot.
2. The self-adaptive real-time human body three-dimensional reconstruction method also innovatively proposes a probability TSDF, unifying the three-dimensional reconstruction problem and the robot viewpoint planning problem in a common space. While reconstructing, the system performs spatial classification and probability model estimation on both problems in that same space and optimizes the robot's next optimal observation viewpoint through information-entropy reduction, thereby promoting reconstruction accuracy and improving the movement efficiency of the robot;
3. the self-adaptive real-time human body three-dimensional reconstruction method provides a real-time active view planning strategy based on a high-performance GPU ray-casting algorithm, and at the same time uses a specially designed non-rigid spatial fusion technique to handle the slight motion interference of the human body that may occur during reconstruction; it fuses the active view planning space and the three-dimensional reconstruction space, innovatively providing the probability TSDF volume space.
Drawings
FIG. 1 is a schematic block diagram of a system according to the present invention;
fig. 2 is a schematic structural view of the voxel grid of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Referring to fig. 1 to 2, the present invention provides a technical solution: a self-adaptive real-time human body three-dimensional reconstruction method comprises the following steps:
s1, a human body reconstruction module: the flying robot module captures depth information of the reconstructed target and transmits the depth data stream to the human body reconstruction module, which converts the input depth data into a three-dimensional probability TSDF and performs fusion reconstruction on it, outputting a high-precision model in real time;
in order to realize three-dimensional reconstruction and next optimal viewpoint selection simultaneously, a probability TSDF model S is innovatively providedk(p) is represented by the following formula, wherein Fk(p) and Wk(p) is a component of the traditional TSDF, and respectively stores the truncation distance of the volume pixel in the camera ray projection direction and the corresponding weight, and the updating iterative process of the weight and the truncation distance value of the probability TSDF is similar to that of the traditional TSDF; high-efficient wineThe rate TSDF also comprises Lk(p) characterizing the probability that a volume pixel is occupied.
[Formula image in original: definition of the probability TSDF model S_k(p)]
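For illustration only, the voxel structure described above may be sketched as follows. This hypothetical Python snippet (not part of the disclosed system; grid shape, truncation band and weight cap are assumed values) stores F_k, W_k and L_k per voxel and applies the conventional weighted-average TSDF fusion update:

```python
import numpy as np

class ProbabilisticTSDFVolume:
    """Illustrative probability TSDF grid: S_k(p) = (F_k(p), W_k(p), L_k(p))."""

    def __init__(self, shape, trunc=0.04, max_weight=64.0):
        self.F = np.zeros(shape, dtype=np.float32)  # truncated signed distance F_k(p)
        self.W = np.zeros(shape, dtype=np.float32)  # fusion weight W_k(p)
        self.L = np.zeros(shape, dtype=np.float32)  # log-odds occupancy L_k(p)
        self.trunc = trunc
        self.max_weight = max_weight

    def fuse(self, idx, sdf_obs, w_obs=1.0):
        """Weighted running-average update, as in conventional TSDF fusion."""
        f, w = self.F[idx], self.W[idx]
        d = np.clip(sdf_obs, -self.trunc, self.trunc)  # truncate the observed distance
        self.F[idx] = (w * f + w_obs * d) / (w + w_obs)
        self.W[idx] = min(w + w_obs, self.max_weight)  # cap the accumulated weight
```

The update of L_k(p) follows the log-odds iteration given further below; only the F/W part is shown here.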
As shown in fig. 2, the left part of fig. 2 takes a two-dimensional TSDF as an example: the voxels of the conventional TSDF contain a signed truncated distance value along the camera ray-casting direction. The right part shows that in the probability TSDF each voxel contains not only the truncated distance value but also the probability of being occupied, and can further be divided into three basic categories: occupied, unknown and free voxels. A particular unknown voxel whose nearest neighbors contain both free and occupied voxels is called a leading edge voxel. In the probability TSDF, voxels are classified into the following three types according to their weight and truncated distance value, to distinguish whether a voxel belongs to the object itself and with what probability; the classification criteria are shown below.
[Formula image in original: voxel classification criteria]
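The classification into occupied, free and unknown voxels, and the detection of leading edge (frontier) voxels, can be sketched as follows. This is an illustrative interpretation with assumed thresholds, not the exact criteria of the patent (whose formula is only an image in the source):

```python
import numpy as np

TRUNC = 0.04  # assumed truncation band, in metres

def classify_voxel(F, W, eps=1e-6):
    """Classify one voxel by its weight W and truncated distance F."""
    if W <= eps:
        return "unknown"      # never observed by any depth frame
    if abs(F) < TRUNC:
        return "occupied"     # near the zero-crossing, i.e. the surface
    return "free"             # observed, but far from the surface

def is_frontier(labels, i, j, k):
    """Leading edge voxel: unknown, with both free and occupied voxels
    among its 6-connected neighbours."""
    if labels[i, j, k] != "unknown":
        return False
    nbrs = []
    for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        ni, nj, nk = i + di, j + dj, k + dk
        if (0 <= ni < labels.shape[0] and 0 <= nj < labels.shape[1]
                and 0 <= nk < labels.shape[2]):
            nbrs.append(labels[ni, nj, nk])
    return "free" in nbrs and "occupied" in nbrs
```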
Unlike the iterative updating of the weight and truncation distance inherited from the traditional TSDF, the occupancy probability of a voxel is modeled by occupancy-grid-mapping integration;
wherein P(p|D_k) is the independent event probability produced by a single depth observation, and P(p|D_1:k) is the overall probability estimated from the multi-frame depth observations 1 to k. The mapping relation between the independent event probability and the independent TSDF truncation distance value is modeled as follows;
[Formula image in original: mapping from TSDF truncation distance to occupancy probability]
if the above model is logarithmically characterized and the prior probability P (p) of all voxels being occupied is assumed to be 0.5, the iterative algorithm of the above model can be simplified to the following formula, L (p | D)1:k) Is P (P | D)1:k) Logarithmic form of (2):
L(p|D_1:k) = L(p|D_1:k-1) + L(p|D_k)
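The log-odds iteration above reduces the multi-frame Bayesian update to a simple addition. A minimal sketch (illustrative Python, assuming the 0.5 prior stated in the text, so the initial log-odds is zero):

```python
import math

def log_odds(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

def update_occupancy(L_prev, p_obs):
    """One step of L(p|D_1:k) = L(p|D_1:k-1) + L(p|D_k)."""
    return L_prev + log_odds(p_obs)

def to_probability(L):
    """Recover the occupancy probability from its log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(L))
```

Starting from L = 0 (the prior of 0.5), repeated observations suggesting occupancy push the log-odds, and hence the probability, upward.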
s2, an active view planning module: through information-gain optimization of the occupancy model, the probability TSDF is used directly for viewpoint generation in the active view planning module;
as shown in fig. 2, the leading edge voxel is defined as an unknown voxel whose spatial domain includes both an occupied voxel and a free voxel, and the edge of the human body model is often observed around this kind of special voxel, so voxels in a certain range around the leading edge voxel often contain information that is very important for human body reconstruction, and the robot is guided to perform the next observation, so that a complete human body three-dimensional model can be obtained more quickly, and the information gain of the voxel p in the neighborhood of the leading edge voxel is defined as:
[Formula image in original: information gain of a voxel p near a leading edge voxel]
wherein the information gain decays with distance according to a Gaussian (normal) distribution. Assuming that the robot projects a ray r from the next viewpoint v and the ray passes through a voxel p, the information gain of voxel p as seen from viewpoint v can be written as the product of the information entropy at that point and the probability that all voxels along the ray in front of it are unoccupied, as shown below:
[Formula image in original: per-ray information gain of voxel p from viewpoint v]
Entropy(p) is the information entropy at point p, which represents the uncertainty of the information and is defined as follows;
Entropy(p)=-Q(p)lnQ(p)-[1-Q(p)]ln[1-Q(p)]
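The binary entropy above is maximal for Q = 0.5 (a completely uncertain voxel) and zero for fully certain voxels, which is why unknown regions attract the next observation. A direct transcription:

```python
import math

def occupancy_entropy(q):
    """Entropy(p) = -Q ln Q - (1-Q) ln(1-Q), with the convention that
    fully certain voxels (Q = 0 or 1) carry zero entropy."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log(q) - (1.0 - q) * math.log(1.0 - q)
```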
the overall information gain through the virtual viewpoint v may then be expressed as the sum of the information gains of all voxels through which all the projected rays pass within that viewpoint.
[Formula image in original: total information gain I_v of virtual viewpoint v]
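The per-ray gain and its sum over a viewpoint, as described above, can be sketched as follows. This is illustrative Python only; the actual ray-voxel traversal (3D Bresenham on the GPU) is abstracted away, and each ray is represented simply by the occupancy probabilities of the voxels it crosses, ordered from the camera outward:

```python
import math

def entropy(q):
    """Binary entropy of an occupancy probability q."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log(q) - (1.0 - q) * math.log(1.0 - q)

def ray_information_gain(occupancy_along_ray):
    """Gain contributed by one projected ray: each voxel's entropy weighted by
    the probability that every voxel in front of it is unoccupied."""
    gain, visibility = 0.0, 1.0
    for q in occupancy_along_ray:
        gain += visibility * entropy(q)
        visibility *= (1.0 - q)  # the ray is blocked at this voxel with probability q
    return gain

def viewpoint_information_gain(rays):
    """I_v: sum of per-ray gains over all rays cast from virtual viewpoint v."""
    return sum(ray_information_gain(r) for r in rays)
```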
According to the definition of the information gain, and jointly considering the robot motion cost C_v, the consistency of the robot motion trajectory S_v and the information gain of the view angle I_v, the following optimization is proposed to select the optimal next observation viewpoint;
[Formula image in original: next-best-viewpoint optimization objective over I_v, C_v and S_v]
the calculation and optimization of the algorithm are processed on a parallel processor, and the core of each independent stream processor is responsible for storing and calculating the probability TSDF and information gain of a single pixel; the light ray adopts a three-dimensional Brazier Hamm linear algorithm in the rasterization process of the three-dimensional pixels, the summary of information gain skillfully utilizes the reduction of meridian blocks of a GPU, 4 thousand virtual visual angles are uniformly adopted in a cylinder mode at different positions, distances and directions around a human body, and the optimal virtual visual angle meeting the formula is calculated through the reduction by the method and is input to the robot to be executed as the destination of the next robot. By using a high-performance GPU processor and a designed high-efficiency reduction algorithm, the system can achieve the standard of selecting the optimal next viewpoint for the robot in real time.
S3, a flying robot module: the next optimal observation viewpoint of the robot generated by the view planning module is fed back to the flying robot, and the flying robot plans and executes its own trajectory;
the adopted three-dimensional reconstruction algorithm is different from the reconstruction method of static objects on the market, and in consideration of slight movement of a human body in the scanning process, the system adopts a dynamic three-dimensional reconstruction method, regularly samples nodes of a three-dimensional model, solves the warping field of the nodes in real-time input frame and reconstruction of the three-dimensional model by the aid of non-rigid motion of the nodes, and can reconstruct an effective three-dimensional model even if the human body slightly moves.
The system adopts an open-source path planning algorithm: given the position, acceleration, angular acceleration and destination of the robot at the current moment, the path planning algorithm provides real-time velocity output guidance to the robot, which is then transmitted to the robot controller for execution through the robot Application Program Interface (API).
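The open-source planner is not named in the text; as a purely illustrative stand-in for its velocity output, a minimal proportional-guidance sketch (hypothetical gains, velocity clamped to a maximum) might look like:

```python
import math

def velocity_command(pos, dest, v_max=1.0, gain=0.8):
    """Illustrative guidance: velocity toward the destination, proportional to
    the remaining distance and clamped to v_max. Real planners also account
    for acceleration, angular acceleration and obstacles."""
    d = [g - p for p, g in zip(pos, dest)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist < 1e-9:
        return (0.0, 0.0, 0.0)          # already at the destination
    speed = min(gain * dist, v_max)     # slow down near the goal
    return tuple(speed * x / dist for x in d)
```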
The above description is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent alternative or modification that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention, according to its technical solution and inventive concept, shall fall within the scope of protection of the present invention.

Claims (6)

1. A self-adaptive real-time human body three-dimensional reconstruction method is characterized by comprising the following steps:
s1, a human body reconstruction module: the flying robot module captures depth information of the reconstructed target and transmits the depth data stream to the human body reconstruction module, which converts the input depth data into a three-dimensional probability TSDF and performs fusion reconstruction on it, outputting a high-precision model in real time;
s2, an active view planning module: through information-gain optimization of the occupancy model, the probability TSDF is used directly for viewpoint generation in the active view planning module;
s3, a flying robot module: the next optimal observation viewpoint of the robot generated by the view planning module is fed back to the flying robot, and the flying robot plans and executes its own trajectory.
2. The adaptive real-time human body three-dimensional reconstruction method according to claim 1, wherein in the step S1, in the human body reconstruction module, the probability TSDF model S_k(p) is represented by the following formula, wherein F_k(p) and W_k(p) are the components of the traditional TSDF, storing respectively the truncation distance of the voxel along the camera ray projection direction and the corresponding weight; the updating iteration of the weight and truncation distance of the probability TSDF is similar to that of the traditional TSDF; the probability TSDF also contains L_k(p), characterizing the probability that a voxel is occupied:
[Formula image in original: definition of the probability TSDF model S_k(p)]
3. the adaptive real-time human body three-dimensional reconstruction method according to claim 1, wherein in the step S1, in the human body reconstruction module, the voxels of the probability TSDF may further be classified into the following three voxel types according to their weight and truncated distance value, to distinguish whether a voxel belongs to the object itself and with what probability; the classification criteria are as follows:
[Formula image in original: voxel classification criteria]
4. the adaptive real-time human body three-dimensional reconstruction method according to claim 1, wherein in the step S2, in the active view planning module, voxels within a certain range around a leading edge voxel often contain information important for human body reconstruction and guide the robot's next observation, so that a complete three-dimensional human body model can be obtained more quickly; the information gain of a voxel p in the neighborhood of a leading edge voxel is defined as:
[Formula image in original: information gain of a voxel p near a leading edge voxel]
wherein the information gain decays with distance according to a Gaussian (normal) distribution.
5. The adaptive real-time human body three-dimensional reconstruction method according to claim 1, characterized in that: in the step S2, in the active view planning module, based on the defined information gain and jointly considering the robot motion cost C_v, the consistency of the robot motion trajectory S_v and the information gain of the view angle I_v, the following optimization is proposed to select the optimal next observation viewpoint;
[Formula image in original: next-best-viewpoint optimization objective]
the calculation and optimization of the algorithm are all processed on parallel processors, and the core of each individual stream processor is responsible for storing and calculating the probability of a single voxel TSDF and the information gain.
6. The adaptive real-time human body three-dimensional reconstruction method according to claim 1, characterized in that: in the step S3, in the flying robot module, an open-source path planning algorithm is adopted; given the position, acceleration, angular acceleration and destination of the robot at the current time, the path planning algorithm provides real-time velocity output guidance to the robot, which is then transmitted to the robot controller for execution through the robot application program interface.
CN202011083230.0A (filed 2020-10-12, priority 2020-10-12) Self-adaptive real-time human body three-dimensional reconstruction method, published as CN112053435A (pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011083230.0A CN112053435A (en) 2020-10-12 2020-10-12 Self-adaptive real-time human body three-dimensional reconstruction method


Publications (1)

Publication Number Publication Date
CN112053435A (en) 2020-12-08

Family

ID=73606095



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2886043A1 (en) * 2013-12-23 2015-06-24 a.tron3d GmbH Method for continuing recordings to detect three-dimensional geometries of objects
CN106910242A (en) * 2017-01-23 2017-06-30 中国科学院自动化研究所 The method and system of indoor full scene three-dimensional reconstruction are carried out based on depth camera
CN107833270A (en) * 2017-09-28 2018-03-23 浙江大学 Real-time object dimensional method for reconstructing based on depth camera
CN108053482A (en) * 2018-02-05 2018-05-18 喻强 A kind of human body 3D modeling method based on mobile phone scanning
WO2020073600A1 (en) * 2018-10-10 2020-04-16 Midea Group Co., Ltd. Method and system for providing remote robotic control


Non-Patent Citations (6)

Title
EMANUELE VESPA et al.: "Efficient Octree-Based Volumetric SLAM Supporting Signed-Distance and Occupancy Mapping", IEEE Robotics and Automation Letters, vol. 3, no. 2, 30 April 2018, page 1144, XP055815072, DOI: 10.1109/LRA.2018.2792537 *
LINTAO ZHENG et al.: "Active Scene Understanding via Online Semantic Reconstruction", Computer Graphics Forum, vol. 38, no. 7, 14 November 2019, pages 103-114, XP071489836, DOI: 10.1111/cgf.13820 *
WEI CHENG et al.: "iHuman3D: Intelligent Human Body 3D Reconstruction using a Single Flying Camera", MM '18: Proceedings of the 26th ACM International Conference on Multimedia, 15 October 2018, pages 1733-1741, XP058544205, DOI: 10.1145/3240508.3240600 *
ZHANG JUNYU (张俊宇): "Automatic acquisition and three-dimensional reconstruction of complex indoor scenes based on a mobile robot platform", China Master's Theses Full-text Database, Information Science and Technology, no. 2018, 15 June 2018, pages 138-1613 *
LIN JINHUA (林金花) et al.: "Fast surface reconstruction under global camera pose optimization", Journal of Jilin University (Engineering and Technology Edition), vol. 48, no. 03, 15 May 2018, pages 909-918 *
XIONG DOU (熊豆): "Three-dimensional reconstruction of classroom scenes and free-viewpoint image generation based on a depth camera", China Master's Theses Full-text Database, Information Science and Technology, no. 2020, 15 March 2020, pages 138-1065 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination