CN110298861A - A fast three-dimensional image segmentation method based on shared sampling - Google Patents


Info

Publication number
CN110298861A
CN110298861A
Authority
CN
China
Prior art keywords
pixel
foreground
color
unknown region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910601108.9A
Other languages
Chinese (zh)
Inventor
刘斌
牛晓嫘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN201910601108.9A
Publication of CN110298861A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G06T 2207/10012 - Stereo images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fast three-dimensional image segmentation method based on shared sampling, comprising the following steps: the known regions in the trimap image are expanded according to colour and spatial-distance similarity; for each pixel in the unknown region, the shared-sampling method is used to find an optimal foreground-background point pair, which is then locally optimized, and the pixel's opacity, foreground colour, background colour and the confidence of the selected foreground-background colours are computed; a Gaussian function is used to locally smooth the pixels of the unknown region, giving the final opacity of each pixel. The method performs the segmentation operation quickly and the segmentation quality is good. Compared with the prior art, both the real-time performance and the accuracy of the method are improved.

Description

A fast three-dimensional image segmentation method based on shared sampling
Technical field
The present invention relates to three-dimensional image segmentation, and more particularly to a fast three-dimensional image segmentation method based on shared sampling.
Background art
In the three-dimensional modelling stage of human-body visualization, accurately segmenting and extracting organs and tissues from sliced images of the human body is the basis of the whole visualization, and the quality of the segmentation directly determines the accuracy of the visualized human body. The main difficulties in segmenting human sliced-image data sets are the large amount of data to be processed and the complex, highly variable morphology of human organs and tissues. Performing three-dimensional segmentation and extraction directly on the image data has therefore become a research hotspot. Compared with segmenting the images slice by slice and then assembling the results into a volume, extracting the region of interest directly from the original three-dimensional image data set is more meaningful, and the extracted volume data are easier to analyse and understand. However, most existing medical image segmentation algorithms are designed for two-dimensional images, and segmentation methods for three-dimensional colour images are rare. Current three-dimensional segmentation essentially extracts contour regions from the individual slices first and then assembles them into a three-dimensional image. For a large number of images this is not only a tedious and time-consuming task, it also loses information along the third, through-slice direction. As a result, the segmentation is inaccurate and the segmentation time is long.
Summary of the invention
To address the problems of the prior art, the invention discloses a fast three-dimensional image segmentation method based on shared sampling, which comprises the following steps:
A two-dimensional image series I and the corresponding trimap image T are chosen, the trimap image T being divided into a known region T_r and an unknown region T_u; the known region T_r of the trimap image is expanded according to colour and spatial-distance similarity;
For each pixel in the unknown region T_u, the shared-sampling method is used to find an optimal foreground-background point pair; the foreground and background colours of the unknown pixel are locally optimized, and the opacity information, the foreground colour, the background colour and the confidence of the selected foreground-background colours of the unknown pixel are computed;
A Gaussian function is used to locally smooth the pixels of the unknown region T_u, giving the final opacity of each pixel.
Further, the known region T_r of the trimap image is expanded according to colour and spatial-distance similarity in the following way: for each pixel A in the unknown region T_u, a search is carried out from the inside outwards on the two-dimensional plane; if a pixel B is found whose colour and spatial distance to pixel A are both within set ranges, pixel A is reassigned to the known region T_r to which pixel B belongs; if no pixel B satisfying the conditions is found, pixel A remains in the unknown region T_u.
Further, the shared-sampling method is used to find an optimal foreground-background point pair for each pixel in the unknown region T_u, and local optimization is then carried out, specifically as follows:
For each pixel belonging to the unknown region T_u, searches are repeatedly carried out in three-dimensional space along rays of different angles; several foreground points and background points are found, and the optimal foreground-background point pair is selected according to a constraint function. The optimal foreground-background point pair is taken as the foreground and background colours of the pixel, and the local colour variances of the foreground point and the background point are computed. After an optimal foreground-background point pair has been found for every pixel in the unknown region T_u, the foreground and background colours of each pixel in T_u are locally optimized according to its surrounding pixels, giving the optimized foreground-background colours, the opacity and the confidence of the selected foreground-background colours.
The pixels of the unknown region are locally smoothed with a Gaussian function as follows: a Gaussian weighted average is applied to the foreground-background colours and opacities of the pixels of the unknown region T_u to obtain the final opacity.
The local optimization of the foreground and background colours of each pixel in the unknown region T_u according to its surrounding pixels is carried out as follows: for each pixel p of the unknown region, the three nearby unknown-region pixels with the smallest colour distortion within a certain range are found, the colour distortion being defined as follows:
Let q be one of the nearby pixels, C_p the colour value of p, B^q the background colour of q and F^q the foreground colour of q; the colour distortion measures how well C_p is explained by compositing F^q and B^q. The foreground colours, background colours and local colour variances of these three pixels are averaged, and with this mean as the reference the optimized foreground-background colours, the opacity and the confidence of the selected foreground-background colours are obtained. By adopting the above technical solution, the fast three-dimensional image segmentation method based on shared sampling provided by the invention can significantly and safely reduce the amount of computation required for segmentation, and obtains high-quality results while operating on a fairly small discrete search space. The high segmentation speed of the proposed method makes the creation of the matte by the user much easier and greatly reduces the time spent on interactively segmenting the image by alpha matting; the running time is short, so an accurate three-dimensional segmented image can be obtained.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the present application, and for those of ordinary skill in the art other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the flow chart of the method for the present invention;
Fig. 2 shows the original data used by the present invention;
Fig. 3 is the trimap image that the present invention uses;
Fig. 4 is the result of step S1 of the present invention;
Fig. 5 is the search process of step S2 of the present invention;
Fig. 6 is the result of step S2 of the present invention;
Fig. 7 is the final segmentation result of the invention.
Detailed description of the embodiments
To make the technical solution and the advantages of the present invention clearer, the technical solution in the embodiments of the present invention is described clearly and completely below with reference to the drawings in the embodiments of the present invention:
A fast three-dimensional image segmentation method based on shared sampling is shown in Fig. 1. The method takes as input a two-dimensional image series I and the corresponding trimap image T. The trimap image is divided into a known region T_r and an unknown region T_u, and the known region is further divided into a foreground T_f and a background T_b. The opacity of the foreground is 1 and the opacity of the background is 0. Taking an eyeball as an example, one two-dimensional slice of I is shown in Fig. 2 and the three-dimensional reconstruction of T in Fig. 3. The method comprises the following steps:
S1: the known region of the trimap image is expanded according to colour and spatial-distance similarity: for each pixel of the unknown region, a search is carried out from the inside outwards on the two-dimensional plane; if a pixel is found whose colour and spatial distance are within the specified ranges, the pixel is reassigned to the corresponding known region, otherwise it remains in the unknown region. In this step the known region of the trimap image is expanded using colour and spatial-distance similarity, with the following specific steps:
S11: for each pixel p in the unknown region T_u, a pixel q is searched for from the inside outwards in the (2k+1) × (2k+1) two-dimensional plane region centred on p, and the search stops once such a q is found (the search also stops if the whole (2k+1) × (2k+1) plane region has been searched without finding a q satisfying the conditions). Here q belongs to the known region T_r (r ∈ {f, b}) and k = 1, 2, …, k_i. Pixel q must satisfy the following conditions: the spatial pixel distance between p and q is ||p - q|| ≤ k_i, and the colour-space distance is ||C_p - C_q|| ≤ k_c, where C_p and C_q are the colour values of p and q respectively. The value of k_i depends on the size of the unknown region and is generally between 10 and 30; k_c is between 5 and 10.
S12: for a pixel p for which such a q is found, the label of p is changed to the known region to which q belongs, and the opacity of p is changed to the corresponding value 0 or 1. A pixel p for which no q is found remains in the unknown region.
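For illustration only, a minimal Python sketch of this expansion step (S11-S12) is given below. The label convention (0 = background, 1 = foreground, -1 = unknown), the slice-wise NumPy layout and the Euclidean colour distance are assumptions of the sketch, not requirements of the invention:

```python
import numpy as np

def expand_known_region(slice_rgb, labels, k_i=20, k_c=8):
    """Expand the known region of one trimap slice (steps S11-S12).

    slice_rgb : float array (H, W, 3), one colour slice of I.
    labels    : int array (H, W); 0 = background, 1 = foreground, -1 = unknown.
    k_i, k_c  : spatial and colour-distance thresholds from S11.
    Returns a new label map in which qualifying unknown pixels have been
    reassigned to the known region of the nearest matching pixel.
    """
    out = labels.copy()
    h, w = labels.shape
    for y, x in np.argwhere(labels == -1):
        # search outwards in growing (2k+1) x (2k+1) windows centred on p = (y, x)
        for k in range(1, k_i + 1):
            ys = slice(max(0, y - k), min(h, y + k + 1))
            xs = slice(max(0, x - k), min(w, x + k + 1))
            win_lbl = labels[ys, xs]
            known = win_lbl != -1
            if not known.any():
                continue
            col_d = np.linalg.norm(slice_rgb[ys, xs] - slice_rgb[y, x], axis=-1)  # ||C_p - C_q||
            yy, xx = np.mgrid[ys, xs]
            sp_d = np.hypot(yy - y, xx - x)                                       # ||p - q||
            ok = known & (col_d <= k_c) & (sp_d <= k_i)
            if ok.any():
                nearest = np.argmin(np.where(ok, sp_d, np.inf))   # closest qualifying q
                out[y, x] = win_lbl.flat[nearest]                 # S12: relabel p; otherwise p stays unknown
                break
    return out
```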
S2: as shown in Fig. 4, Fig. 5 and Fig. 6, the shared-sampling method is used to find an optimal foreground-background point pair for each pixel of the unknown region: for each pixel belonging to the unknown region, searches are repeatedly carried out in three-dimensional space along rays of different angles; several foreground points and background points are found, and an optimal pair of foreground and background points is selected according to the defined constraint function. This pair is taken as the foreground and background colours of the pixel, and the local colour variances of the foreground point and the background point are computed; the result is then locally optimized to obtain better foreground-background colours, the opacity and the confidence of the selected foreground-background colours. In this step the shared-sampling method finds an optimal foreground-background point pair for each pixel of the unknown region and then performs local optimization, yielding the foreground-background colours, the opacity and the confidence of the selected foreground-background colours for every pixel, with the following specific steps:
S21: samples are gathered for each pixel of the unknown region, with the following specific steps:
S211: for each pixel p of the unknown region, a search is first carried out along ray 1, which starts at p and makes angles (θ, β) with the positive directions of the x- and y-axes, for the foreground point and the background point (if they exist) nearest to p.
A search is then carried out along ray 2, which starts at p and makes angles (θ, β, ξ) with the positive directions of the x-, y- and z-axes, and finally along ray 3, which starts at p and makes angles (θ, β, 180 - ξ) with the positive directions of the x-, y- and z-axes, where ξ = θ (mod 180). Initially θ = (x_p + y_p + z_p) × (1.7 × (360 ÷ k_g) ÷ 9) mod (360 ÷ k_g), where (x_p, y_p, z_p) are the three-dimensional coordinates of pixel p and k_g is generally taken as 4. The search pattern is shown in Fig. 5.
S212: θ is increased by (360 ÷ k_g) degrees and step S211 is repeated, k_g - 1 times in total. In the end at most k_g × 3 foreground points and at most k_g × 3 background points are collected; the following steps select one foreground-background point pair from these foreground and background points as the foreground and background colours of p.
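A simplified Python sketch of this ray-based sample gathering is given below. The unit step size, the concrete direction parameterisation built from (θ, β, ξ) and the volume/label layout are assumptions made for illustration only:

```python
import numpy as np

def march_ray(labels, p, direction, max_steps=200):
    """March from voxel p along a unit direction until foreground/background voxels are hit.

    labels : int array (Z, Y, X); 0 = background, 1 = foreground, -1 = unknown.
    Returns (foreground_voxel, background_voxel); either may be None.
    """
    fg, bg = None, None
    pos = np.asarray(p, dtype=np.float64)
    for _ in range(max_steps):
        pos = pos + direction
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, labels.shape)):
            break                              # left the volume
        if fg is None and labels[idx] == 1:
            fg = idx                           # nearest foreground point on this ray
        if bg is None and labels[idx] == 0:
            bg = idx                           # nearest background point on this ray
        if fg is not None and bg is not None:
            break
    return fg, bg

def gather_samples(labels, p, k_g=4):
    """Collect up to k_g x 3 foreground and background candidates for p (S211-S212)."""
    z, y, x = p
    theta0 = (x + y + z) * (1.7 * (360 / k_g) / 9) % (360 / k_g)   # initial angle from S211
    fgs, bgs = [], []
    for i in range(k_g):
        theta = np.deg2rad(theta0 + i * 360 / k_g)
        xi = theta % np.pi                                         # xi = theta mod 180 degrees
        # three rays per angle: one in the slice plane, two tilted out of it (assumed geometry)
        dirs = [np.array([0.0, np.sin(theta), np.cos(theta)]),
                np.array([np.cos(xi), np.sin(theta) * np.sin(xi), np.cos(theta) * np.sin(xi)]),
                np.array([-np.cos(xi), np.sin(theta) * np.sin(xi), np.cos(theta) * np.sin(xi)])]
        for d in dirs:
            fg, bg = march_ray(labels, p, d / np.linalg.norm(d))
            if fg is not None:
                fgs.append(fg)
            if bg is not None:
                bgs.append(bg)
    return fgs, bgs
```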
S213: a foreground point f_i and a background point b_j obtained in steps S211 and S212 are taken, with colours F_i and B_j respectively, and the neighbourhood similarity N_p(f_i, b_j) of p is computed by accumulating over Ω_p the colour distortion between each neighbour's colour and the composite of F_i and B_j, where the estimated opacity value is α̂_q = ((C_q - B_j)·(F_i - B_j)) / ||F_i - B_j||², Ω_p is the 3 × 3 × 3 region centred on p, and q is a point in Ω_p.
S214: the constraint term A_p(f_i, b_j) is computed, in which PF_p is an estimated value of the probability that pixel p belongs to the foreground, obtained from the energy function E_p(s) along the path from p to f_i or b_j, where s stands for f_i or b_j.
S215: the image-space distances from pixel p to f_i and to b_j are computed separately: D_p(f_i) = ||f_i - p|| and D_p(b_j) = ||b_j - p||.
S216: the overall constraint function g_p(f_i, b_j) = N_p(f_i, b_j)^{e_N} · A_p(f_i, b_j)^{e_A} · D_p(f_i)^{e_f} · D_p(b_j)^{e_b} is computed, where generally e_N = 3, e_A = 2, e_f = 1 and e_b = 4.
S217: steps S213, S214, S215 and S216 are repeated over the candidate pairs, and the pair that minimises the overall constraint function is taken as the optimal foreground-background pair of pixel p.
S218: the colours corresponding to the optimal pair are F̂_p and B̂_p. The local colour variances σ_f² = (1/N) Σ_{q∈Ω_f} ||C_q - F̂_p||² and σ_b² = (1/N) Σ_{q∈Ω_b} ||C_q - B̂_p||² are computed, where Ω_f and Ω_b are the 5 × 5 × 5 pixel regions centred on the optimal foreground point and the optimal background point respectively and N = 125. Finally a tuple (F̂_p, B̂_p, σ_f², σ_b²) is obtained for p.
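The selection of the best pair (S213-S217) can be sketched in Python as below. The colour-distortion and distance terms follow the description above, but the concrete form of the A_p term and the externally supplied foreground-probability estimate pf_estimate are assumptions of the sketch:

```python
import numpy as np

def estimated_alpha(c, f, b):
    """Project colour c onto the line between foreground colour f and background colour b."""
    d = f - b
    return float(np.clip(np.dot(c - b, d) / (np.dot(d, d) + 1e-12), 0.0, 1.0))

def neighbourhood_distortion(colours_nbhd, f, b):
    """N_p: accumulated colour distortion of the pair (f, b) over the 3x3x3 neighbourhood of p (S213)."""
    total = 0.0
    for c in colours_nbhd:
        a = estimated_alpha(c, f, b)
        total += float(np.sum((c - (a * f + (1 - a) * b)) ** 2))
    return total

def best_pair(p_pos, p_colour, colours_nbhd, fg_samples, bg_samples, pf_estimate,
              e_N=3, e_A=2, e_f=1, e_b=4):
    """Pick the foreground/background pair minimising the overall constraint g_p (S216-S217).

    fg_samples, bg_samples : lists of (position, colour) candidates from S211-S212.
    pf_estimate            : scalar estimate of P(p in foreground); how it is derived
                             from the energy function of S214 is not reproduced here.
    """
    p_pos = np.asarray(p_pos, dtype=float)
    best, best_score = None, np.inf
    for f_pos, f_col in fg_samples:
        for b_pos, b_col in bg_samples:
            a_hat = estimated_alpha(p_colour, f_col, b_col)
            N = neighbourhood_distortion(colours_nbhd, f_col, b_col)
            A = pf_estimate + (1 - 2 * pf_estimate) * a_hat        # assumed form of the A_p term
            D_f = np.linalg.norm(np.asarray(f_pos, float) - p_pos)  # ||f - p|| from S215
            D_b = np.linalg.norm(np.asarray(b_pos, float) - p_pos)  # ||b - p|| from S215
            score = (N ** e_N) * (A ** e_A) * (D_f ** e_f) * (D_b ** e_b)
            if score < best_score:
                best, best_score = (f_col, b_col, a_hat), score
    return best
```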
S22: in order to reduce the influence of noise, the result obtained in the sample-gathering step is optimized, with the following specific steps:
S221: for each pixel p of the unknown region, three pixels q (which also belong to the unknown region) with the smallest colour distortion are found among the neighbouring unknown-region pixels (at most k_r of them); the tuples (F̂_q, B̂_q, σ_f², σ_b²) of these three pixels are then averaged, which reduces the influence of noise. Here k_r is generally taken as 11 × 11 × 11.
S222: the optimized foreground and background colours F_p and B_p, the opacity value α_p and the confidence f_p with which pixel p selects F_p and B_p are computed, generally taking λ = 10 and ε = 10^-8. Finally a new tuple (F_p, B_p, α_p, f_p) is obtained.
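The refinement of S221-S222 can be sketched as follows. The exact confidence formula involving λ and ε is not reproduced from the patent, so the simple exponential form used here is an assumption:

```python
import numpy as np

def refine_pixel(p_colour, neighbour_tuples, lam=10.0, eps=1e-8):
    """Local refinement of one unknown pixel (S221-S222).

    neighbour_tuples : list of (F, B, var_f, var_b) tuples of nearby unknown pixels,
                       as produced in S218 (F, B are NumPy colour vectors).
    Returns the refined (F_p, B_p, alpha_p, confidence_p).
    """
    def distortion(c, f, b):
        d = f - b
        a = np.clip(np.dot(c - b, d) / (np.dot(d, d) + eps), 0.0, 1.0)
        return float(np.linalg.norm(c - (a * f + (1 - a) * b)))

    # S221: keep the three neighbouring tuples whose (F, B) explain p's colour best
    chosen = sorted(neighbour_tuples, key=lambda t: distortion(p_colour, t[0], t[1]))[:3]
    F = np.mean([t[0] for t in chosen], axis=0)
    B = np.mean([t[1] for t in chosen], axis=0)

    # S222: opacity from the averaged pair, plus a confidence in the chosen colours
    d = F - B
    alpha = float(np.clip(np.dot(p_colour - B, d) / (np.dot(d, d) + eps), 0.0, 1.0))
    confidence = float(np.exp(-lam * distortion(p_colour, F, B)))   # assumed exponential form
    return F, B, alpha, confidence
```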
S3: in order to prevent discontinuities in the result, a Gaussian function is used to locally smooth the pixels of the unknown region T_u and obtain the final opacity of each pixel, with the following specific steps:
For each pixel p of the unknown region, the m nearest neighbouring pixels of p are taken and denoted Ψ_p, with q ∈ Ψ_p. G is the normal Gaussian function with σ² = m/9π. Generally m = 100.
S31: the weight of each of the m pixels is computed, and the final foreground colour F_p and the final background colour B_p of pixel p are then computed as weighted averages over Ψ_p.
S32: a weight is computed, the average foreground-background colour distance over Ψ_p is then computed, and finally the confidence with which pixel p selects F_p and B_p is computed.
S33: the opacity of pixel q and its weight are computed, where δ is a Boolean function that returns 1 when its condition is satisfied and 0 otherwise; the weighted average α value over Ψ_p is then computed, and finally the final α value of pixel p is obtained.
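A rough Python sketch of this Gaussian local smoothing (S3) is given below; the specific weight definitions of S31-S33 are simplified assumptions, the sketch only weighting the neighbours' opacities by the Gaussian G and their confidences:

```python
import numpy as np

def smooth_alpha(p_pos, neighbours, m=100):
    """Gaussian local smoothing of the opacity of one unknown pixel (S3).

    neighbours : list of (position, alpha, confidence) for the m nearest pixels of p.
    Returns the smoothed (final) alpha of p.
    """
    sigma2 = m / (9.0 * np.pi)               # sigma^2 = m / 9*pi as in the description
    p_pos = np.asarray(p_pos, dtype=float)
    num, den = 0.0, 0.0
    for q_pos, alpha_q, conf_q in neighbours[:m]:
        d2 = float(np.sum((np.asarray(q_pos, float) - p_pos) ** 2))
        g = np.exp(-d2 / (2.0 * sigma2))     # Gaussian spatial weight G
        w = g * conf_q                       # assumed weight: Gaussian times confidence
        num += w * alpha_q
        den += w
    return num / den if den > 0 else 0.5
```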
S34: the final α values are exported to obtain the three-dimensional α image of the input image, and a three-dimensional reconstruction is then carried out from the three-dimensional α image to obtain the segmented tissue. The final segmentation result is shown in Fig. 7.
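To illustrate this last step, a minimal sketch is given below; thresholding the α volume at 0.5 and extracting an isosurface with scikit-image's marching_cubes is an assumption about how the reconstruction could be done, not a requirement of the method:

```python
import numpy as np
from skimage import measure

def reconstruct_surface(alpha_volume, level=0.5):
    """Extract the boundary surface of the segmented tissue from the 3D alpha image (S34)."""
    # marching cubes returns vertices, faces, normals and values of the isosurface
    verts, faces, normals, values = measure.marching_cubes(alpha_volume, level=level)
    return verts, faces

# usage sketch: alpha = np.random.rand(32, 32, 32)   # stands in for the computed alpha volume
# verts, faces = reconstruct_surface(alpha)
```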
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or modification that a person skilled in the art can make within the technical scope disclosed by the present invention, according to the technical solution of the present invention and its inventive concept, shall be covered by the scope of protection of the present invention.

Claims (5)

1. A fast three-dimensional image segmentation method based on shared sampling, characterised by comprising the following steps:
choosing a two-dimensional image series I and a corresponding trimap image T, the trimap image T being divided into a known region T_r and an unknown region T_u, and expanding the known region T_r of the trimap image according to colour and spatial-distance similarity;
using the shared-sampling method to find an optimal foreground-background point pair for each pixel in the unknown region T_u, locally optimizing the foreground and background colours of the unknown pixel, and computing the opacity information, the foreground colour, the background colour and the confidence of the selected foreground-background colours of the unknown pixel;
using a Gaussian function to locally smooth the pixels of the unknown region T_u to obtain the final opacity of each pixel.
2. The fast three-dimensional image segmentation method based on shared sampling according to claim 1, further characterised in that the known region T_r of the trimap image is expanded according to colour and spatial-distance similarity in the following way: for each pixel A in the unknown region T_u, a search is carried out from the inside outwards on the two-dimensional plane; if a pixel B is found whose colour and spatial distance to pixel A are both within set ranges, pixel A is reassigned to the known region T_r to which pixel B belongs; if no pixel B satisfying the conditions is found, pixel A remains in the unknown region T_u.
3. The fast three-dimensional image segmentation method based on shared sampling according to claim 1, further characterised in that the shared-sampling method is used to find an optimal foreground-background point pair for each pixel in the unknown region T_u and local optimization is then carried out, specifically as follows:
for each pixel belonging to the unknown region T_u, searches are repeatedly carried out in three-dimensional space along rays of different angles; several foreground points and background points are found, and the optimal foreground-background point pair is selected according to a constraint function; the optimal foreground-background point pair is taken as the foreground and background colours of the pixel, and the local colour variances of the foreground point and the background point are computed; after an optimal foreground-background point pair has been found for every pixel in the unknown region T_u, the foreground and background colours of each pixel in T_u are locally optimized according to its surrounding pixels, giving the optimized foreground-background colours, the opacity and the confidence of the selected foreground-background colours.
4. The fast three-dimensional image segmentation method based on shared sampling according to claim 1, further characterised in that the pixels of the unknown region are locally smoothed with a Gaussian function as follows: a Gaussian weighted average is applied to the foreground-background colours and opacities of the pixels of the unknown region T_u to obtain the final opacity.
5. The fast three-dimensional image segmentation method based on shared sampling according to claim 3, further characterised in that the foreground and background colours of each pixel in the unknown region T_u are locally optimized according to its surrounding pixels in the following way: for each pixel p of the unknown region, the three nearby unknown-region pixels with the smallest colour distortion within a certain range are found, the colour distortion being defined as follows:
let q be one of the nearby pixels, C_p the colour value of p, B^q the background colour of q and F^q the foreground colour of q; the foreground colours, background colours and local colour variances of these three pixels are averaged, and with this mean as the reference the optimized foreground-background colours, the opacity and the confidence of the selected foreground-background colours are obtained.
CN201910601108.9A 2019-07-04 2019-07-04 A fast three-dimensional image segmentation method based on shared sampling Pending CN110298861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910601108.9A CN110298861A (en) 2019-07-04 2019-07-04 A fast three-dimensional image segmentation method based on shared sampling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910601108.9A CN110298861A (en) 2019-07-04 2019-07-04 A fast three-dimensional image segmentation method based on shared sampling

Publications (1)

Publication Number Publication Date
CN110298861A true CN110298861A (en) 2019-10-01

Family

ID=68030360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910601108.9A Pending CN110298861A (en) 2019-07-04 2019-07-04 A fast three-dimensional image segmentation method based on shared sampling

Country Status (1)

Country Link
CN (1) CN110298861A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223108A (en) * 2019-12-31 2020-06-02 上海影卓信息科技有限公司 Method and system based on backdrop matting and fusion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6741755B1 (en) * 2000-12-22 2004-05-25 Microsoft Corporation System and method providing mixture-based determination of opacity
CN102622754A (en) * 2012-02-29 2012-08-01 无锡宜华智能科技有限公司 Rapid foreground extraction method on basis of user interaction Trimap
CN103164855A (en) * 2013-02-26 2013-06-19 清华大学深圳研究生院 Bayesian Decision Theory foreground extraction method combined with reflected illumination
CN103177446A (en) * 2013-03-13 2013-06-26 北京航空航天大学 Image foreground matting method based on neighbourhood and non-neighbourhood smoothness prior
US20140301639A1 (en) * 2013-04-09 2014-10-09 Thomson Licensing Method and apparatus for determining an alpha value
CN107452010A (en) * 2017-07-31 2017-12-08 中国科学院长春光学精密机械与物理研究所 A kind of automatically stingy nomography and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6741755B1 (en) * 2000-12-22 2004-05-25 Microsoft Corporation System and method providing mixture-based determination of opacity
CN102622754A (en) * 2012-02-29 2012-08-01 无锡宜华智能科技有限公司 Rapid foreground extraction method on basis of user interaction Trimap
CN103164855A (en) * 2013-02-26 2013-06-19 清华大学深圳研究生院 Bayesian Decision Theory foreground extraction method combined with reflected illumination
CN103177446A (en) * 2013-03-13 2013-06-26 北京航空航天大学 Image foreground matting method based on neighbourhood and non-neighbourhood smoothness prior
US20140301639A1 (en) * 2013-04-09 2014-10-09 Thomson Licensing Method and apparatus for determining an alpha value
CN107452010A (en) * 2017-07-31 2017-12-08 中国科学院长春光学精密机械与物理研究所 A kind of automatically stingy nomography and device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
EDUARDO S. L. GASTAL et al.: "Shared Sampling for Real-Time Alpha Matting", 《COMPUTER GRAPHICS》 *
EHSAN SHAHRIAN et al.: "Weighted color and texture sample selection for image matting", 《2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
JUE WANG et al.: "An Iterative Optimization Approach for Unified Image Segmentation and Matting", 《TENTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV'05) VOLUME 1》 *
夏晶晶 et al.: "Plant leaf extraction under complex backgrounds based on the Closed-Form matting algorithm", 《江苏农业科学》 *
陈秋凤 et al.: "Random-walk matting with locally adaptive input control", 《智能系统学报》 *
黄睿 et al.: "An improved robust matting algorithm for natural images", 《计算机工程与应用》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223108A (en) * 2019-12-31 2020-06-02 上海影卓信息科技有限公司 Method and system based on backdrop matting and fusion

Similar Documents

Publication Publication Date Title
Grady et al. Random walks for interactive alpha-matting
Campbell et al. Automatic 3d object segmentation in multiple views using volumetric graph-cuts
CN104156693B (en) A kind of action identification method based on the fusion of multi-modal sequence
CN106991686B (en) A kind of level set contour tracing method based on super-pixel optical flow field
Nakajima et al. Semantic object selection and detection for diminished reality based on slam with viewpoint class
Djelouah et al. Sparse multi-view consistency for object segmentation
CN102034247B (en) Motion capture method for binocular vision image based on background modeling
Jepson et al. A layered motion representation with occlusion and compact spatial support
US11282257B2 (en) Pose selection and animation of characters using video data and training techniques
CN110853064B (en) Image collaborative segmentation method based on minimum fuzzy divergence
CN111507334A (en) Example segmentation method based on key points
CN107909079A (en) One kind collaboration conspicuousness detection method
Kok et al. A review on stereo vision algorithm: Challenges and solutions
CN115880720A (en) Non-labeling scene self-adaptive human body posture and shape estimation method based on confidence degree sharing
Yuan et al. Volume cutout
CN110298861A (en) A fast three-dimensional image segmentation method based on shared sampling
CN111680756A (en) Binocular stereoscopic vision accurate matching method for optimizing inclined plane
Kolliopoulos et al. Segmentation-Based 3D Artistic Rendering.
Li et al. Graph-based saliency fusion with superpixel-level belief propagation for 3D fixation prediction
Oh et al. Probabilistic Correspondence Matching using Random Walk with Restart.
Li et al. Texture category-based matching cost and adaptive support window for local stereo matching
Lee et al. A graph-based segmentation method for breast tumors in ultrasound images
Leichter et al. Bittracker—a bitmap tracker for visual tracking under very general conditions
Huguet et al. Color-based watershed segmentation of low-altitude aerial images
CN110751658A (en) Matting method based on mutual information and point spread function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191001