CN108171220A - Automatic road recognition method based on fully convolutional neural networks and CRF technology - Google Patents
Automatic road recognition method based on fully convolutional neural networks and CRF technology
- Publication number: CN108171220A (application CN201810096619.5A)
- Authority: CN (China)
- Prior art keywords: picture; road; convolutional neural networks; fully convolutional
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/182—Network patterns, e.g. roads or rivers
Abstract
The invention discloses an automatic road recognition method based on fully convolutional neural networks and CRF technology. Step 1: obtain the UAV video stream and decode the UAV video. Step 2: extract aerial images from the video stream. Step 3: read one input image. Step 4: read the designated model; if no model is designated, read the default model. Step 5: run prediction on the image. Step 6: save the predicted image in PNG format. The invention combines a multi-channel fully convolutional deep learning network with a CRF algorithm approximated by a recurrent neural network, transfer learning, and existing UAV video-stream processing. Aerial video is recorded by equipment mounted on a UAV in flight, so that road networks and surrounding objects are recognized automatically, maximizing the accuracy of UAV road recognition.
Description
Technical field
The present invention relates to UAV image processing, computer vision, and deep transfer learning; specifically, it concerns using deep learning based on fully convolutional neural networks to automatically recognize roads and surrounding objects in UAV imagery.
Background art
Current state of road recognition: road annotation in UAV imagery today is mostly manual or semi-automatic (relying on a human and a computer operating together); a minority of methods recognize roads automatically from low-level image features such as color and shape. Manual annotation can guarantee a certain accuracy without requiring powerful computing hardware, but it depends heavily on technicians with professional knowledge. It is therefore difficult, or impossible, to apply to large-scale annotation of large images. Automatic road recognition in aerial images can avoid this staffing problem, but its annotation accuracy has been limited.
Figure 1 shows a road recognition method of the prior art.
Figure source: http://bbs.dji.com/thread-24617-1-1.html
Current state of deep-learning-based target recognition: in computer vision, deep learning methods have been shown to improve annotation accuracy substantially. In the paper "Learning to Detect Roads in High-Resolution Aerial Images", Mnih and Hinton report that their deep learning method outperforms other, non-deep-learning methods by up to 7% on aerial images of Massachusetts, USA.
Figure 2 ("Road recognition compared: deep learning vs. conventional methods, Mnih and Hinton") plots these results: the curves labeled "other" are non-deep-learning methods, and the rest are deep learning methods. Judged by area under the curve (AUC), the deep learning methods are superior.
Figure 3 ("Comparison of deep learning and conventional methods") shows prediction results: the left panel is a non-deep-learning method, the right panel a deep learning method; the white regions are the roads predicted by each model. The right panel is clearly much better.
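The AUC metric used in Figure 2 can be made concrete. A minimal sketch (the function name and toy curve points below are illustrative, not from the patent): the area under a sampled curve via the trapezoid rule.

```python
def auc_trapezoid(x, y):
    """Area under a sampled curve (e.g. an ROC curve) by the trapezoid
    rule; x and y are matched coordinate lists sorted by x."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(x) - 1))

# Toy check: an ideal ROC curve hugs the top-left corner (AUC = 1.0),
# while a random classifier's diagonal gives AUC = 0.5.
print(auc_trapezoid([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # 1.0
print(auc_trapezoid([0.0, 1.0], [0.0, 1.0]))            # 0.5
```

A larger area means the method keeps a higher true-positive rate across thresholds, which is why the deep learning curves in Figure 2 dominate.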
However, the road recognition results obtained from an ordinary convolutional neural network (CNN) or a fully convolutional network (FCN) are still coarse; road edges, for example, are not accurate enough. The reasons are as follows:
(1) The receptive field of the CNN is too large, so the final segmentation output is very coarse (in the last layer of the network, each neuron corresponds to a very large region of the original image); the final upsampling ratio of the FCN is 32x, which also makes the segmentation coarse.
(2) CNNs and FCNs lack constraints on spatial and edge information. They are end-to-end models with no explicit prior added. When segmenting an image, we would like the probability of a class boundary to be higher at edges (places where the gradient is larger). In other words, if two adjacent pixels differ strongly, the probability that they belong to different classes should be higher; if the colors of two neighboring pixels are very close, that probability should be lower. If such artificial prior constraints can be added, the algorithm can be further improved.
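The pairwise prior described above, that nearby similar-colored pixels should share a label, is exactly what the appearance kernel of a dense CRF's pairwise potential encodes. A minimal sketch (the bandwidths `theta_alpha` and `theta_beta` are illustrative values, not tuned parameters from the patent):

```python
import math

def pairwise_affinity(pos_i, pos_j, color_i, color_j,
                      theta_alpha=10.0, theta_beta=5.0):
    """Dense-CRF appearance-kernel weight between two pixels: high for
    nearby, similar-colored pixels (discouraging a label change between
    them), near zero across a sharp color edge."""
    d_pos = sum((a - b) ** 2 for a, b in zip(pos_i, pos_j))
    d_col = sum((a - b) ** 2 for a, b in zip(color_i, color_j))
    return math.exp(-d_pos / (2 * theta_alpha ** 2)
                    - d_col / (2 * theta_beta ** 2))

# Two near-identical neighbors vs. two neighbors across a color edge:
same = pairwise_affinity((0, 0), (1, 0), (120, 120, 120), (121, 120, 120))
edge = pairwise_affinity((0, 0), (1, 0), (120, 120, 120), (30, 40, 35))
print(same > 0.9, edge < 0.01)  # True True
```

The CRF multiplies such weights into the energy it minimizes, so label changes are cheap at edges and expensive inside homogeneous regions.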
To address the problems above (too large a receptive field, too weak an edge constraint), the algorithm proposed in "Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs" first performs coarse segmentation with an FCN and then refines it with a CRF. The specific FCN improvements are as follows:
1. The convolution kernel of the first convolutional layer is changed from 7x7 to 3x3 (reducing the receptive field);
2. The downsampling ratio is reduced: the original FCN downsamples by 32; by reducing the strides, the ratio becomes 8 (reducing the receptive field);
3. A fully connected conditional random field refines the FCN segmentation result (strengthening the edge constraint).
However, the coarse and fine segmentation stages remain distinct: this is not an end-to-end training model. The FCN result is only used as the unary potential of the CRF, so no globally unified optimal training result can be formed.
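The effect of improvements 1 and 2 above can be checked with the standard receptive-field recurrence r_out = r_in + (k - 1) * jump, where jump is the product of the strides so far. The toy layer stacks below are illustrative, not the paper's full network:

```python
def receptive_field(layers):
    """Receptive field of a stack of (kernel, stride) layers, using
    r_out = r_in + (k - 1) * jump, with jump multiplied by each stride."""
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# Replacing a 7x7 first conv with a 3x3 shrinks the receptive field
# of the same small stack (conv, 2x2 stride-2 pool, conv):
print(receptive_field([(7, 1), (2, 2), (3, 1)]))  # 12
print(receptive_field([(3, 1), (2, 2), (3, 1)]))  # 8
```

Smaller strides likewise keep `jump` small, which is why reducing the downsampling ratio from 32 to 8 also reduces the receptive field.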
Summary of the invention
In view of these deficiencies of the prior art, the present invention uses a multi-channel fully convolutional deep learning network combined with a conditional random field (CRF) algorithm approximated by a recurrent neural network (RNN), transfer learning, and existing UAV video-stream processing. Aerial video is recorded by equipment mounted on a UAV in flight, so that the recognition of road networks and surrounding objects is completed automatically and the accuracy of UAV road recognition is maximized.
As UAV technology spreads, aerial video data is growing geometrically. UAV video-stream processing combined with deep learning can greatly reduce the dependence on professional technicians in road annotation tasks while effectively improving prediction accuracy, so that large numbers of satellite photographs can be predicted at reduced labor cost.
To achieve the above object, the technical solution adopted by the present invention is:
An automatic road recognition method based on fully convolutional neural networks, characterized by:
Step 1: obtain the UAV video stream and decode the UAV video: decode the H.264 stream and convert it to RGB images via ffmpeg;
Step 2: extract aerial images from the video stream;
Step 3: read one input image;
Step 4: read the designated model; if no model is designated, read the default model;
Step 5: run prediction on the image;
Step 6: save the predicted image in PNG format.
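Steps 1 and 2 decode the H.264 stream with ffmpeg and sample frames from it. A hedged sketch of how such an ffmpeg invocation might look (the stream URL, output pattern, and frame rate are illustrative placeholders, not values from the patent); the command is only constructed here, not executed:

```python
def ffmpeg_frame_cmd(stream_url, out_pattern, fps=1):
    """Build an ffmpeg command that decodes an H.264 video stream and
    dumps RGB frames as numbered PNG files at the given sampling rate."""
    return [
        "ffmpeg",
        "-i", stream_url,        # input H.264 video stream
        "-vf", f"fps={fps}",     # sample frames at the given rate
        "-pix_fmt", "rgb24",     # decode to 8-bit RGB
        out_pattern,             # e.g. frame_%05d.png
    ]

cmd = ffmpeg_frame_cmd("rtmp://drone/live", "frame_%05d.png")
print(" ".join(cmd))
# In a full pipeline one would run this via subprocess.run(cmd, check=True).
```

Each extracted PNG frame then becomes the "input image" of steps 3-6.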
The aerial image is analyzed by the multi-channel fully convolutional network architecture using deep learning. The result classifies each pixel of the image as road, building, or neither-road-nor-building (background). Steps: first divide the aerial image into multiple 64x64 region (window) blocks; feed each 64x64 block into the fully convolutional network; the network outputs either 256 road probability values or 768 road/building/background probability values. The 256 road probabilities correspond to a 16x16 road label patch; the 768 probabilities correspond to a 16x16 road-and-building label patch. The network input is a multi-channel RGB image.
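The windowing scheme above can be sketched with NumPy. This is an illustrative reading of the description, not the patent's code; image dimensions are assumed to be multiples of the window size:

```python
import numpy as np

def tile_image(img, win=64):
    """Split an aerial RGB image (H, W, 3) into non-overlapping
    win x win region blocks, as in the sliding-window scheme above."""
    h, w, c = img.shape
    return (img.reshape(h // win, win, w // win, win, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, win, win, c))

def probs_to_mask(probs, out=16, threshold=0.5):
    """Map the 256 road probabilities predicted for one 64 x 64 window
    back to its 16 x 16 road label patch."""
    return probs.reshape(out, out) > threshold

img = np.zeros((128, 192, 3), dtype=np.uint8)
tiles = tile_image(img)
print(tiles.shape)  # (6, 64, 64, 3): a 2 x 3 grid of 64 x 64 windows
mask = probs_to_mask(np.full(256, 0.9))
print(mask.shape)   # (16, 16)
```

Each window thus predicts a label patch one quarter its linear size (64 / 16 = 4), matching the 256 = 16 x 16 output count.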
Advantageous effects:
The invention annotates the roads and buildings in each aerial image via multi-channel fully convolutional neural network and CRF technology. It realizes end-to-end training: model input and output need no further processing. Its innovations are:
1. application to UAV video streams;
2. a deep learning method with transfer learning from a VGG network;
3. the state-of-the-art CRF-as-RNN formulation, used to further refine road prediction;
4. multi-channel RGB input: more accurate road features and optimized performance;
5. a Maxout network: optimized feature propagation.
Description of the drawings
Fig. 1 is a road recognition method of the prior art.
Fig. 2 compares deep learning and non-deep-learning results from the Mnih and Hinton paper.
Fig. 3 is an example of deep learning road annotation in aerial imagery.
Fig. 4 illustrates the fully convolutional neural network and CRF technology.
Fig. 5 illustrates the VGG network used for transfer learning.
Fig. 6 illustrates the CRF-as-RNN technique.
Fig. 7 is an example of the automatic road recognition method based on fully convolutional neural networks and CRF technology.
Fig. 8 compares annotations by the prior art and by this technology.
Fig. 9 is an example aerial image.
Fig. 10 shows example results of the automatic road recognition method based on fully convolutional neural networks and CRF technology.
Fig. 11 is the flow chart of UAV aerial road recognition in the present example.
Specific embodiment
The present invention is described in further detail below with reference to the drawings and a specific embodiment.
1. UAV video-stream preprocessing
Decode the H.264 video stream and convert it to RGB images via ffmpeg.
2. Recognizing roads in images with deep learning
Fig. 4 shows the fully convolutional neural network and CRF technology proposed in this example. The embodiment analyzes the aerial image with a sliding-window operation over the fully convolutional architecture. The aerial image is first divided into multiple 64x64 region (window) blocks; each 64x64 block is fed into the fully convolutional network, which outputs either 256 road probability values or 768 road/building/background probability values. The 256 road probabilities correspond to a 16x16 road label patch; the 768 probabilities correspond to a 16x16 road-and-building label patch.
Fig. 5 illustrates the transfer learning proposed in this example, based on the VGG16 network.
Fig. 6 illustrates the CRF-as-RNN technique proposed in this example.
The embodiment annotates the roads and buildings in each aerial image via multi-channel fully convolutional neural network and CRF technology, and realizes end-to-end training: model input and output need no further processing.
First, the input image is zero-padded by 100 pixels on each side. The padded image is then fed through the following network:
Layer 1: 64 3x3 convolutions, stride 1, ReLU activation;
Layer 2: 64 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 3: 2x2 max-pooling, zero padding;
Layer 4: 128 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 5: 128 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 6: 2x2 max-pooling, zero padding;
Layer 7: 256 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 8: 256 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 9: 256 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 10: 2x2 max-pooling, zero padding;
Layer 11: 512 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 12: 512 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 13: 512 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 14: 2x2 max-pooling, zero padding;
Layer 15: 512 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 16: 512 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 17: 512 3x3 convolutions, stride 1, ReLU activation, zero padding;
Layer 18: 2x2 max-pooling, zero padding;
Layer 19: 4096 1x1 convolutions, stride 1, ReLU activation;
Layer 20: dropout;
Layer 21: 4096 1x1 convolutions, stride 1, ReLU activation;
Layer 22: dropout;
Layer 23: 3 1x1 convolutions, stride 1, ReLU activation;
Layer 24: 3 4x4 deconvolutions, stride 2;
Layer 25: 3 1x1 convolutions, stride 1, ReLU activation, applied to layer 14;
Layer 26: cropping, size 5x5;
Layer 27: addition of layers 24 and 26;
Layer 28: 3 1x1 convolutions, stride 1, ReLU activation, applied to layer 10;
Layer 29: cropping, size 9x9;
Layer 30: addition of layers 27 and 29;
Layer 31: 3 16x16 deconvolutions, stride 8, no bias;
Layer 32: cropping, (31, 37) on the left/right and top/bottom respectively;
Layer 33: a conditional random field approximated by a recurrent neural network (RNN).
The first 18 layers use the VGG network weights obtained from ImageNet training, via transfer learning. Combined with the subsequent deconvolution layers and the final RNN layer, this yields a 33-layer deep neural network.
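The downsampling arithmetic of the backbone above can be traced with the usual output-size formula. A minimal sketch (input size 256 is illustrative; the 'same'-padded 3x3 convolutions keep the spatial size, so only the five 2x2 stride-2 max-pools shrink it):

```python
def conv_out(n, k, s=1, pad=0):
    """Spatial size after one convolution or pooling layer."""
    return (n + 2 * pad - k) // s + 1

def downsampled(n, pools=5):
    """Trace an input of size n through five 2x2 stride-2 max-pools,
    as in the VGG-16 backbone described above."""
    for _ in range(pools):
        n = conv_out(n, k=2, s=2)
    return n

# The backbone shrinks the feature map by 32x; the deconvolution layers
# (stride 2, then stride 8, fused with the layer-14 and layer-10 skips)
# undo this in the FCN-8s style.
print(downsampled(256))  # 8, i.e. 256 / 32
```

This 1/32 factor is exactly the coarse FCN upsampling ratio criticized in the background section, which is why the skip connections and the final CRF layer are needed to recover fine detail.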
Network parameters are initialized with Xavier initialization; the loss function is cross-entropy; and the model parameters are optimized by stochastic gradient descent training. Because each input image is only a part of the original, missing edge information makes the output discontinuous, so the output must be denoised: the algorithm predicts shifted copies of the original image and then averages them, reducing the noise produced by the sliding-window operation and improving road recognition accuracy.
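The shift-and-average denoising just described can be sketched as follows. This is an illustrative reading of the description (the shift offsets and the toy identity "model" are ours, not from the patent):

```python
import numpy as np

def averaged_prediction(img, predict, shifts=((0, 0), (0, 8), (8, 0), (8, 8))):
    """Denoise sliding-window output by predicting several shifted copies
    of the image and averaging the re-aligned probability maps.
    `predict` maps an (H, W) array to an (H, W) probability map."""
    h, w = img.shape
    acc = np.zeros((h, w), dtype=float)
    for dy, dx in shifts:
        shifted = np.roll(img, (dy, dx), axis=(0, 1))
        pred = predict(shifted)
        acc += np.roll(pred, (-dy, -dx), axis=(0, 1))  # shift back to align
    return acc / len(shifts)

# With an identity 'model', the average reproduces the input exactly,
# confirming the shift/unshift bookkeeping is correct.
img = np.arange(16.0).reshape(4, 4)
out = averaged_prediction(img, predict=lambda x: x, shifts=((0, 0), (1, 1)))
print(np.allclose(out, img))  # True
```

With a real model, windows land on different image content in each shifted copy, so window-boundary artifacts average out.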
This embodiment provides both a direct prediction function and a training function. Direct prediction annotates the roads in an unlabeled aerial image using an existing model and its parameters; the model parameters are the result of training on the nearly 200 GB Massachusetts (USA) satellite-photo road training set. Users can also use the training function of this technology to train on their own satellite photos and predict with the resulting model.
Satellite photos are annotated automatically; roads are the black portions of the right panel. In Fig. 7, the left panel is the original (input) image and the right panel is the software's annotation.
Fig. 8 compares annotations by the prior art and by this technology.
UAV hardware: Phantom 2 Vision (P2V)
Computer hardware: 64 GB memory; GPU: NVIDIA Titan X
Computer software: Ubuntu 14.04, Python 3
The specific steps of this embodiment's automatic road recognition method based on fully convolutional neural networks and CRF technology are:
1. obtain the UAV video stream and decode the UAV video;
2. extract aerial images from the video stream (as shown in Fig. 9);
3. read one input image;
4. read the designated model; if no model is designated, read the default model;
5. run prediction on the image;
6. save the predicted image in PNG format.
Fig. 10 shows the output. Fig. 11 is the flow chart of the automatic road recognition method of the present invention based on fully convolutional neural networks and CRF technology.
The foregoing is only a preferred embodiment of the present invention and does not limit the invention in any form. Any simple modification, equivalent variation, or alteration of the above embodiment made by a person skilled in the art according to the technical essence of the present invention, without departing from the scope of the invention, still falls within the scope of the technical solution of the invention.
Claims (5)
1. An automatic road recognition method based on fully convolutional neural networks and CRF technology, characterized by:
Step 1: obtain the UAV video stream and decode the UAV video;
Step 2: extract aerial images from the video stream;
Step 3: read one input image;
Step 4: read the designated model; if no model is designated, read the default model;
Step 5: run prediction on the image;
Step 6: save the predicted image in PNG format.
2. The automatic road recognition method based on fully convolutional neural networks and CRF technology according to claim 1, characterized in that: the H.264 video stream is decoded and converted to RGB images via ffmpeg.
3. The automatic road recognition method based on fully convolutional neural networks and CRF technology according to claim 1, characterized in that: the aerial image is analyzed with a sliding-window operation over the multi-channel fully convolutional architecture: the aerial image is first divided into multiple 64x64 region (window) blocks; each 64x64 block is fed into the fully convolutional network, which outputs either 256 road probability values or 768 road/building/background probability values; the 256 road probabilities correspond to a 16x16 road label patch; the 768 probabilities correspond to a 16x16 road-and-building label patch.
4. The automatic road recognition method based on fully convolutional neural networks and CRF technology according to claim 3, characterized in that: the network input is a multi-channel RGB image.
5. The automatic road recognition method based on fully convolutional neural networks and CRF technology according to claim 1, characterized in that: network parameters are initialized with Xavier initialization; the loss function is cross-entropy; the model parameters are optimized by stochastic gradient descent; and because each input image is only a part of the original, missing edge information makes the output discontinuous, so the output is denoised: the algorithm predicts shifted copies of the original image and averages them, reducing the noise produced by the sliding-window operation and improving road recognition accuracy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810096619.5A CN108171220A (en) | 2018-01-31 | 2018-01-31 | Automatic road recognition method based on fully convolutional neural networks and CRF technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108171220A true CN108171220A (en) | 2018-06-15 |
Family
ID=62512366
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109657614A (en) * | 2018-12-19 | 2019-04-19 | Shenyang Tianze Intelligent Transportation Engineering Co., Ltd. | Automatic road and vehicle recognition in aerial surveys of traffic accident scenes |
CN109784479A (en) * | 2019-01-16 | 2019-05-21 | Shanghai Westwell Information Technology Co., Ltd. | Neural-network-based barrier-gate anti-smashing method, system, device and storage medium |
CN109800661A (en) * | 2018-12-27 | 2019-05-24 | Neusoft Reach Automotive Technology (Shenyang) Co., Ltd. | Road recognition model training method, road recognition method and device |
CN112329596A (en) * | 2020-11-02 | 2021-02-05 | Ping An Property & Casualty Insurance Company of China | Target damage assessment method and device, electronic device and computer-readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106897683A (en) * | 2017-02-15 | 2017-06-27 | 武汉喜恩卓科技有限责任公司 | A ground-object detection method and system for remote-sensing images |
CN107194346A (en) * | 2017-05-19 | 2017-09-22 | Fujian Normal University | A driver fatigue prediction method |
CN107256550A (en) * | 2017-06-06 | 2017-10-17 | University of Electronic Science and Technology of China | A retinal image segmentation method based on an efficient CNN-CRF network |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2018-06-15