CN104021224A - Image labeling method based on layer-by-layer label fusing deep network - Google Patents

Image labeling method based on layer-by-layer label fusing deep network

Info

Publication number
CN104021224A
CN104021224A
Authority
CN
China
Prior art keywords
layer
label
deep network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410290316.9A
Other languages
Chinese (zh)
Inventor
徐常胜
袁召全
桑基韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201410290316.9A
Publication of CN104021224A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image labeling method based on a layer-by-layer label-fusion deep network. The method comprises the following steps: extracting bottom-level visual features for the training images in the training set; organizing the labels of the training images into layers to construct a label hierarchy; for the training images, fusing the bottom-level visual feature information and label information layer by layer, and obtaining the hierarchical feature representation of the training images through parameter learning of the deep network; extracting bottom-level visual features for the test images in the test set and obtaining their hierarchical feature representation through the deep network; and finally, predicting the labeling information of each test image from its hierarchical feature representation. The labeling produced by the disclosed method is hierarchical and more precise than that of conventional labeling methods.

Description

Image labeling method based on a layer-by-layer label-fusion deep network
Technical field
The present invention relates to the technical field of social-network image labeling, and in particular to an image labeling method based on a layer-by-layer label-fusion deep network.
Background art
In recent years, with the development of social media, the number of images on social platforms has grown explosively, and how to label these massive social images has become an important research topic in the network multimedia field.
Mainstream image labeling methods currently concentrate on approaches based on visual information: such methods first extract low-level visual features, and then use machine learning models to classify images based on this feature representation. These methods have achieved good results to a certain extent, but because they use only visual information and ignore the contextual text information, their performance is still not ideal.
The core of image labeling is to use the information associated with an image (including visual information, contextual text labels, etc.) to understand the image content. Fusing the label information of an image with its visual information yields image features with stronger expressive power, which plays an important facilitating role in image labeling, especially for social images. However, the heterogeneity of visual features and text label information poses a challenge to the fusion of the two kinds of information. The image labeling method based on a layer-by-layer label-fusion deep network proposed by the present invention fuses the two kinds of information layer by layer, solves the difficult problem of heterogeneous-information fusion, and has an important effect on social-image labeling.
Summary of the invention
In order to solve the above problems in the prior art, the present invention proposes an image labeling method based on a layer-by-layer label-fusion deep network.
The image labeling method based on a layer-by-layer label-fusion deep network proposed by the present invention comprises the following steps:
Step 1: for each training image in the training set, extract its bottom-level visual feature X;
Step 2: organize the labels of the training images into layers to build a hierarchical structure of labels;
Step 3: for the training images, fuse the bottom-level visual feature information and label information layer by layer, and obtain the hierarchical feature representation of the training images through parameter learning of the deep network;
Step 4: for each test image in the test set, extract its bottom-level visual feature, then obtain its hierarchical feature representation through the deep network, and finally predict its labeling information from the hierarchical feature representation of the test image.
Internet image labeling is widely applied in many important related fields. Owing to the semantic gap between low-level visual information and high-level semantics, vision-based image labeling is a challenging problem. The image labeling method based on a layer-by-layer label-fusion deep network proposed by the present invention can label social images automatically, and its hierarchical labeling is more accurate than that of traditional labeling methods.
Brief description of the drawings
Fig. 1 is the flowchart of the image labeling method based on a layer-by-layer label-fusion deep network according to an embodiment of the present invention;
Fig. 2 is an example diagram of a label hierarchy;
Fig. 3 is the model structure diagram of the layer-by-layer feature-fusion deep network according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
The data sets involved in the proposed method comprise: 1) a training set, containing images and the social labels corresponding to these images; 2) a test set, containing only the test images to be labeled, without any label information.
Considering the heterogeneity of the bottom-level visual information of images and their social label information, the present invention proposes an image labeling method based on a layer-by-layer label-fusion deep network. The core idea of the method is to fuse label information and visual information layer by layer within the framework of a deep network, thereby learning a hierarchical feature representation of the image that serves as the feature representation for labeling.
Fig. 1 shows the flowchart of the image labeling method based on a layer-by-layer label-fusion deep network proposed by the present invention. As shown in Fig. 1, the method comprises:
Step 1: for each training image in the training set, extract its bottom-level visual feature;
Step 2: organize the labels of the training images into layers to build a hierarchical structure of labels;
Step 3: for the training images, fuse the bottom-level visual feature information and label information layer by layer, and obtain the hierarchical feature representation of the training images through parameter learning of the deep network;
Step 4: for each test image in the test set, extract its bottom-level visual feature, then obtain its hierarchical feature representation through the deep network, and finally predict its labeling information from the hierarchical feature representation of the test image.
The concrete implementation of the above four steps is described in detail below.
In step 1, bottom-level visual feature extraction produces the initial representation of an object. For image information, the present invention preferably adopts scale-invariant feature transform (SIFT) features (for example, of 1000 dimensions) as the bottom-level visual feature of an image, denoted X.
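The patent fixes only the feature type (SIFT) and an illustrative dimensionality; it does not say how the local 128-dimensional SIFT descriptors are pooled into one fixed-length vector per image. A common realization is a bag-of-visual-words histogram over a k-means codebook; the following is a minimal sketch under that assumption (OpenCV and scikit-learn are assumed tooling, not named in the patent).

```python
# Sketch only: one plausible way to turn SIFT descriptors into a single
# 1000-dimensional bottom-level visual feature X per image, via a
# bag-of-visual-words codebook. The codebook step is an assumption;
# the patent specifies only SIFT features of (for example) 1000 dimensions.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

sift = cv2.SIFT_create()

def sift_descriptors(path):
    """Return the (num_keypoints, 128) SIFT descriptors of one image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_codebook(train_paths, k=1000):
    """Cluster the descriptors of all training images into k visual words."""
    all_desc = np.vstack([sift_descriptors(p) for p in train_paths])
    return MiniBatchKMeans(n_clusters=k, n_init=3).fit(all_desc)

def bow_feature(path, codebook, k=1000):
    """Normalized k-dim histogram of visual words: the feature X."""
    desc = sift_descriptors(path)
    if len(desc) == 0:
        return np.zeros(k)
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=k).astype(np.float64)
    return hist / hist.sum()
```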
In step 2, a commonly used tool, preferably WordNet in the present invention, is used to build a K-layer label hierarchy from the social labels of the images. For example, if an image carries the labels animal, plant, cat, dog and flower, the corresponding label hierarchy (with K = 2 here) is as shown in Fig. 2.
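As an illustration, the sketch below recovers the two-layer hierarchy of Fig. 2 by testing whether a parent tag appears in a tag's WordNet hypernym closure; NLTK is an assumed interface, since the patent names only WordNet itself.

```python
# Sketch only: grouping the social tags {cat, dog, flower} under the parent
# tags {animal, plant} via WordNet hypernym chains. NLTK is an assumption;
# requires nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def has_ancestor(tag, parent):
    """True if some sense of `tag` has `parent` in its hypernym closure."""
    parent_senses = set(wn.synsets(parent))
    for sense in wn.synsets(tag):
        if parent_senses & set(sense.closure(lambda s: s.hypernyms())):
            return True
    return False

tags = ["cat", "dog", "flower"]
parents = ["animal", "plant"]
hierarchy = {p: [t for t in tags if has_ancestor(t, p)] for p in parents}
print(hierarchy)  # expected: {'animal': ['cat', 'dog'], 'plant': ['flower']}
```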
In step 3, for the training images, the bottom-level visual feature information and label information are fused layer by layer, and the hierarchical feature representation of the training images is obtained through parameter learning of the deep network.
Concretely, a deep network with L layers (L > K) is constructed, and its top K layers are made to correspond to the K layers of the label hierarchy. Denote the representations of the layers of the deep network by h = {h^(0), ..., h^(L)}, where h^(0) is the bottom-level visual feature X of the image; denote the representations of the K layers corresponding to the label hierarchy by y = {y^(L-K+1), ..., y^(L)}.
This step is the key part of the present invention. Fig. 3 is the model structure diagram of the layer-by-layer feature-fusion deep network according to an embodiment of the present invention. With reference to Fig. 3, step 3 can be divided into the following sub-steps:
Step 3.1: by building autoencoders, preliminarily adjust the parameters of the deep network from the h^(0) layer to the h^(L-K+1) layer based on the reconstruction error.
Step 3.1 further comprises the following steps:
Step 3.1.1: from the h^(0) layer up to the h^(L-K+1) layer, build an autoencoder between every two adjacent layers; through the autoencoder, the representation of the upper layer is obtained by mapping the representation of the lower layer.
For example, based on the autoencoder between the h^(l-1) and h^(l) layers, the representation of the h^(l) layer is obtained by mapping the representation of the h^(l-1) layer:
$$h^{(l)} = s\left(W_h^{(l-1)} h^{(l-1)} + b^{(l)}\right) \qquad (1)$$
where W_h^(l-1) is the weight matrix between the h^(l-1) and h^(l) layers, b^(l) is the bias parameter of the h^(l) layer, and s(·) is the logistic function $s(x) = 1/(1 + e^{-x})$.
In this way, the representation of the h^(l) layer is obtained from the representation of the h^(l-1) layer by mapping.
Step 3.1.2: map the representation of the upper layer back to obtain the reconstructed representation of the lower layer.
For example, the reconstructed representation z of the h^(l-1) layer is obtained by mapping the representation of the h^(l) layer:
$$z = s\left(W_h'^{(l-1)} h^{(l)} + b'\right) \qquad (2)$$
where W_h'^(l-1) = (W_h^(l-1))^T is the transpose of the encoding weights (tied weights), and b' is the bias parameter of the h^(l-1) layer.
Step 3.1.3: adjust the parameters of the deep network according to the error between the correct representation and the reconstructed representation.
For example, the preliminary adjustment of the parameters of the deep network is realized by minimizing the reconstruction error between z and the representation of the h^(l-1) layer. In an embodiment of the present invention, the parameters are preferably adjusted by minimizing the reconstruction cross entropy:
$$L_{rec} = -\sum_{k=1}^{D^{(l-1)}} \left[ h_k^{(l-1)} \ln z_k + \left(1 - h_k^{(l-1)}\right) \ln\left(1 - z_k\right) \right] \qquad (3)$$
where k indexes the components of z and D^(l-1) is the dimension of z.
This procedure is repeated upward, layer by layer, until the h^(L-K+1) layer.
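The following is a minimal sketch of this greedy layer-wise pretraining of step 3.1 (equations (1)-(3)); PyTorch, the layer sizes, the optimizer and the hyperparameters are illustrative assumptions, and the input features are assumed to lie in [0, 1] so that the cross entropy of equation (3) is well defined.

```python
# Sketch only: greedy layer-wise pretraining of step 3.1, equations (1)-(3),
# with tied weights W' = W^T. Hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def pretrain_stack(X, layer_dims, epochs=50, lr=0.1):
    """X: (N, D) tensor of bottom-level features h^(0) in [0, 1];
    layer_dims: sizes of h^(1), ..., h^(L-K+1).
    Returns the learned (W, b) of each layer."""
    params, H = [], X
    for dim_out in layer_dims:
        W = torch.nn.Parameter(0.01 * torch.randn(dim_out, H.shape[1]))
        b = torch.nn.Parameter(torch.zeros(dim_out))        # b^(l) of eq. (1)
        b_rec = torch.nn.Parameter(torch.zeros(H.shape[1])) # b' of eq. (2)
        opt = torch.optim.SGD([W, b, b_rec], lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            h = torch.sigmoid(H @ W.t() + b)       # encoding, eq. (1)
            z = torch.sigmoid(h @ W + b_rec)       # tied-weight decoding, eq. (2)
            loss = F.binary_cross_entropy(z, H)    # reconstruction CE, eq. (3)
            loss.backward()
            opt.step()
        params.append((W.detach(), b.detach()))
        H = torch.sigmoid(H @ W.detach().t() + b.detach())  # input of next layer
    return params
```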
Step 3.2: for the layers of the deep network from h^(L-K+1) up to the top layer h^(L), combine a layer of the deep network, such as the h^(l) layer, with the corresponding layer of the label hierarchy, such as the y^(l) layer, and perform feature fusion and the adjustment of the relevant parameters of the deep network.
This step can in turn be divided into two sub-steps (taking h^(l) as an example):
Step 3.2.1: use the labels of the y^(l) layer of the label hierarchy to adjust the parameters of the deep network from h^(0) to h^(l).
In this step, the cross-entropy loss is calculated first:
$$\mathrm{Loss}(\{W, b\}) = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \ln y_{nk} \qquad (4)$$
where N is the number of samples, K is the number of labels in this layer, y_nk is the k-th dimension of the model's prediction for the n-th sample, and t_nk is the true value of the k-th dimension of the n-th training sample.
This loss is then propagated back through the deep network to adjust the parameters from h^(0) to h^(l); in an embodiment of the present invention, the well-known back-propagation algorithm is used for this global parameter adjustment.
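As an illustration of step 3.2.1, the sketch below attaches a softmax label layer y^(l) on top of h^(l) and back-propagates the cross-entropy loss of equation (4) through the layers below; the read-out layer and all hyperparameters are assumptions, not taken from the patent.

```python
# Sketch only: step 3.2.1. The cross-entropy loss of eq. (4) (averaged over
# samples here) is back-propagated from y^(l) through h^(0)..h^(l).
import torch
import torch.nn.functional as F

def adjust_with_labels(X, t, layers, num_labels, epochs=20, lr=0.05):
    """X: (N, D) inputs h^(0); t: (N,) label indices at this hierarchy level;
    layers: list of (W, b) from pretraining, for h^(1)..h^(l)."""
    Ws = [torch.nn.Parameter(W.clone()) for W, _ in layers]
    bs = [torch.nn.Parameter(b.clone()) for _, b in layers]
    head = torch.nn.Linear(Ws[-1].shape[0], num_labels)  # produces y^(l) logits
    opt = torch.optim.SGD(Ws + bs + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        h = X
        for W, b in zip(Ws, bs):
            h = torch.sigmoid(h @ W.t() + b)  # forward through h^(1)..h^(l)
        loss = F.cross_entropy(head(h), t)    # eq. (4): softmax + cross entropy
        loss.backward()                       # back-propagation
        opt.step()
    return Ws, bs, head
```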
Step 3.2.2: obtain the feature representation of the h^(l+1) layer by fusion learning on the representations of the h^(l) and y^(l) layers.
In this step, the representations of the h^(l) and y^(l) layers are combined and, together with the representation of the h^(l+1) layer, form an autoencoder:
$$h^{(l+1)} = s\left(W_h^{(l)} h^{(l)} + W_y^{(l)} y^{(l)} + b^{(l+1)}\right) \qquad (5)$$
Likewise, the parameters between the h^(l), y^(l) and h^(l+1) layers are optimized by minimizing the reconstruction cross entropy.
This procedure is repeated until the h^(L) layer.
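A minimal sketch of one such fusion step (equation (5)) follows; reconstruction with tied weights and the cross-entropy objective mirror step 3.1, and all hyperparameters are illustrative assumptions.

```python
# Sketch only: one fusion step of step 3.2.2, equation (5). h^(l) and y^(l)
# are jointly encoded into h^(l+1) and reconstructed with tied weights.
import torch
import torch.nn.functional as F

def fuse_layer(H, Y, dim_out, epochs=50, lr=0.1):
    """H: (N, Dh) representations h^(l) in [0,1]; Y: (N, Dy) label vectors y^(l)."""
    Wh = torch.nn.Parameter(0.01 * torch.randn(dim_out, H.shape[1]))
    Wy = torch.nn.Parameter(0.01 * torch.randn(dim_out, Y.shape[1]))
    b = torch.nn.Parameter(torch.zeros(dim_out))
    bh = torch.nn.Parameter(torch.zeros(H.shape[1]))
    by = torch.nn.Parameter(torch.zeros(Y.shape[1]))
    opt = torch.optim.SGD([Wh, Wy, b, bh, by], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        h_next = torch.sigmoid(H @ Wh.t() + Y @ Wy.t() + b)  # eq. (5)
        h_rec = torch.sigmoid(h_next @ Wh + bh)  # tied-weight reconstructions
        y_rec = torch.sigmoid(h_next @ Wy + by)
        loss = (F.binary_cross_entropy(h_rec, H)
                + F.binary_cross_entropy(y_rec, Y))  # reconstruction CE
        loss.backward()
        opt.step()
    return torch.sigmoid(H @ Wh.detach().t() + Y @ Wy.detach().t() + b.detach())
```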
Through the above layer-by-layer feature fusion, the label information of the image is fused into the visual information, and the parameters of the deep network are optimized at the same time.
In step 4, the deep network with optimized parameters is used to label the test images in the test set.
Step 4 is further divided into the following sub-steps:
Step 4.1: extract the bottom-level visual feature X_test of the test image; this step is similar to the extraction of bottom-level visual features for the training images in step 1.
Step 4.2: use the deep network with optimized parameters to obtain the hierarchical feature representation {h^(L-K+1), ..., h^(L)} of the test-image feature X_test.
Step 4.3: use this hierarchical feature representation to predict the label information of the test image:
$$y_i^{(l)} = \frac{\exp\left(W_i^T h^{(l)}\right)}{\sum_j \exp\left(W_j^T h^{(l)}\right)} \qquad (6)$$
where W_i is the weight vector between label i and the feature h^(l).
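A minimal sketch of this per-level prediction (equation (6)); the array shapes are illustrative assumptions.

```python
# Sketch only: the per-level softmax prediction of equation (6).
import numpy as np

def predict_level(W, h):
    """W: (num_labels, dim) stacked weight vectors W_i for one hierarchy
    level; h: (dim,) hierarchical feature h^(l) of the test image."""
    scores = W @ h
    scores -= scores.max()   # subtract the max for numerical stability
    e = np.exp(scores)
    return e / e.sum()       # y_i^(l): a distribution over this level's labels

# Labels at each level can then be ranked by probability, e.g.
# top5 = np.argsort(-predict_level(W, h))[:5]
```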
The specific embodiments described above further explain the objectives, technical solutions and beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc., made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An image labeling method based on a layer-by-layer label-fusion deep network, characterized in that the method comprises the following steps:
Step 1: for each training image in the training set, extract its bottom-level visual feature X;
Step 2: organize the labels of the training images into layers to build a hierarchical structure of labels;
Step 3: for the training images, fuse the bottom-level visual feature information and label information layer by layer, and obtain the hierarchical feature representation of the training images through parameter learning of the deep network;
Step 4: for each test image in the test set, extract its bottom-level visual feature, then obtain its hierarchical feature representation through the deep network, and finally predict its labeling information from the hierarchical feature representation of the test image.
2. The method according to claim 1, characterized in that the bottom-level visual feature of a training image is its scale-invariant feature transform (SIFT) feature.
3. The method according to claim 1, characterized in that the number of layers of the deep network is L and the number of layers of the label hierarchy is K, where L > K; the representations of the layers of the deep network are denoted h = {h^(0), ..., h^(L)}, where h^(0) is the bottom-level visual feature X of the image; and the representations of the layers corresponding to the label hierarchy are denoted y = {y^(L-K+1), ..., y^(L)}.
4. The method according to claim 3, characterized in that step 3 comprises the following steps:
Step 3.1: by building autoencoders, preliminarily adjust the parameters of the deep network from the h^(0) layer to the h^(L-K+1) layer based on the reconstruction error;
Step 3.2: for the layers of the deep network from h^(L-K+1) up to the top layer h^(L), combine a layer of the deep network, such as the h^(l) layer, with the corresponding layer of the label hierarchy, such as the y^(l) layer, and perform feature fusion and the adjustment of the relevant parameters of the deep network.
5. The method according to claim 4, characterized in that step 3.1 further comprises the following steps:
Step 3.1.1: from the h^(0) layer up to the h^(L-K+1) layer, build an autoencoder between every two adjacent layers; through the autoencoder, the representation of the upper layer is obtained by mapping the representation of the lower layer;
Step 3.1.2: map the representation of the upper layer back to obtain the reconstructed representation of the lower layer;
Step 3.1.3: adjust the parameters of the deep network according to the error between the correct representation and the reconstructed representation, up to the h^(L-K+1) layer.
6. The method according to claim 5, characterized in that in step 3.1.3 the parameters of the deep network are adjusted by minimizing the reconstruction cross entropy.
7. The method according to claim 4, characterized in that step 3.2 further comprises the following steps:
Step 3.2.1: use the labels of a layer y^(l) of the label hierarchy to adjust the parameters of the deep network from h^(0) to h^(l);
Step 3.2.2: obtain the feature representation of the h^(l+1) layer by fusion learning on the representations of the h^(l) and y^(l) layers, and adjust the relevant parameters of the deep network, up to the h^(L) layer.
8. The method according to claim 7, characterized in that in steps 3.2.1 and 3.2.2, based on the cross-entropy loss, the back-propagation algorithm is used to adjust the parameters of the deep network.
9. The method according to claim 7, characterized in that in step 3.2.2 the representations of the h^(l) and y^(l) layers are combined and, together with the representation of the h^(l+1) layer, form an autoencoder.
10. The method according to claim 1, characterized in that step 4 further comprises the following steps:
Step 4.1: extract the bottom-level visual feature of the test image;
Step 4.2: use the deep network to obtain the hierarchical feature representation of the bottom-level visual feature of the test image;
Step 4.3: use the hierarchical feature representation of the test image to predict the label information of the test image.
CN201410290316.9A 2014-06-25 2014-06-25 Image labeling method based on layer-by-layer label fusing deep network Pending CN104021224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410290316.9A CN104021224A (en) 2014-06-25 2014-06-25 Image labeling method based on layer-by-layer label fusing deep network


Publications (1)

Publication Number Publication Date
CN104021224A 2014-09-03

Family

ID=51437978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410290316.9A Pending CN104021224A (en) 2014-06-25 2014-06-25 Image labeling method based on layer-by-layer label fusing deep network

Country Status (1)

Country Link
CN (1) CN104021224A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120233159A1 (en) * 2011-03-10 2012-09-13 International Business Machines Corporation Hierarchical ranking of facial attributes
CN103544392A (en) * 2013-10-23 2014-01-29 电子科技大学 Deep learning based medical gas identifying method
CN103593474A (en) * 2013-11-28 2014-02-19 中国科学院自动化研究所 Image retrieval ranking method based on deep learning
CN103823845A (en) * 2014-01-28 2014-05-28 浙江大学 Method for automatically annotating remote sensing images on basis of deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhaoquan Yuan et al., "Tag-aware image classification via nested deep belief nets," IEEE International Conference on Multimedia and Expo. *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104572940B (en) * 2014-12-30 2017-11-21 中国人民解放军海军航空工程学院 A kind of image automatic annotation method based on deep learning and canonical correlation analysis
CN104572940A (en) * 2014-12-30 2015-04-29 中国人民解放军海军航空工程学院 Automatic image annotation method based on deep learning and canonical correlation analysis
CN105631479A (en) * 2015-12-30 2016-06-01 中国科学院自动化研究所 Imbalance-learning-based depth convolution network image marking method and apparatus
CN105631479B (en) * 2015-12-30 2019-05-17 中国科学院自动化研究所 Depth convolutional network image labeling method and device based on non-equilibrium study
CN106570910A (en) * 2016-11-02 2017-04-19 南阳理工学院 Auto-encoding characteristic and neighbor model based automatic image marking method
CN106570910B (en) * 2016-11-02 2019-08-20 南阳理工学院 Based on the image automatic annotation method from coding characteristic and Neighborhood Model
CN108595558B (en) * 2018-04-12 2022-03-15 福建工程学院 Image annotation method based on data equalization strategy and multi-feature fusion
CN108595558A (en) * 2018-04-12 2018-09-28 福建工程学院 A kind of image labeling method of data balancing strategy and multiple features fusion
CN108875934A (en) * 2018-05-28 2018-11-23 北京旷视科技有限公司 A kind of training method of neural network, device, system and storage medium
CN109271539A (en) * 2018-08-31 2019-01-25 华中科技大学 A kind of image automatic annotation method and device based on deep learning
WO2020073952A1 (en) * 2018-10-10 2020-04-16 腾讯科技(深圳)有限公司 Method and apparatus for establishing image set for image recognition, network device, and storage medium
US11853352B2 (en) 2018-10-10 2023-12-26 Tencent Technology (Shenzhen) Company Limited Method and apparatus for establishing image set for image recognition, network device, and storage medium
CN111583321A (en) * 2019-02-19 2020-08-25 富士通株式会社 Image processing apparatus, method and medium
CN112331314A (en) * 2020-11-25 2021-02-05 中山大学附属第六医院 Image annotation method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN104021224A (en) Image labeling method based on layer-by-layer label fusing deep network
CN106650789B (en) Image description generation method based on depth LSTM network
CN108009285B (en) Forest Ecology man-machine interaction method based on natural language processing
CN105631479A (en) Imbalance-learning-based depth convolution network image marking method and apparatus
CN103886046B (en) Automatic semanteme extraction method for Web data exchange
Zhang et al. Fast and accurate land-cover classification on medium-resolution remote-sensing images using segmentation models
CN101866337A (en) Part-or-speech tagging system, and device and method thereof for training part-or-speech tagging model
CN109359297A (en) A kind of Relation extraction method and system
CN106708802A (en) Information recommendation method and system
CN107679221A (en) Towards the time-space data acquisition and Services Composition scheme generation method of mitigation task
CN107194422A (en) A kind of convolutional neural networks relation sorting technique of the forward and reverse example of combination
CN108932322A (en) A kind of geographical semantics method for digging based on text big data
CN103942274B (en) A kind of labeling system and method for the biologic medical image based on LDA
CN113806537B (en) Commodity category classification method and device, equipment, medium and product thereof
CN107045532A (en) The visual analysis method of space-time geographical space
CN113392864B (en) Model generation method, video screening method, related device and storage medium
CN116611131B (en) Automatic generation method, device, medium and equipment for packaging graphics
CN103440352A (en) Method and device for analyzing correlation among objects based on deep learning
CN102521227A (en) Image annotation reinforcing method based on user information modeling
CN113254652A (en) Social media posting authenticity detection method based on hypergraph attention network
CN104484347A (en) Geographic information based hierarchical visual feature extracting method
Douglas et al. Companion encyclopedia of geography: From the local to the global
Zhu et al. A flood knowledge-constrained large language model interactable with GIS: enhancing public risk perception of floods
Kang et al. Artificial intelligence studies in cartography: a review and synthesis of methods, applications, and ethics
CN103218460A (en) Image label complementing method based on optimal linear sparsity reconstruction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140903