CN103247042A - Image fusion method based on similar blocks - Google Patents


Info

Publication number
CN103247042A
CN103247042A (application CN201310198572.0A; granted publication CN103247042B)
Authority
CN
China
Prior art keywords
source images
image
similar
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101985720A
Other languages
Chinese (zh)
Other versions
CN103247042B (en)
Inventor
屈小波 (Qu Xiaobo)
李磊 (Li Lei)
赖宗英 (Lai Zongying)
陈忠 (Chen Zhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN201310198572.0A
Publication of CN103247042A
Application granted
Publication of CN103247042B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image fusion method based on similar blocks, relating to digital image processing. The invention provides an image fusion method based on similar blocks that is effective and easy to implement. The method comprises the following steps: (1) constructing a self-similar structure shared by multiple source images: performing similar-block matching across the source images to obtain a shared similar-block structure, the shared similar blocks embodying the self-similarity of the images; and (2) weighting: extracting image features from the shared similar blocks and determining the final pixel value by a weighting method. Because the method exploits the self-similarity of the source images, it is robust to the characteristics of the source images, and parameters can be chosen from a wide range in practical applications. The weighting makes the fusion result smoother.

Description

An image fusion method based on similar blocks
Technical field
The present invention relates to digital image processing, and in particular to a multi-focus image fusion method based on similar blocks.
Background art
Multi-focus digital image processing is widely used in fields such as satellite remote sensing, military applications, and medical imaging. Common multi-focus image fusion methods fall roughly into two classes: spatial-domain methods and transform-domain methods. The advantage of spatial-domain methods is that every pixel of the fused image comes from a source image, so the source information is preserved completely; their drawback is that blocking artifacts often appear. Transform-domain methods can effectively suppress blocking artifacts, but because transform-domain operations are nonlinear, the fusion result is distorted relative to the source images.
Among spatial-domain methods, W. Huang and Z. Jing (Evaluation of focus measures in multi-focus image fusion, Pattern Recognition Letters, vol. 28, pp. 493-500, 2007) compared and summarized common methods and evaluation indices. Against the blocking artifacts that appear in fusion results, these methods usually take corrective measures, for example consistency filtering of the image or region growing.
Among transform-domain methods, Z. Zhang et al. (A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proceedings of the IEEE, vol. 87, pp. 1315-1326, 1999) summarized the common transform-domain methods; these methods, however, often suffer from high sensitivity and difficult parameter selection.
In addition, Qu Xiaobo et al. (Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain, Acta Automatica Sinica, vol. 34, pp. 1508-1514, 2008) successfully combined transform-domain image fusion with neural networks and obtained good results.
In 2006, K. Dabov et al. (Image denoising with block-matching and 3D filtering, Proceedings of SPIE, vol. 6064, pp. 354-365, 2006) proposed a similar-block matching method for the image denoising problem. That method exploits the self-similarity of images and has been successful in denoising.
Summary of the invention
The object of the present invention is to provide an image fusion method based on similar blocks that is effective and easy to implement.
The present invention comprises the following steps:
1) constructing the self-similar structure shared by multiple source images: performing similar-block matching in the source images to obtain a shared similar-block structure, the shared similar blocks embodying the self-similarity of the images;
2) weighting: extracting image features from the shared similar blocks, then determining the final pixel value by a weighting method.
In step 2), the weighting method can be as follows. Let the extracted image sharpness index be C, with $C_1$ and $C_2$ denoting the sharpness of source image 1 and source image 2; let M be a counter array of the same size as the source images, and for the j-th pixel let $M_1^j$ and $M_2^j$ denote the counters of source image 1 and source image 2. Given a shared similar block D, when $C_1 \ge C_2$,

$$M_1^j = M_1^j + 1, \quad j \in D;$$

when $C_1 < C_2$,

$$M_2^j = M_2^j + 1, \quad j \in D.$$

After the features of the entire image have been extracted, let $S_1^j$ and $S_2^j$ denote the values of source image 1 and source image 2 at the j-th pixel; the j-th pixel $F_j$ of the fused image F is then obtained as

$$F_j = \frac{M_1^j S_1^j}{M_1^j + M_2^j} + \frac{M_2^j S_2^j}{M_1^j + M_2^j}.$$
The concrete method is as follows:
1) Construct the self-similar structure shared by multiple source images. The whole image is regarded as a set of regions, which may overlap; each region is regarded as a set of image blocks, which may also overlap. In each region a central block is chosen first, and similarity matching is carried out between the other image blocks and the central block:

$$\eta = \|P_q - P_r\|$$

where $\eta$ denotes the degree of similarity, and $P_q$ and $P_r$ respectively denote the matrices formed by the pixels of the image blocks centered at pixel q and pixel r; $\|A\|$ is the matrix norm, defined as

$$\|A\| = \sqrt{\sum_{i=1}^{I}\sum_{j=1}^{J}|a_{ij}|^2}$$

where $a_{ij}$ denotes the value of each pixel in image block A.

Using the similarity values obtained, image blocks with small mutual differences are sought within one source image to form a similar structure. Let $\eta_{q_i}$ denote the similarity between the image block centered at pixel $q_i$ and the central block, and suppose the similarity values found satisfy

$$\eta_{q_1} \le \eta_{q_2} \le \cdots \le \eta_{q_k} \le \eta_{q_{k+1}} \le \cdots \le \eta_{q_{(n-m+1)^2}}$$

where $(n-m+1)^2$ indicates that a region of size n × n contains $(n-m+1)^2$ image blocks of size m × m. The shared similar structure is then obtained across the different source images. Let $L_i$ denote the similar structure in the i-th source image, with $q_j^i$ denoting the j-th image block similar to the central block in the i-th image, and suppose

$$L_1 = \{q_1^1, q_2^1, \cdots, q_k^1\}, \quad L_2 = \{q_1^2, q_2^2, \cdots, q_k^2\}.$$

Then

$$L_g^{12} = L_1 \cap L_2$$

where $L_1$ denotes the similar structure in source image 1, $L_2$ the similar structure in source image 2, and $L_g^{12}$ the shared similar structure; the shared similar structure embodies the self-similarity of the source images.
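To make the construction concrete, here is a minimal Python/NumPy sketch of step 1 for two co-located regions of two source images. It is an illustrative reading of the matching described above, not the patented implementation; the function names and the exhaustive raster-scan search are assumptions.

```python
import numpy as np

def similar_blocks(region, m, k):
    """Return the top-left offsets of the k blocks (size m x m) in an
    n x n region most similar to its central block, where similarity is
    the Frobenius norm of the difference (the eta defined above)."""
    region = region.astype(np.float64)        # avoid uint8 wrap-around
    n = region.shape[0]
    c = (n - m) // 2                          # top-left offset of the central block
    center = region[c:c + m, c:c + m]
    etas = []
    for i in range(n - m + 1):                # all (n-m+1)^2 candidate blocks
        for j in range(n - m + 1):
            block = region[i:i + m, j:j + m]
            etas.append(((i, j), np.linalg.norm(block - center)))
    etas.sort(key=lambda t: t[1])             # ascending eta = most similar first
    return {pos for pos, _ in etas[:k]}

def shared_structure(region1, region2, m=8, k=16):
    """Shared similar structure for two co-located regions."""
    return similar_blocks(region1, m, k) & similar_blocks(region2, m, k)
```

Blocks are identified by their top-left offsets, so the set intersection directly realizes $L_g^{12} = L_1 \cap L_2$.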
2) Weighting: in the spatial domain the fusion result often exhibits blocking artifacts, and a similar problem exists within a single scale of the transform domain. In general, weighting makes the fusion result smoother. The weighting rule adopted by the present invention is as follows: after image features are extracted from the shared similar structure, let the extracted image feature (here generally a sharpness index) be C, with $C_1$ and $C_2$ denoting the sharpness of source image 1 and source image 2; for the j-th pixel, let $M_1^j$ and $M_2^j$ denote the counters of source image 1 and source image 2. Given a shared similar block D, when $C_1 \ge C_2$,

$$M_1^j = M_1^j + 1, \quad j \in D;$$

when $C_1 < C_2$,

$$M_2^j = M_2^j + 1, \quad j \in D.$$

After the features of the entire image have been extracted, let $S_1^j$ and $S_2^j$ denote the values of source image 1 and source image 2 at the j-th pixel; the j-th pixel $F_j$ of the fused image F is then obtained as

$$F_j = \frac{M_1^j}{M_1^j + M_2^j} \cdot S_1^j + \frac{M_2^j}{M_1^j + M_2^j} \cdot S_2^j$$

where F denotes the fused image and S a source image.

From this formula it follows that each pixel value of the fused image lies in the closed interval whose endpoints are the two source-image pixel values.
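A minimal sketch of this weighting rule, assuming the counters are kept as full-size arrays and each shared similar block D is given as a boolean mask over the image; accumulate and fuse are illustrative names.

```python
import numpy as np

def accumulate(M1, M2, C1, C2, D):
    """Vote for the sharper source on every pixel of the shared block D
    (D is a boolean mask; C1, C2 are the extracted sharpness values)."""
    if C1 >= C2:
        M1[D] += 1
    else:
        M2[D] += 1

def fuse(S1, S2, M1, M2):
    """Elementwise fusion F = M1/(M1+M2) * S1 + M2/(M1+M2) * S2.
    Pixels that never received a vote fall back to the plain average."""
    total = M1 + M2
    w1 = np.where(total > 0, M1 / np.maximum(total, 1), 0.5)
    return w1 * S1 + (1.0 - w1) * S2
```

Since the weights are convex, each fused pixel necessarily stays inside the closed interval spanned by the two source pixel values, as stated above.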
Application in the transform domain is as follows: after a given transform is applied to the source images, the coefficients of the source images at the different scales of that transform are obtained; carrying out the operations of steps 1) and 2) above on the coefficients within each scale constitutes the application of the present invention in the transform domain.
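As an illustration of this transform-domain use, the sketch below applies a generic fuse_band routine (standing in for steps 1) and 2) on the coefficients) at every scale of a wavelet decomposition. The patent does not fix a particular transform; the choice of PyWavelets and the db2 wavelet here is an assumption.

```python
import pywt  # PyWavelets

def fuse_transform_domain(S1, S2, fuse_band, wavelet="db2", level=3):
    """Fuse two images band-by-band in a wavelet domain. fuse_band(a, b)
    returns fused coefficients; it would wrap the similar-block matching
    and weighting of steps 1) and 2)."""
    c1 = pywt.wavedec2(S1, wavelet, level=level)
    c2 = pywt.wavedec2(S2, wavelet, level=level)
    fused = [fuse_band(c1[0], c2[0])]                  # approximation band
    for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
        fused.append((fuse_band(h1, h2),               # horizontal detail
                      fuse_band(v1, v2),               # vertical detail
                      fuse_band(d1, d2)))              # diagonal detail
    return pywt.waverec2(fused, wavelet)
```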
The beneficial effects of the invention are as follows: because the invention exploits the self-similarity of the source images themselves, it is robust to the characteristics of the source images, and in practical applications its parameters can be chosen from a wide range. The weighting rule above makes the fusion result smoother.
Description of drawings
Fig. 1 is source image 1.
Fig. 2 is source image 2.
Fig. 3 is an example of similar-block matching and of forming the shared similar structure.
Fig. 4 shows the weights of source image 1 obtained by the weighting rule.
Fig. 5 is the fusion result obtained in the spatial domain.
Embodiment
This embodiment of the invention performs the fusion operation on 256-level grayscale images. The grayscale images used are two 512 × 512 pictures in TIF format; the two source images are shown in Fig. 1 and Fig. 2.
Because the present invention is quite robust to its parameters, typical parameter values are chosen for the implementation. The parameters are listed in Table 1.
Table 1: parameter values in the embodiment. [The table is an image in the original; the values stated in the text are n = 16, m = 8, k = 16, step = 1, and a border extension of 20 pixels.]
The specific implementation process is as follows:
Step 1: construct the self-similar structure shared by multiple source images.
First the source images are extended at the border; this embodiment uses symmetric extension with the image boundary as the axis of symmetry, and the extension width is 20 pixels.
After border extension each source image becomes 552 × 552. Each source image is divided into regions of size n × n (n = 16 in this embodiment), and each region is divided into image blocks of size m × m (m = 8 in this embodiment); the spacing between any two adjacent regions, and between any two adjacent image blocks, is 1 pixel. Each source image therefore contains (552 − 16 + 1)² regions, and each region contains (16 − 8 + 1)² image blocks.
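A short sketch of the border extension and the bookkeeping above, assuming NumPy; np.pad with mode="symmetric" mirrors the image about its boundary, matching the symmetric extension described.

```python
import numpy as np

def extend_border(img, pad=20):
    """Symmetric border extension: a 512 x 512 image becomes 552 x 552."""
    return np.pad(img, pad, mode="symmetric")

# Region/block counts for the embodiment's parameters (stride 1):
size, n, m = 552, 16, 8
regions_per_image = (size - n + 1) ** 2   # (552-16+1)^2 = 288,369 regions
blocks_per_region = (n - m + 1) ** 2      # (16-8+1)^2  = 81 blocks
```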
As shown in Fig. 3, the outermost frame $f_1$ represents source image 1, and $R_1$ inside $f_1$ represents one region of source image 1. Within region $R_1$ a central block 31 is chosen first; it is marked with striped shading in Fig. 3. $f_2$ represents source image 2, and $R_2$ represents the region of source image 2 at the same position as $R_1$; $R_2$ likewise contains a central block.
Within a region, similarity matching is carried out between the other image blocks and the central block:

$$\eta = \|P_q - P_r\|$$

where $\eta$ denotes the degree of similarity, and $P_q$ and $P_r$ respectively denote the matrices formed by the pixels of the image blocks centered at pixel q and pixel r; $\|A\|$ is the matrix norm, defined as

$$\|A\| = \sqrt{\sum_{i=1}^{I}\sum_{j=1}^{J}|a_{ij}|^2}$$

where $a_{ij}$ denotes the value of each element of matrix A.

Using the similarity values obtained, image blocks with small mutual differences are sought within one source image to form a similar structure. Let $\eta_{q_i}$ denote the similarity between the image block centered at pixel $q_i$ and the central block, and suppose the similarity values found satisfy

$$\eta_{q_1} \le \eta_{q_2} \le \cdots \le \eta_{q_k} \le \eta_{q_{k+1}} \le \cdots \le \eta_{q_{(n-m+1)^2}}$$

where $(n-m+1)^2$ indicates that a region of size n × n contains $(n-m+1)^2$ image blocks of size m × m. Once this ordering is found, the k image blocks most similar to the central block are chosen from all the image blocks (k = 16 in this embodiment, although for visual clarity Fig. 3 depicts the case k = 5), as shown in Fig. 3.

The shared similar structure is then obtained across the different source images. Let $L_i$ denote the similar structure in the i-th source image, with $q_j^i$ denoting the j-th image block similar to the central block in the i-th image; in Fig. 3 the five hollow rectangles around the central block represent the five most similar blocks selected, called similar blocks 32. The sets of similar blocks are formed in each of the two images, and the two sets are intersected:

$$L_1 = \{q_1^1, q_2^1, \cdots, q_k^1\}, \quad L_2 = \{q_1^2, q_2^2, \cdots, q_k^2\}$$

$$L_g^{12} = L_1 \cap L_2$$

where $L_1$ denotes the set of similar blocks in source image 1, called the similar structure of source image 1; $L_2$ denotes the similar structure of source image 2; and $L_g^{12}$ denotes the shared similar structure, drawn with netted shading as structure 33 in Fig. 3. The shared similar structure 33 embodies the self-similarity of the source images.
Within the shared similar structure of the two images, a sharpness index is extracted. This embodiment uses the sum-modified-Laplacian (SML), defined as

$$\mathrm{SML}(x_0, y_0) = \sum_{x = x_0 - N}^{x_0 + N} \; \sum_{y = y_0 - N}^{y_0 + N} \nabla_{ML}^2 f(x, y)$$

where $\nabla_{ML}^2$ denotes the modified Laplacian operator, defined as

$$\nabla_{ML}^2 f(x_0, y_0) = \left|2 f(x_0, y_0) - f(x_0 - \mathrm{step}, y_0) - f(x_0 + \mathrm{step}, y_0)\right| + \left|2 f(x_0, y_0) - f(x_0, y_0 - \mathrm{step}) - f(x_0, y_0 + \mathrm{step})\right|$$

where $f(x_0, y_0)$ denotes the pixel value at position $(x_0, y_0)$, and step denotes the step length of each operation, taken here as 1.
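A minimal NumPy sketch of the modified Laplacian and the SML above, with step = 1 as in the text; the window half-width N is a free parameter here, and wrap-around border handling via np.roll is a simplification.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(f, step=1):
    """|2f - f(x-step,y) - f(x+step,y)| + |2f - f(x,y-step) - f(x,y+step)|."""
    f = f.astype(np.float64)
    return (np.abs(2 * f - np.roll(f, step, axis=0) - np.roll(f, -step, axis=0))
          + np.abs(2 * f - np.roll(f, step, axis=1) - np.roll(f, -step, axis=1)))

def sml(f, N=1, step=1):
    """Sum-modified-Laplacian over a (2N+1) x (2N+1) window per pixel."""
    win = 2 * N + 1
    # uniform_filter gives the window mean; multiplying by the pixel
    # count recovers the window sum.
    return uniform_filter(modified_laplacian(f, step), size=win) * win ** 2
```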
Step 2: weighting
After the SML has been computed over the shared similar structure, let the extracted image feature (here, the SML value) be C, with $C_1$ and $C_2$ denoting the sharpness of source image 1 and source image 2; for the j-th pixel, let $M_1^j$ and $M_2^j$ denote the counters of source image 1 and source image 2. Given a shared similar block D, when $C_1 \ge C_2$,

$$M_1^j = M_1^j + 1, \quad j \in D.$$

When $C_1 < C_2$,

$$M_2^j = M_2^j + 1, \quad j \in D.$$

After the features of the entire image have been extracted, $M_1^j$ and $M_2^j$ are the weights of the two source images at pixel j. The weight matrix $M_1$ of the first source image over all pixels, displayed as an image, is shown in Fig. 4. Let $S_1^j$ and $S_2^j$ denote the values of source image 1 and source image 2 at the j-th pixel; the j-th pixel of the fused image F is then obtained as

$$F_j = \frac{M_1^j}{M_1^j + M_2^j} \cdot S_1^j + \frac{M_2^j}{M_1^j + M_2^j} \cdot S_2^j$$

where F denotes the fused image and S a source image. The fusion result is shown in Fig. 5.
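Putting the two steps together, the following end-to-end sketch of the spatial-domain pipeline reuses the illustrative helpers from the earlier sketches (extend_border, shared_structure, sml, accumulate, fuse). The exhaustive stride-1 loop follows the text literally and is far too slow for production; a practical implementation would restrict or vectorize the search.

```python
import numpy as np

def fuse_images(S1, S2, n=16, m=8, k=16, pad=20):
    """Spatial-domain fusion following steps 1 and 2 of the embodiment."""
    E1, E2 = extend_border(S1, pad), extend_border(S2, pad)
    M1 = np.zeros(E1.shape, dtype=np.float64)   # per-pixel vote counters
    M2 = np.zeros(E2.shape, dtype=np.float64)
    C1, C2 = sml(E1), sml(E2)                   # sharpness maps (SML)
    H, W = E1.shape
    for r in range(H - n + 1):                  # stride-1 regions, as in the text
        for c in range(W - n + 1):
            R1, R2 = E1[r:r + n, c:c + n], E2[r:r + n, c:c + n]
            for (i, j) in shared_structure(R1, R2, m, k):
                D = np.zeros(E1.shape, dtype=bool)
                D[r + i:r + i + m, c + j:c + j + m] = True
                accumulate(M1, M2, C1[D].sum(), C2[D].sum(), D)
    F = fuse(E1.astype(np.float64), E2.astype(np.float64), M1, M2)
    return F[pad:-pad, pad:-pad]                # crop back to the original size
```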

Claims (4)

1. An image fusion method based on similar blocks, characterized by comprising the following steps:
1) constructing the self-similar structure shared by multiple source images: performing similar-block matching in the source images to obtain a shared similar-block structure, the shared similar blocks embodying the self-similarity of the images;
2) weighting: extracting image features from the shared similar blocks, then determining the final pixel value by a weighting method.
2. The image fusion method based on similar blocks according to claim 1, characterized in that in step 2) the weighting method is: let the extracted image sharpness index be C, with $C_1$ and $C_2$ denoting the sharpness of source image 1 and source image 2; let M be a counter array of the same size as the source images, and for the j-th pixel let $M_1^j$ and $M_2^j$ denote the counters of source image 1 and source image 2; given a shared similar block D, when $C_1 \ge C_2$:

$$M_1^j = M_1^j + 1, \quad j \in D;$$

when $C_1 < C_2$:

$$M_2^j = M_2^j + 1, \quad j \in D;$$

after the features of the entire image have been extracted, let $S_1^j$ and $S_2^j$ denote the values of source image 1 and source image 2 at the j-th pixel; the j-th pixel $F_j$ of the fused image F is then obtained as

$$F_j = \frac{M_1^j S_1^j}{M_1^j + M_2^j} + \frac{M_2^j S_2^j}{M_1^j + M_2^j}.$$
3. The image fusion method based on similar blocks according to claim 1, characterized in that in step 1) the concrete method of constructing the self-similar structure shared by multiple source images is as follows: the whole image is regarded as a set of regions, which may overlap; each region is regarded as a set of image blocks, which may also overlap; in each region a central block is chosen first, and similarity matching is carried out between the other image blocks and the central block:

$$\eta = \|P_q - P_r\|$$

where $\eta$ denotes the degree of similarity, $P_q$ and $P_r$ respectively denote the matrices formed by the pixels of the image blocks centered at pixel q and pixel r, and $\|A\|$ is the matrix norm, defined as

$$\|A\| = \sqrt{\sum_{i=1}^{I}\sum_{j=1}^{J}|a_{ij}|^2}$$

where $a_{ij}$ denotes the value of each pixel in image block A;

using the similarity values obtained, image blocks with small mutual differences are sought within one source image to form a similar structure; let $\eta_{q_i}$ denote the similarity between the image block centered at pixel $q_i$ and the central block, and suppose the similarity values found satisfy

$$\eta_{q_1} \le \eta_{q_2} \le \cdots \le \eta_{q_k} \le \eta_{q_{k+1}} \le \cdots \le \eta_{q_{(n-m+1)^2}}$$

where $(n-m+1)^2$ indicates that a region of size n × n contains $(n-m+1)^2$ image blocks of size m × m; the shared similar structure is then obtained across the different source images; let $L_i$ denote the similar structure in the i-th source image, with $q_j^i$ denoting the j-th image block similar to the central block in the i-th image, and suppose

$$L_1 = \{q_1^1, q_2^1, \cdots, q_k^1\}, \quad L_2 = \{q_1^2, q_2^2, \cdots, q_k^2\};$$

then

$$L_g^{12} = L_1 \cap L_2$$

where $L_1$ denotes the similar structure in source image 1, $L_2$ the similar structure in source image 2, and $L_g^{12}$ the shared similar structure; the shared similar structure embodies the self-similarity of the source images.
4. The image fusion method based on similar blocks according to claim 1, characterized in that in step 2) the concrete weighting method is: after image features are extracted from the shared similar structure, let the extracted image feature be C, with $C_1$ and $C_2$ denoting the sharpness of source image 1 and source image 2; for the j-th pixel, let $M_1^j$ and $M_2^j$ denote the counters of source image 1 and source image 2; given a shared similar block D, when $C_1 \ge C_2$:

$$M_1^j = M_1^j + 1, \quad j \in D;$$

when $C_1 < C_2$:

$$M_2^j = M_2^j + 1, \quad j \in D;$$

after the features of the entire image have been extracted, let $S_1^j$ and $S_2^j$ denote the values of source image 1 and source image 2 at the j-th pixel; the j-th pixel $F_j$ of the fused image F is then obtained as

$$F_j = \frac{M_1^j}{M_1^j + M_2^j} \cdot S_1^j + \frac{M_2^j}{M_1^j + M_2^j} \cdot S_2^j$$

where F denotes the fused image and S a source image.
CN201310198572.0A 2013-05-24 2013-05-24 Image fusion method based on similar blocks Expired - Fee Related CN103247042B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310198572.0A CN103247042B (en) 2013-05-24 2013-05-24 Image fusion method based on similar blocks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310198572.0A CN103247042B (en) 2013-05-24 2013-05-24 Image fusion method based on similar blocks

Publications (2)

Publication Number Publication Date
CN103247042A true CN103247042A (en) 2013-08-14
CN103247042B CN103247042B (en) 2015-11-11

Family

ID=48926547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310198572.0A Expired - Fee Related CN103247042B (en) 2013-05-24 2013-05-24 A kind of image interfusion method based on similar piece

Country Status (1)

Country Link
CN (1) CN103247042B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402191A (en) * 2002-09-19 2003-03-12 上海交通大学 Multiple focussing image fusion method based on block dividing
CN102005033A (en) * 2010-11-16 2011-04-06 中国科学院遥感应用研究所 Method for suppressing noise by image smoothing
CN202134044U (en) * 2011-07-06 2012-02-01 长安大学 An image splicing device based on extracting and matching of angular point blocks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KOSTADIN DABOV ET AL.: "Image denoising with block-matching and 3D filtering", Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning (Proc. SPIE vol. 6064) *
YAN JINGWEN (闫敬文) ET AL.: "Recent advances in image fusion research" (图像融合研究最新进展), Journal of Xiamen University of Technology (厦门理工学院学报) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327539A (en) * 2015-07-01 2017-01-11 北京大学 Image reconstruction method and device based on example
CN106327539B (en) * 2015-07-01 2019-06-28 北京大学 Image rebuilding method and device based on sample
CN107087107A (en) * 2017-05-05 2017-08-22 中国科学院计算技术研究所 Image processing apparatus and method based on dual camera
CN108537264A (en) * 2018-03-30 2018-09-14 西安电子科技大学 Heterologous image matching method based on deep learning

Also Published As

Publication number Publication date
CN103247042B (en) 2015-11-11

Similar Documents

Publication Publication Date Title
CN101872472B (en) Method for super-resolution reconstruction of facial image on basis of sample learning
CN103279935B (en) Based on thermal remote sensing image super resolution ratio reconstruction method and the system of MAP algorithm
CN104061907B (en) The most variable gait recognition method in visual angle based on the coupling synthesis of gait three-D profile
Li et al. Performance improvement scheme of multifocus image fusion derived by difference images
CN104156957B (en) Stable and high-efficiency high-resolution stereo matching method
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
CN104268847A (en) Infrared light image and visible light image fusion method based on interactive non-local average filtering
CN106204447A (en) The super resolution ratio reconstruction method with convolutional neural networks is divided based on total variance
CN102903098A (en) Depth estimation method based on image definition difference
CN106254722A (en) A kind of video super-resolution method for reconstructing and device
CN102663361A (en) Face image reversible geometric normalization method facing overall characteristics analysis
CN108596975A (en) A kind of Stereo Matching Algorithm for weak texture region
CN103268482B (en) A kind of gesture of low complex degree is extracted and gesture degree of depth acquisition methods
CN104036479A (en) Multi-focus image fusion method based on non-negative matrix factorization
CN105894513B (en) Take the remote sensing image variation detection method and system of imaged object change in time and space into account
CN105913407A (en) Method for performing fusion optimization on multi-focusing-degree image base on difference image
CN104517317A (en) Three-dimensional reconstruction method of vehicle-borne infrared images
CN104933678A (en) Image super-resolution reconstruction method based on pixel intensity
CN107689036A (en) A kind of Real-time image enhancement method based on the bilateral study of depth
CN103095996A (en) Multi-sensor video fusion method based on space-time conspicuousness detection
CN103971354A (en) Method for reconstructing low-resolution infrared image into high-resolution infrared image
CN103247042A (en) Image fusion method based on similar blocks
CN103065291A (en) Image fusion method based on promoting wavelet transform and correlation of pixel regions
Duan et al. Multifocus image fusion using superpixel segmentation and superpixel-based mean filtering
CN103985104A (en) Multi-focusing image fusion method based on higher-order singular value decomposition and fuzzy inference

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151111