CN102592141A - Method for shielding face in dynamic image - Google Patents

Method for shielding face in dynamic image

Info

Publication number
CN102592141A
CN102592141A · CN2012100017944A · CN201210001794A
Authority
CN
China
Prior art keywords
face
image
template
people
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100017944A
Other languages
Chinese (zh)
Inventor
张敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology Changshu Research Institute Co Ltd
Original Assignee
Nanjing University of Science and Technology Changshu Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology Changshu Research Institute Co Ltd
Priority to CN2012100017944A
Publication of CN102592141A
Legal status: Pending

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of computer vision and image processing and discloses a method for shielding a face in a dynamic image. The method comprises the following steps: first, constructing skin-color segmentation regions and segmenting by skin color to obtain the image positions and sizes of multiple candidate face regions; second, performing template matching by region scanning to verify the face region; and finally, blurring the face region to produce a mosaic effect. The method runs quickly, is largely unaffected by changes in pose, size, expression, and the like, and is suitable for applications with demanding real-time requirements.

Description

Method for shielding a face in a dynamic image
Technical field
The present invention relates to the field of computer vision and image processing, and in particular to a technique for shielding faces in dynamic images.
Background technology
In certain special applications, the face in a dynamic image must be shielded (or covered with a mosaic). For example, to protect the privacy of an interviewee in a news report, the interviewee's face image must be shielded (or blurred). The usual practice is to set a region around the face position in the image and average the image inside that region over blocks of a certain size, producing a blurring effect. However, because the interviewee moves, the blurred region has to be adjusted dynamically during news post-production. Doing this manually means a large workload for the producer and also carries a technical risk: the shielding window may fail to fully cover the face and expose the interviewee's privacy, or the blurred region may be too large and degrade the quality of the news picture.
To shield the face in a dynamic image automatically, automatic face detection must first be applied: the image is searched with a suitable strategy to determine the position and size of the face, and that part of the image is then blurred.
The difficulties of automatic face detection fall into two broad classes. One arises from the intrinsic variability of faces: faces exhibit quite complex variations in detail, different appearances such as face shape and skin color, different expressions such as open or closed eyes and mouths, and occlusions of the face such as glasses, hair, head ornaments, and other external objects. The other arises from external conditions: differences in imaging angle produce varied face poses, such as in-plane rotation, depth rotation, and up-and-down rotation, of which depth rotation has the larger influence; illumination affects the brightness, contrast, and shadows in the image; and imaging conditions matter, such as the focal length and imaging distance of the camera and the way the image is acquired.
Summary of the invention
The object of the invention is to provide a method that automatically shields (or mosaics) a face in a dynamic image.
1. Skin-color region segmentation
Skin color is an important cue for the face. It does not depend on fine facial details, remains applicable under rotation, expression changes, and similar conditions, is relatively stable, and is distinct from the colors of most background objects. For a color image, once a skin-color model has been established, skin detection can be performed first; the detected skin pixels are then grouped by their chromatic similarity and spatial correlation to segment candidate face regions.
In simple cases, region segmentation can be completed using only the clustering property of skin pixels. In more complex cases, skin-color segmentation must deal with two problems: because of illumination and the facial organs, a face may be split into several mutually disconnected skin-color regions, and a face region may be connected to other skin-colored regions. A cluster-merge-verify strategy is a common solution to this type of problem: skin pixels are first clustered into regions under fairly strict chromatic-consistency and geometric constraints; the regions are then merged according to certain rules; and other features are used for verification after or during the merging.
The advantage of the skin-color segmentation method is that the system runs fast, is little affected by changes in pose, size, expression, and the like, and is suitable for applications with strict real-time requirements; however, it is sensitive to illumination conditions and to the characteristics of the image capture device, and is easily disturbed by environmental factors. It is therefore suitable for coarse localization of the face region.
2. Face verification based on template matching
After skin-color segmentation, the geometric or gray-level features of each region need to be used to verify whether the region is a face, so as to exclude regions that are merely skin-colored.
Template-matching methods compare the similarity between a target template and a candidate image region directly on the gray-level features of the image, whereas geometric-feature matching methods detect a face by comparing the similarity of certain features extracted from the image, such as the eyes, nose, and mouth. Template matching is simple and intuitive and is more adaptable than geometric-feature matching.
The main work in generating the templates is scale transformation and gray-distribution standardization of the images. Two parameters characterizing the image's gray distribution are considered, the mean and the variance; they are adjusted to set values so as to eliminate the influence of illumination and other factors on the face images at capture time. For template matching, this gray-distribution standardization is more effective than the commonly used histogram equalization, because it unifies exactly these two key numerical characteristics, the mean and the variance.
Several frontal, upright face images are chosen, and the face regions are cropped out as face samples. After scale normalization (36 × 36) and gray-distribution standardization, the gray values of all face samples are averaged to obtain the original 36 × 36 average frontal face template.
Considering the importance of the eyes among facial features, the eye part of the original average face template is copied and a 36 × 12 eye region is cropped out; after gray-distribution standardization it is used as the eye template.
The original face template is stretched to length-to-width ratios of 1:0.9, 1:1, and 1:1.1; after gray-distribution standardization, each stretched version is used as a face template, so as to adapt to faces of different shapes.
Each template is matched against image regions of all possible scales and shapes (aspect ratios); a region that satisfies the conditions and reaches the matching-degree threshold is taken as a face. The correlation coefficient is used as the decision criterion for template matching.
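For illustration only, the Python sketch below shows one way the template-generation step described above could be realized. The input face crops, the standard mean/variance values (128 and 64), the row range used for the eye strip, and the helper names are assumptions for this sketch and are not specified in the patent.

```python
import numpy as np
import cv2  # assumed dependency, used only for resizing


def standardize_gray(img, mu0=128.0, sigma0=64.0):
    """Gray-distribution standardization: shift/scale to a standard mean and deviation."""
    mu, sigma = img.mean(), img.std()
    return (sigma0 / max(sigma, 1e-6)) * (img.astype(np.float64) - mu) + mu0


def make_templates(face_samples, size=36):
    """Build the 36x36 average face template, the 36x12 eye template, and the
    aspect-stretched variants from a list of cropped frontal-face gray images."""
    stack = [standardize_gray(cv2.resize(s, (size, size))) for s in face_samples]
    avg_face = np.mean(stack, axis=0)

    # Eye template: a 12-row strip of the average face (rows 6..17 is an
    # assumption; the patent only states that the eye part is copied out).
    eye_template = standardize_gray(avg_face[6:18, :])

    # Length-to-width ratios 1:0.9, 1:1 and 1:1.1 to cover differently shaped faces.
    face_templates = []
    for ratio in (0.9, 1.0, 1.1):
        stretched = cv2.resize(avg_face, (size, int(round(size * ratio))))
        face_templates.append(standardize_gray(stretched))
    return eye_template, face_templates
```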
3. Face shielding method
(1) Create the skin-color segmentation regions
Skin-color segmentation yields the image positions and sizes of several candidate face regions; each candidate face region then serves as a search region for template matching.
(2) Perform template matching by region scanning
Each search region is scanned. After gray-distribution standardization preprocessing, the eye template is matched first; for search regions whose correlation coefficient exceeds the given threshold, the face templates at the various scales are then matched. The scanned region whose correlation is largest and exceeds the face threshold is marked as the face position, and the position and size of that region are saved.
(3) Blur the face region
For the face region, whose image position and size are known from template matching, the image in this region is averaged over blocks of a certain size, producing a blurring or mosaic effect (a sketch of this step follows below).
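As a rough illustration of step (3), the following sketch replaces each tile of the detected face rectangle with its mean value. The rectangle coordinates, the block size, and the function name are illustrative; the patent does not fix a particular block size.

```python
import numpy as np


def mosaic_region(frame, x, y, w, h, block=8):
    """Blur the face rectangle (x, y, w, h) by averaging the image over
    block x block tiles, producing the mosaic effect."""
    roi = frame[y:y + h, x:x + w].astype(np.float64)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = roi[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1))  # per-channel mean of the tile
    frame[y:y + h, x:x + w] = roi.astype(frame.dtype)
    return frame
```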
Description of drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is the flow chart of skin-color region segmentation.
Fig. 3 is an original color image.
Fig. 4 is the image after skin-color segmentation.
Fig. 5 shows the candidate face regions.
Fig. 6 shows the verified face region.
Embodiment
Embodiment 1:
The flow of the method of the invention is shown in Fig. 1, and the flow of skin-color region segmentation is shown in Fig. 2; it comprises the following steps:
(1) Read in the original color image, as shown in Fig. 3;
(2) Scan the image to obtain the R, G, B component values of each pixel of the color image and compute its gray value according to formula (1); count the number of pixels of each gray value in the whole image, take the brightest 5% of pixels, derive the light-compensation coefficient from them, and adjust the R, G, B values of every pixel with this coefficient;
Gray=R×0.3+G×0.59+B×0.11 (1)
(3) Transform each pixel into the YCbCr color space according to formula (2) and compute its Y, Cb, Cr values;
$$\begin{bmatrix}Y\\C_b\\C_r\\1\end{bmatrix}=\begin{bmatrix}0.299&0.587&0.114&0\\-0.1687&-0.3313&0.500&128\\0.500&-0.4187&-0.0813&128\\0&0&0&1\end{bmatrix}\begin{bmatrix}R\\G\\B\\1\end{bmatrix}\qquad(2)$$
(4) If Y lies in [125, 188], Cb and Cr remain unchanged; otherwise adjust the Cb and Cr values according to formulas (3)~(6);
$$\bar{C}_r(Y)=\begin{cases}154-\dfrac{(K_l-Y)(154-144)}{K_l-Y_{\min}}, & Y<K_l\\[2mm]154-\dfrac{(Y-K_h)(154-132)}{Y_{\max}-K_h}, & Y\ge K_h\end{cases}\qquad(3)$$

$$\bar{C}_b(Y)=\begin{cases}108+\dfrac{(K_l-Y)(118-108)}{K_l-Y_{\min}}, & Y<K_l\\[2mm]108+\dfrac{(Y-K_h)(118-108)}{Y_{\max}-K_h}, & Y\ge K_h\end{cases}\qquad(4)$$

$$W_{c_i}(Y)=\begin{cases}WL_{c_i}+\dfrac{(Y-Y_{\min})(W_{c_i}-WL_{c_i})}{K_l-Y_{\min}}, & Y<K_l\\[2mm]WH_{c_i}+\dfrac{(Y_{\max}-Y)(W_{c_i}-WH_{c_i})}{Y_{\max}-K_h}, & Y\ge K_h\end{cases}\qquad(5)$$
(5) If Cb lies in [77, 127] and Cr lies in [133, 173], set the pixel to white; otherwise set it to black;
(6) Apply steps (3), (4), and (5) to every pixel in the image to obtain the image after skin-color segmentation, as shown in Fig. 4, and the candidate face regions, as shown in Fig. 5 (see the sketch after this list).
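The following Python sketch walks through steps (2), (3), and (5) above for an RGB image; it is a simplified illustration, not the patented implementation. In particular, the luma-dependent Cb/Cr adjustment of step (4) (formulas (3)~(6)) is omitted, and scaling the brightest 5% of pixels to a mean gray of 255 is an assumption about how the light-compensation coefficient is derived.

```python
import numpy as np


def skin_mask(rgb):
    """Return a boolean mask that is True for pixels classified as skin."""
    rgb = rgb.astype(np.float64)

    # Step (2): gray value by formula (1), then light compensation using the
    # brightest 5% of pixels as a reference white (assumed rule).
    gray = 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]
    top = gray >= np.percentile(gray, 95)
    coeff = 255.0 / max(gray[top].mean(), 1e-6)
    rgb = np.clip(rgb * coeff, 0, 255)

    # Step (3): RGB -> YCbCr by formula (2).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = -0.1687 * r - 0.3313 * g + 0.500 * b + 128
    cr = 0.500 * r - 0.4187 * g - 0.0813 * b + 128

    # Step (5): skin if Cb in [77, 127] and Cr in [133, 173].
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```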
Embodiment 2:
Let the gray-value matrix of an image be D[W][H], where W and H are the width and height of the image. The average gray value $\bar{\mu}$ of the image is expressed as

$$\bar{\mu}=\frac{1}{W\cdot H}\sum_{i=0}^{W-1}\sum_{j=0}^{H-1}D[i][j]\qquad(7)$$

and the variance of the gray distribution, $\bar{\sigma}^2$, is expressed as

$$\bar{\sigma}^2=\frac{1}{W\cdot H}\sum_{i=0}^{W-1}\sum_{j=0}^{H-1}\bigl(D[i][j]-\bar{\mu}\bigr)^2\qquad(8)$$
Gray-distribution standardization transforms the gray mean and gray variance of the image to the standard values $\mu_0$ and $\sigma_0$; each pixel's gray value is transformed according to formula (9), giving the gray-distribution-standardized image.
$$\hat{D}[i][j]=\frac{\sigma_0}{\bar{\sigma}}\bigl(D[i][j]-\bar{\mu}\bigr)+\mu_0\qquad(9)$$
where 0 ≤ i < W and 0 ≤ j < H.
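A direct Python transcription of formulas (7)–(9), given as an illustrative sketch; μ₀ and σ₀ are the chosen standard mean and standard deviation.

```python
import numpy as np


def standardize_distribution(D, mu0, sigma0):
    """Transform the gray matrix D so that its mean becomes mu0 and its
    standard deviation becomes sigma0 (formulas (7)-(9))."""
    D = D.astype(np.float64)
    mu = D.mean()                              # formula (7)
    sigma = np.sqrt(((D - mu) ** 2).mean())    # square root of formula (8)
    return (sigma0 / max(sigma, 1e-6)) * (D - mu) + mu0   # formula (9)
```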
Embodiment 3:
Suppose the gray matrix of the face template is T[M][N], with gray mean $\mu_T$ and variance $\sigma_T^2$, and the gray matrix of the image to be verified is R[M][N], with gray mean $\mu_R$ and variance $\sigma_R^2$. The correlation coefficient r(T, R) between them is
$$r(T,R)=\frac{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl(T[i][j]-\mu_T\bigr)\bigl(R[i][j]-\mu_R\bigr)}{M\cdot N\cdot\sigma_T\cdot\sigma_R}\qquad(10)$$
where

$$\mu_T=\frac{1}{M\cdot N}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}T[i][j],\qquad \sigma_T^2=\frac{1}{M\cdot N}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl(T[i][j]-\mu_T\bigr)^2,$$

$$\mu_R=\frac{1}{M\cdot N}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}R[i][j],\qquad \sigma_R^2=\frac{1}{M\cdot N}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl(R[i][j]-\mu_R\bigr)^2.$$

r(T, R) reflects the angle between the image vectors of the input image region and the template; the larger r(T, R) is, the higher the degree of match between the template and the input image region.
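For illustration, the sketch below implements formula (10) together with the two-stage verification of step (2) of the shielding method (eye template first, then the stretched face templates). The threshold values and the row range of the eye strip are assumptions; the patent does not state them numerically. Inputs are assumed to be grayscale regions that have already been gray-distribution standardized.

```python
import numpy as np
import cv2  # assumed dependency, used only to rescale candidates to template size


def correlation(T, R):
    """Formula (10): normalized correlation between same-sized gray arrays T and R."""
    T = T.astype(np.float64)
    R = R.astype(np.float64)
    num = ((T - T.mean()) * (R - R.mean())).sum()
    return num / (T.size * T.std() * R.std() + 1e-12)


def verify_face(candidate, eye_template, face_templates, eye_thr=0.5, face_thr=0.6):
    """Return True if the candidate region is accepted as a face."""
    # Eye check: rescale the candidate to 36x36 and compare its eye strip
    # (rows 6..17, mirroring the eye-template assumption) against the eye template.
    cand = cv2.resize(candidate.astype(np.float64), (36, 36))
    if correlation(eye_template, cand[6:18, :]) < eye_thr:
        return False
    # Face check: best correlation over the stretched face templates, each
    # compared against the candidate rescaled to that template's size.
    best = max(correlation(t, cv2.resize(candidate.astype(np.float64),
                                         (t.shape[1], t.shape[0])))
               for t in face_templates)
    return best > face_thr
```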

Claims (4)

1. A method for shielding a face in a dynamic image, characterized by comprising the following steps:
(1) obtaining, by skin-color segmentation, the image positions and sizes of several candidate face regions, each candidate face region then serving as a search region for template matching;
(2) scanning each search region; after gray-distribution standardization preprocessing, matching with the eye template; for search regions whose correlation coefficient exceeds the given threshold, then matching with the face templates at the various scales; marking the scanned region whose correlation is largest and exceeds the face threshold as the face position, and saving the position and size of that region;
(3) for the face region, whose image position and size are known from template matching, averaging the image in this region over blocks of a certain size to produce a blurring or mosaic effect.
2. The method for shielding a face in a dynamic image according to claim 1, characterized in that the skin-color segmentation comprises the following steps:
(1) reading in the original color image;
(2) scanning the image to obtain the R, G, B component values of each pixel of the color image and computing its gray value; counting the number of pixels of each gray value in the whole image, taking the brightest 5% of pixels, deriving the light-compensation coefficient from them, and adjusting the R, G, B values of every pixel with this coefficient;
(3) converting each pixel into the YCbCr color space and computing its Y, Cb, Cr values;
(4) if Y lies in [125, 188], leaving Cb and Cr unchanged, otherwise adjusting the Cb and Cr values according to formulas (3)~(6);
$$\bar{C}_r(Y)=\begin{cases}154-\dfrac{(K_l-Y)(154-144)}{K_l-Y_{\min}}, & Y<K_l\\[2mm]154-\dfrac{(Y-K_h)(154-132)}{Y_{\max}-K_h}, & Y\ge K_h\end{cases}\qquad(3)$$

$$\bar{C}_b(Y)=\begin{cases}108+\dfrac{(K_l-Y)(118-108)}{K_l-Y_{\min}}, & Y<K_l\\[2mm]108+\dfrac{(Y-K_h)(118-108)}{Y_{\max}-K_h}, & Y\ge K_h\end{cases}\qquad(4)$$

$$W_{c_i}(Y)=\begin{cases}WL_{c_i}+\dfrac{(Y-Y_{\min})(W_{c_i}-WL_{c_i})}{K_l-Y_{\min}}, & Y<K_l\\[2mm]WH_{c_i}+\dfrac{(Y_{\max}-Y)(W_{c_i}-WH_{c_i})}{Y_{\max}-K_h}, & Y\ge K_h\end{cases}\qquad(5)$$
(5) if Cb lies in [77, 127] and Cr lies in [133, 173], setting the pixel to white, otherwise setting it to black;
(6) applying steps (3), (4), and (5) to every pixel in the image to obtain the image after skin-color segmentation and the candidate face regions.
3. The method for shielding a face in a dynamic image according to claim 1, characterized in that generating the face templates comprises the following steps:
(1) choosing several frontal, upright face images and cropping out the face regions as face samples; after scale normalization (36 × 36) and gray-distribution standardization, averaging the gray values of all face samples to obtain the original 36 × 36 average frontal face template;
(2) considering the importance of the eyes among facial features, copying the eye part of the original average face template, cropping out a 36 × 12 eye region, and using it as the eye template after gray-distribution standardization;
(3) stretching the original face template to length-to-width ratios of 1:0.9, 1:1, and 1:1.1 and, after gray-distribution standardization, using each stretched version as a face template, so as to adapt to faces of different shapes.
4. The method for shielding a face in a dynamic image according to claim 1, characterized in that the decision criterion of the template-matching face verification is as follows:
let the gray matrix of the face template be T[M][N], with gray mean $\mu_T$ and variance $\sigma_T^2$, and the gray matrix of the image to be verified be R[M][N], with gray mean $\mu_R$ and variance $\sigma_R^2$; the correlation coefficient between them is
$$r(T,R)=\frac{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl(T[i][j]-\mu_T\bigr)\bigl(R[i][j]-\mu_R\bigr)}{M\cdot N\cdot\sigma_T\cdot\sigma_R}\qquad(10)$$
where r(T, R) reflects the angle between the image vectors of the input image region and the template; the larger r(T, R) is, the higher the degree of match between the template and the input image region.
CN2012100017944A 2012-01-04 2012-01-04 Method for shielding face in dynamic image Pending CN102592141A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100017944A CN102592141A (en) 2012-01-04 2012-01-04 Method for shielding face in dynamic image


Publications (1)

Publication Number Publication Date
CN102592141A true CN102592141A (en) 2012-07-18

Family

ID=46480751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100017944A Pending CN102592141A (en) 2012-01-04 2012-01-04 Method for shielding face in dynamic image

Country Status (1)

Country Link
CN (1) CN102592141A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1492379A (en) * 2002-10-22 2004-04-28 中国科学院计算技术研究所 Method for covering face of news interviewee using quick face detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘正光: "Research on a face detection algorithm based on skin color segmentation", Computer Engineering *
屠添翼: "Fast face detection and recognition in video surveillance systems", China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Information Science and Technology series *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886549A (en) * 2012-12-21 2014-06-25 北京齐尔布莱特科技有限公司 Method and apparatus for automatic mosaic processing of license plate in picture
WO2015081916A1 (en) * 2013-12-05 2015-06-11 腾讯科技(深圳)有限公司 Media interaction method and device
US10204087B2 (en) 2013-12-05 2019-02-12 Tencent Technology (Shenzhen) Company Limited Media interaction method and apparatus
CN103716707A (en) * 2013-12-10 2014-04-09 乐视网信息技术(北京)股份有限公司 Method for video control and video client
CN104463777B (en) * 2014-11-11 2018-11-06 厦门美图之家科技有限公司 A method of the real time field depth based on face
CN104463777A (en) * 2014-11-11 2015-03-25 厦门美图之家科技有限公司 Human-face-based real-time depth of field method
CN104966266A (en) * 2015-06-04 2015-10-07 福建天晴数码有限公司 Method and system to automatically blur body part
CN104966266B (en) * 2015-06-04 2019-07-09 福建天晴数码有限公司 The method and system of automatic fuzzy physical feeling
WO2017032117A1 (en) * 2015-08-25 2017-03-02 中兴通讯股份有限公司 Image processing method and apparatus
CN106127106A (en) * 2016-06-13 2016-11-16 东软集团股份有限公司 Target person lookup method and device in video
CN106228136A (en) * 2016-07-26 2016-12-14 厦门大学 Panorama streetscape method for secret protection based on converging channels feature
CN106874787B (en) * 2017-01-20 2019-12-24 维沃移动通信有限公司 Image viewing method and mobile terminal
CN106874787A (en) * 2017-01-20 2017-06-20 维沃移动通信有限公司 A kind of image viewing method and mobile terminal
CN107220652A (en) * 2017-05-31 2017-09-29 北京京东尚科信息技术有限公司 Method and apparatus for handling picture
CN107220652B (en) * 2017-05-31 2020-05-01 北京京东尚科信息技术有限公司 Method and device for processing pictures
CN108537194A (en) * 2018-04-17 2018-09-14 谭红春 A kind of expression recognition method of the hepatolenticular degeneration patient based on deep learning and SVM
CN108898051A (en) * 2018-05-22 2018-11-27 广州洪森科技有限公司 A kind of face identification method and system based on video flowing
CN112905812A (en) * 2021-02-01 2021-06-04 上海德拓信息技术股份有限公司 Media file auditing method and system
CN112905812B (en) * 2021-02-01 2023-07-11 上海德拓信息技术股份有限公司 Media file auditing method and system
CN116453173A (en) * 2022-12-16 2023-07-18 南京奥看信息科技有限公司 Picture processing method based on picture region segmentation technology
CN116453173B (en) * 2022-12-16 2023-09-08 南京奥看信息科技有限公司 Picture processing method based on picture region segmentation technology


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20120718)