CN107016657A - Restoration method for a face picture covered by a reticulate pattern - Google Patents

Restoration method for a face picture covered by a reticulate pattern

Info

Publication number
CN107016657A
Authority
CN
China
Prior art keywords
picture
iris
reticulate pattern
pixel
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710226996.1A
Other languages
Chinese (zh)
Other versions
CN107016657B (en)
Inventor
张宁
伍萍辉
赵亚东
石学超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN201710226996.1A priority Critical patent/CN107016657B/en
Publication of CN107016657A publication Critical patent/CN107016657A/en
Application granted granted Critical
Publication of CN107016657B publication Critical patent/CN107016657B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a restoration method for a face picture covered by a reticulate pattern, characterised in that the method first extracts the reticulate-pattern edges, then removes the reticulate pattern, and finally fills in the removed regions and smooths the whole image, thereby restoring the face. The steps are as follows. Step S1, picture preprocessing: acquire the picture to be processed row by row and obtain its height rol and width row, so that, in pixels, the size of the picture to be processed is rol × row; convert the picture to double format; then process the converted picture so that its height is 220, 88 or 118 and the corresponding width is 178, 72 or 96. Step S2, classify the picture, establish a coordinate system and locate the initial region. Step S3, extract the reticulate-pattern edges by edge detection and remove the reticulate pattern. Step S4, extract masks, fill in the reticulate pattern and smooth the image.

Description

Restoration method for a face picture covered by a reticulate pattern
Technical field
The present invention relates to the technical field of image data processing, and in particular to a restoration method for a face picture covered by a reticulate pattern.
Background technology
Face recognition technology has matured and its applications are increasingly widespread. Current face recognition applications are concentrated mainly on face attendance machines, face-based access control systems and video-based face recognition and monitoring, all of which detect and identify moving targets; recognising the face in a photograph, by contrast, is detection and identification of a static target.
When a certificate photo is converted into digital information for storage, noise such as a fine reticulate pattern on the face is unavoidably introduced, which greatly affects its use. In the past, the reticulate pattern on a certificate photo was removed with Photoshop by manually erasing the reticulate-pattern region and then operating by hand; such work is very inefficient and the labour cost is high.
Current image restoration techniques can be broadly divided into two classes: restoration methods based on diffusion equations and restoration methods based on sample blocks.
Restoration methods based on diffusion equations rely on parametric models or partial differential equations (Xu Liming, Wu Yajuan, Liu Hangjiang. Research on image inpainting techniques based on variational PDEs [J]. Journal of China West Normal University (Natural Science Edition), 2016, 37(3): 343-348). They propagate or diffuse information smoothly from the edge of the damaged region inwards along the local structure, and are mainly used to repair small damaged regions. Such methods include partial-differential-equation algorithms, total variation models and curvature-driven diffusion models.
Restoration methods based on sample blocks (Chang Chen, He Jiannong. An improved Criminisi image inpainting method [J]. Journal of Fuzhou University (Natural Science Edition), 2017, 45(01): 74-79) search the source region for the block that best matches the target block and copy it directly into the damaged region. Because they preserve the consistency of texture features, they are suited to repairing images with large damaged regions.
However, when either of these two main restoration approaches is used to repair a face image with a reticulate pattern, the reticulate-pattern region to be removed must be found and erased in advance and the image then repaired manually; moreover, the erased reticulate-pattern region cannot be fully repaired, and sensitive facial regions, especially the eyes and their surroundings, do not reach a repair quality close to the original image.
Summary of the invention
In view of the deficiencies of the prior art in processing face pictures covered by a reticulate pattern, the technical problem to be solved by the present invention is to provide a restoration method for a face picture covered by a reticulate pattern. Starting from the static target, the method improves and integrates the diffusion-equation-based restoration method, the sample-block-based restoration method and an edge detection algorithm: edge detection is performed first and the character contour is located to complete the removal of the reticulate pattern; masks are then made to prevent later image processing from distorting the protected regions; the removed reticulate-pattern regions are then filled in using an "X-type" structure; and a final smoothing step yields the optimal output image. The method automatically removes the reticulate-pattern noise that appears on the face of a certificate photo and can repair the image quickly; it removes and repairs such noise very well, greatly reduces the time spent manually removing and repairing the reticulate-pattern noise covering a certificate photo, and improves working efficiency.
The technical scheme adopted by the present invention to solve the above technical problem is as follows:
A restoration method for a face picture covered by a reticulate pattern, characterised in that the method first extracts the reticulate-pattern edges, then removes the reticulate pattern, and finally fills in the removed regions and smooths the whole image, thereby restoring the face; the specific steps are as follows:
Step S1, picture preprocessing:
Acquire the picture to be processed row by row and obtain its height rol and width row; in pixels, the size of the picture to be processed is rol × row. Convert the picture to double format. Then process the converted picture so that its height is 220, 88 or 118 and the corresponding width is 178, 72 or 96;
Step S2, classify the picture, establish a coordinate system and locate the initial region:
According to the size of the picture preprocessed in step S1, pictures are divided into three classes, with sizes 220 × 178, 88 × 72 and 118 × 96; determine which of the three classes the preprocessed picture belongs to. Then locate the eye region: taking the top-left corner of the processed picture as the origin, the horizontal direction as the x-axis (values increasing from left to right) and the vertical direction as the y-axis (values increasing from top to bottom), establish an xy coordinate system. Set the following parameters: the left-eye x-coordinate proportionality coefficient is a, the right-eye x-coordinate proportionality coefficient is c, the y-coordinate proportionality coefficient of both eyes is d, and the positioning radius is r. The coefficients are set according to the size of the picture preprocessed in step S1: when the picture size is 220 × 178, a = 0.387, c = 0.645, d = 0.41, r = 14; when it is 88 × 72, a = 0.38, c = 0.65, d = 0.38, r = 4; when it is 118 × 96, a = 0.365, c = 0.645, d = 0.375, r = 8.5. With these parameters the eyes and the surrounding regions of radius r are located, i.e. the initial eye regions are obtained;
Step S3, extract the reticulate-pattern edges by edge detection and remove the reticulate pattern:
Take the picture preprocessed in step S1 and traverse it to obtain the R, G and B channel values of every pixel. Using these values, perform edge detection to compute the gradient difference of each pixel and obtain the character contour region; at the same time, use the gradient difference of each pixel to extract the reticulate-pattern edges and obtain the reticulate-pattern edge region. Once the reticulate-pattern edges are obtained, assign the white pixel value (255, 255, 255) to the detected reticulate-pattern edge region, completing the removal of the reticulate pattern and leaving a blank (white) region;
Step S4, extract masks, fill in the reticulate pattern and smooth the image:
Produce an iris mask and a character-contour edge mask and carry out mask fabrication; smooth the image and output the resulting image;
The concrete steps are:
S41: choose a blank pixel (x0, y0) in the blank region obtained in step S3, then choose 20 neighbouring pixels around (x0, y0) according to the "X-type" structure. Compare and sort these 20 neighbouring pixels by their R channel values and select the four with the largest R channel values; their R channel values, in descending order, are denoted Rout1, Rout2, Rout3 and Rout4, the corresponding G channel values are denoted Gout1, Gout2, Gout3 and Gout4, and the B channel values are denoted Bout1, Bout2, Bout3 and Bout4;
S42: compute the gradient difference between each of the four neighbouring pixels obtained in step S41 and the R, G, B channel values (R0, G0, B0) of the blank pixel (x0, y0) according to formula (1),
where i = 1, 2, 3, 4 and Ti is the gradient difference;
S43: set the character-contour threshold to 155. Compare and sort the gradient differences Ti (T1, T2, T3, T4) obtained in step S42, and compare the largest of them with the set character-contour threshold. If the maximum value among the Ti is greater than the character-contour threshold 155, the character-contour edge mask is obtained and the character-contour edge is masked with it; otherwise no operation is performed and the method proceeds to step S44;
S44: using the parameters set in step S2, locate the initial eye region and choose a 3 × 3 sub-region Ir within it; the R channel values of this sub-region are denoted Ra01, Ra02, ..., Ra09, arranged as shown in the following table,
Ra01 Ra02 Ra03
Ra04 Ra05 Ra06
Ra07 Ra08 Ra09
Carry out a convolution over this initial region; the convolution kernel α is:
Using Gp = α * Ir, compute Gp; Gp = Ra02 + Ra04 + Ra06 + Ra08, i.e. Gp is the sum of the R channel values at the four positions Ra02, Ra04, Ra06 and Ra08;
Set the iris threshold to 530 and compare Gp with it. If Gp < 530, the iris mask region is located, and iris-periphery mask fabrication is then carried out on the iris mask region; if Gp > 530, no operation is performed and the method proceeds to step S45;
S45: carry out iris-periphery mask fabrication on the iris mask region located in step S44. Set the upper iris threshold to 1.3 and the lower iris threshold to 0.75. Choose an iris pixel (x, y) in the iris mask region, then choose ten nearby reference pixels above and below it; the R channel values of these ten reference pixels are R101, R102, R103, R104, R105, R108, R109, R110, R111 and R112, and the positions of R101, R102, R103, R104 and R105 and of R108, R109, R110, R111 and R112 are symmetric about the iris pixel (x, y) in the vertical direction. Using the ratio of formula (6), compute Gpr,
Gpr=(R101+R102+R103+R104+R105)/(R108+R109+R110+R111+R112) (6)
Compare Gpr with the upper and lower iris thresholds. If Gpr is less than the lower iris threshold 0.75 or greater than the upper iris threshold 1.3, carry out iris-periphery mask fabrication on the iris pixel (x, y); if 0.75 ≤ Gpr ≤ 1.3, no operation is performed and the method proceeds to step S46;
S46: choose a 3 × 3 sub-region Ir0 in the whole picture; Ir0 has the same structure as the sub-region Ir in step S44. Perform edge detection on Ir0 by convolution; the horizontal convolution kernel Gx is:
and the vertical convolution kernel Gy is:
Compute Grx and Gry with formulas (7) and (8); Grx and Gry are the R channel responses of the transverse and longitudinal edge detection, respectively. Compute the edge fill-in gradient difference Gr according to formula (9),
Grx=Gx*Ir0 (7)
Gry=Gy*Ir0 (8)
S47: set the edge fill-in threshold to 60 and compare the edge fill-in gradient difference Gr with it. If Gr < 60, take the two largest R channel values Rout1 and Rout2 obtained in step S41, together with the corresponding G channel values Gout1, Gout2 and B channel values Bout1, Bout2, and average each channel according to formulas (10)-(12) to obtain the mean value Rm on the R channel, the mean value Gm on the G channel and the mean value Bm on the B channel,
Rm=(Rout1+Rout2)/2 (10)
Gm=(Gout1+Gout2)/2 (11)
Bm=(Bout1+Bout2)/2 (12)
Use Rm, Gm and Bm respectively to fill in the R, G and B channel values of the detected reticulate-pattern edge; if Gr ≥ 60, proceed to step S48;
S48: choose a smoothing pixel (x1, y1) on the non-reticulate-pattern edge of the whole picture, then choose 20 neighbouring pixels around (x1, y1) according to the "X-type" structure, as in step S41, and select the four with the largest R channel values. Compute the mean of each channel of these four neighbouring pixels with formulas (13)-(15), denoted R1, G1 and B1 respectively. Except for the regions extracted by the iris mask and the character-contour edge mask, replace the R, G and B channel values of the smoothing pixel (x1, y1) with R1, G1 and B1 to smooth the whole picture, and output the resulting image;
R1=(Rout1+Rout2+Rout3+Rout4)/4 (13)
G1=(Gout1+Gout2+Gout3+Gout4)/4 (14)
B1=(Bout1+Bout2+Bout3+Bout4)/4 (15).
In the above restoration method for a face picture covered by a reticulate pattern, the "X-type" structure in step S41 means: taking the blank pixel as the centre, four neighbouring pixels are chosen along each of the four diagonal directions from the corner positions of the blank pixel, and further neighbouring pixels are chosen above and below the blank pixel, one being selected every other pixel; the 20 neighbouring pixels so obtained constitute the "X-type" structure.
In the above restoration method for a face picture covered by a reticulate pattern, the reference pixels of the iris pixel (x, y) in step S45 are chosen as follows: taking the iris pixel (x, y) as the centre, two reference pixels are chosen along each of the four diagonal directions from its corner positions, and one reference pixel is selected every other pixel above and below the iris pixel; 10 reference pixels are chosen in total.
Compared with the prior art, the beneficial effects of the present invention are as follows:
(1) For a face picture covered by a reticulate pattern, the present invention allows the picture to be input directly and the face descreening to be carried out, effectively removing the reticulate pattern and restoring the original face image. It improves and integrates the diffusion-equation-based restoration method, the sample-block-based restoration method and an edge detection algorithm, first extracting the reticulate pattern and then repairing: edge detection is performed first and the character contour is located to complete the removal of the reticulate pattern; masks are then made to prevent later image processing from causing distortion; the removed reticulate-pattern regions are then filled in using the "X-type" structure; and a final smoothing step yields the optimal output of the face image.
(2) The method of the invention first removes the reticulate pattern on the face and then repairs the picture. Edge detection can accurately detect the reticulate-pattern edges, so the reticulate-pattern region can be removed effectively. To avoid edge detection mis-operating on the key facial regions, including the pupils, irises, sclera, nose contour, upper and lower lip contours and other sensitive information, the present invention protects these regions by making the iris mask and the character-contour edge mask, so that the descreening can be carried out simply. Filling uses the "X-type" fill-in strategy, which fills faster than the two restoration methods described in the background and is better suited to removing and repairing a face reticulate pattern. The experimental results show that the "X-type" fill-in achieves a good repair effect (see the description of Fig. 3(a), Fig. 3(b), Fig. 4(a) and Fig. 4(b) in the embodiments below).
Brief description of the drawings
Fig. 1 is a schematic diagram of the "X-type" structure in step S41 of the present invention;
Fig. 2 is a schematic diagram of the method for choosing the reference pixels of the iris pixel (x, y) in step S45 of the present invention;
Fig. 3(a) and Fig. 3(b) are two examples of input pictures;
Fig. 4(a) and Fig. 4(b) are the effect pictures after repairing Fig. 3(a) and Fig. 3(b), respectively.
Detailed description of the embodiments
To make the object, technical solutions and advantages of the present invention clearer, the invention is further described below with reference to embodiments and the accompanying drawings. The described embodiments are intended only to facilitate understanding of the present invention and are not to be used to limit the scope of protection of the claims of the present invention.
In the restoration method of the present invention for a face picture covered by a reticulate pattern, the reticulate-pattern edges are extracted first, the reticulate pattern is then removed, and finally the removed regions are filled in and the whole image is smoothed, thereby restoring the face; the specific steps are as follows:
Step S1, picture preprocessing:
Acquire the picture to be processed row by row using MATLAB and obtain its height rol and width row; in pixels, the size of the picture to be processed is rol × row. Convert the picture to double format. Then process the converted picture so that its height is 220, 88 or 118 and the corresponding width is 178, 72 or 96;
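For illustration only, a minimal Python (NumPy/OpenCV) sketch of this preprocessing follows; the function name preprocess, the nearest-aspect-ratio rule for choosing among the three target sizes and the use of cv2.resize are assumptions and not part of the claimed method:

    import cv2
    import numpy as np

    TARGET_SIZES = [(220, 178), (88, 72), (118, 96)]    # (height, width) in pixels

    def preprocess(path):
        img = cv2.imread(path, cv2.IMREAD_COLOR)         # row-wise acquisition of the picture
        rol, row = img.shape[:2]                         # picture height rol and width row
        # choose the target size with the closest aspect ratio (illustrative rule only)
        h, w = min(TARGET_SIZES, key=lambda s: abs(s[0] / s[1] - rol / row))
        img = cv2.resize(img, (w, h))                    # cv2.resize expects (width, height)
        return img.astype(np.float64)                    # "double" format conversion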
Step S2, classify the picture, establish a coordinate system and locate the initial region:
According to the size of the picture preprocessed in step S1, pictures are divided into three classes, with sizes 220 × 178, 88 × 72 and 118 × 96; determine which of the three classes the preprocessed picture belongs to. Then locate the eye region: taking the top-left corner of the processed picture as the origin, the horizontal direction as the x-axis (values increasing from left to right) and the vertical direction as the y-axis (values increasing from top to bottom), establish an xy coordinate system. Set the following parameters: the left-eye x-coordinate proportionality coefficient is a, the right-eye x-coordinate proportionality coefficient is c, the y-coordinate proportionality coefficient of both eyes is d, and the positioning radius is r. The coefficients are set according to the size of the picture preprocessed in step S1: when the picture size is 220 × 178, a = 0.387, c = 0.645, d = 0.41, r = 14; when it is 88 × 72, a = 0.38, c = 0.65, d = 0.38, r = 4; when it is 118 × 96, a = 0.365, c = 0.645, d = 0.375, r = 8.5. With these parameters the eyes and the surrounding regions of radius r are located, i.e. the initial eye regions are obtained. Although input pictures differ in size, the region where the eyes lie occupies a similar proportion of the whole picture; setting specific proportionality coefficients for each picture size, obtained through repeated experiments, makes the later mask extraction more accurate and allows the face reticulate pattern to be removed precisely;
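A minimal sketch of this eye localisation, assuming that the x coordinate of each eye centre is the corresponding coefficient times the picture width and that the y coordinate is d times the picture height; the patent implies but does not state this mapping explicitly:

    PARAMS = {(220, 178): (0.387, 0.645, 0.41, 14),
              (88, 72):   (0.38,  0.65,  0.38, 4),
              (118, 96):  (0.365, 0.645, 0.375, 8.5)}    # (a, c, d, r) per picture class

    def locate_eye_regions(img):
        rol, row = img.shape[:2]                 # assumes one of the three preprocessed sizes
        a, c, d, r = PARAMS[(rol, row)]
        left_eye  = (a * row, d * rol)           # assumed: x = coefficient * width, y = d * height
        right_eye = (c * row, d * rol)
        return left_eye, right_eye, r            # centres (x, y) and radius r of the initial eye regions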
Step S3, extract the reticulate-pattern edges by edge detection and remove the reticulate pattern:
Take the picture preprocessed in step S1 and traverse it to obtain the R, G and B channel values of every pixel. Using these values, perform edge detection (Shen Dehai, Hou Jian, E Xu. An edge detection algorithm based on an improved Sobel operator [J]. Computer Technology and Development, 2013, 23(11): 22-25) to compute the gradient difference of each pixel and obtain the character contour region; at the same time, use the gradient difference of each pixel to extract the reticulate-pattern edges and obtain the reticulate-pattern edge region. Once the reticulate-pattern edges are obtained, assign the white pixel value (255, 255, 255) to the detected reticulate-pattern edge region, completing the removal of the reticulate pattern and leaving a blank (white) region;
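Step S3 relies on the cited improved Sobel detector, which is not reproduced here; the sketch below approximates it with a plain Sobel gradient and an assumed threshold MESH_T, so it illustrates the flow (detect reticulate-pattern edges, assign white) rather than the exact claimed detector:

    import cv2
    import numpy as np

    MESH_T = 60   # assumed threshold separating reticulate-pattern edges from the background

    def remove_mesh(img):
        gray = img.mean(axis=2)                            # combine the R, G, B channel values
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)    # horizontal gradient
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)    # vertical gradient
        grad = np.abs(gx) + np.abs(gy)                     # per-pixel gradient difference
        mesh = grad > MESH_T                               # detected reticulate-pattern edge region
        out = img.copy()
        out[mesh] = (255.0, 255.0, 255.0)                  # assign white: the blank region to fill later
        return out, mesh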
Step S4, extract masks, fill in the reticulate pattern and smooth the image:
Produce an iris mask and a character-contour edge mask and carry out mask fabrication; then fill in the reticulate pattern, smooth the image and output the resulting image;
The concrete steps are:
S41: choose a blank pixel (x0, y0) in the blank region obtained in step S3, then choose 20 neighbouring pixels around (x0, y0) according to the "X-type" structure. Compare and sort these 20 neighbouring pixels by their R channel values and select the four with the largest R channel values; their R channel values, in descending order, are denoted Rout1, Rout2, Rout3 and Rout4, the corresponding G channel values are denoted Gout1, Gout2, Gout3 and Gout4, and the B channel values are denoted Bout1, Bout2, Bout3 and Bout4;
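A sketch of the "X-type" neighbour selection of step S41; the offsets (every other pixel along the two diagonals plus two pixels above and two below) are read off the description of Fig. 1 and, like the assumed RGB channel order, should be treated as assumptions:

    X_OFFSETS = ([( k,  k) for k in (2, 4, 6, 8)] +       # lower-right diagonal
                 [(-k, -k) for k in (2, 4, 6, 8)] +       # upper-left diagonal
                 [( k, -k) for k in (2, 4, 6, 8)] +       # lower-left diagonal
                 [(-k,  k) for k in (2, 4, 6, 8)] +       # upper-right diagonal
                 [(-2, 0), (-4, 0), (2, 0), (4, 0)])      # two pixels above and two below

    def x_type_top4(img, y0, x0):
        h, w = img.shape[:2]
        pts = [img[y0 + dy, x0 + dx] for dy, dx in X_OFFSETS
               if 0 <= y0 + dy < h and 0 <= x0 + dx < w]
        pts.sort(key=lambda p: p[0], reverse=True)        # rank by R channel value (RGB order assumed)
        return pts[:4]                                    # (Rout1, Gout1, Bout1) ... (Rout4, Gout4, Bout4)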
S42: compute the gradient difference between each of the four neighbouring pixels obtained in step S41 and the R, G, B channel values (R0, G0, B0) of the blank pixel (x0, y0) according to formula (1),
where i = 1, 2, 3, 4 and Ti is the gradient difference;
S43: set the character-contour threshold to 155. Compare and sort the gradient differences Ti (T1, T2, T3, T4) obtained in step S42, and compare the largest of them with the set character-contour threshold. If the maximum value among the Ti is greater than the character-contour threshold 155, the character-contour edge mask is obtained and the character-contour edge is masked with it; otherwise no operation is performed and the method proceeds to step S44;
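Formula (1) itself is not reproduced in the text; the sketch below assumes Ti is a per-channel absolute difference between each neighbour and the blank pixel, which is only one plausible reading:

    CONTOUR_T = 155   # character-contour threshold

    def is_contour_pixel(top4, blank_rgb):
        R0, G0, B0 = blank_rgb
        # assumed form of formula (1): per-channel absolute difference
        Ti = [abs(p[0] - R0) + abs(p[1] - G0) + abs(p[2] - B0) for p in top4]
        return max(Ti) > CONTOUR_T        # True: add the pixel to the character-contour edge mask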
S44: using the parameters set in step S2, locate the initial eye region. To extract the iris mask region more accurately in this step, choose a 3 × 3 sub-region Ir within the initial region; the R channel values of this sub-region are denoted Ra01, Ra02, ..., Ra09, arranged as shown in the following table,
Ra01 Ra02 Ra03
Ra04 Ra05 Ra06
Ra07 Ra08 Ra09
Carry out a convolution over this initial region; the convolution kernel α is:
Using Gp = α * Ir, compute Gp; Gp = Ra02 + Ra04 + Ra06 + Ra08, i.e. Gp is the sum of the R channel values at the four positions Ra02, Ra04, Ra06 and Ra08;
Set the iris threshold to 530 and compare Gp with it. If Gp < 530, the iris mask region is located, and iris-periphery mask fabrication, i.e. a masking operation, is then carried out on the iris mask region; whatever fill-in operations follow, they have no effect on the shielded iris mask region until the final smoothing of the whole picture. If Gp > 530, no operation is performed and the method proceeds to step S45;
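The kernel α is not reproduced in the text, but Gp = Ra02 + Ra04 + Ra06 + Ra08 implies the cross-shaped kernel used in this sketch:

    import numpy as np

    ALPHA = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]])          # follows from Gp = Ra02 + Ra04 + Ra06 + Ra08
    IRIS_T = 530

    def in_iris_mask(Ir):
        """Ir: 3x3 patch of R channel values taken inside the initial eye region."""
        Gp = float(np.sum(ALPHA * Ir))     # Gp = Ra02 + Ra04 + Ra06 + Ra08
        return Gp < IRIS_T                 # Gp < 530: the patch belongs to the iris mask region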
S45: carry out iris-periphery mask fabrication on the iris mask region located in step S44. Set the upper iris threshold to 1.3 and the lower iris threshold to 0.75; these two thresholds are used to draw the mask around the iris. Choose an iris pixel (x, y) in the iris mask region, then choose ten nearby reference pixels above and below it; the R channel values of these ten reference pixels are R101, R102, R103, R104, R105, R108, R109, R110, R111 and R112, and the positions of R101, R102, R103, R104 and R105 and of R108, R109, R110, R111 and R112 are symmetric about the iris pixel (x, y) in the vertical direction. Using the ratio of formula (6), compute Gpr,
Gpr=(R101+R102+R103+R104+R105)/(R108+R109+R110+R111+R112) (6)
Compare Gpr with the upper and lower iris thresholds. If Gpr is less than the lower iris threshold 0.75 or greater than the upper iris threshold 1.3, carry out iris-periphery mask fabrication on the iris pixel (x, y), avoiding the distortion that later image filling would otherwise cause by changing the pixel values around the iris; if 0.75 ≤ Gpr ≤ 1.3, no operation is performed and the method proceeds to step S46;
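A sketch of the Gpr test of step S45; the ten reference-pixel offsets are read off the description of Fig. 2 and are assumptions:

    UPPER = [(-2, -2), (-4, -4), (-2, 0), (-2, 2), (-4, 4)]   # R101..R105 (assumed offsets)
    LOWER = [( 2, -2), ( 4, -4), ( 2, 0), ( 2, 2), ( 4, 4)]   # R108..R112 (assumed offsets)
    GPR_LO, GPR_HI = 0.75, 1.3

    def needs_iris_periphery_mask(r_channel, y, x):
        top = sum(float(r_channel[y + dy, x + dx]) for dy, dx in UPPER)
        bot = sum(float(r_channel[y + dy, x + dx]) for dy, dx in LOWER)
        Gpr = top / bot                                        # formula (6)
        return Gpr < GPR_LO or Gpr > GPR_HI                    # outside [0.75, 1.3]: mask the iris periphery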
S46: choose a 3 × 3 sub-region Ir0 in the whole picture; Ir0 has the same structure as the sub-region Ir in step S44. Perform edge detection on Ir0 by convolution; the horizontal convolution kernel Gx is:
and the vertical convolution kernel Gy is:
Compute Grx and Gry with formulas (7) and (8); Grx and Gry are the R channel responses of the transverse and longitudinal edge detection, respectively. Compute the edge fill-in gradient difference Gr according to formula (9),
Grx=Gx*Ir0 (7)
Gry=Gy*Ir0 (8)
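The kernels Gx and Gy and formula (9) are not reproduced in the text; the sketch below assumes the standard Sobel kernels (step S3 cites an improved Sobel detector) and takes formula (9) to be Gr = |Grx| + |Gry|:

    import numpy as np

    GX = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])            # assumed horizontal (Sobel) kernel
    GY = np.array([[-1, -2, -1],
                   [ 0,  0,  0],
                   [ 1,  2,  1]])          # assumed vertical (Sobel) kernel

    def edge_fill_gradient(Ir0):
        """Ir0: 3x3 R channel patch with the same structure as Ir in step S44."""
        Grx = float(np.sum(GX * Ir0))      # formula (7): transverse edge-detection R value
        Gry = float(np.sum(GY * Ir0))      # formula (8): longitudinal edge-detection R value
        return abs(Grx) + abs(Gry)         # assumed formula (9): Gr = |Grx| + |Gry|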
S47: set the edge fill-in threshold to 60 and compare the edge fill-in gradient difference Gr with it. If Gr < 60, take the two largest R channel values Rout1 and Rout2 obtained in step S41, together with the corresponding G channel values Gout1, Gout2 and B channel values Bout1, Bout2, and average each channel according to formulas (10)-(12) to obtain the mean value Rm on the R channel, the mean value Gm on the G channel and the mean value Bm on the B channel,
Rm=(Rout1+Rout2)/2 (10)
Gm=(Gout1+Gout2)/2 (11)
Bm=(Bout1+Bout2)/2 (12)
Use Rm, Gm and Bm respectively to fill in the R, G and B channel values of the detected reticulate-pattern edge; if Gr ≥ 60, proceed to step S48;
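A sketch of the fill-in decision of step S47, reusing the four neighbours selected in step S41; the function signature is illustrative:

    FILL_T = 60   # edge fill-in threshold

    def fill_mesh_pixel(out, y0, x0, top4, Gr):
        """top4: the four brightest 'X-type' neighbours from step S41."""
        if Gr >= FILL_T:
            return False                                   # Gr >= 60: handled by step S48 instead
        Rm = (top4[0][0] + top4[1][0]) / 2.0               # formula (10)
        Gm = (top4[0][1] + top4[1][1]) / 2.0               # formula (11)
        Bm = (top4[0][2] + top4[1][2]) / 2.0               # formula (12)
        out[y0, x0] = (Rm, Gm, Bm)                         # fill the detected reticulate-pattern edge
        return True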
S48: choose a smoothing pixel (x1, y1) on the non-reticulate-pattern edge of the whole picture, then choose 20 neighbouring pixels around (x1, y1) according to the "X-type" structure, as in step S41, and select the four with the largest R channel values. Compute the mean of each channel of these four neighbouring pixels with formulas (13)-(15), denoted R1, G1 and B1 respectively. Except for the regions extracted by the iris mask and the character-contour edge mask, replace the R, G and B channel values of the smoothing pixel (x1, y1) with R1, G1 and B1 to smooth the whole picture, and output the resulting image;
R1=(Rout1+Rout2+Rout3+Rout4)/4 (13)
G1=(Gout1+Gout2+Gout3+Gout4)/4 (14)
B1=(Bout1+Bout2+Bout3+Bout4)/4 (15).
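A sketch of the smoothing of step S48; representing the iris and character-contour masks as a boolean protected array is an assumed implementation detail:

    def smooth_pixel(out, y1, x1, top4, protected):
        """protected: boolean mask that is True inside the iris and character-contour masks."""
        if protected[y1, x1]:
            return                                         # masked regions are left untouched
        R1 = sum(float(p[0]) for p in top4) / 4.0          # formula (13)
        G1 = sum(float(p[1]) for p in top4) / 4.0          # formula (14)
        B1 = sum(float(p[2]) for p in top4) / 4.0          # formula (15)
        out[y1, x1] = (R1, G1, B1)                         # smooth the non-reticulate-pattern pixel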
The method of the invention takes as input a pending face picture covered with a reticulate pattern and first judges the input picture size from its height. Next, edge detection is used to extract the reticulate pattern, i.e. to remove it; masks are then made for the sensitive facial regions (the pupils, irises, sclera, nose contour, upper and lower lip contours and other sensitive information regions) so that, however the face is processed later, these parts are unaffected. Finally, the removed reticulate-pattern regions are filled in with the "X-type" structure, and the filled image is smoothed to obtain the final output photo.
The masks described in the present invention work by changing the extracted pixel values so that they fall outside the set threshold ranges, ensuring that they are not processed by the corresponding algorithms and thus act as a shield. The pending picture refers to a face certificate photo with a reticulate pattern; the method works only for the three aspect ratios 220 × 178, 88 × 72 and 118 × 96, and certificate photos of other sizes must first be converted to one of these three sizes before being processed. The pending pictures of this application are colour pictures.
The embodiment shown in Fig. 1 illustrates the "X-type" structure described in step S41 of the present invention. It is centred on the blank pixel (x0, y0): four neighbouring pixels are chosen along each of the four diagonal directions from the corners of the blank pixel, and further neighbouring pixels are chosen above and below the blank pixel, one being selected every other pixel; the 20 neighbouring pixels so obtained constitute the "X-type" structure, and the black squares in Fig. 1 are the 20 selected neighbouring pixels. The rule for selecting the neighbouring pixels of the smoothing pixel (x1, y1) in step S48 is identical to that for the blank pixel.
The embodiment shown in Fig. 2 illustrates the method for choosing the reference pixels of the iris pixel (x, y) in step S45 of the present invention. In Fig. 2 each white square is a pixel. Centred on the iris pixel (x, y), two reference pixels are chosen along each of the four diagonal directions from its corners (R101 and R104, R103 and R105, R110 and R108, R112 and R109), and one reference pixel is chosen above and one below the iris pixel, one being selected every other pixel (R102 and R111); 10 reference pixels are chosen in total, forming a structure that is symmetric about the iris pixel (x, y) in the vertical direction.
Experiments have shown that making the character-contour edge mask with the "X-type" structure, which takes 20 surrounding neighbouring pixels in a structure symmetric both vertically and horizontally, has the advantage that the 20 neighbouring pixels are not contiguous: every other pixel is taken, extending along the diagonals. These 20 pixels are sorted, the four with the largest R channel values are chosen together with their corresponding G and B channel values, and the gradient differences Ti (i = 1, 2, 3, 4) are obtained with formula (1). Gradient differences computed from neighbouring pixels distributed evenly around (x0, y0) extract the character contour more reasonably and filter out non-contour pixels, which is essential for making the character-contour mask. The iris mask fabrication uses the vertically symmetric structure of formula (6) because the pixel values in the iris region are generally lower than those in the surrounding region: Gpr is the ratio of the sum of the R channel values of the five scattered reference pixels above to that of the five below, and requiring the upper and lower structures to be symmetric locates the iris region more accurately, after which the iris mask is made. Filling the reticulate-pattern edges with the "X-type" structure quickly yields, from the 20 neighbouring pixels, the two largest R channel values Rout1 and Rout2 together with the corresponding G channel values Gout1, Gout2 and B channel values Bout1, Bout2; filling with formulas (10)-(12) gives the best result. Finally, a smoothing pixel (x1, y1) is chosen on the non-reticulate-pattern edge of the whole picture, 20 neighbouring pixels are chosen around it with the "X-type" structure, and the mean of each channel of the four selected pixels is computed with formulas (13)-(15) and denoted R1, G1 and B1; except for the regions extracted by the iris mask and the character-contour edge mask, R1, G1 and B1 replace the values of the smoothing pixel (x1, y1), smoothing the whole picture. This smoothing introduces little distortion on the face, because the "X-type" structure selects the best four of the 20 neighbouring pixels around (x1, y1) for smoothing, achieving the best repair effect.
The hardest part of image restoration is finding the best matching block for the region to be repaired. The present invention chooses 20 neighbouring pixels according to the "X-type" structure: the mean of the two largest R channel values quickly gives the best matching pixel for filling the reticulate-pattern edge, and the mean of the four largest R channel values gives the best smoothing pixel, applied to the whole picture except the regions extracted by the iris mask and the character-contour edge mask. Compared with the diffusion-equation-based restoration method, the sample-block-based restoration method and the edge detection algorithm taken alone, the method of the invention creatively combines all three: it obtains the best matching block, handles edge information while protecting the edge information of the image, achieves a smooth repair effect, and repairs quickly.
Embodiment
To describe the embodiments of the present invention in detail and to verify its effectiveness, the proposed method was applied to a number of face pictures covered by a reticulate pattern. The face reticulate pattern of such pictures is relatively sparse and light in colour.
In this embodiment, the picture path is input directly, and then inpainting is entered in the command window, after which the original picture and the effect picture after the final descreening can be seen.
The pending pictures in this embodiment are Fig. 3(a) and Fig. 3(b), with size 220 × 178; a = 0.387, c = 0.645, d = 0.41 and r = 14 are set. Repairing them according to the method of the invention gives the repair results shown in Fig. 4(a) and Fig. 4(b).
Fig. 4(a) and Fig. 4(b) show the effect pictures after removing the face reticulate pattern with the method of the invention. To fill the reticulate pattern, points are taken around the point to be filled with the "X-type" structure and screened: the point with the largest R channel value is found and its G and B channel values are also taken, the point with the second-largest R channel value is obtained in the same way, the mean of the R channel values of these two points is computed, the corresponding G and B channel values are likewise averaged, and the three channel means are filled into the pixel to be filled. This is faster than the Exemplar-based algorithm, which traverses the whole picture to find a similar texture structure, and gives a better fill-in result; it is also more accurate than the diffusion-based restoration method, because points are taken directly around the region to be repaired and filled in after averaging, so the traces of the filling are essentially invisible. Compared with these two algorithms, the method of the invention repairs face pictures covered by a reticulate pattern better: the repaired picture has very little distortion compared with the original, the traces of repair are hardly visible, and the accuracy of face recognition can be effectively improved.
The specific embodiments described above further explain the object, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
The mask fabrication, edge detection and other methods described in the present invention are prior art.
Matters not addressed in the present invention are applicable to the prior art.

Claims (3)

1. A restoration method for a face picture covered by a reticulate pattern, characterised in that the method first extracts the reticulate-pattern edges, then removes the reticulate pattern, and finally fills in the removed regions and smooths the whole image, thereby restoring the face; the specific steps are as follows:
Step S1, picture preprocessing:
Acquire the picture to be processed row by row and obtain its height rol and width row; in pixels, the size of the picture to be processed is rol × row. Convert the picture to double format. Then process the converted picture so that its height is 220, 88 or 118 and the corresponding width is 178, 72 or 96;
Step S2, classify the picture, establish a coordinate system and locate the initial region:
According to the size of the picture preprocessed in step S1, pictures are divided into three classes, with sizes 220 × 178, 88 × 72 and 118 × 96; determine which of the three classes the preprocessed picture belongs to. Then locate the eye region: taking the top-left corner of the processed picture as the origin, the horizontal direction as the x-axis (values increasing from left to right) and the vertical direction as the y-axis (values increasing from top to bottom), establish an xy coordinate system. Set the following parameters: the left-eye x-coordinate proportionality coefficient is a, the right-eye x-coordinate proportionality coefficient is c, the y-coordinate proportionality coefficient of both eyes is d, and the positioning radius is r. The coefficients are set according to the size of the picture preprocessed in step S1: when the picture size is 220 × 178, a = 0.387, c = 0.645, d = 0.41, r = 14; when it is 88 × 72, a = 0.38, c = 0.65, d = 0.38, r = 4; when it is 118 × 96, a = 0.365, c = 0.645, d = 0.375, r = 8.5. With these parameters the eyes and the surrounding regions of radius r are located, i.e. the initial eye regions are obtained;
Step S3, extract the reticulate-pattern edges by edge detection and remove the reticulate pattern:
Take the picture preprocessed in step S1 and traverse it to obtain the R, G and B channel values of every pixel. Using these values, perform edge detection to compute the gradient difference of each pixel and obtain the character contour region; at the same time, use the gradient difference of each pixel to extract the reticulate-pattern edges and obtain the reticulate-pattern edge region. Once the reticulate-pattern edges are obtained, assign the white pixel value (255, 255, 255) to the detected reticulate-pattern edge region, completing the removal of the reticulate pattern and leaving a blank (white) region;
Step S4, extract masks, fill in the reticulate pattern and smooth the image:
Produce an iris mask and a character-contour edge mask and carry out mask fabrication; smooth the image and output the resulting image;
The concrete steps are:
S41: choose a blank pixel (x0, y0) in the blank region obtained in step S3, then choose 20 neighbouring pixels around (x0, y0) according to the "X-type" structure. Compare and sort these 20 neighbouring pixels by their R channel values and select the four with the largest R channel values; their R channel values, in descending order, are denoted Rout1, Rout2, Rout3 and Rout4, the corresponding G channel values are denoted Gout1, Gout2, Gout3 and Gout4, and the B channel values are denoted Bout1, Bout2, Bout3 and Bout4;
S42: compute the gradient difference between each of the four neighbouring pixels obtained in step S41 and the R, G, B channel values (R0, G0, B0) of the blank pixel (x0, y0) according to formula (1),
where i = 1, 2, 3, 4 and Ti is the gradient difference;
S43: set the character-contour threshold to 155. Compare and sort the gradient differences Ti (T1, T2, T3, T4) obtained in step S42, and compare the largest of them with the set character-contour threshold. If the maximum value among the Ti is greater than the character-contour threshold 155, the character-contour edge mask is obtained and the character-contour edge is masked with it; otherwise no operation is performed and the method proceeds to step S44;
S44: using the parameters set in step S2, locate the initial eye region and choose a 3 × 3 sub-region Ir within it; the R channel values of this sub-region are denoted Ra01, Ra02, ..., Ra09, arranged as shown in the following table,
Ra01 Ra02 Ra03
Ra04 Ra05 Ra06
Ra07 Ra08 Ra09
Carry out a convolution over this initial region; the convolution kernel α is:
Using Gp = α * Ir, compute Gp; Gp = Ra02 + Ra04 + Ra06 + Ra08, i.e. Gp is the sum of the R channel values at the four positions Ra02, Ra04, Ra06 and Ra08;
Set the iris threshold to 530 and compare Gp with it. If Gp < 530, the iris mask region is located, and iris-periphery mask fabrication is then carried out on the iris mask region; if Gp > 530, no operation is performed and the method proceeds to step S45;
S45: carry out iris-periphery mask fabrication on the iris mask region located in step S44. Set the upper iris threshold to 1.3 and the lower iris threshold to 0.75. Choose an iris pixel (x, y) in the iris mask region, then choose ten nearby reference pixels above and below it; the R channel values of these ten reference pixels are R101, R102, R103, R104, R105, R108, R109, R110, R111 and R112, and the positions of R101, R102, R103, R104 and R105 and of R108, R109, R110, R111 and R112 are symmetric about the iris pixel (x, y) in the vertical direction. Using the ratio of formula (6), compute Gpr,
Gpr=(R101+R102+R103+R104+R105)/(R108+R109+R110+R111+R112) (6)
Compare Gpr with the upper and lower iris thresholds. If Gpr is less than the lower iris threshold 0.75 or greater than the upper iris threshold 1.3, carry out iris-periphery mask fabrication on the iris pixel (x, y); if 0.75 ≤ Gpr ≤ 1.3, no operation is performed and the method proceeds to step S46;
S46: choose a 3 × 3 sub-region Ir0 in the whole picture; Ir0 has the same structure as the sub-region Ir in step S44. Perform edge detection on Ir0 by convolution; the horizontal convolution kernel Gx is:
and the vertical convolution kernel Gy is:
Compute Grx and Gry with formulas (7) and (8); Grx and Gry are the R channel responses of the transverse and longitudinal edge detection, respectively. Compute the edge fill-in gradient difference Gr according to formula (9),
Grx=Gx*Ir0 (7)
Gry=Gy*Ir0 (8)
S47: set the edge fill-in threshold to 60 and compare the edge fill-in gradient difference Gr with it. If Gr < 60, take the two largest R channel values Rout1 and Rout2 obtained in step S41, together with the corresponding G channel values Gout1, Gout2 and B channel values Bout1, Bout2, and average each channel according to formulas (10)-(12) to obtain the mean value Rm on the R channel, the mean value Gm on the G channel and the mean value Bm on the B channel,
Rm=(Rout1+Rout2)/2 (10)
Gm=(Gout1+Gout2)/2 (11)
Bm=(Bout1+Bout2)/2 (12)
Use Rm, Gm and Bm respectively to fill in the R, G and B channel values of the detected reticulate-pattern edge; if Gr ≥ 60, proceed to step S48;
S48: choose a smoothing pixel (x1, y1) on the non-reticulate-pattern edge of the whole picture, then choose 20 neighbouring pixels around (x1, y1) according to the "X-type" structure, as in step S41, and select the four with the largest R channel values. Compute the mean of each channel of these four neighbouring pixels with formulas (13)-(15), denoted R1, G1 and B1 respectively. Except for the regions extracted by the iris mask and the character-contour edge mask, replace the R, G and B channel values of the smoothing pixel (x1, y1) with R1, G1 and B1 to smooth the whole picture, and output the resulting image;
R1=(Rout1+Rout2+Rout3+Rout4)/4 (13)
G1=(Gout1+Gout2+Gout3+Gout4)/4 (14)
B1=(Bout1+Bout2+Bout3+Bout4)/4 (15).
2. The restoration method for a face picture covered by a reticulate pattern according to claim 1, characterised in that the "X-type" structure in step S41 means: taking the blank pixel as the centre, four neighbouring pixels are chosen along each of the four diagonal directions from the corner positions of the blank pixel, and further neighbouring pixels are chosen above and below the blank pixel, one being selected every other pixel; the 20 neighbouring pixels so obtained constitute the "X-type" structure.
3. The restoration method for a face picture covered by a reticulate pattern according to claim 1, characterised in that the reference pixels of the iris pixel (x, y) in step S45 are chosen as follows: taking the iris pixel (x, y) as the centre, two reference pixels are chosen along each of the four diagonal directions from its corner positions, and one reference pixel is selected every other pixel above and below the iris pixel; 10 reference pixels are chosen in total.
CN201710226996.1A 2017-04-07 2017-04-07 Restoration method for a face picture covered by a reticulate pattern Expired - Fee Related CN107016657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710226996.1A CN107016657B (en) 2017-04-07 2017-04-07 Restoration method for a face picture covered by a reticulate pattern

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710226996.1A CN107016657B (en) 2017-04-07 2017-04-07 Restoration method for a face picture covered by a reticulate pattern

Publications (2)

Publication Number Publication Date
CN107016657A true CN107016657A (en) 2017-08-04
CN107016657B CN107016657B (en) 2019-05-28

Family

ID=59446227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710226996.1A Expired - Fee Related CN107016657B (en) 2017-04-07 2017-04-07 Restoration method for a face picture covered by a reticulate pattern

Country Status (1)

Country Link
CN (1) CN107016657B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993190A (en) * 2017-11-14 2018-05-04 中国科学院自动化研究所 Image watermark removal device
CN108010009A (en) * 2017-12-15 2018-05-08 北京小米移动软件有限公司 A kind of method and device for removing interference figure picture
CN108121978A (en) * 2018-01-10 2018-06-05 马上消费金融股份有限公司 Face image processing method, system and equipment and storage medium
CN108428218A (en) * 2018-02-28 2018-08-21 广州布伦南信息科技有限公司 A kind of image processing method of removal newton halation
CN108447030A (en) * 2018-02-28 2018-08-24 广州布伦南信息科技有限公司 A kind of image processing method of descreening
CN109035171A (en) * 2018-08-01 2018-12-18 中国计量大学 A kind of reticulate pattern facial image restorative procedure
CN112418054A (en) * 2020-11-18 2021-02-26 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567957A (en) * 2010-12-30 2012-07-11 北京大学 Method and system for removing reticulate pattern from image
CN103442159A (en) * 2013-09-02 2013-12-11 安徽理工大学 Edge self-adapting demosaicing method based on RS-SVM integration
CN105930797A (en) * 2016-04-21 2016-09-07 腾讯科技(深圳)有限公司 Face verification method and device
CN106530227A (en) * 2016-10-27 2017-03-22 北京小米移动软件有限公司 Image restoration method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567957A (en) * 2010-12-30 2012-07-11 北京大学 Method and system for removing reticulate pattern from image
CN103442159A (en) * 2013-09-02 2013-12-11 安徽理工大学 Edge self-adapting demosaicing method based on RS-SVM integration
CN105930797A (en) * 2016-04-21 2016-09-07 腾讯科技(深圳)有限公司 Face verification method and device
CN106530227A (en) * 2016-10-27 2017-03-22 北京小米移动软件有限公司 Image restoration method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李功清: "Research on large-area image inpainting algorithms based on sample and structure information", Wanfang enterprise knowledge service platform *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993190A (en) * 2017-11-14 2018-05-04 中国科学院自动化研究所 Image watermark removal device
CN107993190B (en) * 2017-11-14 2020-05-19 中国科学院自动化研究所 Image watermark removing device
CN108010009A (en) * 2017-12-15 2018-05-08 北京小米移动软件有限公司 A kind of method and device for removing interference figure picture
CN108010009B (en) * 2017-12-15 2021-12-21 北京小米移动软件有限公司 Method and device for removing interference image
CN108121978A (en) * 2018-01-10 2018-06-05 马上消费金融股份有限公司 Face image processing method, system and equipment and storage medium
CN108428218A (en) * 2018-02-28 2018-08-21 广州布伦南信息科技有限公司 A kind of image processing method of removal newton halation
CN108447030A (en) * 2018-02-28 2018-08-24 广州布伦南信息科技有限公司 A kind of image processing method of descreening
CN109035171A (en) * 2018-08-01 2018-12-18 中国计量大学 A kind of reticulate pattern facial image restorative procedure
CN109035171B (en) * 2018-08-01 2021-06-15 中国计量大学 Reticulate pattern face image restoration method
CN112418054A (en) * 2020-11-18 2021-02-26 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
CN107016657B (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN107016657B (en) Restoration method for a face picture covered by a reticulate pattern
CN112419250B (en) Pavement crack digital image extraction, crack repair and crack parameter calculation method
CN109754377B (en) Multi-exposure image fusion method
CN114723701A (en) Gear defect detection method and system based on computer vision
CN108510451B (en) Method for reconstructing license plate based on double-layer convolutional neural network
US20070189606A1 (en) Automatic detection and correction of non-red eye flash defects
CN109191387A (en) A kind of Infrared Image Denoising method based on Butterworth filter
JP2007003244A5 (en)
CN103198319B (en) For the blurred picture Angular Point Extracting Method under the wellbore environment of mine
CN105719306B (en) A kind of building rapid extracting method in high-resolution remote sensing image
CN109544464A (en) A kind of fire video image analysis method based on contours extract
CN110544300B (en) Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics
CN109300127A (en) Defect inspection method, device, computer equipment and storage medium
CN106846271B (en) Method for removing reticulate pattern in identity card photo
CN112529800B (en) Near-infrared vein image processing method for filtering hair noise
CN107481210A (en) The infrared image enhancing method of local selective mapping based on details
CN103679672A (en) Panorama image splicing method based on edge vertical distance matching
CN117274113B (en) Broken silicon wafer cleaning effect visual detection method based on image enhancement
CN109064439B (en) Partition-based single-side light-entering type light guide plate shadow defect extraction method
CN109903270A (en) Livestock number of groups monitoring method and device
CN106504261A (en) A kind of image partition method and device
CN110490886A (en) A kind of method for automatically correcting and system for certificate image under oblique viewing angle
CN103208104A (en) Non-local theory-based image denoising method
CN114792310A (en) Mura defect detection method for edge blurring in LCD screen
JP4076777B2 (en) Face area extraction device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190528
