CN106548159A - Reticulate pattern facial image recognition method and device based on full convolutional neural networks - Google Patents
- Publication number
- CN106548159A (application CN201610982333.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- reticulate pattern
- convolutional neural
- neural networks
- facial image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2193—Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a meshed (reticulate-pattern) face image recognition method and device based on a fully convolutional neural network. The method collects pairs of meshed face images and their corresponding clear face images, and uses them to train a fully convolutional network that recovers the clear face image from the meshed one (demeshing). At recognition time, a meshed face image is fed into the trained demeshing model, and the recovered clear face image is used for the subsequent face recognition task. A fully convolutional network is adopted as the backbone of the learning framework, exploiting its larger receptive field and faster computation. The training objective combines a pixel-level reconstruction loss with a feature-level reconstruction loss, and a spatial transformation module performs accurate face alignment inside the network so that face-region features can be extracted precisely. The proposed method not only recovers a clear face image from the meshed image effectively, but also keeps the facial identity features stable during restoration, which substantially improves recognition accuracy on meshed face images.
Description
Technical field
The present invention relates to the technical fields of computer vision, pattern recognition and machine learning, and in particular to a meshed face image recognition method and device based on fully convolutional neural networks (Fully Convolutional Network for Deep MeshFace Verification, abbreviated DeMeshNet).
Background technology
Thanks to its contactless, convenient and accurate mode of identity authentication, face recognition has come to be valued in every aspect of daily life. It is not only widely used in traditional scenarios such as airport security, workplace attendance and customs clearance, but is also increasingly deployed on the mobile Internet. In particular, face recognition that compares an ID-card photo against a live photo has been widely adopted in Internet finance: remote bank account opening and wallet payment built on this technology have begun to emerge and attract broad attention. Because the ID photo is registered in advance, recognition can be performed online, which greatly improves the availability of face recognition and enriches its application scenarios.

With this convenience come technical challenges. ID-photo/live-photo comparison currently faces two main difficulties. First, ID photos and live photos are captured at different times and in different environments, so the lighting, pose and expression of the collected face images generally differ. These factors make intra-class variation very large, sometimes even exceeding inter-class variation, which poses a serious challenge to automatic face recognition algorithms. Second, to prevent abuse of ID photos, the ID-card pictures returned by the data interface of the Ministry of Public Security are usually overlaid with a random mesh watermark. While the mesh protects user privacy to some extent, it severely interferes with automatic face recognition systems and markedly reduces recognition accuracy.
Deep learning has had an enormous impact in artificial intelligence, and in computer vision in particular, bringing major breakthroughs in traditional tasks such as object recognition, detection and segmentation. By enlarging training sets and adopting convolutional neural networks, researchers have also achieved important advances in face recognition, surpassing human performance on the public LFW benchmark. Meanwhile, such models have performed very well on low-level vision problems, including image deconvolution, deblurring, denoising and super-resolution. In particular, using a special class of architecture, the fully convolutional deep neural network, researchers have obtained state-of-the-art results in semantic segmentation, image super-resolution, image denoising and image completion.
The content of the invention
To address the technical difficulty of recognizing meshed face images in photo-comparison systems, the present invention proposes a meshed face image recognition method and device based on fully convolutional neural networks. To improve recognition accuracy on meshed images, the invention trains a model that recovers the mesh-free clear face image from a meshed image. A fully convolutional network serves as the backbone for predicting the clear face image, taking full advantage of its low computational cost and large receptive field. So that the recovered face can be recognized accurately, the objective function includes not only the usual pixel-level loss but also a feature-level loss: after passing through a pre-trained feature extraction network, the features of the recovered clear image should match those of the target clear image as closely as possible. To enable end-to-end training, a spatial transformation module is added to the model, which aligns the face inside the network and thereby facilitates feature extraction and comparison.
The present invention proposes a meshed face image recognition method based on fully convolutional neural networks, implemented in the following steps:

Step S1: Collect meshed face images x and the corresponding clear face images y as sample image pairs to form a training data set, and for each sample pair obtain a label map m indicating the mesh positions by a threshold method. For each collected meshed face image, compute in advance the position coordinates of the two eyes, (xr, yr) and (xl, yl), for face alignment during network training. Prepare a neural network model φ for face feature extraction.
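The threshold method of step S1 can be sketched as follows, under the assumption that mesh pixels are exactly those whose intensity differs from the clear image by more than some threshold; the threshold value `tau` is illustrative and not specified by the patent.

```python
import numpy as np

def mesh_label_map(meshed, clear, tau=25):
    """Mark as mesh positions the pixels where the meshed image deviates
    from the clear image by more than `tau` grayscale levels."""
    diff = np.abs(meshed.astype(np.int32) - clear.astype(np.int32))
    return (diff > tau).astype(np.uint8)  # binary label map m

# toy 4x4 pair: the mesh alters two pixels
clear = np.full((4, 4), 128, dtype=np.uint8)
meshed = clear.copy()
meshed[1, 1] = 30
meshed[2, 3] = 200
m = mesh_label_map(meshed, clear)
```

On this toy pair, exactly the two altered pixels are flagged in the label map.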
Step S2: Train a fully convolutional neural network model ψ that recovers the clear face image from a meshed face image, as follows:

Using the sample image pairs in the training data set, train a fully convolutional network. During training, a meshed face image is fed into ψ, which outputs a prediction of the clear face. From the difference between this prediction and the clear face image in the data set, gradients are back-propagated to adjust the network's weight parameters, i.e., to learn. The loss function used for back-propagation has two parts. The first is a pixel-level difference, which drives the clear image predicted by the network to agree with the target clear image in the data set pixel by pixel. The second is a feature-level difference: after the predicted clear image is fed into the prepared feature extraction network φ, the resulting feature representation should agree with that of the target image as closely as possible.
Step S3: Use the trained demeshing network model to predict a clear face image from the meshed face image to be identified, and perform face recognition with the predicted clear image.
The invention also proposes a meshed face image recognition device based on fully convolutional neural networks, comprising:

A training sample preparation module, which collects meshed face images and the corresponding clear face images as sample image pairs to form a training data set, obtains for each sample pair a label map indicating the mesh positions by a threshold method, computes in advance the position coordinates of the two eyes for each collected meshed face image, and prepares a neural network model φ for face feature extraction;

A clear-image prediction model (fully convolutional network) training module, which trains the fully convolutional neural network model that recovers the clear face image from a meshed face image: using the sample image pairs in the training data set, the convolutional layers in the first half of the network process the input meshed face image, and repeated gradient back-propagation under the pixel-level and feature-level objective function adjusts each convolutional layer's weight parameters until the final clear-image prediction model is learned;

A recognition module, which uses the trained fully convolutional network model to recover the clear face image to be identified and performs face recognition.
Beneficial effects of the present invention: the above method recovers a mesh-free clear face image from the meshed image through a fully convolutional network and performs face recognition with the recovered clear image. The proposed fully convolutional model offers faster computation and higher model performance. The proposed objective function, combining a pixel-level loss with a feature-level loss, yields a mesh-removal model that greatly improves the recognition rate of the algorithm. By first predicting a clear face image from the meshed image and only then performing face recognition, recognition accuracy is greatly improved.
Description of the drawings
Fig. 1 is a schematic diagram of a meshed face image, the clear face image and the mesh-position label map;
Fig. 2 is a flow chart of the meshed face image recognition method based on fully convolutional neural networks in the present invention;
Fig. 3 is a schematic frame diagram of the fully convolutional neural network model in the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.

To solve the problem of recognizing meshed face images, the proposed method and device recover the mesh-free clear face image from the meshed image through a fully convolutional neural network. A fully convolutional network reduces computation and, thanks to its down-sampling and up-sampling layers, also enlarges the receptive field and improves the use of contextual information, which in turn improves restoration quality. To guarantee good recognition performance as well as good restoration, the fully convolutional network is trained with an objective function composed of losses at two levels: a pixel-level loss and a feature-level loss. A model trained with both losses predicts clear images from which face recognition can be carried out accurately.

The invention optimizes the fully convolutional network under this particular objective function and learns a highly non-linear transformation that recovers the mesh-free clear image from a meshed image; the recovered clear image is then used for subsequent face recognition.
Fig. 1 shows a meshed face image, the corresponding clear face image and the mesh-position label map.
Fig. 2 shows the flow chart of the proposed meshed face image recognition method based on fully convolutional neural networks; as shown in Fig. 2, the method comprises the following steps:
Step S1: Collect meshed face images x and the corresponding clear face images y as sample image pairs to form a training data set. The two images of each pair have the same size and are both grayscale or both color images. For each sample pair, a label map Mi indicating the mesh positions is obtained by a threshold method.

For each collected meshed face image, the position coordinates of the two eyes, (xr, yr) and (xl, yl), can be computed in advance for face alignment during network training.

A neural network model φ for face feature extraction is prepared. This model may be an artificial neural network composed of convolutional layers or ordinary fully connected layers, and is used to extract highly discriminative face features for recognition. The model φ is pre-trained; its parameters are kept fixed while the fully convolutional network model ψ is being trained.
Step S2: Using the sample image pairs in the training data set, train the fully convolutional neural network model ψ to recover the clear face image y from the meshed face image x. The fully convolutional network used in the present invention guarantees that the output image has the same size as the input image; pooling and unpooling layers are introduced as the down-sampling and up-sampling data processing stages, which reduces computation and enlarges the receptive field. In one embodiment, the network is built from a structure similar to VGG-Net (a common deep convolutional network with 16 convolutional layers and 5 pooling layers); symmetric up-sampling and convolutional layers are appended after the last convolutional layer so that the feature maps are gradually restored to the original size, giving an output of the same size as the input. In an embodiment, the pooling layers used for down-sampling may also be replaced by strided convolutional layers. The number and size of the filters in each layer can be set freely. As shown in Fig. 3, the original image enters the network through the input layer and passes through multiple convolutional layers, which apply a non-linear transformation to the input and yield the clear image predicted by the fully convolutional network; the predicted clear image is then compared against the target clear image at both the pixel level and the feature level, and the loss is computed by the loss function described below.
Specifically, the objective function of the fully convolutional network has two goals: maximizing pixel-level similarity and maximizing feature-level similarity (equivalently, minimizing the pixel-level and feature-level distances). Measuring both with the mean squared error (other functions could also be used to measure similarity), the following objective function is constructed:

$$L = \sum_i \Big[ \,\|\psi(X_i) - Y_i\|_F^2 + \lambda_1 \|M_i \odot (\psi(X_i) - Y_i)\|_F^2 + \lambda_2 \,\|\varphi(ST(\psi(X_i))) - \varphi(ST(Y_i))\|_F^2 \Big]$$

The objective consists of a pixel-level loss, $\|\psi(X_i) - Y_i\|_F^2 + \lambda_1 \|M_i \odot (\psi(X_i) - Y_i)\|_F^2$, and a feature-level loss, $\|\varphi(ST(\psi(X_i))) - \varphi(ST(Y_i))\|_F^2$, with the weight parameter λ2 balancing the two parts. Here Xi is the i-th meshed image, Yi is the corresponding target clear image, ψ(Xi) is the i-th clear image predicted by the model, Mi is the binary label map marking the mesh positions in the i-th meshed image, λ1 balances the two terms within the pixel-level loss, and ⊙ denotes element-wise multiplication. ST denotes the spatial transformation module connecting the fully convolutional model ψ to the feature extraction model φ (this module snaps an ordinary face-region image to a unified pose); ST(·) normalizes its input image to a unified size and pose, so that the predicted image can be face-aligned inside the network during computation. ‖·‖F denotes the Frobenius norm of a matrix, used to compute the sum of squared differences between predicted and target values.
The training process of the fully convolutional network is as follows:

Step S21: Initialize the weight parameters of all nodes in the network.

Step S22: Compute the forward loss according to the objective function defined above: with the current network weights, randomly select, without replacement, a batch of images and compute the network's loss value.

Step S23: Compute the back-propagated gradients: from the loss value and the chain rule of differentiation, compute the partial derivative of the loss obtained in step S22 with respect to every node in the network.

Step S24: Update the weight of every node in the network from the partial derivatives of step S23, using the Adam optimizer (Adaptive Moment Estimation), a recent optimization algorithm for multi-layer neural networks.

Step S25: Randomly select another batch of images without replacement (if the data set has been exhausted, return all samples and restart the sampling), and repeat steps S22-S24 until the loss function no longer decreases; training then stops, yielding the final demeshing network ψ.
The pixel-level loss consists of two parts: the sum of squared pixel differences between the predicted clear face and the target clear face, $\|\psi(X_i) - Y_i\|_F^2$, and the sum of squared differences restricted to the mesh region, $\|M_i \odot (\psi(X_i) - Y_i)\|_F^2$, with λ1 balancing the two parts and ⊙ denoting element-wise multiplication. The pixel-level loss ensures that the clear image predicted by the network agrees, pixel by pixel, with the target clear image in the data set as closely as possible.

The feature-level loss is the sum of squared differences between the face features extracted from the predicted clear image, $\varphi_j(ST(\psi(X_i)))$, and those of the target clear image, $\varphi_j(ST(Y_i))$. It ensures that, after being fed into the prepared feature extraction network φ, the predicted clear image yields a feature representation as close as possible to that of the target image. The spatial transformation module ST maps the predicted clear face image, by an affine or other transformation, to a normalized, aligned face image, which can then be fed into the feature extraction network for feature extraction. φj denotes the j-th layer of the feature extraction network: when comparing feature-level differences, supervision is applied not only to the output features of the last layer but also to the features of shallower layers, providing deeper supervision and thereby improving the optimization efficiency of the algorithm.
Before the feature-level loss is computed, the spatial transformation module ST must convert the predicted clear face image, via an affine or other transformation, into a normalized, aligned face image. In this conversion, a global similarity transformation matrix θ = [a, b, 1; -b, a, 1] is first computed from the eye positions prepared in advance: (xr, yr) and (xl, yl) are the position coordinates of the right eye and the left eye, and a and b are the parameters of the similarity transformation matrix θ, determined so that the two eyes are mapped to canonical positions. The spatial transformation module then applies this transformation matrix and bilinearly samples the pixels of the predicted clear image, producing a new, aligned face image. The sampling process is computed as

$$V^t(m, n) = \sum_{m'} \sum_{n'} V^s(m', n') \,\max(0, 1 - |x_s(m,n) - n'|) \,\max(0, 1 - |y_s(m,n) - m'|)$$

where $V^t$ denotes the image obtained after sampling (t for target, the sampled target image), $V^s$ denotes the original clear image (s for source), (m, n) are the position coordinates of a pixel in the image, and $(x_s(m,n), y_s(m,n))$ are the source coordinates obtained by applying θ to target pixel (m, n). Through this formula, the original ID-card image can be resampled inside the neural network, yielding a normalized face region.
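The sampling formula above can be sketched directly; the coordinate convention (x along columns, y along rows) is an assumption for illustration, and the brute-force double loop trades speed for clarity.

```python
import numpy as np

def bilinear_sample(src, theta, out_h, out_w):
    """Map each target pixel (m, n) through the 2x3 matrix theta into
    source coordinates (xs, ys) and bilinearly interpolate src there,
    following V^t(m,n) = sum V^s(m',n') max(0,1-|xs-n'|) max(0,1-|ys-m'|)."""
    out = np.zeros((out_h, out_w))
    h, w = src.shape
    for m in range(out_h):
        for n in range(out_w):
            xs = theta[0, 0] * n + theta[0, 1] * m + theta[0, 2]
            ys = theta[1, 0] * n + theta[1, 1] * m + theta[1, 2]
            for mp in range(h):
                for np_ in range(w):
                    wgt = max(0.0, 1 - abs(xs - np_)) * max(0.0, 1 - abs(ys - mp))
                    out[m, n] += src[mp, np_] * wgt
    return out

# sanity check: the identity transform reproduces the source image
theta = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
src = np.arange(9, dtype=float).reshape(3, 3)
out = bilinear_sample(src, theta, 3, 3)
```

Because the sampling weights are piecewise-linear in the coordinates, the transform is differentiable with respect to θ, which is what allows the module to be trained end to end.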
The feature extraction model φ used to compute face-region features can be any pre-trained neural network capable of extracting discriminative face features. Because effective feature extraction requires fixed model parameters, the parameters of φ are not updated while the fully convolutional network is trained; they remain constant.
Step S3: The meshed face image to be recognized is fed into the trained fully convolutional network ψ, yielding the predicted clear face image. Face recognition can then proceed along the traditional pipeline, with face detection, landmark detection and feature extraction followed by the corresponding feature comparison, completing the face recognition task.
To describe a specific embodiment in detail and verify the effectiveness of the invention, we applied the proposed method to a face recognition task on reticulated images. Specifically, to train the descreening full convolutional neural network model, following Step 1 we collected 500,000 face images with reticulate patterns together with their corresponding clear (pattern-free) images, and computed the label map marking the reticulate-pattern positions as well as the coordinates of the left and right eyes in each face image. Using the gradient back-propagation algorithm and the prepared data, we trained the proposed neural network until convergence, obtaining the final weight of every node in the network.
To test the validity of the model, we additionally prepared clear identity-card photos of 1,000 people outside the training set (one per person), the corresponding identity-card photos with reticulate patterns, and the 1,000 corresponding live photos. Using a deep feature trained specifically for comparing identity-card photos against live photos, the accuracy of matching clear identity-card photos to live photos is shown in the first row of Table 1. With the same feature, accuracy drops markedly when the reticulated identity-card photos are matched against the live photos. Finally, using the trained full convolutional neural network model, we first recovered clear face images from the reticulated face images and then matched them against the corresponding live photos. The recognition results improve substantially over those on the reticulated images; in particular, the identification level at TPR@FPR=1% is close to that of the clear images. Moreover, on these 1,000 images the PSNR of the recovered clear images reaches 29.16 dB, demonstrating a good visual restoration effect. This embodiment validates the effectiveness of the proposed method for reticulated face image recognition.
Table 1 compares the face recognition accuracy of untreated reticulated images and of normal clear images with the accuracy obtained after processing by the present invention, as follows:
| TPR@FPR=1% | TPR@FPR=0.1% | TPR@FPR=0.01% |
---|---|---|---|
Clear image | 88.10 | 74.30 | 53.60 |
Reticulated image | 43.20 | 28.50 | 18.20 |
Descreened image | 86.70 | 70.70 | 47.00 |
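The TPR@FPR operating points used in Table 1 can be computed from verification scores; a minimal sketch (the threshold is set so that the desired fraction of impostor pairs is falsely accepted):

```python
import numpy as np

def tpr_at_fpr(genuine_scores, impostor_scores, target_fpr):
    """True-positive rate at the threshold that yields the target false-positive rate.

    The threshold is the (1 - target_fpr) quantile of the impostor scores, so that
    roughly target_fpr of impostor pairs score above it.
    """
    threshold = np.quantile(impostor_scores, 1.0 - target_fpr)
    return float(np.mean(np.asarray(genuine_scores) >= threshold))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    genuine = rng.normal(0.8, 0.1, 10000)   # same-person pair scores
    impostor = rng.normal(0.2, 0.1, 10000)  # different-person pair scores
    print(tpr_at_fpr(genuine, impostor, 0.01))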
The specific embodiments described above further explain the purpose, technical solution, and beneficial effects of the present invention in detail. It should be understood that the foregoing is only a specific embodiment of the present invention and does not limit it; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (10)
1. A reticulated face image recognition method based on a full convolutional neural network, characterized in that it comprises:
Step S1: collecting reticulated face images and corresponding clear face images as sample image pairs to form a training data set, and for each sample image pair obtaining, by a threshold method, a label map indicating the positions of the reticulate pattern;
Step S2: training a full convolutional neural network model that recovers a clear face image from a reticulated face image, comprising:
choosing a sample image pair from the training data set as the current sample image pair; inputting the reticulated face image of the current pair into the full convolutional neural network model to obtain an output prediction of the clear face image; using the difference between this prediction and the clear face image of the current pair, adjusting the weight parameters of the full convolutional neural network model by gradient back-propagation; and iterating this training process until an iteration condition is met, obtaining the trained full convolutional neural network model;
Step S3: using the trained full convolutional neural network model to predict a clear face image from a reticulated face image to be recognized.
2. The method of claim 1, characterized in that step S1 further comprises:
for each collected reticulated face image, calculating the position coordinates of the two eyes in the reticulated face image.
3. The method of claim 1, characterized in that the loss function used for gradient back-propagation in step S2 comprises two parts: the first part is a loss function measuring the pixel-level difference between the prediction for the reticulated face image and the clear face image; the second part is a loss function measuring the feature-level difference between the network's predicted face image and the clear face image.
4. The method of claim 1, characterized in that step S1 comprises:
Step S11: processing the collected reticulated face images and the corresponding clear face images to a consistent size, and for each sample image pair in the training data set, obtaining, according to a certain threshold, a binary image indicating the positions of the reticulate-pattern distribution as the label image.
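The threshold-based label map of step S11 can be sketched as follows. This is a minimal illustration; the actual threshold value is not specified in the claims, so the one used here is an assumption.

```python
import numpy as np

def reticulate_label_map(reticulated, clear, threshold=10):
    """Binary label map marking pixels where the reticulate pattern altered the image.

    A pixel is marked 1 when the absolute difference between the reticulated image
    and the clear image exceeds the threshold (an assumed value).
    """
    diff = np.abs(reticulated.astype(np.int32) - clear.astype(np.int32))
    if diff.ndim == 3:            # color images: use the max channel difference
        diff = diff.max(axis=2)
    return (diff > threshold).astype(np.uint8)

if __name__ == "__main__":
    clear = np.full((4, 4), 100, dtype=np.uint8)
    reticulated = clear.copy()
    reticulated[0, :] = 180       # a synthetic "reticulate" stripe
    print(reticulate_label_map(reticulated, clear))
```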
5. The method according to claim 1, characterized in that step S2 comprises:
Step S21: initializing the weight parameters of all nodes in the full convolutional neural network model;
Step S22: computing the forward loss: according to the current weight parameters of the full convolutional neural network model, computing the loss function value produced by the model for a reticulated face image randomly chosen from the training data set;
Step S23: computing the back-propagated gradient: according to the obtained loss function value and the chain rule of derivatives, computing the partial derivative ∂L/∂w_i of the target loss L in the full convolutional neural network model with respect to the weight parameter w_i of every node in the model;
Step S24: using the ADAM algorithm and the partial derivatives obtained in step S23, updating the weights of all nodes in the full convolutional neural network model;
Step S25: reselecting another batch of reticulated face images from the training data set and repeating steps S22-S24 until the iteration condition is met, obtaining the trained full convolutional neural network model.
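The ADAM update of step S24 can be sketched in NumPy. This is a generic implementation of the published algorithm with its usual default hyperparameters, not code from the patent:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update on parameters w given gradient grad.

    m, v are the running first/second moment estimates; t is the 1-based step.
    Returns the updated (w, m, v).
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)      # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)      # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

if __name__ == "__main__":
    # Minimize L(w) = ||w||^2 / 2, whose gradient is simply w.
    w = np.array([1.0, -2.0])
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, 1001):
        w, m, v = adam_step(w, w.copy(), m, v, t, lr=0.01)
    print(w)
```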
6. The method as claimed in claim 5, characterized in that the target function used to compute the loss function value in step S22 combines a pixel-level loss ℓ_pix and a feature-level loss ℓ_feat, wherein ℓ_pix is the pixel-level loss function, ℓ_feat is the feature-level loss function, and λ1 and λ2 are weight parameters; X_i is the input reticulated face image; Y_i is the clear face image corresponding to X_i in the training data set; ψ(X_i) is the prediction of the clear face image obtained by the full convolutional neural network model; M_i is the label map characterizing the reticulate-pattern positions; ST denotes the spatial transformation module connecting the clear-face-image prediction with the feature extraction network, whose role is to align the face and normalize it to a unified size and pose; ||·||_F denotes the Frobenius norm of a matrix; and φ_j denotes the j-th layer of the feature extraction network.
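The objective equation itself did not survive extraction; from the symbol definitions in claims 6-8, a plausible reconstruction is the following (note the source's apparent double use of λ1, both as an overall weight here and as the mask-term weight inside the pixel loss in claim 7; the parenthesization of the ST and ψ composition also varies in the source):

```latex
L(X_i, Y_i) \;=\; \lambda_1\, \ell_{\mathrm{pix}}(X_i, Y_i) \;+\; \lambda_2\, \ell_{\mathrm{feat}}(X_i, Y_i),
\quad\text{with}\quad
\ell_{\mathrm{pix}} \;=\; \big\|\psi(X_i) - Y_i\big\|_F^2 \;+\; \lambda_1 \big\| M_i \odot \big(\psi(X_i) - Y_i\big) \big\|_F^2,
\qquad
\ell_{\mathrm{feat}} \;=\; \sum_{j} \big\| \phi_j\big(\mathrm{ST}(\psi(X_i))\big) - \phi_j\big(\mathrm{ST}(Y_i)\big) \big\|_F^2 .
```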
7. The method as claimed in claim 3, characterized in that the pixel-level loss function is made up of two parts: the squared sum of the pixel differences between the clear face image corresponding to the reticulated face image in the training data set and the clear-face prediction of the full convolutional neural network model, ||ψ(X_i) − Y_i||_F², and the squared sum of those pixel differences restricted to the reticulate-pattern region, ||M_i ⊙ (ψ(X_i) − Y_i)||_F², with λ1 balancing the weight of the two parts; wherein ⊙ denotes element-wise multiplication, X_i is the input reticulated face image, Y_i is the clear face image corresponding to X_i in the training data set, ψ(X_i) is the prediction of the clear face image obtained by the full convolutional neural network model, M_i is the label map characterizing the reticulate-pattern positions, and ||·||_F denotes the Frobenius norm of a matrix.
8. The method as claimed in claim 3, characterized in that the feature-level loss function is determined by the squared sum of the difference between the face feature φ_j(ST(ψ(X_i))) computed from the clear-face-image prediction and the face feature φ_j(ST(Y_i)) of the clear face image corresponding to the reticulated face image in the training data set; the spatial transformation module ST transforms the clear-face-image prediction into an aligned, normalized face image, which is then input to the feature extraction network model φ for feature extraction; φ_j denotes the j-th layer of the feature extraction network model φ; when comparing feature-level differences, besides the output features of the last layer of the network, shallow-layer features are also used to accelerate optimization.
9. The method as claimed in claim 8, characterized in that, before the feature-level loss function is applied, the clear-face-image prediction is transformed by the spatial transformation module ST into an aligned, normalized face image;
in this transformation process, a global similarity transformation matrix θ = [a, b, 1; −b, a, 1] is first computed from the predetermined eye positions, wherein (x_r, y_r) and (x_l, y_l) are the coordinates of the right eye and the left eye among the predetermined eye positions, and a and b are the parameters of the similarity transformation matrix; the spatial transformation module then performs bilinear sampling on the pixels of the clear-face-image prediction by means of the similarity transformation matrix, obtaining a new aligned face image; in the computing formula of the bilinear sampling process, I^t denotes the image obtained after sampling and I^t_{(m,n)} a pixel in that image, I^s denotes the image before sampling and I^s_{(m,n)} a pixel in that image, and H and W denote the height and width of the image, respectively.
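The claim-9 warp can be sketched in NumPy. The formula mapping the eye coordinates to (a, b) is not reproduced in the text, so the affine parameters below are taken as given; the bilinear sampling follows the standard spatial-transformer form:

```python
import numpy as np

def bilinear_sample(src, theta, out_shape):
    """Warp src with a 2x3 affine theta (target -> source, pixel coordinates)
    using bilinear interpolation, as in the sampling step of a spatial transformer."""
    H, W = src.shape
    out_h, out_w = out_shape
    out = np.zeros(out_shape)
    for m in range(out_h):            # target row (y)
        for n in range(out_w):        # target column (x)
            x, y = theta @ np.array([n, m, 1.0])   # source coordinates
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            # Accumulate the four neighboring source pixels, weighted bilinearly.
            for (xi, yi) in [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]:
                if 0 <= xi < W and 0 <= yi < H:
                    w = max(0.0, 1 - abs(x - xi)) * max(0.0, 1 - abs(y - yi))
                    out[m, n] += w * src[yi, xi]
    return out

if __name__ == "__main__":
    src = np.arange(16, dtype=float).reshape(4, 4)
    identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    warped = bilinear_sample(src, identity, (4, 4))
    print(np.allclose(warped, src))   # identity transform reproduces the image
```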
10. A reticulated face image identification device based on a full convolutional neural network, characterized in that it comprises:
a collection module for collecting reticulated face images and corresponding clear face images as sample image pairs to form a training data set, and for each sample image pair obtaining, by a threshold method, a label map indicating the positions of the reticulate pattern;
a training module for training a full convolutional neural network model that recovers a clear face image from a reticulated face image, comprising:
choosing a sample image pair from the training data set as the current sample image pair; inputting the reticulated face image of the current pair into the full convolutional neural network model to obtain an output prediction of the clear face image; using the difference between this prediction and the clear face image of the current pair, adjusting the weight parameters of the full convolutional neural network model by gradient back-propagation; and iterating this training process until an iteration condition is met, obtaining the trained full convolutional neural network model;
an identification module for using the trained full convolutional neural network model to predict a clear face image from a reticulated face image to be recognized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610982333.8A CN106548159A (en) | 2016-11-08 | 2016-11-08 | Reticulate pattern facial image recognition method and device based on full convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106548159A true CN106548159A (en) | 2017-03-29 |
Family
ID=58395544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610982333.8A Pending CN106548159A (en) | 2016-11-08 | 2016-11-08 | Reticulate pattern facial image recognition method and device based on full convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106548159A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103778414A (en) * | 2014-01-17 | 2014-05-07 | 杭州电子科技大学 | Real-time face recognition method based on deep neural network |
CN104992167A (en) * | 2015-07-28 | 2015-10-21 | 中国科学院自动化研究所 | Convolution neural network based face detection method and apparatus |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
CN105512624A (en) * | 2015-12-01 | 2016-04-20 | 天津中科智能识别产业技术研究院有限公司 | Smile face recognition method and device for human face image |
CN105760859A (en) * | 2016-03-22 | 2016-07-13 | 中国科学院自动化研究所 | Method and device for identifying reticulate pattern face image based on multi-task convolutional neural network |
Non-Patent Citations (4)
Title |
---|
TAO PAN et al.: "Perceptual Loss with Fully Convolutional for Image Residual Denoising", CCPR 2016: Pattern Recognition * |
Chang Liang et al.: "Convolutional Neural Networks in Image Understanding", Acta Automatica Sinica * |
Wang Kunxiang: "Intelligence Theory and Police Intelligent Technology", China People's Public Security University Press, 31 May 2009 * |
Jiang Xiangang: "Research on Digital Image Pattern Recognition Engineering Projects", Xi'an Jiaotong University Press, 31 March 2014 * |
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106960199A (en) * | 2017-03-30 | 2017-07-18 | 博奥生物集团有限公司 | A kind of RGB eye is as the complete extraction method in figure white of the eye region |
CN107330381A (en) * | 2017-06-15 | 2017-11-07 | 浙江捷尚视觉科技股份有限公司 | A kind of face identification method |
CN109426775A (en) * | 2017-08-25 | 2019-03-05 | 株式会社日立制作所 | The method, device and equipment of reticulate pattern in a kind of detection facial image |
CN109426775B (en) * | 2017-08-25 | 2022-02-25 | 株式会社日立制作所 | Method, device and equipment for detecting reticulate patterns in face image |
CN107491771A (en) * | 2017-09-21 | 2017-12-19 | 百度在线网络技术(北京)有限公司 | Method for detecting human face and device |
US10902245B2 (en) | 2017-09-21 | 2021-01-26 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for facial recognition |
CN107766844A (en) * | 2017-11-13 | 2018-03-06 | 杭州有盾网络科技有限公司 | Method, apparatus, equipment of a kind of reticulate pattern according to recognition of face |
CN107993190A (en) * | 2017-11-14 | 2018-05-04 | 中国科学院自动化研究所 | Image watermark removal device |
CN107993190B (en) * | 2017-11-14 | 2020-05-19 | 中国科学院自动化研究所 | Image watermark removing device |
CN108304793A (en) * | 2018-01-26 | 2018-07-20 | 北京易真学思教育科技有限公司 | On-line study analysis system and method |
CN108304793B (en) * | 2018-01-26 | 2021-01-08 | 北京世纪好未来教育科技有限公司 | Online learning analysis system and method |
CN108427986A (en) * | 2018-02-26 | 2018-08-21 | 中车青岛四方机车车辆股份有限公司 | A kind of production line electrical fault prediction technique and device |
CN108734673A (en) * | 2018-04-20 | 2018-11-02 | 平安科技(深圳)有限公司 | Descreening systematic training method, descreening method, apparatus, equipment and medium |
WO2019200702A1 (en) * | 2018-04-20 | 2019-10-24 | 平安科技(深圳)有限公司 | Descreening system training method and apparatus, descreening method and apparatus, device, and medium |
CN108648163A (en) * | 2018-05-17 | 2018-10-12 | 厦门美图之家科技有限公司 | A kind of Enhancement Method and computing device of facial image |
CN108416343A (en) * | 2018-06-14 | 2018-08-17 | 四川远鉴科技有限公司 | A kind of facial image recognition method and device |
CN108986132A (en) * | 2018-07-04 | 2018-12-11 | 华南理工大学 | A method of certificate photo Trimap figure is generated using full convolutional neural networks |
CN110738227B (en) * | 2018-07-20 | 2021-10-12 | 马上消费金融股份有限公司 | Model training method and device, recognition method, storage medium and electronic equipment |
CN110738226A (en) * | 2018-07-20 | 2020-01-31 | 马上消费金融股份有限公司 | Identity recognition method and device, storage medium and electronic equipment |
CN110738227A (en) * | 2018-07-20 | 2020-01-31 | 马上消费金融股份有限公司 | Model training method and device, recognition method, storage medium and electronic equipment |
US11605211B2 (en) | 2018-08-03 | 2023-03-14 | Huawei Cloud Computing Technologies Co., Ltd. | Object detection model training method and apparatus, and device |
US11423634B2 (en) | 2018-08-03 | 2022-08-23 | Huawei Cloud Computing Technologies Co., Ltd. | Object detection model training method, apparatus, and device |
CN109360197A (en) * | 2018-09-30 | 2019-02-19 | 北京达佳互联信息技术有限公司 | Processing method, device, electronic equipment and the storage medium of image |
CN109360197B (en) * | 2018-09-30 | 2021-07-09 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109558903A (en) * | 2018-11-20 | 2019-04-02 | 拉扎斯网络科技(上海)有限公司 | License image detection method and device, electronic equipment and readable storage medium |
CN109544475A (en) * | 2018-11-21 | 2019-03-29 | 北京大学深圳研究生院 | Bi-Level optimization method for image deblurring |
CN109815826A (en) * | 2018-12-28 | 2019-05-28 | 新大陆数字技术股份有限公司 | The generation method and device of face character model |
CN109815826B (en) * | 2018-12-28 | 2022-11-08 | 新大陆数字技术股份有限公司 | Method and device for generating face attribute model |
CN109711413A (en) * | 2018-12-30 | 2019-05-03 | 陕西师范大学 | Image, semantic dividing method based on deep learning |
CN109711413B (en) * | 2018-12-30 | 2023-04-07 | 陕西师范大学 | Image semantic segmentation method based on deep learning |
CN109871755A (en) * | 2019-01-09 | 2019-06-11 | 中国平安人寿保险股份有限公司 | A kind of auth method based on recognition of face |
CN110123367A (en) * | 2019-04-04 | 2019-08-16 | 平安科技(深圳)有限公司 | Computer equipment, recognition of heart sound device, method, model training apparatus and storage medium |
CN110175961B (en) * | 2019-05-22 | 2021-07-27 | 艾特城信息科技有限公司 | Reticulation removing method based on human face image segmentation countermeasure thought |
CN110175961A (en) * | 2019-05-22 | 2019-08-27 | 艾特城信息科技有限公司 | A kind of descreening method for dividing confrontation thought based on facial image |
WO2021027163A1 (en) * | 2019-08-09 | 2021-02-18 | 平安科技(深圳)有限公司 | Reticulate pattern-containing image recognition method and apparatus, and terminal device and medium |
CN112434780A (en) * | 2019-08-26 | 2021-03-02 | 上海高德威智能交通***有限公司 | Target object recognition network model, training method thereof and target object recognition method |
CN112434780B (en) * | 2019-08-26 | 2023-05-30 | 上海高德威智能交通***有限公司 | Target object recognition network model, training method thereof and target object recognition method |
CN114708266A (en) * | 2022-06-07 | 2022-07-05 | 青岛通产智能科技股份有限公司 | Tool, method and device for detecting card defects and medium |
CN115830411A (en) * | 2022-11-18 | 2023-03-21 | 智慧眼科技股份有限公司 | Biological feature model training method, biological feature extraction method and related equipment |
CN115830411B (en) * | 2022-11-18 | 2023-09-01 | 智慧眼科技股份有限公司 | Biological feature model training method, biological feature extraction method and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170329 |