CN108898105A - A face recognition method based on deep features and sparse compression classification - Google Patents

A face recognition method based on deep features and sparse compression classification

Info

Publication number
CN108898105A
CN108898105A
Authority
CN
China
Prior art keywords
network
face
facial image
deep feature
convolutional network
Prior art date
Legal status
Pending
Application number
CN201810700048.1A
Other languages
Chinese (zh)
Inventor
余化鹏
谢浩
Current Assignee
Lanshan Chengdu Peng Peng Intelligent Technology Co Ltd
Chengdu University
Original Assignee
Lanshan Chengdu Peng Peng Intelligent Technology Co Ltd
Chengdu University
Priority date
Filing date
Publication date
Application filed by Lanshan Chengdu Peng Peng Intelligent Technology Co Ltd and Chengdu University
Priority to CN201810700048.1A
Publication of CN108898105A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method based on deep features and a sparse representation classification (SRC) model. The method combines a convolutional network with SRC face recognition: it constructs an image deep-feature extraction convolutional network, extracts deep feature vectors of face images in a low-dimensional space, constructs a dictionary from the extracted feature vectors, and performs face recognition and classification with the SRC model. The invention combines the extremely strong representational power of deep features over the original image with the strong noise robustness, good interpretability, and outstanding discriminative power of the SRC model, so that it is robust to illumination changes, image blur, and occlusion in face recognition, compensates for the limited recognition accuracy of the SRC model, and can be well applied to identity recognition in the security field and related branches.

Description

A face recognition method based on deep features and sparse compression classification
Technical field
The present invention relates to the technical fields of digital image processing and computer vision, and in particular to a face recognition method based on deep features and sparse compression classification.
Background art
At present, the security field receives wide attention, and identity authentication, as one of its branches, is particularly important. Identity authentication mainly relies on biometric recognition technologies, including iris recognition, fingerprint recognition, palmprint recognition, face recognition, gait recognition and the like. The face is the most intuitive feature by which we distinguish other people, and face recognition is now widely used in ID-card verification, intelligent access control systems, the Skynet (Tianwang) face recognition system, smart home systems, and so on.
Existing face recognition methods fall mainly into three categories. The first is face recognition based on traditional features, chiefly PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis); because these methods need many training samples, involve complex processing, and achieve low recognition accuracy, they are rarely used. The second is face recognition based on sparse representation and collaborative representation, represented by SRC (Sparse Representation based Classification); its theory is mature, the model is highly interpretable and robust to interference, and it needs relatively few training samples, but its accuracy is limited. The third is convolutional neural network face recognition based on deep learning, represented by YOLO (You Only Look Once) and RCNN (Regions with CNN features); its advantage is high model accuracy, while its disadvantages are that it needs many training samples, its metric form is simple, and the training time becomes long as the depth and node count of the convolutional network grow.
In the prior art, the approach is mainly to improve the SRC dictionary algorithm, raising the accuracy of the SRC model by optimizing the solver for $\ell_1$-norm minimization.
Summary of the invention
It is an object of the present invention to overcome the above deficiencies of the prior art and to provide a face recognition method based on deep features and sparse representation classification. The method combines the extremely strong representational power of deep features over the original image with the strong noise robustness, good interpretability, and outstanding discriminative power of the SRC model, so that it is robust to illumination changes, image blur, and occlusion in face recognition and compensates for the limited recognition accuracy of the SRC model; it can therefore be well applied to identity recognition in the security field and related branches. In addition, the invention is easy to implement, widely applicable, and practical.
In order to achieve the above object, the present invention provides the following technical scheme:
A face recognition method based on deep features and sparse compression classification, comprising the following steps:
Step 101: obtaining a preset number of face images and preprocessing them to obtain a first face image dataset, dividing the first face image dataset into a training set and a test set at a given ratio, and attaching the corresponding class label to each face image;
Step 102: constructing an image deep-feature extraction convolutional network suitable for feature extraction, training it with a second face image dataset, and tuning the network hyperparameters until the image deep-feature extraction convolutional network stabilizes;
Step 103: extracting the deep feature vectors of the face images in the first face image dataset;
Step 104: constructing a dictionary from the deep feature vectors;
Step 105: performing face recognition on the test-set face images with a sparse representation classification (SRC) model based on the constructed dictionary.
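For orientation, the sketch below strings steps 103 to 105 together in Python. It is a minimal illustration only: the trained feature-extraction network of step 102 and the SRC decision rule of step 105 are passed in as functions because they are described separately in the embodiments, and every name in the sketch is illustrative rather than part of the claimed method.

```python
import numpy as np

def face_recognition_pipeline(train_imgs, train_labels, test_imgs,
                              extract_feature, src_classify):
    """Steps 103-105: extract deep features, build the dictionary, classify."""
    # Step 103: deep feature vector of every face image
    train_feats = np.stack([extract_feature(img) for img in train_imgs])
    test_feats = np.stack([extract_feature(img) for img in test_imgs])

    # Step 104: the training features become the columns of the dictionary A
    A = train_feats.T                      # shape (feature_dim, n_train)

    # Step 105: SRC recognition of each test image against the dictionary
    labels = np.asarray(train_labels)
    return [src_classify(A, labels, y) for y in test_feats]
```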
Preferably, in the face recognition method, step 101 comprises:
Step 1011: choosing people of different identities and downloading a preset number of face images of each person from the Internet;
Step 1012: preprocessing the face images and scaling them to a uniform size to form the first face image dataset;
Step 1013: dividing the preprocessed first face image dataset into a training set and a test set at a given ratio.
Preferably, in the face recognition method, step 102 comprises:
Step 1021: constructing the image deep-feature extraction convolutional network on the basis of a residual convolutional network;
Step 1022: obtaining additional face images, preprocessing them by scaling to a uniform size, and flipping and cropping them to augment the dataset, forming the second face image dataset;
Step 1023: inputting the augmented second face image dataset into the image deep-feature extraction convolutional network, training the network, and tuning the network hyperparameters, with training ending once the network performance stabilizes.
Preferably, in the face recognition method, step 102 comprises:
constructing the image deep-feature extraction convolutional network on the basis of a residual convolutional network, the residual convolutional network comprising an input module followed in sequence by a network stem module, a first branch module on a 35*35 grid, a first reduction module, a second branch module on a 17*17 grid, a second reduction module, a third branch module on an 8*8 grid, an average-pooling downsampling module, a dropout module, and an output module; the residual blocks apply repeated residual-plus-convolution processing to the face image data.
Preferably, in the face recognition method, step 102 comprises:
inputting the augmented second face image dataset into the image deep-feature extraction convolutional network and training the network; the network hyperparameters are tuned with gradient descent and turned down as the number of training iterations grows and the model accuracy rises, with training ending once the network performance stabilizes.
Preferably, in the face recognition method, step 103 comprises:
inputting all face images of the first face image dataset into the image deep-feature extraction convolutional network and extracting the deep feature vectors of the face images in a low-dimensional space.
Preferably, in the face recognition method, step 104 comprises:
constructing the dictionary $A = [c_{11} \dots c_{ij}] \in \mathbb{R}^{m \times n}$ from the deep feature vectors $c_{ij}$, where $n$ is the total number of images, $i$ indexes the identities contained in the dataset, $j$ indexes the images of each identity, and $n = \sum_{i} j$.
Preferably, in the face recognition method, step 105 comprises:
Step 1051: in the SRC model, solving by $\ell_1$-minimization the problem $\hat{x} = \arg\min_x \|x\|_1$ subject to $\|Ax - y\|_2 \le \varepsilon$, where $y$ denotes the deep feature vector of the face sample to be recognized, $A$ denotes the dictionary constructed from the deep feature vectors, $x$ is the sparse representation of $y$ over $A$, $\varepsilon$ is a manually set small tolerance, and $\|x\|_n$ denotes the $\ell_n$ norm of $x$; the $\ell_1$-minimization is realized with the homotopy (Homotopy) algorithm;
Step 1052: solving the residuals with $r_i(y) = \|y - A\,\delta_i(\hat{x})\|_2$, where $i \in \{1, 2, \dots, k\}$, $k$ is the number of classes, $\delta_i(\hat{x})$ keeps only the coefficients of the $i$-th class and sets the coefficients of all other classes to zero, and $r_i(y)$ is the error of reconstructing the test face from the $i$-th class of training faces;
Step 1053: solving the recognition result with $\mathrm{id}(y) = \arg\min_i r_i(y)$, where $\mathrm{id}(y)$ is the class label obtained by recognition.
The present invention extracts the deep features of the image with a convolutional network, constructs a dictionary from the deep features, and then completes face recognition by classification with the SRC model. Compared with traditional techniques, the invention combines the strong representational power of deep features with the interpretability and noise robustness of SRC, improving both the efficiency and the accuracy of SRC face recognition. Compared with existing SRC face recognition techniques, the deep feature vectors used here are sufficiently sparse and highly expressive, so the method is robust to illumination changes, image blur, and occlusion, and greatly improves the accuracy of the SRC face recognition model.
Brief description of the drawings
Fig. 1 is a flow chart of the face recognition method according to an exemplary embodiment of the present invention;
Fig. 2 is a frame diagram of the Inception-ResNet-v1 convolutional network according to an exemplary embodiment of the present invention;
Fig. 3 is a flow chart of convolutional network training according to an exemplary embodiment of the present invention.
Specific embodiments
The present invention is described in further detail below with reference to test examples and specific embodiments. This should not be understood as limiting the scope of the above subject matter of the present invention to the following embodiments; all techniques realized on the basis of the content of the present invention fall within the scope of the present invention.
The present invention is elaborated below with reference to the accompanying drawings; it should be noted that the described examples serve only to aid understanding of the present invention and do not restrict it in any way.
Embodiment 1
As shown in Fig. 1, a face recognition method based on deep features and sparse compression classification according to an exemplary embodiment of the present invention comprises the following steps:
Step 101: obtain a preset number of face images and preprocess them to obtain a first face image dataset; divide the first face image dataset into a training set and a test set at a given ratio, and attach the corresponding class label to each face image;
Step 102: construct an image deep-feature extraction convolutional network suitable for feature extraction, train it with a second face image dataset, and tune the network hyperparameters until the image deep-feature extraction convolutional network stabilizes;
Step 103: extract the deep feature vectors of the face images in the first face image dataset;
Step 104: construct a dictionary from the deep feature vectors;
Step 105: based on the constructed dictionary, perform face recognition on the test-set face images with the sparse representation classification (SRC) model.
Embodiment 2
In a further embodiment of the present invention, obtaining a preset number of face images, preprocessing them to obtain the first face image dataset, dividing the first face image dataset into a training set and a test set at a given ratio, and attaching the corresponding class label to each face image, as described in step 101, includes the following.
A simple Chinese celebrity face database is collected with a web crawler; by cropping and filtering the images an available database is formed, from which 100 identities with 70 images each are screened, giving 7000 face images in total that constitute the first face image dataset.
The first face image dataset is then preprocessed: in this example the face images are uniformly scaled to 182*182 pixels by bilinear interpolation. After preprocessing, the first face image dataset is divided into a training set and a test set at a 5:2 ratio, giving 5000 training images and 2000 test images, and the corresponding class label is attached to each face image so that multiple face images of the same person fall into the same class.
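A minimal sketch of this preprocessing and split is given below, assuming the crawled images are stored one folder per identity under an illustrative root directory; the folder name, file extension, and function name are assumptions of the sketch, not details taken from the embodiment.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def load_and_split(root="celeb_faces", size=182, train_per_id=50, test_per_id=20):
    """Scale every face to size x size with bilinear interpolation and split 5:2 per identity."""
    train, test = [], []
    identity_dirs = sorted(p for p in Path(root).iterdir() if p.is_dir())
    for label, person_dir in enumerate(identity_dirs):
        images = sorted(person_dir.glob("*.jpg"))[: train_per_id + test_per_id]
        for k, path in enumerate(images):
            img = Image.open(path).convert("RGB").resize((size, size), Image.BILINEAR)
            sample = (np.asarray(img, dtype=np.float32), label)
            (train if k < train_per_id else test).append(sample)
    return train, test   # 5000 training and 2000 test samples for 100 identities x 70 images
```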
Embodiment 3
In a further embodiment of the present invention, constructing the image deep-feature extraction convolutional network suitable for feature extraction, as described in step 102, specifically includes the following.
The image deep-feature extraction convolutional network is built on Inception-ResNet-v1 (a first-generation residual convolutional network). As shown in Fig. 2, the network connects a stem module (the trunk of the network) after the Input module, followed in sequence by Inception-ResNet-A (the first branch module, on a 35*35 grid), Reduction-A (the first reduction module), Inception-ResNet-B (the second branch module, on a 17*17 grid), Reduction-B (the second reduction module), Inception-ResNet-C (the third branch module, on an 8*8 grid), an average pooling downsampling module, a Dropout module, and finally a softmax classification module introduced in the output module.
The principle by which the Inception-ResNet-v1 network extracts deep image features is as follows. After a face image enters the convolutional network, each convolutional layer outputs filtered features and passes them on. At the input layer the network operates on the 182*182 two-dimensional image data, writes it as a three-dimensional matrix of size 182*182*3, and passes it to the stem module. The stem module applies repeated convolutions to the image data, reducing its height and width while increasing its depth; the output at this point is 35*35*256 and is passed to Inception-ResNet-A, the first branch module on the 35*35 grid. A non-linear rectification unit (the ReLU activation function) is applied after the convolutions in the branch modules, which helps improve the accuracy of the network and stabilizes the data, still 35*35*256, before it is passed to the next module. When the data reach the first reduction module Reduction-A, residual-plus-convolution processing reduces the height and width and increases the depth, turning the data from 35*35*256 into 17*17*896, which is passed to the Inception-ResNet-B module on the 17*17 grid. The data are then passed to the second reduction module, and after the same residual-plus-convolution processing the output is 8*8*1792, which is passed to the Inception-ResNet-C module on the 8*8 grid. The data from the preceding stages are finally sampled by the average pooling downsampling module, yielding a 1792-dimensional deep feature vector and thereby realizing the feature-vector extraction. The Dropout module then filters out part of the processed data. Classification is finally realized by the softmax module introduced in the output module, which separates two classes, face and background, completing the feature extraction of the face image.
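The stage order and tensor shapes described above can be summarised schematically as follows; the number of repeated blocks inside each Inception-ResNet stage is not stated in the embodiment, so only the stages and their output shapes are listed.

```python
# Schematic stage list of the Inception-ResNet-v1 feature extractor; the shapes
# follow the description above, everything else about the internals is left open.
FEATURE_NETWORK = [
    ("input",              "182x182x3 RGB face image"),
    ("stem",               "repeated convolutions -> 35x35x256"),
    ("inception_resnet_a", "35x35-grid residual blocks with ReLU after each convolution"),
    ("reduction_a",        "residual + convolution downsampling -> 17x17x896"),
    ("inception_resnet_b", "17x17-grid residual blocks"),
    ("reduction_b",        "residual + convolution downsampling -> 8x8x1792"),
    ("inception_resnet_c", "8x8-grid residual blocks"),
    ("average_pooling",    "global average pooling -> 1792-dimensional feature vector"),
    ("dropout",            "discards part of the activations during training"),
    ("softmax",            "classification head used while training the network"),
]
```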
Embodiment 4
In a further embodiment of the present invention, as shown in Fig. 3, constructing the second face image dataset described in step 102, training the image deep-feature extraction convolutional network with it, and tuning the network hyperparameters until the image deep-feature extraction convolutional network stabilizes proceeds as follows.
A large number of face images are downloaded from the open-source dataset MS-Celeb-1M (a publicly available dataset that anyone can download and use) to form the second face image dataset. The face images of the second face image dataset are then preprocessed: they are scaled to a uniform 182*182 pixels and zero-mean normalized, i.e. all images are passed through a zero-mean routine that subtracts, from every pixel value, the mean pixel value over all images in the dataset; the face images are also randomly flipped and randomly cropped to augment the dataset, constituting the processed second face image dataset.
The processed second face image dataset is then input in batches into the Inception-ResNet-v1 network to be trained, and the parameters are tuned. The parameters here comprise two parts. One part is the parameters of the network itself, that is, the convolution kernel parameters (for example, the kernel function type and the kernel size); these are generally initialized to small non-zero values, may number in the millions, and are adjusted by the network itself. The other part is the parameters used to control the network, the hyperparameters; they can be adjusted on the basis of gradient descent according to the number of training iterations, and as the number of iterations increases the model accuracy keeps rising, with training completed once it converges to a stable value. In this example the image deep-feature extraction convolutional network is built on Inception-ResNet-v1, so some hyperparameters of the Inception-ResNet-v1 convolutional network, such as nkerns (the number of convolution kernels), are set to their default values and are not adjusted further.
In actual operation, mainly the learning rate (learning_rate) is adjusted. Over the whole training process the learning rate is adjusted three times: at the start of training it is initialized to 0.1; after 150 training epochs it is adjusted to 0.01; after 180 epochs it is adjusted to 0.001; and after 251 epochs it is adjusted to 0.0001. At this point the accuracy and performance of the image deep-feature extraction convolutional network have stabilized, training ends, and an image deep-feature extraction convolutional network suitable for feature extraction is obtained.
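The zero-mean preprocessing and the staged learning-rate schedule can be written down directly; the short sketch below covers only these two pieces and leaves out the gradient-descent step itself and the random flip/crop augmentation, which the embodiment does not specify further.

```python
import numpy as np

def zero_mean(images):
    """Subtract, from every image, the per-pixel mean over the whole dataset."""
    images = np.asarray(images, dtype=np.float32)
    return images - images.mean(axis=0, keepdims=True)

def learning_rate(epoch):
    """Schedule from the embodiment: 0.1, then 0.01, 0.001 and 0.0001."""
    if epoch < 150:
        return 0.1
    if epoch < 180:
        return 0.01
    if epoch < 251:
        return 0.001
    return 0.0001
```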
Embodiment 5
In a further embodiment of the present invention, extracting the deep features of the face images as described in step 103 specifically includes the following.
The trained convolutional network model is loaded, the face images in the first face image dataset are input into the constructed image deep-feature extraction convolutional network for feature extraction, and the deep feature vector of each face image sample in the training set and the test set is extracted; in this example the extracted deep features are 1792-dimensional.
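A minimal sketch of this extraction step follows; the `network` object and its `extract` method stand in for the loaded convolutional network model and are assumptions about its interface rather than names taken from the embodiment.

```python
import numpy as np

def extract_deep_features(network, images):
    """Run every face image through the trained network and stack the 1792-d vectors."""
    return np.stack([network.extract(np.asarray(img, dtype=np.float32)) for img in images])

# e.g. train_features = extract_deep_features(net, train_images)   # shape (5000, 1792)
#      test_features  = extract_deep_features(net, test_images)    # shape (2000, 1792)
```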
Embodiment 6
In a further embodiment of the present invention, constructing the dictionary from the deep feature vectors as described in step 104 includes the following.
The dictionary $A = [c_{11} \dots c_{ij}] \in \mathbb{R}^{m \times n}$ is constructed from the deep feature vectors $c_{ij}$, where $n$ is the total number of images, $i$ indexes the identities contained in the dataset, $j$ indexes the images of each identity, and $n = \sum_{i} j$. The dimension of each sample here is that of the deep features extracted from the convolutional network described in step 103 (1792 dimensions), i.e. $m = 1792$; there are $i = 100$ classes with $j = 70$ samples each, so $n = 7000$, that is, $A \in \mathbb{R}^{1792 \times 7000}$.
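A sketch of this dictionary assembly is given below. Grouping the columns by class mirrors the $[c_{11} \dots c_{ij}]$ layout; the unit-norm column scaling is a common SRC convention added here as an assumption, not something stated in the embodiment.

```python
import numpy as np

def build_dictionary(features, labels):
    """Stack the deep feature vectors column-wise into A, grouped by identity."""
    labels = np.asarray(labels)
    order = np.argsort(labels, kind="stable")          # columns of one class sit together
    A = np.asarray(features)[order].T                  # shape (m, n), here (1792, 7000)
    A = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns (assumed convention)
    return A, labels[order]
```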
Embodiment 7
In a further embodiment of the present invention, performing face recognition on the test-set sample images with the SRC model based on the constructed dictionary, as described in step 105, includes the following.
The principle of the sparse face recognition SRC model is that a face image can be represented by a linear combination of the face images of the same person in the database, while for the faces of other people in the database the coefficients of the linear combination are theoretically zero. Since the database generally contains many images of many different faces, when a given test face is represented as a linear combination of all images in the database the coefficient vector is sparse, which is why the SRC model is called a sparse face recognition method. In theory, all coefficients are zero except the combination coefficients of the face images belonging to the same person as the test image; in actual operation, a test sample must be represented by the training samples in order to obtain a recognition result.
Therefore, in this example, the 5000 labeled training-set images are first input into the SRC model to train it and set up the corresponding computation that completes the face recognition decision. The test-set images are then input into the model in turn for recognition, yielding the recognition results; that is, each test sample is represented by the training samples.
Embodiment 8
In a further embodiment of the present invention, performing face recognition on the test-set sample images with the SRC model based on the constructed dictionary, as described in step 105, mainly includes:
Step 1051: in the SRC model, the following problem is first solved by $\ell_1$-minimization: $\hat{x} = \arg\min_x \|x\|_1$ subject to $\|Ax - y\|_2 \le \varepsilon$, where $y$ denotes the deep feature vector of the face sample to be recognized, $A$ denotes the dictionary constructed from the deep face feature vectors, $x$ is the sparse representation of $y$ over $A$, $\varepsilon$ is a manually set small tolerance, and $\|x\|_n$ denotes the $\ell_n$ norm of $x$. There are many ways to solve the $\ell_1$-minimization; in this example the Homotopy (homotopy) algorithm is used, with $\varepsilon$ set to 0.001;
Step 1052: the residuals are calculated. In the SRC model the residuals are computed with the formula $r_i(y) = \|y - A\,\delta_i(\hat{x})\|_2$, where $i \in \{1, 2, \dots, k\}$, $k$ is the number of classes, and $\delta_i(\hat{x})$ keeps only the coefficients of the $i$-th class, setting the coefficients corresponding to every other class to zero; $r_i(y)$ is the error of reconstructing the test face with the $i$-th class of faces in the training set;
Step 1053: the recognition result is calculated with $\mathrm{id}(y) = \arg\min_i r_i(y)$, where $\mathrm{id}(y)$ is the class label obtained by recognition; the class labels here are the class labels of the face image data described in step 101.
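A compact sketch of the SRC decision rule of steps 1051 to 1053 is shown below. The embodiment solves the $\ell_1$-minimization with a homotopy algorithm; the Lasso solver used here is an off-the-shelf stand-in for that step, so the sketch illustrates the decision rule rather than reproducing the exact solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, alpha=1e-3):
    """Classify the feature vector y against the dictionary A with per-column labels."""
    # Step 1051: sparse code of y over A (stand-in for min ||x||_1 s.t. ||Ax - y||_2 <= eps)
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, y).coef_

    # Steps 1052-1053: class-wise reconstruction residuals and arg-min decision
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - A @ np.where(labels == c, x, 0.0)) for c in classes]
    return classes[int(np.argmin(residuals))]
```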
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any related variation of the deep-feature extraction process and of the SRC solution that a person familiar with the technology can conceive within the technical scope disclosed by the invention shall be covered within the scope of the present invention.

Claims (8)

1. A face recognition method based on deep features and sparse compression classification, characterized by comprising the following steps:
Step 101: obtaining a preset number of face images and preprocessing them to obtain a first face image dataset, dividing the first face image dataset into a training set and a test set at a given ratio, and attaching the corresponding class label to each face image;
Step 102: constructing an image deep-feature extraction convolutional network suitable for feature extraction, training the image deep-feature extraction convolutional network with a second face image dataset, and tuning the network hyperparameters until the image deep-feature extraction convolutional network stabilizes;
Step 103: extracting the deep feature vectors of the face images in the first face image dataset;
Step 104: constructing a dictionary from the deep feature vectors;
Step 105: performing face recognition on the test-set face images with a sparse representation classification (SRC) model based on the constructed dictionary.
2. The face recognition method according to claim 1, characterized in that step 101 comprises:
Step 1011: choosing people of different identities and downloading a preset number of face images of each person from the Internet;
Step 1012: preprocessing the face images and scaling them to a uniform size to form the first face image dataset;
Step 1013: dividing the preprocessed first face image dataset into a training set and a test set at a given ratio.
3. The face recognition method according to claim 1, characterized in that step 102 comprises:
Step 1021: constructing the image deep-feature extraction convolutional network on the basis of a residual convolutional network;
Step 1022: obtaining additional face images, preprocessing them by scaling to a uniform size, and flipping and cropping them to augment the dataset, forming the second face image dataset;
Step 1023: inputting the augmented second face image dataset into the image deep-feature extraction convolutional network, training the network, and tuning the network hyperparameters, with training ending once the network performance stabilizes.
4. The face recognition method according to claim 1, characterized in that step 102 comprises:
constructing the image deep-feature extraction convolutional network on the basis of a residual convolutional network, the residual convolutional network comprising an input module followed in sequence by a network stem module, a first branch module on a 35*35 grid, a first reduction module, a second branch module on a 17*17 grid, a second reduction module, a third branch module on an 8*8 grid, an average-pooling downsampling module, a dropout module, and an output module, wherein the residual blocks apply repeated residual-plus-convolution processing to the face image data.
5. The face recognition method according to claim 1, characterized in that step 102 comprises:
inputting the augmented second face image dataset into the image deep-feature extraction convolutional network and training the network; the network hyperparameters are tuned with gradient descent and turned down as the number of training iterations grows and the model accuracy rises, with training ending once the network performance stabilizes.
6. The face recognition method according to claim 1, characterized in that step 103 comprises:
inputting all face images of the first face image dataset into the image deep-feature extraction convolutional network and extracting the deep feature vectors of the face images in a low-dimensional space.
7. The face recognition method according to claim 1, characterized in that step 104 comprises:
constructing the dictionary $A = [c_{11} \dots c_{ij}] \in \mathbb{R}^{m \times n}$ from the deep feature vectors $c_{ij}$, where $n$ is the total number of images, $i$ indexes the identities contained in the dataset, $j$ indexes the images of each identity, and $n = \sum_{i} j$.
8. The face recognition method according to claim 1, characterized in that step 105 comprises:
Step 1051: in the SRC model, solving by $\ell_1$-minimization the problem $\hat{x} = \arg\min_x \|x\|_1$ subject to $\|Ax - y\|_2 \le \varepsilon$, where $y$ denotes the deep feature vector of the face sample to be recognized, $A$ denotes the dictionary constructed from the deep feature vectors, $x$ is the sparse representation of $y$ over $A$, $\varepsilon$ is a manually set small tolerance, and $\|x\|_n$ denotes the $\ell_n$ norm of $x$; the $\ell_1$-minimization is realized with the homotopy (Homotopy) algorithm;
Step 1052: solving the residuals with $r_i(y) = \|y - A\,\delta_i(\hat{x})\|_2$, where $i \in \{1, 2, \dots, k\}$, $k$ is the number of classes, $\delta_i(\hat{x})$ keeps only the coefficients of the $i$-th class and sets the coefficients of all other classes to zero, and $r_i(y)$ is the error of reconstructing the test face from the $i$-th class of training faces;
Step 1053: solving the recognition result with $\mathrm{id}(y) = \arg\min_i r_i(y)$, where $\mathrm{id}(y)$ is the class label obtained by recognition.
CN201810700048.1A 2018-06-29 2018-06-29 A face recognition method based on deep features and sparse compression classification Pending CN108898105A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810700048.1A CN108898105A (en) 2018-06-29 2018-06-29 A face recognition method based on deep features and sparse compression classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810700048.1A CN108898105A (en) 2018-06-29 2018-06-29 A face recognition method based on deep features and sparse compression classification

Publications (1)

Publication Number Publication Date
CN108898105A true CN108898105A (en) 2018-11-27

Family

ID=64347466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810700048.1A Pending CN108898105A (en) A face recognition method based on deep features and sparse compression classification

Country Status (1)

Country Link
CN (1) CN108898105A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650798A (en) * 2016-12-08 2017-05-10 南京邮电大学 Indoor scene recognition method combining deep learning and sparse representation
CN107203752A (en) * 2017-05-25 2017-09-26 四川云图睿视科技有限公司 A kind of combined depth study and the face identification method of the norm constraint of feature two
CN107273864A (en) * 2017-06-22 2017-10-20 星际(重庆)智能装备技术研究院有限公司 A kind of method for detecting human face based on deep learning
CN107292298A (en) * 2017-08-09 2017-10-24 北方民族大学 Ox face recognition method based on convolutional neural networks and sorter model
CN107506722A (en) * 2017-08-18 2017-12-22 中国地质大学(武汉) One kind is based on depth sparse convolution neutral net face emotion identification method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HITWH1204: "Convolutional Neural Networks CNNs (AlexNet)", Baidu Wenku (百度文库) *
JOHN WRIGHT et al.: "Robust Face Recognition via Sparse Representation", manuscript accepted by IEEE Trans. PAMI *
JING, Chenkai et al.: "Research on the Application of DCNN-Based Face Recognition Technology in Examinee Identity Verification", Journal of Henan University (Natural Science Edition) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816632A (en) * 2018-12-25 2019-05-28 东软集团股份有限公司 Brain image processing method, device, readable storage medium storing program for executing and electronic equipment
CN109978074A (en) * 2019-04-04 2019-07-05 山东财经大学 Image aesthetic feeling and emotion joint classification method and system based on depth multi-task learning
CN109917347A (en) * 2019-04-10 2019-06-21 电子科技大学 A kind of radar pedestrian detection method based on the sparse reconstruct of time-frequency domain
CN110222764A (en) * 2019-06-10 2019-09-10 中南民族大学 Shelter target detection method, system, equipment and storage medium
CN111476145A (en) * 2020-04-03 2020-07-31 南京邮电大学 A convolutional neural network-based 1: n face recognition method

Similar Documents

Publication Publication Date Title
CN108898105A (en) A face recognition method based on deep features and sparse compression classification
WO2022160771A1 (en) Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model
CN111798369B (en) Face aging image synthesis method based on a cycle-conditional generative adversarial network
Kumar et al. Breast cancer classification of image using convolutional neural network
CN104361363B (en) Depth deconvolution feature learning network, generation method and image classification method
CN109146831A (en) Remote sensing image fusion method and system based on double branch deep learning networks
CN107247971B (en) Intelligent analysis method and system for ultrasonic thyroid nodule risk index
CN106780466A (en) A cervical cell image recognition method based on convolutional neural networks
CN109145920A (en) An image semantic segmentation method based on a deep neural network
CN106096535A (en) A face verification method based on a bilinear joint CNN
CN107943967A (en) A document classification algorithm based on multi-angle convolutional neural networks and recurrent neural networks
CN109255340A (en) A face recognition method fusing multiple improved VGG networks
CN106650830A (en) Deep model and shallow model decision fusion-based pulmonary nodule CT image automatic classification method
CN108520213B (en) Face beauty prediction method based on multi-scale depth
CN109815826A (en) Generation method and device of a face attribute model
CN111954250B (en) Lightweight Wi-Fi behavior sensing method and system
CN110443286A (en) Training method, image recognition method, and device for a neural network model
CN109711461A (en) Transfer learning picture classification method and its device based on principal component analysis
CN110264407B (en) Image super-resolution model training and reconstruction method, device, equipment and storage medium
CN111915545A (en) Self-supervision learning fusion method of multiband images
CN108229571A (en) Apple surface lesion image recognition method based on the KPCA algorithm and a deep belief network
CN110674774A (en) Improved deep learning facial expression recognition method and system
CN115049814B (en) Intelligent eye protection lamp adjusting method adopting neural network model
Xu et al. A novel image feature extraction algorithm based on the fusion AutoEncoder and CNN
CN114495210A (en) Posture change face recognition method based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20181127)