CN106709480A - Partitioning human face recognition method based on weighted intensity PCNN model - Google Patents

Partitioning human face recognition method based on weighted intensity PCNN model

Info

Publication number
CN106709480A
CN106709480A (application CN201710119765.0A)
Authority
CN
China
Prior art keywords
block
lower
image
face
Prior art date
Legal status
Granted
Application number
CN201710119765.0A
Other languages
Chinese (zh)
Other versions
CN106709480B (en)
Inventor
邓红霞
李海芳
郭浩
相洁
曹锐
李瀚
杨晓峰
Current Assignee
Gude Intelligent Technology Research Institute Shanxi Co ltd
Original Assignee
Taiyuan University of Technology
Priority date
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN201710119765.0A priority Critical patent/CN106709480B/en
Publication of CN106709480A publication Critical patent/CN106709480A/en
Application granted granted Critical
Publication of CN106709480B publication Critical patent/CN106709480B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a face recognition method based on the PCNN model, specifically a partitioned (block-based) face recognition method based on a weighted-intensity PCNN model. The invention addresses the problems that conventional PCNN-based face recognition methods describe image features too coarsely, process the whole face image indiscriminately with a single set of parameters, and neglect the differences between the parts of a face. On the basis of a simplified PCNN model, the invention proposes the weighted-intensity PCNN model, introduces the concepts of spontaneous pulse firing intensity, coupled pulse firing intensity and weighted intensity, and refines the output of the model. The method also adopts block-based recognition: before recognition, the face image is partitioned into blocks according to the differences in gray-level distribution and local recognition rate of its parts; during recognition, the block weights are set adaptively according to the block images; finally, the recognition results of the individual blocks are integrated into the recognition result of the face image.

Description

Partitioning human face recognition method based on weighted intensity PCNN model
Technical field
The present invention relates to face recognition methods based on the PCNN model, and specifically to a block-based face recognition method based on a weighted-intensity PCNN model.
Background art
The Pulse Coupled Neural Network grew out of the synchronous-pulse phenomena that Eckhorn et al. observed while studying the visual cortices of animals such as cats and monkeys; Johnson improved it in 1993 and proposed the standard Pulse Coupled Neural Network (PCNN) model. In 1999, Izhikevich rigorously proved from a mathematical standpoint that the PCNN model is the network model closest to the biological neuron. Compared with conventional classical neural networks, the PCNN is a self-supervised, self-learning network that can perform image processing without training, and it possesses excellent properties such as spatiotemporal summation, synchronous dynamic pulse firing, nonlinear modulation and a variable threshold. The PCNN model is therefore widely used in image edge detection, image segmentation, image fusion, image recognition and image denoising. In 2005, Taymoor M. first introduced the PCNN model into the field of face recognition. The first Chinese-language paper on PCNN-based face recognition did not appear until 2011, but as early as 2008 Zhang Yudong et al. of Southeast University had applied the PCNN model to face recognition research. In recent years the PCNN model has gradually entered the view of domestic scholars, and its application studies in face recognition have also been increasing.
In current PCNN-based face recognition methods, the feature data typically come from certain information in the binary images generated over N iterations of the model, such as the firing-time sequence and the information-entropy sequence. These are flat representations that express only the overall features of an image and cannot reflect, in a refined way, the changes at edges or points in a face image. For this reason, document [1] proposed, on the basis of a simplified PCNN model, a PCNN model based on pulse firing intensity. Under that model, each pixel of a face image obtains its own specific pulse firing intensity, and the average pulse firing intensity over N iterations serves as the feature of the face image; the identity of a face image to be recognized is finally judged by the cosine distance between its pulse firing intensity matrix and the average pulse firing intensity matrix of each identity. That model, however, does not subdivide the pulse firing intensity.
In practical applications, the PCNN model contains many undetermined parameters, and the setting of these parameters determines the quality of its performance. At present, suitable parameter values are generally chosen through repeated trials, comparing the resulting processed images. Accordingly, many papers have studied the automatic setting of PCNN model parameters. Document [2] proposed characterizing image features with a gray-level/information-content histogram and using that histogram to estimate the time decay parameter of the PCNN model automatically, but the other parameters still have to be set empirically.
In addition, the gray-level distributions of the parts of a face differ greatly, as shown in Figures 1-5, yet traditional PCNN-based face recognition treats a whole face image uniformly as a single entity and ignores the differences between its parts. Experiments show that the recognition rates of different regions of a face are different, as shown in Figure 6. Different weights can therefore be assigned to each part during face recognition.
Document [1]: Li Han, Yang Xiaofeng, et al., "Adaptive PCNN model parameters based on a grid search algorithm," Computer Engineering and Design, January 2017.
Document [2]: Zhao Zhijiang, Zhao Chunhui, Zhang Zhihong, "A new PCNN model parameter estimation method," Acta Electronica Sinica, May 2007.
Summary of the invention
The present invention solves the problems, present in existing PCNN-based face recognition methods, that the description of image features is not refined enough and that a whole face image is processed indiscriminately with one set of parameters, ignoring the differences between the parts of a face, and provides a block-based face recognition method based on a weighted-intensity PCNN model.
The present invention is realized by the following technical scheme: a block-based face recognition method based on a weighted-intensity PCNN model, realized by the following steps:
(1) training stage
Build, or select an existing, face database containing A people with B face images each, i.e. A*B face images in total. Take each person's first N face images (N ≤ B/2) from the database as training images. Partition every training image horizontally according to the eyes, nose and mouth into upper, middle and lower blocks corresponding respectively to the eyes, the nose and the mouth. Input the three block images of every face image of every person into the weighted-intensity PCNN model; each block yields one weighted-intensity matrix, so A*N*3 weighted-intensity matrices are obtained, corresponding respectively to each block of each training image of each person. Then, for each person, take the average of the N weighted-intensity matrices of the upper, middle and lower blocks respectively, i.e. the average weighted-intensity matrices, and record these values as the feature matrices of each block of each person. The training stage thus yields 3A feature matrices (average weighted-intensity matrices) in total, namely:
(C_{upper,1}, C_{upper,2}, ..., C_{upper,A})
(C_{middle,1}, C_{middle,2}, ..., C_{middle,A})
(C_{lower,1}, C_{lower,2}, ..., C_{lower,A})
where C denotes a feature matrix, the subscript upper/middle/lower denotes the block, and the numeric subscript denotes the person;
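As a minimal sketch of this training procedure (assuming pre-aligned, equally sized grayscale images and an equal-thirds horizontal split, where the patent partitions along the actual eye/nose/mouth boundaries; `split_face_blocks` and `train_person_features` are illustrative names, and `pcnn` stands for the weighted-intensity PCNN of equations (1)-(7), sketched after Table 1 below):

```python
import numpy as np

def split_face_blocks(img: np.ndarray) -> list:
    """Split a face image horizontally into upper/middle/lower blocks
    (eyes / nose / mouth). Equal thirds are an assumption; the patent
    splits along the actual eye/nose/mouth boundaries."""
    h = img.shape[0]
    return [img[: h // 3], img[h // 3 : 2 * h // 3], img[2 * h // 3 :]]

def train_person_features(train_imgs, pcnn):
    """For one person's N training images, average the per-block
    weighted-intensity matrices to obtain that person's three feature
    matrices (C_upper, C_middle, C_lower)."""
    per_block = [[], [], []]
    for img in train_imgs:
        for b, block in enumerate(split_face_blocks(img)):
            per_block[b].append(pcnn(block))   # one Q matrix per block
    return [np.mean(qs, axis=0) for qs in per_block]
```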
The mathematical model of the weighted-intensity PCNN model is as follows:
F_ij(n) = I_ij (1)
L_ij(n) = ∑ W·Y_kl(n-1) (2)
U_ij(n) = F_ij(n)·(1 + β·L_ij(n)) (3)
θ_ij(n) = exp(-α_θ)·θ_ij(n-1) + V_θ·Y_ij(n-1) (4)
Y_ij(n) = { 1, if U_ij(n) ≥ θ_ij(n); 0, if U_ij(n) < θ_ij(n) } (5)
O_ij(n) = β·F_ij(n)×L_ij(n) (6)
Q_ij(n) = ∑_{1}^{n} [ x·F_ij(n)/θ_ij(n) + y·O_ij(n)/θ_ij(n) ] (7)
where, for a given neuron, the parameters have the following meanings: n: loop iteration count, in [1, +∞); i and j: coordinates of the neuron; k and l: coordinates of the neurons surrounding the neuron at (i, j); W: internal connection matrix; F_ij: feedback input; I_ij: external input signal; U_ij: internal activity; Y_ij: pulse output, 0 or 1; θ_ij: dynamic threshold, in [0, +∞); Q_ij: weighted intensity; α_θ: threshold decay coefficient; V_θ: threshold amplification coefficient; β: linking coefficient; x and y: intensity-formula coefficients; L_ij: linking input.
As shown in Figure 8, the weighted-intensity PCNN model remains a single-layer two-dimensional neural network; Y1, Y2, ..., Yn in the figure denote the pulse outputs of the neurons adjacent to the given neuron.
A neuron in the model receives the external input I_ij and, at the same time, the linking input L_ij(n) from the surrounding neurons; these differ between neurons and change over time, as does the corresponding dynamic threshold θ_ij(n). The values F_ij(n)/θ_ij(n) and O_ij(n)/θ_ij(n) therefore also differ; they are, respectively, the spontaneous pulse firing intensity and the coupled pulse firing intensity of the neuron. As Table 1 shows for recognition experiments on the YALE face database, the recognition rates obtained when each of the two is used alone as feature data are different; the weighted-intensity row of the table gives the recognition rate obtained when O_ij(n) in the feature is replaced by (F_ij(n)+O_ij(n))/θ_ij(n). It follows that the weighted intensity can serve as face feature data and that different weights can be assigned to the spontaneous and coupled pulse firing intensities; compared with the binary output Y_ij(n) of the model, it describes image features in a more refined way.
Table 1. Comparison of recognition results for the above pulse firing intensities
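A minimal numpy sketch of one run of the weighted-intensity PCNN of equations (1)-(7) might look as follows; the default parameter values and the initial threshold are illustrative assumptions, not the tuned values obtained by the grid search of document [1]:

```python
import numpy as np
from scipy.ndimage import convolve

# Internal connection matrix W: reciprocal of neuron distance (see the
# parameter-setting discussion in the detailed description below).
W = np.array([[0.7, 1.0, 0.7],
              [1.0, 0.0, 1.0],
              [0.7, 1.0, 0.7]])

def weighted_intensity_pcnn(img, n_iter=10, beta=0.2, alpha_theta=0.3,
                            V_theta=256.0, x=1.0, y=1.0):
    """Weighted-intensity PCNN, equations (1)-(7): returns the
    weighted-intensity matrix Q accumulated over n_iter iterations."""
    F = img.astype(float)              # (1) feedback input = stimulus
    Y = np.zeros_like(F)               # pulse output of previous step
    theta = np.full_like(F, V_theta)   # dynamic threshold, large start
    Q = np.zeros_like(F)
    for _ in range(n_iter):
        L = convolve(Y, W, mode="constant")                 # (2)
        U = F * (1.0 + beta * L)                            # (3)
        theta = np.exp(-alpha_theta) * theta + V_theta * Y  # (4)
        Y = (U >= theta).astype(float)                      # (5)
        O = beta * F * L                                    # (6)
        Q += x * F / theta + y * O / theta                  # (7)
    return Q
```

With x = y = 1, formula (7) reduces to accumulating (F_ij(n) + O_ij(n))/θ_ij(n), which corresponds to the weighted-intensity case of Table 1.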
(2) test phase
a) Remove the training images from the face database; the remainder are test images. Each person has (B-N) test images, so the A people have A*(B-N) test images in total. Take any one test image and obtain the weighted-intensity matrix of each of its blocks by the same method as in the training stage of step (1), i.e. obtain three weighted-intensity matrices:
QD_{upper,m}
QD_{middle,m}
QD_{lower,m}
where QD denotes a weighted-intensity matrix, the subscript m denotes any one of the A*(B-N) test images, and upper/middle/lower denotes the block;
b) Calculate the weight of each block of this test image (the same test image as described in step a)) by the block-weight calculation method, i.e. obtain three weights:
E_{upper,m}
E_{middle,m}
E_{lower,m}
where E is the weight of a block, the subscript m denotes any one of the A*(B-N) test images, and upper/middle/lower denotes the block;
The block-weight calculation method is as follows:
LH(i,j) = H(F(i,j)_r) (8)
F(i,j)_r = { I(a,b) | a ∈ [i-r, i+r], b ∈ [j-r, j+r] } (9)
CM_cd = (1/(p×q)) ∑_{i=1}^{p} ∑_{j=1}^{q} LH(i,j) (10)
where H(·) is the image entropy function; F(i,j)_r is the sub-image inside a sliding variable window centered at (i,j); I(a,b) denotes a pixel inside F(i,j)_r; r is the radius of the sliding variable window; (i,j) is the position of a pixel; p and q are the horizontal and vertical lengths of the block image; cd denotes any one of the upper, middle and lower blocks; and CM is the local feature contribution degree, which is adopted as the block weight E. As can be seen from Figures 10-11, the local feature contribution degree CM of formula (10) follows the same variation pattern as the recognition rate of the corresponding part, and statistical analysis shows that the two are highly correlated; CM is therefore used as the block weight E.
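A sketch of this block-weight computation, under stated assumptions (8-bit grayscale blocks, base-2 Shannon entropy for H(·), edge padding at the borders, and a default window radius r, none of which are fixed by the text above):

```python
import numpy as np

def local_entropy(patch: np.ndarray) -> float:
    """Image entropy H(.) of formula (8): Shannon entropy of the
    gray-level histogram of one window."""
    hist = np.bincount(patch.ravel().astype(np.uint8), minlength=256)
    p = hist[hist > 0] / patch.size
    return float(-np.sum(p * np.log2(p)))

def block_weight(block: np.ndarray, r: int = 2) -> float:
    """Local feature contribution degree CM of formula (10), used as the
    block weight E: the mean of LH(i, j) over all p*q pixel positions,
    each LH computed in a sliding window of radius r (formulas (8)-(9))."""
    p, q = block.shape
    padded = np.pad(block, r, mode="edge")
    lh = np.empty((p, q))
    for i in range(p):
        for j in range(q):
            lh[i, j] = local_entropy(padded[i:i + 2 * r + 1,
                                            j:j + 2 * r + 1])
    return float(lh.mean())
```

Applied to the upper, middle and lower blocks of a test image, this yields the three weights E_{upper,m}, E_{middle,m} and E_{lower,m} of step b).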
c) Calculate the cosine distance between the weighted-intensity matrix of each block of this test image (the same test image as in steps a) and b)) and the feature matrix of the corresponding block of each person from step (1), i.e. compute the cosine similarity of the two, obtaining 3A cosine similarities:
(S_{upper,m,1}, S_{upper,m,2}, ..., S_{upper,m,A})
(S_{middle,m,1}, S_{middle,m,2}, ..., S_{middle,m,A})
(S_{lower,m,1}, S_{lower,m,2}, ..., S_{lower,m,A})
where S denotes cosine similarity, the subscript m again denotes any one of the A*(B-N) test images, the final numeric subscript denotes a person from 1 to A, and upper/middle/lower denotes the block;
d) Multiply the cosine similarity obtained for each block of this test image against each person by the weight corresponding to that block, i.e.
(S_{upper,m,1}*E_{upper,m}, S_{upper,m,2}*E_{upper,m}, ..., S_{upper,m,A}*E_{upper,m})
(S_{middle,m,1}*E_{middle,m}, S_{middle,m,2}*E_{middle,m}, ..., S_{middle,m,A}*E_{middle,m})
(S_{lower,m,1}*E_{lower,m}, S_{lower,m,2}*E_{lower,m}, ..., S_{lower,m,A}*E_{lower,m})
then add the corresponding terms, i.e.
S_{m,1} = S_{upper,m,1}*E_{upper,m} + S_{middle,m,1}*E_{middle,m} + S_{lower,m,1}*E_{lower,m}
S_{m,2} = S_{upper,m,2}*E_{upper,m} + S_{middle,m,2}*E_{middle,m} + S_{lower,m,2}*E_{lower,m}
...
S_{m,A} = S_{upper,m,A}*E_{upper,m} + S_{middle,m,A}*E_{middle,m} + S_{lower,m,A}*E_{lower,m}
e) Take the person corresponding to the maximum of (S_{m,1}, S_{m,2}, ..., S_{m,A}) as the recognition result for this test image.
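Steps a)-e) combine into the following matching sketch, reusing the `split_face_blocks`, `weighted_intensity_pcnn` and `block_weight` sketches above (all names illustrative; `features` is assumed to hold each person's (C_upper, C_middle, C_lower) from the training stage):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two matrices, flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(test_img, features, r: int = 2) -> int:
    """Return the index of the person whose weighted similarity sum
    S_m is largest (steps a)-e))."""
    blocks = split_face_blocks(test_img)
    QD = [weighted_intensity_pcnn(b) for b in blocks]  # step a)
    E = [block_weight(b, r) for b in blocks]           # step b)
    scores = [
        sum(E[b] * cosine_similarity(QD[b], C[b]) for b in range(3))
        for C in features                              # steps c)-d)
    ]
    return int(np.argmax(scores))                      # step e)
```

Since the same three weights multiply every person's similarities, any common scaling of E leaves the argmax in step e) unchanged.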
Most existing PCNN-based face recognition methods use some information from the model's intrinsic binary output as face feature data; the feature data are not refined enough, the whole face image is processed indiscriminately with one set of parameters, and the differences between the parts of a face are ignored. On this basis, the present invention proposes a block-based face recognition method based on a weighted-intensity PCNN model. On the basis of a simplified PCNN model, the weighted-intensity PCNN model is proposed, the concepts of spontaneous pulse firing intensity, coupled pulse firing intensity and weighted intensity are introduced, and the output of the model is refined. Block-based recognition is adopted at the same time: before recognition, the face image is partitioned according to the differences in gray-level distribution and in local recognition rate of its parts; during recognition, the block weights are set adaptively according to the block images, and the model parameters can be set according to the content of each block image (e.g. by the improved grid parameter optimization of document [1]); each block is then recognized separately based on these parameters. The final recognition result of a face image integrates the recognition results of all blocks.
Effect experiments:
1. Block-based face recognition experiment
After the block recognition strategy is adopted, the recognition rate on each face database is improved to some extent compared with face recognition methods that do not use the block strategy; see Tables 2 and 3.
Table 2. Block-based face recognition experiment on the YALE face database
Table 3. Block-based face recognition experiment on the ORL face database
2. Face recognition experiment based on the weighted-intensity PCNN model
After the face recognition method based on the weighted-intensity PCNN model is adopted, the recognition rate on each face database is improved to some extent compared with face recognition based on the unweighted-intensity PCNN model; see Tables 4 and 5.
Table 4. Face recognition experiment with the weighted-intensity PCNN model on the YALE face database
Table 5. Face recognition experiment with the weighted-intensity PCNN model on the ORL face database
3. Block-based face recognition experiment based on the weighted-intensity PCNN model
The experimental results on the YALE face database are shown in Figure 12: the recognition rate of the present algorithm when N takes 3, 4 and 5 is higher than that of face recognition methods based on ICA and PCA, with the largest improvement when N takes 3.
The experimental results on the ORL face database are shown in Figure 13 (N takes 5): the recognition rate on the ORL face database is likewise improved.
Brief description of the drawings
Fig. 1 is a face image;
Fig. 2 is the overall gray-level distribution of the face in Fig. 1;
Fig. 3 is the gray-level histogram of the eyes (the horizontal axis is the gray-level interval of the eye region in Fig. 1, and the vertical axis is the number of pixels with the same gray value);
Fig. 4 is the gray-level histogram of the nose (the horizontal axis is the gray-level interval of the nose region in Fig. 1, and the vertical axis is the number of pixels with the same gray value);
Fig. 5 is the gray-level histogram of the mouth (the horizontal axis is the gray-level interval of the mouth region in Fig. 1, and the vertical axis is the number of pixels with the same gray value);
Fig. 6 shows the gray-level distributions and corresponding recognition rates of different face regions;
Fig. 7 is a schematic of computing, in step (1), the average weighted-intensity matrix of any one person's upper block;
Fig. 8 is a schematic of the weighted-intensity PCNN model;
Fig. 9 is a schematic of computing, in step (2) a), the weighted-intensity matrix of each block of any one test image;
Fig. 10 shows the variation of the local feature contribution degree and the local recognition rate;
Fig. 11 is the fitted line chart of the local feature contribution degree and the local recognition rate;
Fig. 12 shows the experimental results on the YALE face database;
Fig. 13 shows the experimental results on the ORL face database.
Detailed description of embodiments
The block-based face recognition method based on the weighted-intensity PCNN model is realized by the following steps:
(1) training stage
Build, or select an existing, face database containing A people with B face images each, i.e. A*B face images in total. Take each person's first N face images (N ≤ B/2) from the database as training images. Partition every training image horizontally according to the eyes, nose and mouth into upper, middle and lower blocks corresponding respectively to the eyes, the nose and the mouth. Input the three block images of every face image of every person into the weighted-intensity PCNN model; each block yields one weighted-intensity matrix, so A*N*3 weighted-intensity matrices are obtained, corresponding respectively to each block of each training image of each person. Then, for each person, take the average of the N weighted-intensity matrices of the upper, middle and lower blocks respectively, i.e. the average weighted-intensity matrices, and record these values as the feature matrices of each block of each person. The training stage thus yields 3A feature matrices (average weighted-intensity matrices) in total, namely:
(C_{upper,1}, C_{upper,2}, ..., C_{upper,A})
(C_{middle,1}, C_{middle,2}, ..., C_{middle,A})
(C_{lower,1}, C_{lower,2}, ..., C_{lower,A})
where C denotes a feature matrix, the subscript upper/middle/lower denotes the block, and the numeric subscript denotes the person;
The mathematical model of the weighted-intensity PCNN model is as follows:
F_ij(n) = I_ij (1)
L_ij(n) = ∑ W·Y_kl(n-1) (2)
U_ij(n) = F_ij(n)·(1 + β·L_ij(n)) (3)
θ_ij(n) = exp(-α_θ)·θ_ij(n-1) + V_θ·Y_ij(n-1) (4)
Y_ij(n) = { 1, if U_ij(n) ≥ θ_ij(n); 0, if U_ij(n) < θ_ij(n) } (5)
O_ij(n) = β·F_ij(n)×L_ij(n) (6)
Q_ij(n) = ∑_{1}^{n} [ x·F_ij(n)/θ_ij(n) + y·O_ij(n)/θ_ij(n) ] (7)
where, for a given neuron, the parameters have the following meanings: n: loop iteration count, in [1, +∞); i and j: coordinates of the neuron; k and l: coordinates of the neurons surrounding the neuron at (i, j); W: internal connection matrix; F_ij: feedback input; I_ij: external input signal; U_ij: internal activity; Y_ij: pulse output, 0 or 1; θ_ij: dynamic threshold, in [0, +∞); Q_ij: weighted intensity; α_θ: threshold decay coefficient; V_θ: threshold amplification coefficient; β: linking coefficient; x and y: intensity-formula coefficients; L_ij: linking input. The internal connection matrix W, the threshold decay coefficient α_θ, the threshold amplification coefficient V_θ, the linking coefficient β, and the intensity-formula coefficients x and y can be determined, in a manner known to those skilled in the art, at least by the grid-optimization method of document [1], specifically: (1) W is represented by the reciprocal of the distance between each surrounding neuron and the central neuron, i.e. W = [0.7 1 0.7; 1 0 1; 0.7 1 0.7]; the threshold amplification coefficient V_θ must be given a large initial value, and to avoid wasted computation it is appropriately taken as an integer greater than the maximum gray value of the face image. (2) Assume that the two weights x and y in formula (7) are both 1; the only parameters then left to set are β and α_θ, which are obtained with the grid-search optimization algorithm of document [1] (the application result used for selection is the recognition accuracy obtained on the corresponding test set by the block-based face recognition method based on the weighted-intensity PCNN model). (3) Substitute the obtained β and α_θ into the model, and use the grid-search optimization algorithm of document [1] to optimize the two weights x and y in formula (7), obtaining x and y.
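A sketch of stage (2) of this procedure, under stated assumptions (the grid ranges and step size are illustrative, and `recognition_accuracy` is a hypothetical callback that runs the block-based recognition on the corresponding test set with the given parameters and returns its accuracy; document [1] describes the improved search itself):

```python
import numpy as np
from itertools import product

def grid_search_beta_alpha(recognition_accuracy,
                           betas=np.arange(0.1, 1.01, 0.1),
                           alphas=np.arange(0.1, 1.01, 0.1)):
    """With x = y = 1 fixed, scan a (beta, alpha_theta) grid and keep
    the pair with the highest recognition accuracy on the test set."""
    best_pair, best_acc = None, -1.0
    for beta, alpha_theta in product(betas, alphas):
        acc = recognition_accuracy(beta=beta, alpha_theta=alpha_theta,
                                   x=1.0, y=1.0)
        if acc > best_acc:
            best_pair, best_acc = (beta, alpha_theta), acc
    return best_pair, best_acc
```

Stage (3) repeats the same scan over (x, y) with the chosen β and α_θ substituted into the model.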
(2) test phase
a) Remove the training images from the face database; the remainder are test images. Each person has (B-N) test images, so the A people have A*(B-N) test images in total. Take any one test image and obtain the weighted-intensity matrix of each of its blocks by the same method as in the training stage of step (1), i.e. obtain three weighted-intensity matrices:
QD_{upper,m}
QD_{middle,m}
QD_{lower,m}
where QD denotes a weighted-intensity matrix, the subscript m denotes any one of the A*(B-N) test images, and upper/middle/lower denotes the block;
b) Calculate the weight of each block of this test image (the same test image as described in step a)) by the block-weight calculation method, i.e. obtain three weights:
E_{upper,m}
E_{middle,m}
E_{lower,m}
where E is the weight of a block, the subscript m denotes any one of the A*(B-N) test images, and upper/middle/lower denotes the block;
The block-weight calculation method is as follows:
LH(i,j) = H(F(i,j)_r) (8)
F(i,j)_r = { I(a,b) | a ∈ [i-r, i+r], b ∈ [j-r, j+r] } (9)
CM_cd = (1/(p×q)) ∑_{i=1}^{p} ∑_{j=1}^{q} LH(i,j) (10)
where H(·) is the image entropy function; F(i,j)_r is the sub-image inside a sliding variable window centered at (i,j); I(a,b) denotes a pixel inside F(i,j)_r; r is the radius of the sliding variable window; (i,j) is the position of a pixel; p and q are the horizontal and vertical lengths of the block image; cd denotes any one of the upper, middle and lower blocks; and CM is the local feature contribution degree, which is adopted as the block weight E;
c) Calculate the cosine distance between the weighted-intensity matrix of each block of this test image (the same test image as in steps a) and b)) and the feature matrix of the corresponding block of each person from step (1), i.e. compute the cosine similarity of the two, obtaining 3A cosine similarities:
(S_{upper,m,1}, S_{upper,m,2}, ..., S_{upper,m,A})
(S_{middle,m,1}, S_{middle,m,2}, ..., S_{middle,m,A})
(S_{lower,m,1}, S_{lower,m,2}, ..., S_{lower,m,A})
where S denotes cosine similarity, the subscript m again denotes any one of the A*(B-N) test images, the final numeric subscript denotes a person from 1 to A, and upper/middle/lower denotes the block; for example, S_{upper,m,1} denotes the cosine similarity between the upper block of this test image and the upper block of the first person, S_{upper,m,1} = sim(QD_{upper,m}, C_{upper,1});
d) Multiply the cosine similarity obtained for each block of this test image against each person by the weight corresponding to that block, i.e.
(S_{upper,m,1}*E_{upper,m}, S_{upper,m,2}*E_{upper,m}, ..., S_{upper,m,A}*E_{upper,m})
(S_{middle,m,1}*E_{middle,m}, S_{middle,m,2}*E_{middle,m}, ..., S_{middle,m,A}*E_{middle,m})
(S_{lower,m,1}*E_{lower,m}, S_{lower,m,2}*E_{lower,m}, ..., S_{lower,m,A}*E_{lower,m})
then add the corresponding terms, i.e.
S_{m,1} = S_{upper,m,1}*E_{upper,m} + S_{middle,m,1}*E_{middle,m} + S_{lower,m,1}*E_{lower,m}
S_{m,2} = S_{upper,m,2}*E_{upper,m} + S_{middle,m,2}*E_{middle,m} + S_{lower,m,2}*E_{lower,m}
...
S_{m,A} = S_{upper,m,A}*E_{upper,m} + S_{middle,m,A}*E_{middle,m} + S_{lower,m,A}*E_{lower,m}
e) Take the person corresponding to the maximum of (S_{m,1}, S_{m,2}, ..., S_{m,A}) as the recognition result for this test image.

Claims (1)

1. A block-based face recognition method based on a weighted-intensity PCNN model, characterized in that it is realized by the following steps:
(1) training stage
Build, or select an existing, face database containing A people with B face images each, i.e. A*B face images in total; take each person's first N face images (N ≤ B/2) from the database as training images; partition every training image horizontally according to the eyes, nose and mouth into upper, middle and lower blocks corresponding respectively to the eyes, the nose and the mouth; input the three block images of every face image of every person into the weighted-intensity PCNN model; each block yields one weighted-intensity matrix, so A*N*3 weighted-intensity matrices are obtained, corresponding respectively to each block of each training image of each person; then, for each person, take the average of the N weighted-intensity matrices of the upper, middle and lower blocks respectively, i.e. the average weighted-intensity matrices, and record these values as the feature matrices of each block of each person; the training stage thus yields 3A feature matrices in total, namely:
(C_{upper,1}, C_{upper,2}, ..., C_{upper,A})
(C_{middle,1}, C_{middle,2}, ..., C_{middle,A})
(C_{lower,1}, C_{lower,2}, ..., C_{lower,A})
where C denotes a feature matrix, the subscript upper/middle/lower denotes the block, and the numeric subscript denotes the person;
the mathematical model of the weighted-intensity PCNN model is as follows:
F_ij(n) = I_ij (1)
L_ij(n) = ∑ W·Y_kl(n-1) (2)
U_ij(n) = F_ij(n)·(1 + β·L_ij(n)) (3)
θ_ij(n) = exp(-α_θ)·θ_ij(n-1) + V_θ·Y_ij(n-1) (4)
Y_ij(n) = { 1, if U_ij(n) ≥ θ_ij(n); 0, if U_ij(n) < θ_ij(n) } (5)
O_ij(n) = β·F_ij(n)×L_ij(n) (6)
Q_ij(n) = ∑_{1}^{n} [ x·F_ij(n)/θ_ij(n) + y·O_ij(n)/θ_ij(n) ] (7)
where, for a given neuron, the parameters have the following meanings: n: loop iteration count, in [1, +∞); i and j: coordinates of the neuron; k and l: coordinates of the neurons surrounding the neuron at (i, j); W: internal connection matrix; F_ij: feedback input; I_ij: external input signal; U_ij: internal activity; Y_ij: pulse output, 0 or 1; θ_ij: dynamic threshold, in [0, +∞); Q_ij: weighted intensity; α_θ: threshold decay coefficient; V_θ: threshold amplification coefficient; β: linking coefficient; x and y: intensity-formula coefficients; L_ij: linking input;
(2) test phase
a) remove the training images from the face database; the remainder are test images; each person has (B-N) test images, so the A people have A*(B-N) test images in total; take any one test image and obtain the weighted-intensity matrix of each of its blocks by the same method as in the training stage of step (1), i.e. obtain three weighted-intensity matrices:
QD_{upper,m}
QD_{middle,m}
QD_{lower,m}
where QD denotes a weighted-intensity matrix, the subscript m denotes any one of the A*(B-N) test images, and upper/middle/lower denotes the block;
b) calculate the weight of each block of this test image by the block-weight calculation method, i.e. obtain three weights:
E_{upper,m}
E_{middle,m}
E_{lower,m}
where E is the weight of a block, the subscript m denotes any one of the A*(B-N) test images, and upper/middle/lower denotes the block;
the block-weight calculation method is as follows:
LH(i,j) = H(F(i,j)_r) (8)
F(i,j)_r = { I(a,b) | a ∈ [i-r, i+r], b ∈ [j-r, j+r] } (9)
CM_cd = (1/(p×q)) ∑_{i=1}^{p} ∑_{j=1}^{q} LH(i,j) (10)
where H(·) is the image entropy function; F(i,j)_r is the sub-image inside a sliding variable window centered at (i,j); I(a,b) denotes a pixel inside F(i,j)_r; r is the radius of the sliding variable window; (i,j) is the position of a pixel; p and q are the horizontal and vertical lengths of the block image; cd denotes any one of the upper, middle and lower blocks; and CM is the local feature contribution degree, which is adopted as the block weight E;
c) calculate the cosine distance between the weighted-intensity matrix of each block of this test image and the feature matrix of the corresponding block of each person from step (1), i.e. compute the cosine similarity of the two, obtaining 3A cosine similarities:
(S_{upper,m,1}, S_{upper,m,2}, ..., S_{upper,m,A})
(S_{middle,m,1}, S_{middle,m,2}, ..., S_{middle,m,A})
(S_{lower,m,1}, S_{lower,m,2}, ..., S_{lower,m,A})
where S denotes cosine similarity, the subscript m again denotes any one of the A*(B-N) test images, the final numeric subscript denotes a person from 1 to A, and upper/middle/lower denotes the block;
d) multiply the cosine similarity obtained for each block of this test image against each person by the weight corresponding to that block, i.e.
(S_{upper,m,1}*E_{upper,m}, S_{upper,m,2}*E_{upper,m}, ..., S_{upper,m,A}*E_{upper,m})
(S_{middle,m,1}*E_{middle,m}, S_{middle,m,2}*E_{middle,m}, ..., S_{middle,m,A}*E_{middle,m})
(S_{lower,m,1}*E_{lower,m}, S_{lower,m,2}*E_{lower,m}, ..., S_{lower,m,A}*E_{lower,m})
then add the corresponding terms, i.e.
S_{m,1} = S_{upper,m,1}*E_{upper,m} + S_{middle,m,1}*E_{middle,m} + S_{lower,m,1}*E_{lower,m}
S_{m,2} = S_{upper,m,2}*E_{upper,m} + S_{middle,m,2}*E_{middle,m} + S_{lower,m,2}*E_{lower,m}
...
S_{m,A} = S_{upper,m,A}*E_{upper,m} + S_{middle,m,A}*E_{middle,m} + S_{lower,m,A}*E_{lower,m}
e) take the person corresponding to the maximum of (S_{m,1}, S_{m,2}, ..., S_{m,A}) as the recognition result for this test image.
CN201710119765.0A 2017-03-02 2017-03-02 Partitioning human face recognition method based on weighted intensity PCNN model Active CN106709480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710119765.0A CN106709480B (en) 2017-03-02 2017-03-02 Partitioning human face recognition method based on weighted intensity PCNN model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710119765.0A CN106709480B (en) 2017-03-02 2017-03-02 Partitioning human face recognition method based on weighted intensity PCNN model

Publications (2)

Publication Number Publication Date
CN106709480A true CN106709480A (en) 2017-05-24
CN106709480B CN106709480B (en) 2018-07-10

Family

ID=58917745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710119765.0A Active CN106709480B (en) 2017-03-02 2017-03-02 Partitioning human face recognition method based on weighted intensity PCNN model

Country Status (1)

Country Link
CN (1) CN106709480B (en)


Citations (4)

* Cited by examiner, † Cited by third party

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281598A (en) * 2008-05-23 2008-10-08 清华大学 Method for recognizing human face based on amalgamation of multicomponent and multiple characteristics
CN101546430A (en) * 2009-04-30 2009-09-30 上海大学 Edge extracting method based on simplified pulse coupled neural network
CN103345624A (en) * 2013-07-15 2013-10-09 武汉大学 Weighing characteristic face recognition method for multichannel pulse coupling neural network
CN103605972A (en) * 2013-12-10 2014-02-26 康江科技(北京)有限责任公司 Non-restricted environment face verification method based on block depth neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
常莎 et al.: "基于强度PCNN的静态图像人脸识别" (Static-image face recognition based on intensity PCNN), 《太原理工大学学报》 (Journal of Taiyuan University of Technology) *
李瀚 et al.: "基于加权强度PCNN模型的分块人脸识别" (Block-based face recognition based on a weighted-intensity PCNN model), 《计算机工程与设计》 (Computer Engineering and Design) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107894834A (en) * 2017-11-09 2018-04-10 上海交通大学 Gesture identification method and system are controlled under augmented reality environment
CN107894834B (en) * 2017-11-09 2021-04-02 上海交通大学 Control gesture recognition method and system in augmented reality environment
WO2023029702A1 (en) * 2021-09-06 2023-03-09 京东科技信息技术有限公司 Method and apparatus for verifying image
CN116188975A (en) * 2023-01-03 2023-05-30 国网江西省电力有限公司电力科学研究院 Power equipment fault identification method and system based on air-ground visual angle fusion

Also Published As

Publication number Publication date
CN106709480B (en) 2018-07-10

Similar Documents

Publication Publication Date Title
CN106169081B (en) A kind of image classification and processing method based on different illumination
CN103996018B (en) Face identification method based on 4DLBP
CN104537393B (en) A kind of traffic sign recognition method based on multiresolution convolutional neural networks
CN104392463B (en) Image salient region detection method based on joint sparse multi-scale fusion
CN104463209B (en) Method for recognizing digital code on PCB based on BP neural network
CN101807245B (en) Artificial neural network-based multi-source gait feature extraction and identification method
CN107977671A (en) A kind of tongue picture sorting technique based on multitask convolutional neural networks
CN107292250A (en) A kind of gait recognition method based on deep neural network
CN106326874A (en) Method and device for recognizing iris in human eye images
CN106446942A (en) Crop disease identification method based on incremental learning
CN104103033B (en) View synthesis method
CN101630364A (en) Method for gait information processing and identity identification based on fusion feature
CN101551853A (en) Human ear detection method under complex static color background
CN108665005A (en) A method of it is improved based on CNN image recognition performances using DCGAN
CN105718889A (en) Human face identity recognition method based on GB(2D)2PCANet depth convolution model
CN107563389A (en) A kind of corps diseases recognition methods based on deep learning
CN106529395B (en) Signature image identification method based on depth confidence network and k mean cluster
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
CN108171318A (en) One kind is based on the convolutional neural networks integrated approach of simulated annealing-Gaussian function
CN107729890A (en) Face identification method based on LBP and deep learning
CN106709480A (en) Partitioning human face recognition method based on weighted intensity PCNN model
CN105913463A (en) Position prior principle-based texture-color characteristic overall saliency detection method
CN105809173A (en) Bionic vision transformation-based image RSTN (rotation, scaling, translation and noise) invariant attributive feature extraction and recognition method
CN109522865A (en) A kind of characteristic weighing fusion face identification method based on deep neural network
CN104268587B (en) False fingerprint detection method based on finger wave conversion and SVM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220317

Address after: 030000 room 1626, floor 15-20, Berlin International Business Center, No. 85, south section of Binhe West Road, Wanbailin District, Taiyuan City, Shanxi Province

Patentee after: Gude Intelligent Technology Research Institute (Shanxi) Co.,Ltd.

Address before: 030024 No. 79 West Main Street, Taiyuan, Shanxi, Yingze

Patentee before: Taiyuan University of Technology