CN111311809A - Intelligent access control system based on multi-biological-feature fusion - Google Patents

Intelligent access control system based on multi-biological-feature fusion

Info

Publication number
CN111311809A
Authority: CN (China)
Prior art keywords: information, face, feature, face image, vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010105758.7A
Other languages
Chinese (zh)
Inventor
吴键 (Wu Jian)
张潇 (Zhang Xiao)
马皓 (Ma Hao)
刘建成 (Liu Jiancheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202010105758.7A
Publication of CN111311809A
Legal status: Pending


Classifications

    • G06F18/2132 — Pattern recognition: feature extraction based on discrimination criteria, e.g. discriminant analysis
    • G06F18/2135 — Pattern recognition: feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F18/217 — Pattern recognition: validation, performance evaluation, active pattern learning techniques
    • G06F18/253 — Pattern recognition: fusion techniques of extracted features
    • G06N3/045 — Neural networks: combinations of networks
    • G06N3/084 — Neural networks: learning by backpropagation, e.g. using gradient descent
    • G06V40/168 — Recognition of human faces: feature extraction and face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

An intelligent access control system based on multi-biometric fusion comprises an image sensing collector, a sound sensing collector, a data processor and access control equipment. The image sensing collector senses a target, collects face image information and sends it to the data processor through a wireless communication module; the sound sensing collector collects sound information and sends it to the data processor in the same way. The data processor processes the received face image and voice information, fuses their feature information, recognizes, stores and trains the feature information with the aid of a deep learning algorithm, and outputs a judgment result to the access control equipment. The access control equipment receives the judgment result sent by the data processor and judges whether the door responds. The invention effectively compensates for the shortcomings of single-modality biometric recognition and improves the security of the access control system.

Description

Intelligent access control system based on multi-biological-feature fusion
Technical Field
The invention belongs to the technical field of biometric fusion, and particularly relates to an intelligent access control system based on multi-biometric fusion.
Background
With the rapid development of information technology in China, requirements in the security field keep rising. Departments with extremely high security needs, such as government, enterprise and military facilities (for example, secure rooms), impose even stricter demands. Current access control systems fall mainly into two categories: traditional systems and biometric identification systems. Traditional systems authenticate with passwords or IC cards, which is inconvenient and carries the risks of forgotten cards, forgotten passwords, theft, and copying of cards or passwords. As technology develops, traditional access control systems can no longer meet current needs, and a batch of biometric access control systems that authenticate with human biological features has gradually emerged. Compared with traditional systems, biometric access control systems improve security and reliability, but present systems still have the following problems:
1. The biometric identification mode is single. Common biometric access control systems use fingerprint, palm-print and vein, face, iris or voice recognition. These systems verify only a single biometric and confirm a person's identity once that one check succeeds, so the probability of misjudgment is relatively high.
2. Multiple biometrics in the same system are authenticated separately, without fusion discrimination. In such a system several biometrics are used for authentication, but each method is independent (for example, only the face or only the voice is discriminated) and the decision is made from each individual result. Because the various kinds of biometric information are never fused for judgment, the recognition accuracy carries a large error, and different biometrics of the same person may yield inconsistent authentication results.
3. Multiple biometrics are fused at the feature layer or the data layer, losing too much biometric information. Choosing these fusion layers discards part of the carried biometric information and deviates to some extent from the original information features, so the authentication results differ considerably.
4. The biometric access control systems of large departments only run an authentication process and have no system learning function, so the recognition rate cannot improve as authentication feature information accumulates.
Disclosure of Invention
The invention aims to provide an intelligent access control system based on multi-biometric fusion that improves the security and accuracy of access control.
The technical solution for realizing the purpose of the invention is as follows:
an intelligent access control system based on multi-biological characteristic fusion comprises an image sensing collector, a sound sensing collector, a data processor and access control equipment;
the image sensing collector and the sound sensing collector are respectively used for sensing a target, collecting face image information and sound information, and sending the collected face image information and sound information to the data processor;
the data processor is used for processing the received face image and voice information, fusing the feature information of the face image and the voice information, recognizing, storing and training the feature information by assisting a deep learning algorithm, and outputting a judgment result to the access control equipment;
The access control equipment receives the judgment result sent by the data processor and judges whether the door responds: if the judgment succeeds, the door lock is opened; if it fails, the biometric information is collected again and the data processor performs biometric authentication again; when recognition exceeds the limited number of times, an alarm in the access control equipment sounds.
Compared with the prior art, the invention has the following remarkable advantages:
1. Face recognition in the access control system extracts information from the face image with two algorithms used jointly, principal component analysis and linear discriminant analysis. Because principal component analysis is sensitive to image changes while linear discriminant analysis is sensitive to changes in facial expression, joint discrimination by the two algorithms improves the face recognition rate.
2. Feature information is fused at the decision layer: fusing at the decision layer retains the characteristics of the original biometric information to the greatest extent while avoiding the computational complexity of bottom-layer parameter fusion and feature fusion, which simplifies the application of the fusion algorithm and improves the efficiency of the system.
3. A deep learning algorithm is added, improving the system recognition rate and accuracy: deep learning is integrated into the face recognition and voiceprint recognition algorithms, so each authentication is also a learning pass over the feature information; by extracting and learning a large amount of feature information through the neural network, high authentication accuracy is achieved.
The present invention is described in further detail below with reference to the attached drawings.
Drawings
FIG. 1 is a schematic structural diagram of the present invention.
Fig. 2 is a schematic diagram of the system operation process of the present invention.
FIG. 3 is a schematic diagram of a processing procedure of the face recognition model of the present invention.
FIG. 4 is a schematic diagram of a voiceprint recognition model processing procedure according to the present invention.
FIG. 5 is a schematic diagram of a feature fusion service process according to the present invention.
Detailed Description
As shown in fig. 1, an intelligent access control system based on multi-biometric feature fusion includes an image sensing collector, a sound sensing collector, a data processor, and an access control device;
the image sensing collector is used for sensing a target and collecting face image information, the collected face image information is sent to the data processor through the wireless communication module, and the data processor analyzes and extracts features of the collected face image.
The sound sensing collector is used for collecting sound information of a target, sending the collected sound information to the data processor through the wireless communication module, and analyzing and extracting the characteristics of the collected sound information by the data processor.
The data processor is used for processing the received face image and voice information, fusing the feature information of the face image and the voice information, recognizing, storing and training the feature information by a deep learning algorithm, achieving the purpose of improving the recognition rate, and then outputting the discrimination result to the access control equipment.
The access control equipment receives the judgment result sent by the data processor and judges whether the door responds: if the judgment succeeds, the door lock is opened; if it fails, the biometric information (face image information and sound information) is collected again and the data processor performs biometric authentication again; when recognition exceeds the limited number of times, an alarm in the access control equipment sounds.
Further, the image sensing collector comprises a camera and a pyroelectric infrared sensor. The pyroelectric infrared sensor performs living-body detection; when a living target is detected, the camera collects the face image information of the target. The image sensing collector sends the collected image information to the data processor through the wireless communication module.
Preferably, the infrared image sensor adopts a CMOS image sensor, model OV9712, whose low energy consumption, low cost and high degree of integration make it suitable for this system.
Furthermore, the sound sensing collector comprises a sound sensor that collects sound and sends the collected sound information to the data processor through the wireless communication module. The sound sensor is a BR-ZS1 sound sensor; it can collect sound directly without recalibration, zeroes itself automatically in software, and truly and accurately reflects the profile of the sound vibration.
Furthermore, the data processor comprises a preprocessing module, a feature data storage module, a face recognition module, a voiceprint recognition module and a feature fusion module.
The preprocessing module preprocesses the face image and the sound information. Image interference is eliminated from the face image by graying and Gaussian filtering. The voice information is preprocessed with a Fudi (Fortemedia) FM1188 DSP chip, which applies phase inversion and time-delay shifting to the input voice information, amplifies the voice signal, and logically adds it to the input source, eliminating voice noise while preserving the feature information of the original voice.
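To make the graying and Gaussian-filtering step concrete, here is a minimal sketch using OpenCV and NumPy (the library choice and the 5×5 kernel are assumptions; the patent specifies only the two operations):

```python
import cv2
import numpy as np

def preprocess_face(image_bgr: np.ndarray) -> np.ndarray:
    """Grayscale conversion followed by Gaussian filtering, as described
    for the preprocessing module (kernel size and sigma are illustrative)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)
```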
The feature data storage module stores three kinds of information: 1) the face image information and sound information received from the preprocessing module; 2) the face feature information extracted by the face recognition module and the voice feature information extracted by the voiceprint recognition module; 3) the face feature information trained by the face recognition module and the voice feature information trained by the voiceprint recognition module.
the face recognition module is used for analyzing face image information and extracting biological characteristic information of the face image through vector transformation and image mapping in a face recognition algorithm. The face recognition algorithm mainly adopts a principal component analysis method and a linear discriminant analysis method. The two algorithms are respectively adopted for sequentially analyzing the same face image, the analyzed result values are fused and cross-calculated, the optimal solution of face feature extraction is effectively improved, meanwhile, the two algorithms are combined, a plurality of adverse factors influencing face image recognition are avoided, and the recognition rate and the accuracy are improved. The processing procedure of the face recognition module is shown in fig. 3, and the face recognition module includes a first sample training unit and a face matching unit.
The first sample training part unit is used for training and learning face feature information, mainly relies on a deep learning algorithm to carry out a large amount of feature analysis on face image information in the feature data storage module, the face image information is input into a grid for processing, an output result is calculated through hierarchical transformation, and the output result is stored in the feature data storage module and is used for comparison and matching of follow-up face feature information. The deep learning algorithm mainly adopts a deep ID2-plus algorithm and a deep convolutional neural network algorithm to carry out a large amount of analysis on the face image information and is used for continuously training a face recognition algorithm. The first sample training unit specifically works as follows:
step 1, extracting a human face image information sample α from a characteristic data storage module, inputting the human face image information sample α into a deepID2-plus algorithm and a deep convolutional neural network algorithm, and setting an ideal output value as Yp
Step 2, calculating the actual output value Y0
Y0=Fβ(…(F2(F1(αw(1))w(2))…)w(β))
Wherein F1To FβOutput vector, w, representing face image information samples after grid 1 to β layer transformation(1)To w(β)Representing a trellis convolution weight matrix of layers 1 through β.
Step 3, calculate the forward-propagation features of the face image information through the conv function in MATLAB.
The forward propagation is:

picQ = conv(picP, filter, 'valid')

where picQ is the central feature of the returned image convolution, picP is the original convolution vector, filter is the convolution filtering vector, and 'valid' is the vector convolution return type.
Step 4, calculate the error between the actual output value Y_0 and the ideal output value Y_p and perform the back-propagation convolution:

epicP = conv(epicQ, rot180(filter), 'full')

where epicP is the central feature of the image back-propagation convolution, rot180 denotes rotating the filter by 180°, 'full' is the vector convolution return type, and the prefix e marks an error quantity.
Step 5, adjust the matrix weights by error minimization, perform classification comparison, extract the face image features, store them in the feature data storage module, and proceed to the next round of training and learning. The weight adjustment formula is:

filterD = conv(epicP, rot180(epicQ), 'valid')

where filterD is the adjusted matrix weight and the remaining symbols are as in the steps above.
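The three conv calls in steps 3-5 correspond to ordinary 2-D convolutions. A sketch in Python with scipy.signal.convolve2d as a stand-in for MATLAB's conv (this mapping is an assumption, and the variable names follow the text) might look like this:

```python
import numpy as np
from scipy.signal import convolve2d

def rot180(a: np.ndarray) -> np.ndarray:
    """Rotate a 2-D array by 180 degrees (the rot180 of the text)."""
    return np.rot90(a, 2)

def forward(picP: np.ndarray, filt: np.ndarray) -> np.ndarray:
    # Step 3: central feature of the returned image convolution
    return convolve2d(picP, filt, mode="valid")

def backprop_input(epicQ: np.ndarray, filt: np.ndarray) -> np.ndarray:
    # Step 4: propagate the output error epicQ back through the filter
    return convolve2d(epicQ, rot180(filt), mode="full")

def filter_gradient(epicP: np.ndarray, epicQ: np.ndarray) -> np.ndarray:
    # Step 5: weight adjustment term filterD; the operand names follow the
    # text, though many reference implementations convolve the forward
    # activations (picP) with the output error here
    return convolve2d(epicP, rot180(epicQ), mode="valid")
```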
The face matching unit works as follows:
step 1, receiving face image information sent by a preprocessing module, and firstly analyzing a principal component analysis algorithm. Assuming that the size of a received image is M × n, connecting each column to obtain a column vector, where the size of the column vector is D ═ M × n, D is the spatial dimension of the face image, M is the total number of samples, and the face vector of the jth face image in the total number of samples M uses XjRepresenting to obtain a sample covariance matrix STWherein
Figure BDA0002388469680000059
u is the average image vector of the training sample, C represents the total class number of the sample, and T is Xs-a transposed representation of the u matrix. Selecting WoptThe function is used as an optimal mapping function, the face image is mapped to a matrix vector, and an orthogonal normalization formula is adopted to calculate STOrthogonal normalized vector UTI.e. the eigenvector of the total dispersion matrix.
Step 2, linear discriminant analysis. Assume the original image library holds N images; center the trained face samples and calculate the mean u_dp of each class of face images and the overall mean u_zp of all face images, where

u_zp = (1/N)·Σ_{i=1}^{N} Q_i

and Q_i is each original face image vector.
Step 3, form all the centered face sample images into an array matrix and solve the orthogonal basis matrix U of this matrix through the component analysis algorithm.
Step 4, project all the centered image vectors onto the generated orthogonal basis matrix U:

x̂ = U^T·x

where x̂ and x form a linear regression equation, x̂ representing the approximate projection of an image vector and x each image vector. Project all the centered means onto the orthogonal basis matrix U as well:

û = U^T·u

where û and u form a linear regression equation, û representing the approximate projection of a mean and u each centered mean. In both equations, ^T denotes the transpose of the matrix U.
Step 5, calculate the between-class scatter S_B and the within-class scatter S_W, then compute the eigenvalues and eigenvectors. Here S_I (1 ≤ I ≤ C) is an intermediate value computed from the approximate projections and used in forming S_B and S_W; the remaining symbols are as in the steps above. The eigenvalue λ and the corresponding eigenvector r are obtained from

S_B·r = λ·S_W·r

where λ is the characteristic coefficient.
step 6, sorting the eigenvectors R according to the magnitude of the eigenvalue R, taking C-1 with the largest energy as Fisher basis vectors, and projecting the original image onto the Fisher basis vectors respectively;
and 7, on the basis of the Fisher face method, performing classification and identification work according to the projection vector and the basis vector, extracting the feature information of the face image, storing the feature information in a feature data storage module, and calculating the identification rate of face identification, wherein the identification rate is used as visual feedback of the quality of the face identification algorithm.
The voiceprint recognition module analyzes the voice information and extracts its features as MFCC features. The module builds the voiceprint recognition model on Mel-frequency cepstral coefficients; the voiceprint recognition processing flow is shown in fig. 4. The voiceprint recognition module comprises a second sample training unit and a sound matching unit.
The second sample training unit performs network training on the voice information with the existing deep learning technique DCNN, stores the trained voice information in the feature data storage module, and constructs a Gaussian mixture model (GMM) based on MFCC features for recognizing the voice information. As the number of samples grows, the recognition rate of the network-trained GMM model rises, so training itself improves the recognition rate. The second sample training unit works as follows:
step 1, extracting M' sound information samples from a characteristic data storage module, and calculating Gaussian probability density weighted sum of the sound information samples
Figure BDA0002388469680000062
The formula is as follows:
Figure BDA0002388469680000063
wherein the content of the first and second substances,
Figure BDA0002388469680000064
is a random vector of a certain dimension,
Figure BDA0002388469680000065
is a gaussian probability density function of each sample,
Figure BDA0002388469680000066
is the mixing weight.
Step 2, calculate each member's density function, i.e. the Gaussian distribution function of each voice signal, through the Gaussian probability density function; this follows the standard Gaussian distribution expression with the parameters of each mixture component (its mean and covariance) substituted in.
Step 3, match and order the Gaussian distribution values obtained in step 2 against the voice time series, and calculate the likelihood P(X′|λ) of the GMM:

P(X′|λ) = Π_{t′=1}^{T′} ε_{t′}·b(x_{t′})

where T′ is the length of the speech signal, x_{t′} is the time-series sample of the signal at t′, ε_{t′} is the model weight value corresponding to the length-t′ signal, and t′ ∈ {1, 2, …, T′}.
Step 4, recognize the likelihood calculated in step 3 with the Bayes principle, decide by the maximum-likelihood principle, and match the attributes of the feature information via the best similarity value; this completes the model training.
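A minimal sketch of this training-and-decision loop, using scikit-learn's GaussianMixture as a stand-in for the GMM described above (the library, the diagonal covariance and the mixture count are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_model(mfcc_frames: np.ndarray, n_components: int = 16):
    """Fit one GMM per enrolled speaker on that speaker's MFCC frames."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    return gmm.fit(mfcc_frames)

def identify(mfcc_frames: np.ndarray, speaker_models: dict) -> str:
    """Maximum-likelihood decision of step 4: pick the model with the
    highest average log-likelihood over the observed frames."""
    scores = {name: m.score(mfcc_frames) for name, m in speaker_models.items()}
    return max(scores, key=scores.get)
```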
The working process of the sound matching unit is as follows:
step 1, the voiceprint matching unit receives the sound information S sent by the preprocessing modulenFirstly, the sound information S is processed by differencenProcessing, determining sampling period according to signal broadband and Shannon theorem, eliminating signal distortion,obtaining a processed sample Xn
Step 2, extracting the voiceprint features by using MFCC features, wherein the extraction process is as follows:
step 2.1, pre-emphasis, using a first order digital filter: h(z)=1-0.97z-1Wherein z is a transfer function H(z)Fractional coefficient of (c). Extracting useful frames through end point detection, and then adding a Hamming window to each V point signal, wherein the formula is expressed as follows:
Figure BDA0002388469680000076
wherein w(v)Representing the Hamming window value of each V-point signal.
Step 2.2, apply the discrete Fourier transform to the processed samples X_n:

X(k) = Σ_{j′=0}^{V−1} x(j′)·e^(−i2πkj′/V)

where k and j′ are the Fourier spectral indices and X(k) is the spectrum.
Step 2.3, pass the spectrum X(k) through the Mel filter bank to obtain the Mel frequency; the relation between Mel frequency and actual frequency is approximately:

Mel(f) = 2595·lg(1 + f/700)

where f is the frequency.
Step 2.4, define the frequency response of the m-th filter as H_m(k) (see the filter definition), and calculate the log spectrum S(m) output by each filter:

S(m) = ln( Σ_{k=0}^{V−1} |X(k)|²·H_m(k) ), 0 ≤ m < M′
Step 2.5, obtain the MFCC coefficients from the log spectrum S(m) by the discrete cosine transform:

C(n) = Σ_{m=0}^{M′−1} S(m)·cos(πn(m + 0.5)/M′), n = 1, 2, …, P′

where P′ is the MFCC dimension (here P′ = 12), M′ is the number of filter banks, and C(n) is the MFCC coefficient.
Step 2.6, apply emphasis processing to the MFCC coefficients C(n) for the labeling and for the high- and low-frequency characteristics respectively, then classify and extract the sound feature information.
Step 3, store the extracted sound feature information in the feature data storage module while inputting it into the GMM model for matching and training, and identify the matching result.
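Steps 2.1-2.5 can be sketched end to end in NumPy/SciPy as follows; the frame length V, the filter count, and the omission of endpoint detection are simplifying assumptions:

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal: np.ndarray, fs: int, V: int = 256, n_filters: int = 26, P: int = 12):
    """Sketch of steps 2.1-2.5: pre-emphasis, Hamming window, DFT,
    Mel filter bank, log spectrum, DCT."""
    # 2.1 pre-emphasis H(z) = 1 - 0.97 z^-1, then V-point frames + Hamming window
    x = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    frames = x[: len(x) // V * V].reshape(-1, V) * np.hamming(V)
    # 2.2 discrete Fourier transform -> power spectrum |X(k)|^2
    power = np.abs(np.fft.rfft(frames, V)) ** 2
    # 2.3 Mel scale Mel(f) = 2595 lg(1 + f/700): triangular filter bank H_m(k)
    mel = np.linspace(0, 2595 * np.log10(1 + fs / 2 / 700), n_filters + 2)
    bins = np.floor((V + 1) * 700 * (10 ** (mel / 2595) - 1) / fs).astype(int)
    fbank = np.zeros((n_filters, V // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # 2.4 log filter-bank energies S(m); 2.5 DCT keeps the first P coefficients C(n)
    S = np.log(power @ fbank.T + 1e-10)
    return dct(S, type=2, axis=1, norm="ortho")[:, 1 : P + 1]
```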
The feature fusion module adopts a support vector machine (SVM) algorithm and fuses the face and voice feature information at the decision layer. The module extracts, from the feature data storage module, the face feature information and voice feature information produced by the face recognition module and the voiceprint recognition module, uses them as the input of the SVM algorithm, establishes a decision fusion matrix, and feeds the information into the matrix for output judgment. As shown in fig. 5, the two kinds of features are fused and compared under different kernel functions, and under normalized and non-normalized variants, to obtain the best-fitting feature information with the highest matching degree. The SVM algorithm is realized by the following steps:
step 1, the feature fusion module extracts the face feature information and the sound feature information produced by the face recognition module and the voiceprint recognition module from the feature data storage module, establishes a hyperplane for the two kinds of features with the MAX-WIN algorithm, and trains q(q−1)/2 classifiers to distinguish the q training sets pairwise, the classifiers being those available in the SVM algorithm;
step 2, obtaining the one-to-one matching features according to the decision function and the constraint conditions, the decision function being chosen as

class of y′ = W′_{a′b′}·φ(x) + B′_{a′b′}

wherein class of y′ is the decision function output; the constraints are those of the MAX-WIN algorithm with 1 ≤ a′ ≤ k′, 1 ≤ b′ ≤ k′, b′ ≠ d′ and 1 ≤ d′ ≤ k′; W′ is the step-function response coefficient and B′ the step-function response constant, W′_{a′b′} and B′_{a′b′} being the response coefficient and response constant under the constraint values a′ and b′, and φ denoting the algorithm constraint function; the response constants are selected from the reference decision-function response table;
step 3, with the total number of decision functions being k′(k′−1)/2, establish the identification matrix L_{40×40};
step 4, bring the target sample into the identification matrix L_{40×40}; let the decision function between the i″-th class and the j″-th class be SVM_{i″j″}; if the sample's output for SVM_{i″j″} is 1, then L_{i″,j″} takes the value 1 and L_{j″,i″} takes the value 0; if the output is −1, then L_{j″,i″} takes the value 1 and L_{i″,j″} takes the value 0;
step 5, output the binary matrix from the values taken in step 4 and accumulate the number of 1s in each row; the row number with the maximum count is the class to which the sample belongs; classify accordingly as the feature fusion information, and store the fused feature information in the feature data storage module for deep-learning sample training and matching of new feature information.
Step 6, according to the feature fusion information obtained in step 5, perform feature matching in the feature storage module with a while-loop matching method and an if-branch judgment: if the corresponding feature information matches, output 0; otherwise output 1. Send the output result value to the access control equipment through the wireless communication module.
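A minimal sketch of this decision-level fusion with scikit-learn's one-vs-one SVM, which internally trains the q(q−1)/2 pairwise classifiers and votes much like the identification matrix above (the library and the kernel choice are assumptions):

```python
import numpy as np
from sklearn.svm import SVC

def fuse_and_classify(face_feats, voice_feats, labels, face_probe, voice_probe):
    """Concatenate face and voice feature vectors at the decision layer and
    classify with a one-vs-one SVM; sklearn trains the pairwise classifiers
    internally and votes, much like the identification matrix."""
    X = np.hstack([face_feats, voice_feats])                # fused input vectors
    clf = SVC(kernel="rbf", decision_function_shape="ovo")  # kernel is illustrative
    clf.fit(X, labels)
    probe = np.hstack([face_probe, voice_probe]).reshape(1, -1)
    return clf.predict(probe)[0]                            # class with most pairwise wins
```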
Furthermore, the access control equipment receives the judgment result sent by the data processor to decide whether to open the door lock in response, or to start the reminding and alarm functions once recognition exceeds the limited number of times. The access control equipment comprises a wireless access control lock, an alarm and a counter. The wireless access control lock opens or closes automatically on command; its ALM1I terminal is connected to the alarm and its ALM1O terminal to the counter, which counts automatically with each action of the lock.
Furthermore, the wireless access control lock is model RS-458, which receives signals stably at low cost and suits this system. The alarm is model AL-9805 and is equipped with a switch lock for greater convenience. The counter is a Honeywell counter.
The access control equipment works as follows:
step 1, receiving the matching result sent by the feature fusion module through the wireless communication module; the instruction is set to 0 or 1, where 0 indicates that the fused feature information matched successfully and 1 indicates that matching failed;
step 2, if the received wireless signal instruction is 0, sending information to an access controller, and opening an access lock; if the received wireless signal instruction is 1, the counter is accumulated for 1 time;
step 3, when the received wireless signal instruction is 1 and the number of times of the counter is less than a specified value of t times, restarting the image sensing collector and the sound sensing collector to collect face images and sound information, and then re-sending the face images and the sound information to the face recognition module and the voiceprint recognition module to repeat the steps to extract feature information;
step 4, when the received wireless signal instruction is 1 and the counter count is greater than the specified value t, judging that the fused feature information cannot be discriminated, concluding that the person is not in the feature library, and starting the alarm.
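The four steps above reduce to a small state machine; a hypothetical sketch (the 0/1 command convention and the retry limit t follow the text, all names are illustrative):

```python
MAX_RETRIES_T = 3  # the specified value t; illustrative

def handle_command(cmd, counter):
    """Return the controller action and the updated counter (steps 1-4)."""
    if cmd == 0:                      # fused feature information matched
        return "open_lock", 0
    counter += 1                      # match failed: counter accumulates once
    if counter < MAX_RETRIES_T:       # below the limit: recapture face and voice
        return "recapture_biometrics", counter
    return "raise_alarm", counter     # over the limit: not in the feature library
```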
Analysis of the overhead and accuracy of the experimental results over many tests shows that, with the improved face recognition and voiceprint recognition algorithms of the invention and the authentication mode that combines the two, the security and accuracy of recognition are greatly improved: the recognition rate reaches about 98%, an intelligent, automated level of recognition.

Claims (8)

1. An intelligent access control system based on multi-biological characteristic fusion is characterized by comprising an image sensing collector, a sound sensing collector, a data processor and access control equipment;
the image sensing collector and the sound sensing collector are respectively used for sensing a target, collecting face image information and sound information, and sending the collected face image information and sound information to the data processor;
the data processor is used for processing the received face image and voice information, fusing the feature information of the face image and the voice information, recognizing, storing and training the feature information by assisting a deep learning algorithm, and outputting a judgment result to the access control equipment;
the access control equipment is used for receiving the judgment result sent by the data processor and judging whether the door responds: if the judgment succeeds, the door lock is opened; if it fails, the biometric information is collected again and the data processor performs biometric authentication again; when recognition exceeds the limited number of times, an alarm in the access control equipment is triggered.
2. The access control system of claim 1, wherein the data processor comprises a preprocessing module, a feature data storage module, a face recognition module, a voiceprint recognition module, and a feature fusion module;
the preprocessing module is used for preprocessing the face image and the sound information;
the feature data storage module is used for storing the face image information and the voice information sent by the preprocessing module, the face feature information extracted by the face recognition module and the voice feature information extracted by the voice print recognition module, the face feature information trained by the face recognition module and the voice feature information trained by the voice print recognition module;
the face recognition module is used for analyzing face image information and extracting biological characteristic information of the face image through vector transformation and image mapping;
the voiceprint recognition module is used for analyzing the voice information and extracting the characteristics of the voice information through MFCC characteristics;
the feature fusion module extracts, from the feature data storage module, the face feature information and the voice feature information produced by the face recognition module and the voiceprint recognition module, uses them as input information of an SVM algorithm, establishes a decision fusion matrix, and feeds the information into the matrix for output judgment.
3. The access control system of claim 2, wherein the face recognition module comprises a first sample training unit and a face matching unit;
the first sample training unit trains and learns face feature information: extensive feature analysis is performed on the face image information in the feature data storage module through a deep learning algorithm, the face image information is fed into the network for processing, the output result is computed through layer-by-layer transformations, and the result is stored in the feature data storage module for the comparison and matching of subsequent face feature information;
the face matching unit analyzes the same face image in sequence by adopting a principal component analysis method and a linear discriminant analysis method, and the result values of the analysis are fused and cross-calculated to obtain the recognition rate of the face recognition.
4. The access control system of claim 3, wherein the first sample training unit comprises the following specific working processes:
step 1, extracting a face image information sample α from the feature data storage module, inputting it into the DeepID2+ algorithm and the deep convolutional neural network, and setting the ideal output value Y_p;
step 2, calculating the actual output value Y_0:

Y_0 = F_β(…(F_2(F_1(α·w^(1))·w^(2))…)·w^(β))

wherein F_1 to F_β denote the output vectors of the face image sample after the layer-1 to layer-β network transformations, and w^(1) to w^(β) are the network convolution weight matrices of layers 1 to β;
step 3, calculating the forward-propagation features of the face image information through the conv function in MATLAB, the forward propagation being:

picQ = conv(picP, filter, 'valid')

wherein picQ is the central feature of the returned image convolution, picP is the original convolution vector, filter is the convolution filtering vector, and 'valid' is the vector convolution return type;
step 4, calculating the error between the actual output value Y_0 and the ideal output value Y_p and performing the back-propagation convolution:

epicP = conv(epicQ, rot180(filter), 'full')

wherein epicP is the central feature of the image back-propagation convolution, rot180 denotes a 180° rotation, 'full' is the vector convolution return type, and the prefix e marks an error quantity;
step 5, adjusting the matrix weights by error minimization, performing classification comparison, extracting the face image features, storing them in the feature data storage module, and proceeding to the next round of training and learning; the weight adjustment formula is filterD = conv(epicP, rot180(epicQ), 'valid'), wherein filterD denotes the adjusted matrix weights.
5. The access control system of claim 3, wherein the face matching unit specifically operates as follows:
step 1, receiving the face image information sent by the preprocessing module and first running the principal component analysis: with X_j the face vector of the j-th face image among the M samples, obtain the sample covariance matrix

S_T = (1/M)·Σ_{j=1}^{M} (X_j − u)(X_j − u)^T

wherein u is the average image vector of the training samples and C denotes the total number of sample classes; selecting the W_opt function as the optimal mapping function, mapping the face image to a matrix vector, and computing the orthonormalized vectors U_T of S_T, i.e. the eigenvectors of the total scatter matrix, with an orthogonal normalization formula;
step 2, linear discriminant analysis: assuming the original image library has N images, centering the trained face samples and calculating the mean u_dp of each class of face images and the overall mean u_zp of all face images;
Step 3, forming all centralized face sample images into an array matrix, and solving an orthogonal basis matrix U of the matrix through a component analysis algorithm;
step 4, projecting all the centered image vectors onto the generated orthogonal basis matrix U:

x̂ = U^T·x

wherein x̂ and x form a linear regression equation, x̂ representing the approximate projection of an image vector and x each image vector; projecting all the centered means onto the orthogonal basis matrix U as well:

û = U^T·u

wherein û and u form a linear regression equation, û representing the approximate projection of a mean and u each centered mean;
step 5, calculating the between-class scatter S_B and the within-class scatter S_W and computing the eigenvalues and eigenvectors, wherein S_I is an intermediate value computed from the approximate projections, used in forming S_B and S_W; the eigenvalue λ and corresponding eigenvector r satisfy S_B·r = λ·S_W·r, λ being the characteristic coefficient;
step 6, sorting the eigenvectors r by the magnitude of their eigenvalues, taking the C − 1 with the largest energy as Fisher basis vectors, and projecting the original images onto these Fisher basis vectors respectively;
step 7, on the basis of the Fisherface method, performing classification and identification according to the projection vectors and basis vectors, extracting the feature information of the face image, storing it in the feature data storage module, and calculating the face recognition rate.
6. The access control system of claim 2, wherein the voiceprint recognition module comprises a second sample training unit and a voice matching unit;
the second sample training unit performs network training on the sound information with the existing deep learning technique DCNN and stores the trained sound information in the feature data storage module; the sound matching unit receives the sound information sent by the preprocessing module, extracts the voiceprint features as MFCC features, inputs them into the GMM model for matching and training, and identifies the matching result.
7. The door access system of claim 6, wherein the second sample training unit operates as follows:
step 1, extracting M′ sound information samples from the feature data storage module and calculating the weighted sum of their Gaussian probability densities p(x), with the formula

p(x) = Σ_{i=1}^{M′} ε_i·b_i(x)

wherein x is a random vector of a certain dimension, b_i(x) is the Gaussian probability density function of each sample, and ε_i is the mixing weight;
step 2, calculating the density function of each member, namely the Gaussian distribution function of each voice signal, through the Gaussian probability density function, which follows the standard Gaussian distribution expression with the parameters of each mixture component (its mean and covariance) substituted in;
Step 3, matching and sequencing each Gaussian distribution function value obtained in the step 2 with a voice time sequence, and calculating the likelihood probability of the GMM;
step 4, recognizing the likelihood calculated in step 3 with the Bayes principle, deciding by the maximum-likelihood principle, and matching the attributes of the feature information via the best similarity value, whereby the model training is completed.
8. The door access control system of claim 6, wherein the sound matching unit operates as follows:
step 1, the voiceprint matching unit receives the sound information S_n sent by the preprocessing module; it first applies difference processing to S_n, determines the sampling period from the signal bandwidth and the Shannon sampling theorem, and eliminates signal distortion to obtain the processed samples X_n;
Step 2, extracting voiceprint features by adopting MFCC features;
step 3, storing the extracted sound feature information in the feature data storage module while inputting it into the GMM model for matching and training, and identifying the matching result.
CN202010105758.7A 2020-02-21 2020-02-21 Intelligent access control system based on multi-biological-feature fusion Pending CN111311809A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010105758.7A CN111311809A (en) 2020-02-21 2020-02-21 Intelligent access control system based on multi-biological-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010105758.7A CN111311809A (en) 2020-02-21 2020-02-21 Intelligent access control system based on multi-biological-feature fusion

Publications (1)

Publication Number Publication Date
CN111311809A true CN111311809A (en) 2020-06-19

Family

ID=71145144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010105758.7A Pending CN111311809A (en) 2020-02-21 2020-02-21 Intelligent access control system based on multi-biological-feature fusion

Country Status (1)

Country Link
CN (1) CN111311809A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149638A (en) * 2020-10-23 2020-12-29 贵州电网有限责任公司 Personnel identity recognition system construction and use method based on multi-modal biological characteristics
CN112784478A (en) * 2021-01-07 2021-05-11 李琳琳 Intelligent doorbell AI scene self-learning training modeling method and system
CN112951244A (en) * 2021-03-15 2021-06-11 讯翱(上海)科技有限公司 Digital certificate authentication method based on voiceprint recognition
CN113192251A (en) * 2021-04-29 2021-07-30 常熟理工学院 Method for realizing identity recognition of multiple biological characteristics capable of being configured on line
CN113345145A (en) * 2021-05-21 2021-09-03 马中原 Two-door intelligent management system based on multiple authentication
CN113392719A (en) * 2021-05-21 2021-09-14 华南农业大学 Intelligent electronic lock unlocking method, electronic equipment and storage medium
CN113505739A (en) * 2021-07-27 2021-10-15 同济大学 Indoor human pet distinguishing and behavior recognition method and system
CN114973490A (en) * 2022-05-26 2022-08-30 南京大学 Monitoring and early warning system based on face recognition
CN115223278A (en) * 2022-07-15 2022-10-21 深圳牛智技术科技有限公司 Intelligent door lock based on face recognition and unlocking method
CN116259095A (en) * 2023-03-31 2023-06-13 南京审计大学 Computer-based identification system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794266A (en) * 2005-12-31 2006-06-28 清华大学 Biocharacteristics fusioned identity distinguishing and identification method
CN102034288A (en) * 2010-12-09 2011-04-27 江南大学 Multiple biological characteristic identification-based intelligent door control system
CN105426875A (en) * 2015-12-18 2016-03-23 武汉科技大学 Face identification method and attendance system based on deep convolution neural network
CN107680229A (en) * 2017-10-23 2018-02-09 西安科技大学 Gate control system and its control method based on phonetic feature and recognition of face
CN107784316A (en) * 2016-08-26 2018-03-09 阿里巴巴集团控股有限公司 A kind of image-recognizing method, device, system and computing device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794266A (en) * 2005-12-31 2006-06-28 清华大学 Biocharacteristics fusioned identity distinguishing and identification method
CN102034288A (en) * 2010-12-09 2011-04-27 江南大学 Multiple biological characteristic identification-based intelligent door control system
CN105426875A (en) * 2015-12-18 2016-03-23 武汉科技大学 Face identification method and attendance system based on deep convolution neural network
CN107784316A (en) * 2016-08-26 2018-03-09 阿里巴巴集团控股有限公司 A kind of image-recognizing method, device, system and computing device
CN107680229A (en) * 2017-10-23 2018-02-09 西安科技大学 Gate control system and its control method based on phonetic feature and recognition of face

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李雨凇等 (Li Yusong et al.): "采用决策层融合的人脸语音识别技术" [Face and voice recognition using decision-level fusion], 《微电子学与计算机》 (Microelectronics & Computer) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149638A (en) * 2020-10-23 2020-12-29 贵州电网有限责任公司 Personnel identity recognition system construction and use method based on multi-modal biological characteristics
CN112149638B (en) * 2020-10-23 2022-07-01 贵州电网有限责任公司 Personnel identity recognition system construction and use method based on multi-modal biological characteristics
CN112784478A (en) * 2021-01-07 2021-05-11 李琳琳 Intelligent doorbell AI scene self-learning training modeling method and system
CN112951244A (en) * 2021-03-15 2021-06-11 讯翱(上海)科技有限公司 Digital certificate authentication method based on voiceprint recognition
CN113192251A (en) * 2021-04-29 2021-07-30 常熟理工学院 Method for realizing identity recognition of multiple biological characteristics capable of being configured on line
CN113345145A (en) * 2021-05-21 2021-09-03 马中原 Two-door intelligent management system based on multiple authentication
CN113392719A (en) * 2021-05-21 2021-09-14 华南农业大学 Intelligent electronic lock unlocking method, electronic equipment and storage medium
CN113505739A (en) * 2021-07-27 2021-10-15 同济大学 Indoor human pet distinguishing and behavior recognition method and system
CN113505739B (en) * 2021-07-27 2022-10-25 同济大学 (Tongji University) Indoor human-pet distinguishing and behavior recognition method and system
CN114973490A (en) * 2022-05-26 2022-08-30 南京大学 Monitoring and early warning system based on face recognition
CN115223278A (en) * 2022-07-15 2022-10-21 深圳牛智技术科技有限公司 Intelligent door lock based on face recognition and unlocking method
CN116259095A (en) * 2023-03-31 2023-06-13 南京审计大学 Computer-based identification system and method

Similar Documents

Publication Publication Date Title
CN111311809A (en) Intelligent access control system based on multi-biological-feature fusion
US10565433B2 (en) Age invariant face recognition using convolutional neural networks and set distances
Sahoo et al. Multimodal biometric person authentication: A review
JP4543423B2 (en) Method and apparatus for automatic object recognition and collation
CN107507286B (en) Bimodal biological characteristic sign-in system based on face and handwritten signature
Sasidhar et al. Multimodal biometric systems-study to improve accuracy and performance
Sanchez-Reillo Hand geometry pattern recognition through gaussian mixture modelling
Hanmandlu et al. Score level fusion of hand based biometrics using t-norms
Gopal et al. Accurate human recognition by score-level and feature-level fusion using palm–phalanges print
Conti et al. Fuzzy fusion in multimodal biometric systems
WO2022268183A1 (en) Video-based random gesture authentication method and system
RU2381553C1 (en) Method and system for recognising faces based on list of people not subject to verification
Rajasekar et al. Efficient multimodal biometric recognition for secure authentication based on deep learning approach
TWI325568B (en) A method for face varification
Khan et al. Dorsal hand vein biometric using Independent Component Analysis (ICA)
Badrinath et al. An efficient multi-algorithmic fusion system based on palmprint for personnel identification
Wang et al. An effective multi-biometrics solution for embedded device
Prasanth et al. Fusion of iris and periocular biometrics authentication using CNN
Sharma et al. Multimodal biometric system fusion using fingerprint and face with fuzzy logic
CN114973307A (en) Finger vein identification method and system for generating countermeasure and cosine ternary loss function
Tahmasebi et al. Signature identification using dynamic and HMM features and KNN classifier
Sheena et al. Fingerprint Classification with reduced penetration rate: Using Convolutional Neural Network and DeepLearning
Arriaga-Gómez et al. A comparative survey on supervised classifiers for face recognition
Saber Development of e-learning security techniques to improve the security of information with voice and face recognition
Melin et al. Human Recognition using Face, Fingerprint and Voice

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200619)