CN111144352B - Intelligent sensing-oriented safe transmission and identification method for face images - Google Patents

Intelligent sensing-oriented safe transmission and identification method for face images

Info

Publication number
CN111144352B
CN111144352B CN201911400109.3A CN201911400109A
Authority
CN
China
Prior art keywords
face
image
face image
user
end server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911400109.3A
Other languages
Chinese (zh)
Other versions
CN111144352A (en)
Inventor
李运发
涂逸飞
王云超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201911400109.3A priority Critical patent/CN111144352B/en
Publication of CN111144352A publication Critical patent/CN111144352A/en
Application granted
Publication of CN111144352B publication Critical patent/CN111144352B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44 - Secrecy systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50 - Maintenance of biometric data or enrolment thereof
    • G06V40/53 - Measures to keep reference information secret, e.g. cancellable biometrics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an intelligent sensing-oriented safe transmission and identification method for face images, which comprises the following steps: first, an acquisition and feature construction algorithm for legal users' face images is jointly constructed at the server side; second, a sensing and encryption algorithm for the face image of a user with unknown identity is constructed at the image-acquisition front end of the intelligent sensor; third, a receiving and decryption algorithm for the face image of the user with unknown identity is constructed at the server side; and finally, a security recognition algorithm for the face image of the user with unknown identity is constructed at the server side. The invention uses the chaotic sequence of the face image to generate an image position mapping matrix and then uses a certain algorithm to scramble the pixel positions of the face image so as to mask the face image's true values. The new encryption algorithm can effectively resist image statistical attacks and exhaustive attacks, and has good security protection performance.

Description

Intelligent sensing-oriented safe transmission and identification method for face images
Technical Field
The invention belongs to the field of secure transmission and recognition for intelligent sensing of face images in the Internet of Things, and aims to provide a security recognition method for intelligent sensing of face images in the Internet of Things. The method involves an acquisition and feature construction algorithm for legal users' face images, a sensing and encryption algorithm for the face image of a user with unknown identity, a receiving and decryption algorithm for the face image of the user with unknown identity, and a security recognition algorithm for the face image of the user with unknown identity.
Background
With the rapid development of the Internet of Things, various intelligent sensor devices are widely used in daily life. Intelligent face image collectors can be widely used in banks, public security, courts, the military, government departments, factories and mining enterprises, and other institutions, and have therefore developed rapidly. In the Internet of Things, the image information collected by an intelligent face image collector is transmitted wirelessly, so it generally faces many security problems during transmission, such as being stolen, tampered with, or otherwise attacked. To avoid these security threats to intelligently collected face image information, a safe and effective recognition method is needed.
In recent years, face image recognition has moved from traditional comparison techniques to an intelligent level. Intelligent recognition of face images identifies and judges a face image by combining the unique facial features, contours, and physiological and behavioral characteristics of the face through intelligent acquisition technology, computer graphics, neural network technology, digital image processing, pattern recognition, and wireless sensing technology. Security recognition of face images encrypts the face image by means of modern cryptography, computer graphics, digital image processing, pattern recognition, and wireless sensing technologies, then transmits it through network communication technology, and finally decrypts and authenticates the received image at the destination.
Currently, the security recognition of face images basically relies on identity authentication. The main schemes are as follows. (1) Static-password identity authentication: the user uses a fixed password throughout the authentication process, and the password cannot be changed midway. Its advantages are that the authentication process is simple, involves no complex key computation or communication, and is easy to implement. Its disadvantage is that the security of a static-password system is easily compromised, since lawbreakers can obtain the password by guessing, theft, eavesdropping, and similar means. (2) Dynamic-password identity authentication: the user encrypts, transmits, decrypts, and authenticates information using a token whose password changes dynamically with time. Its advantage is higher authentication security, since it is difficult for lawbreakers to obtain the dynamic password by guessing, theft, or eavesdropping. Its disadvantages are that the computation of the dynamically changing password is complex, the encryption process is complicated, and key negotiation and transmission are frequent; moreover, if the information sender and receiver cannot keep time and token synchronized, the receiver may be unable to receive or decrypt the ciphertext.
From the above analysis it can be seen that, in the Internet of Things, both the static-password and the dynamic-password identity authentication schemes can improve system security to a certain extent and have certain advantages. However, because both schemes also have certain drawbacks, they still face security problems in the Internet of Things, and the problems become even greater during wireless transmission. Under these circumstances, an intelligent sensing-oriented safe transmission and identification method for face images is designed for the Internet of Things. The method first jointly constructs an acquisition and feature construction algorithm for legal users' face images at the server side; then constructs a sensing and encryption algorithm for the face image of a user with unknown identity at the image-acquisition front end of the intelligent sensor; on this basis constructs a receiving and decryption algorithm for the face image of the user with unknown identity at the server side; and finally constructs a security recognition algorithm for the face image of the user with unknown identity at the server side.
Disclosure of Invention
In view of the above technical problems existing in the prior art, the object of the present invention is to construct, in the Internet of Things: (1) an acquisition and feature construction algorithm for legal users' face images; (2) a sensing and encryption algorithm for the face image of a user with unknown identity; (3) a receiving and decryption algorithm for the face image of the user with unknown identity; and (4) a security recognition algorithm for the face image of the user with unknown identity. Through these four algorithms, secure recognition of intelligently collected face images is realized in the Internet of Things.
In order to solve the above problems, in the acquisition and feature construction algorithm for legal users' face images, based on the requirements of intelligent face image acquisition in the Internet of Things and considering the advantages and disadvantages of the static-password and dynamic-password identity authentication schemes, the collected face image is grayed on the one hand; the grayed face image information is then associated with the identity information, and an information base of legal users' face images is constructed, which facilitates retrieval of the face image of a user with unknown identity. On the other hand, a sample set and a feature face space are formed from the face gray images, and the difference face vector between each legal user's collected face gray image and the average face is projected onto the feature face space, which facilitates recognition of the face image of a user with unknown identity.
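For illustration only (not part of the claimed method), the block-wise graying described above can be sketched in Python with NumPy as follows, assuming the weighting Gray = 0.3R + 0.59G + 0.11B that appears later in Algorithm 1; the function name gray_blocks, the even-division cropping, and the use of a per-block mean gray value are assumptions of this sketch.

    import numpy as np

    def gray_blocks(rgb_image: np.ndarray, M: int, N: int) -> np.ndarray:
        """Split an H x W x 3 RGB face image into M x N blocks and return one
        gray value per block, using Gray = 0.3*R + 0.59*G + 0.11*B."""
        H, W, _ = rgb_image.shape
        # Per-pixel weighted graying, as in the sixth step of Algorithm 1.
        gray = (0.3 * rgb_image[..., 0]
                + 0.59 * rgb_image[..., 1]
                + 0.11 * rgb_image[..., 2])
        # Crop so the image divides evenly into M x N blocks (illustrative choice).
        gray = gray[: (H // M) * M, : (W // N) * N]
        blocks = gray.reshape(M, H // M, N, W // N)
        # Represent each block by its mean gray value (an assumption of this sketch).
        return blocks.mean(axis=(1, 3))

In this sketch the resulting M x N matrix plays the role of the stored block gray values P_{i,j}(U) that are associated with the identity information ID(U).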
In order to solve the above problems, in the sensing and encryption algorithm for the face image of the user with unknown identity, on the one hand a symmetric encryption key participates in the fusion calculation, so that the network transmission of the unknown-identity user's face image involves only simple communication and no complex key computation, which facilitates subsequent image recognition and authentication. On the other hand, the face image of the unknown user is fused with a random image, so that it is never exposed during network transmission, preserving its security in transit.
In order to solve the above problems, in the face image receiving and decryption algorithm of the present invention, the encrypted image is received, and the symmetric key is used to decrypt it iteratively, so as to recover the face image of the unknown user and the random image, which are then used in the next step for the security recognition of the face image of the user with unknown identity.
In order to solve the above problems, in the security recognition algorithm for the face image of the unknown-identity user, the effects of image fusion, encryption, and decryption on the original image are fully considered, and the Euclidean norm is used to calculate the Euclidean distance between the feature space of the decrypted face image and the feature space of the legal user's face image. By classifying according to this Euclidean distance, the face image and identity of the unknown user are securely recognized, providing security support for the system's other application services.
In summary, the intelligent sensing-oriented safe transmission and identification method for face images in the Internet of Things has the following advantages and effects:
1. A new acquisition and feature construction algorithm for legal users' face images is adopted
The construction algorithm fully considers the security problems faced by image information during transmission in the Internet of Things, as well as the advantages and disadvantages of static-password and dynamic-password identity authentication schemes. It first judges whether a legal user's face image needs to be collected, then grays every pixel block of the collected legal user's face image to mask the image's true values, and at the same time builds the legal user's identity information and database. On this basis, a sample set and a feature face space are formed from the face gray images, and the difference face vector between each legal user's collected face gray image and the average face is projected onto the feature face space. Through this construction algorithm, the authenticity of the legal user's image is protected and hidden on the one hand, and on the other hand the legal user's feature face space is constructed, which facilitates the security recognition of the face image of a user with unknown identity.
2. A new sensing and encryption algorithm for the face image of the unknown user is used
In the sensing and encryption algorithm for the unknown user's face image, the face image is first sensed by the intelligent face image sensor, and the sensed face image is then fused with a random image to obtain the encrypted image. On the one hand, the algorithm adopts a static encryption scheme, so the encryption process is simple, involves no complex key computation or communication, and is easy to implement. On the other hand, the security of the face image during transmission is guaranteed. In this algorithm, the face image is used to construct a chaotic sequence of the same size through the encryption calculation; the chaotic sequence provides a sufficiently large key space for the encryption, changes traditional encryption modes and methods, and is novel and creative.
3. A new receiving and decryption algorithm for the face image of the unknown user is adopted
In the receiving and decryption algorithm for the unknown user's face image, the encrypted image is iteratively computed according to the chaotic sequence's sensitivity to initial values and its synchronization property, so that the pixel positions of the encrypted image are permuted back and restored and the encrypted image is decrypted. This algorithm improves the robustness of the system, giving the whole encryption system a good user experience.
4. A new security recognition algorithm for the face image of the unknown user is adopted
In the security recognition algorithm for the unknown user's face image, the received and decrypted face image of the unknown user is first grayed, and the difference between the face image P*(U_n) of the unknown user to be recognized and the average face is projected onto the feature space to construct its feature vector. On this basis, a threshold is defined, and the Euclidean norm is used to calculate the Euclidean distance between the feature space of the decrypted face image and the feature space of the legal user's face image. From this distance, a classification rule for face recognition is constructed, so that the face image and identity of the unknown user are securely recognized, providing security support for the system's other application services.
5. Good robustness
The intelligent sensing-oriented safe transmission and identification method for face images combines, on the one hand, the advantages of a static-password identity authentication scheme, and on the other hand uses the graying of image pixels and fusion with images in the multi-layer encryption of the face image, so that the face image is constructed into a chaotic sequence of the same size. The chaotic sequence provides a sufficiently large key space for the encryption, changes traditional encryption modes and methods, is novel and creative, and gives the method good robustness.
6. Good security protection
The key idea of the new face image encryption algorithm actually used by the intelligent sensing-oriented safe transmission and identification method is to use the chaotic sequence of the face image to generate an image position mapping matrix, and then to use a certain algorithm to scramble the pixel positions of the face image so as to mask the face image's true values. The new encryption algorithm can effectively resist image statistical attacks and exhaustive attacks, and has good security protection performance.
Drawings
Fig. 1 is an architecture diagram of the intelligent sensing-oriented safe transmission and identification method for face images.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
From the perspective of the intelligent sensing-oriented safe transmission and identification method for face images, the method comprises: (1) an acquisition and feature construction algorithm for legal users' face images; (2) a sensing and encryption algorithm for the face image of a user with unknown identity; (3) a receiving and decryption algorithm for the face image of the user with unknown identity; and (4) a security recognition algorithm for the face image of the user with unknown identity.
Algorithm 1: acquisition and feature construction algorithm for legal user face image
The first step: the back-end face image collector judges, according to the command of the back-end server, whether a face image of a legal user needs to be collected. If yes, go to the second step; if not, go to the twenty-second step (end);
Second step: the back-end face image collector collects the face image P(U) of a legal user U and selects two large integers M and N;
Third step: the back-end face image collector divides the face image P(U) into M×N blocks of equal size and stores the pixel RGB values, namely the (R, G, B) values, of each block of the face image P(U).
Fourth step: the back-end face image collector transmits the collected face image P(U) and the pixel RGB values of each of its blocks to the back-end server;
Fifth step: the back-end server receives the face image P(U) and the pixel RGB values of each block of P(U) sent by the back-end face image collector, and then judges whether the pixel RGB values of each block of the face image P(U) of the legal user need to be grayed. If yes, go to the sixth step; otherwise, go to the eighth step.
Sixth step: the back-end server calculates the gray value of the block using the graying formula Gray = R*0.3 + G*0.59 + B*0.11;
Seventh step: the back-end server judges whether the pixel RGB values of every block of the face image P(U) of the legal user U have been converted. If so, go to the eighth step; otherwise, move to the next pixel block whose RGB values have not yet been converted and go to the sixth step;
Eighth step: after every pixel block of the face image P(U) of the legal user U has been grayed, the back-end server denotes the gray image of each block of P(U) as P_{i,j}(U) (i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N) and enters the identity information ID(U) of the legal user U;
Ninth step: the back-end server saves the gray images P_{i,j}(U) (i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N) of the blocks of the face image P(U) of the legal user U and the identity information ID(U) to a database.
Tenth step: according to the gray images P_{i,j}(U) (i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N) of the blocks of the face image P(U) of the legal user U in the database and the identity information ID(U), the back-end server constructs a face gray-image sample set of the legal user U, the sample set consisting entirely of the gray images of the blocks of the legal user U.
Eleventh step: according to the face gray-image sample set of the legal user U, the back-end server constructs the sample matrix of the face gray image of the legal user U: X = [X_1(U), X_2(U), X_3(U), ..., X_i(U), ..., X_M(U)]^T, where the vector X_i(U) is the gray-image vector of all blocks in the i-th row after the face image of the legal user U is divided into M rows and N columns of blocks, i.e., X_i(U) = [P_{i,1}(U), P_{i,2}(U), P_{i,3}(U), ..., P_{i,N}(U)];
Twelfth step: the back-end server calculates the average face value of the face image blocks corresponding to X_i(U) of the legal user U by the following formula:
Ψ_i = (1/N) * Σ_{j=1}^{N} P_{i,j}(U), i = 1, 2, ..., M (1)
Thirteenth step: the back-end server calculates the gray difference between the face of X_i(U) of the legal user U and the average face by the following formula:
d_i = X_i(U) - Ψ_i, i = 1, 2, ..., M (2)
Fourteenth step: the back-end server constructs the covariance matrix C of the face gray image from the difference faces:
A = (d_1, d_2, ..., d_M) (3)
C = (1/M) * A * A^T (4)
Fifteenth step: the back-end server obtains the eigenvalues and eigenvectors of A^T A by singular value decomposition;
Sixteenth step: based on the eigenvalues and eigenvectors obtained in the fifteenth step, the back-end server determines the eigenvalues and eigenvectors of A A^T.
Seventeenth step: based on the eigenvalues and eigenvectors obtained in the fifteenth step, the back-end server performs standard orthogonalization on the eigenvector corresponding to each eigenvalue λ_i of A^T A to obtain the orthonormal standard eigenvectors V_i;
Eighteenth step: the first K largest eigenvalues and their corresponding eigenvectors are selected according to the contribution rate of the eigenvalues, where the contribution rate refers to the ratio of the sum of the selected eigenvalues to the sum of all eigenvalues. That is, K is chosen so that:
(Σ_{i=1}^{K} λ_i) / (Σ_{i=1}^{M} λ_i) ≥ b (5)
where b is a constant determined by the system;
Nineteenth step: the back-end server selects b = 99%, i.e., it ensures that the orthogonal projection onto the eigenvectors corresponding to the first K largest eigenvalues of the gray-image samples accounts for 99% of the normalized orthogonal eigenvectors of A^T A, and obtains the eigenvectors u_i of the original covariance matrix C that satisfy this condition by mapping the orthonormal eigenvectors V_i back through A; the calculation formula is:
u_i = A * V_i (up to normalization), i = 1, 2, ..., K (6)
Twentieth step: the feature face space of the covariance matrix A A^T of the face gray image, under the condition that the contribution rate exceeds 99%, is then:
w = [u_1, u_2, ..., u_K] (7)
(an illustrative code sketch of this feature-space construction follows the algorithm);
Twenty-first step: the back-end server stores w and secretly transmits the corresponding integers M and N to the front-end face intelligent sensor;
Twenty-second step: end.
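The following minimal sketch (Python/NumPy, for illustration only) shows how the sample matrix X of block gray values can be turned into a feature face space by the small-matrix trick of the fifteenth through twentieth steps: eigen-decomposition of A^T A followed by mapping back through A and a 99% contribution-rate cutoff. The function name build_eigenface_space, the column normalization, and the 1e-12 guard are choices of this sketch, not taken from the patent.

    import numpy as np

    def build_eigenface_space(X: np.ndarray, b: float = 0.99):
        """X: M x N matrix whose row i holds the block gray values X_i(U).
        Returns (w, Psi, d): feature face space, average faces, difference faces."""
        Psi = X.mean(axis=1, keepdims=True)      # average face value per row, cf. equation (1)
        d = X - Psi                              # difference faces d_i, cf. equation (2)
        A = d.T                                  # columns are the difference vectors, cf. (3)
        lam, V = np.linalg.eigh(A.T @ A)         # eigen-pairs of the small matrix A^T A
        order = np.argsort(lam)[::-1]
        lam, V = lam[order], V[:, order]
        ratio = np.cumsum(lam) / np.sum(lam)     # contribution rate, cf. equation (5)
        K = int(np.searchsorted(ratio, b)) + 1   # smallest K whose contribution rate >= b
        u = A @ V[:, :K]                         # map back to eigenvectors of A A^T
        u /= np.linalg.norm(u, axis=0, keepdims=True) + 1e-12
        return u, Psi, d                         # u plays the role of w = [u_1, ..., u_K]

A probe's difference face would then be projected onto the columns of this space, mirroring the Ω = w^T(·) form of equation (14).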
Algorithm 2: face image sensing and encrypting algorithm for user with unknown identity
The first step: the front-end face intelligent sensor randomly and intelligently senses the face image P_1 of a user u_Θ with unknown identity and another non-face image P_2;
Second step: the front-end face intelligent sensor randomly selects a number in (0, 1) and denotes it a_0; randomly selects a number in [3.57, 4] and denotes it β; and sets the initial iteration count t to 0;
Third step: the front-end face intelligent sensor randomly selects an image according to the encryption requirement and denotes it P_r;
Fourth step: the front-end face intelligent sensor receives the integers M and N sent by the back-end server and divides P_1, P_2, and P_r, respectively, into M×N blocks of equal size;
Fifth step: according to a_0, β, and t = 0, the front-end face intelligent sensor calculates the chaotic sequence {a_i}, where 0 < a_i < 1, i = 1, 2, ..., M, using the logistic map below (an illustrative code sketch of the resulting chaotic scrambling follows the algorithm).
a_{i+1} = β * a_i * (1 - a_i) (8)
Sixth step: the front-end face intelligent sensor judges whether t is larger than 15, if yes, the step is switched to a fifteenth step, and if not, the step is switched to a seventh step;
seventh step: i=1;
eighth step: judging whether i is larger than M by the front-end face intelligent sensor, if so, turning to a fourteenth step, and if not, turning to a ninth step;
ninth step: j=1;
tenth step: judging whether j is larger than N by the front-end face intelligent sensor, if yes, turning to a thirteenth step, and if not, turning to an eleventh step;
Eleventh step: the front-end face intelligent sensor calculates the value of the current block of the encrypted image by formula (9) (the formula is given only as an image in the publication);
Twelfth step: j=j+1, go to the tenth step;
thirteenth step: i=i+1, go to eighth step;
fourteenth step: t=t+1, go to the sixth step;
Fifteenth step: define the face encrypted image of the unknown-identity user u_Θ as E, where E(i, j) is the gray value of the face encrypted image of u_Θ at image-coordinate block (i, j), P_1(i, j) is the gray value of the user image P_1 at block (i, j), and P_2(i, j) is the gray value of the other non-face image P_2 at block (i, j);
Sixteenth step: the front-end face intelligent sensor secretly transmits the information ξ = {a_0 || β || P_r} to the back-end server;
Seventeenth step: the front-end face intelligent sensor transmits the face encrypted image E of the unknown-identity user u_Θ to the back-end server;
Eighteenth step: end.
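Formula (9) appears only as an image in the publication, so the sketch below (Python/NumPy, illustration only) shows just the general mechanism described above: derive a position mapping from a logistic-map chaotic sequence keyed by (a_0, β) and scramble block positions. The names chaotic_sequence and scramble_blocks, the argsort-based mapping, and the fixed burn-in of 15 iterations are assumptions of this sketch, not the claimed fusion formula.

    import numpy as np

    def chaotic_sequence(a0: float, beta: float, length: int, burn_in: int = 15) -> np.ndarray:
        """Logistic map a_{k+1} = beta * a_k * (1 - a_k), as in equation (8);
        the first `burn_in` values are discarded, echoing the t <= 15 iteration loop."""
        a, out = a0, []
        for k in range(burn_in + length):
            a = beta * a * (1 - a)
            if k >= burn_in:
                out.append(a)
        return np.array(out)

    def scramble_blocks(blocks: np.ndarray, a0: float, beta: float) -> np.ndarray:
        """Permute the M x N image blocks with a position mapping obtained by sorting
        the chaotic sequence; (a0, beta) act as the shared symmetric key."""
        M, N = blocks.shape[:2]
        perm = np.argsort(chaotic_sequence(a0, beta, M * N))
        flat = blocks.reshape(M * N, *blocks.shape[2:])
        return flat[perm].reshape(blocks.shape)

Because the front end secretly transmits ξ = {a_0 || β || P_r}, the back-end server can regenerate the identical sequence and undo the scrambling.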
Algorithm 3: face image receiving and decrypting algorithm for user with unknown identity
The first step: the back-end server receives the information ξ = {a_0 || β || P_r} sent by the front-end face intelligent sensor;
Second step: the back-end server receives the face encrypted image E of the unknown-identity user u_Θ sent by the front-end face intelligent sensor;
Third step: according to the information ξ = {a_0 || β || P_r} sent by the front-end face intelligent sensor and the face encrypted image E, the back-end server sets the initial iteration count t to 0 and calculates the chaotic sequence {a_i}, where 0 < a_i < 1, i = 1, 2, ..., M, by the following equation:
a_{i+1} = β * a_i * (1 - a_i) (10)
Fourth step: the back-end server judges whether t is larger than 15, if yes, the process goes to the thirteenth step, and if not, the process goes to the fifth step;
fifth step: i=1;
sixth step: the back-end server judges whether i is larger than M, if so, the step is switched to the twelfth step, and if not, the step is switched to the seventh step;
seventh step: j=1;
eighth step: the back-end server judges whether j is larger than N, if yes, the step is switched to the eleventh step, and if not, the step is switched to the ninth step;
Ninth step: the back-end server performs the block-wise calculation by formula (11) (the formula is given only as an image in the publication);
Tenth step: j=j+1, go to eighth step;
eleventh step: i=i+1, go to the sixth step;
twelfth step: t=t+1, go to the fourth step;
thirteenth step: the back-end server sets i=1;
fourteenth step: the back-end server judges whether i is larger than M, if so, the step is switched to the twentieth step, and if not, the step is switched to the fifteenth step;
fifteenth step: j=1;
sixteenth step: the back-end server judges whether j is larger than N, if yes, the step is switched to nineteenth, and if not, the step is switched to seventeenth;
Seventeenth step: the back-end server performs the block-wise calculation by formula (12) (the formula is given only as an image in the publication);
Eighteenth step: j=j+1, go to the sixteenth step;
nineteenth step: i=i+1, go to the fourteenth step;
Twentieth step: the back-end server stores the decrypted face images and denotes them CP_1 and CP_2, where CP_1(i, j) is the gray value of the decrypted face image CP_1 at image-coordinate block (i, j) and CP_2(i, j) is the gray value of the decrypted other non-face image CP_2 at block (i, j) (an illustrative code sketch of this inverse scrambling follows the algorithm);
Twenty-first step: end.
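The decryption formulas (11) and (12) are likewise given only as images; as a counterpart to the sketch after Algorithm 2 (whose chaotic_sequence helper is reused here), the following illustrative routine regenerates the position mapping from the shared (a_0, β) and inverts it, matching the idea of restoring the permuted pixel positions described in the summary. It is a sketch under those assumptions, not the patented formula.

    import numpy as np

    def unscramble_blocks(scrambled: np.ndarray, a0: float, beta: float) -> np.ndarray:
        """Undo scramble_blocks() by regenerating the same chaotic permutation from
        the shared key (a0, beta) and applying its inverse."""
        M, N = scrambled.shape[:2]
        perm = np.argsort(chaotic_sequence(a0, beta, M * N))  # same mapping as the sender
        inv = np.empty_like(perm)
        inv[perm] = np.arange(perm.size)                      # inverse permutation
        flat = scrambled.reshape(M * N, *scrambled.shape[2:])
        return flat[inv].reshape(scrambled.shape)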
Algorithm 4: security recognition algorithm for face image of user with unknown identity
The first step: from the decrypted face images, the back-end server selects, as required, the decrypted face image CP_1 of the unknown-identity user to be recognized (or the other non-face image CP_2);
Second step: the back-end server divides the selected face image CP_1 (or other non-face image CP_2) to be recognized into M×N blocks of equal size and stores the pixel RGB values, i.e., the (R, G, B) values, of each block of CP_1;
Third step: following the seventh through twentieth steps of Algorithm 1, the back-end server constructs, for the face image CP_1 (or other non-face image CP_2) of the unknown-identity user, the feature face space of its gray image under the condition that the contribution rate exceeds 99% (the result is denoted w_Θ), the feature vectors A_Θ V_Θ corresponding to the eigenvalues (here V_Θ = [V_1^Θ, V_2^Θ, ..., V_M^Θ]), and the difference A_Θ V_Θ - Ψ_Θ between those feature vectors and the corresponding average vector (where Ψ_Θ = [Ψ_1^Θ, Ψ_2^Θ, ..., Ψ_M^Θ]);
Fourth step: the back-end server calculates the projection of the face image CP_1 (or other non-face image CP_2) of the unknown-identity user onto the feature face space under the condition that the contribution rate exceeds 99%; the calculation formula is:
Ω_Θ = w_Θ^T (A_Θ V_Θ - Ψ_Θ) (13)
Fifth step: according to Algorithm 1, the back-end server calculates w, A V_i, and Ψ_i, and computes the projection of the pixel values of the legal user U onto the feature face space under the condition that the contribution rate exceeds 99%; the calculation formula is:
Ω_P(U) = w^T (AV - Ψ) (14)
Sixth step: the back-end server calculates the image threshold θ_2 of the legal user according to formula (15) (the formula is given only as an image in the publication);
Seventh step: the back-end server calculates the Euclidean distances ε_i between Ω_P(U) and the projection of the decrypted image, using the Euclidean norm, according to formula (16) (the formula is given only as an image in the publication); an illustrative code sketch of the resulting classification follows the algorithm;
Eighth step: the back-end server identifies and classifies the face according to the following rules:
1) if ε_i ≥ θ_2 for all i (here i = 1, 2, ..., M), the image to be recognized is not a face image;
2) if ε_i < θ_2 for all i (here i = 1, 2, ..., M), the image to be recognized is a legal user's gray image;
3) if there is some i (1 ≤ i ≤ M) such that ε_i ≥ θ_2 and also some i (1 ≤ i ≤ M) such that ε_i < θ_2, the image to be recognized is not a legal user's gray image;
ninth step: the back-end server stores and displays the face recognition result;
Tenth step: end.
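As an illustration of the eighth-step decision rules only, the sketch below compares the distances ε_i against the threshold θ_2; it assumes the projections are arranged so that each of the M components yields one Euclidean distance, and the name classify_probe and the passing of θ_2 as a parameter (its formula (15) appears only as an image) are assumptions of this sketch.

    import numpy as np

    def classify_probe(omega_probe: np.ndarray, omega_legal: np.ndarray, theta2: float) -> str:
        """Apply the eighth-step rules: compare the Euclidean distances eps_i between
        the probe projection and the stored legal-user projections with threshold theta2."""
        eps = np.linalg.norm(omega_legal - omega_probe, axis=1)  # one distance per component i
        if np.all(eps >= theta2):
            return "not a face image"
        if np.all(eps < theta2):
            return "legal user gray image"
        return "not a legal user gray image"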

Claims (4)

1. An intelligent sensing-oriented safe transmission and identification method for face images, characterized by comprising the following steps: first, jointly constructing an acquisition and feature construction algorithm for legal users' face images at the server side; second, constructing a sensing and encryption algorithm for the face image of a user with unknown identity at the image-acquisition front end of the intelligent sensor; third, constructing a receiving and decryption algorithm for the face image of the user with unknown identity at the server side; and finally, constructing a security recognition algorithm for the face image of the user with unknown identity at the server side;
the legal user face image acquisition and feature construction algorithm comprises the following steps:
The first step: the back-end face image collector judges, according to a command of the back-end server, whether a face image of a legal user needs to be collected; if yes, go to the second step; if not, go to the twenty-second step (end);
Second step: the back-end face image collector collects the face image P(U) of a legal user U and randomly selects two integers M and N, both of which are greater than or equal to 4;
Third step: the back-end face image collector divides the face image P(U) into M×N blocks of equal size and stores the pixel RGB values, namely the (R, G, B) values, of each block of the face image P(U);
Fourth step: the back-end face image collector transmits the collected face image P(U) and the pixel RGB values of each of its blocks to the back-end server;
Fifth step: the back-end server receives the face image P(U) and the pixel RGB values of each block of P(U) sent by the back-end face image collector, and then judges whether the pixel RGB values of each block of the face image P(U) of the legal user need to be grayed; if yes, go to the sixth step; otherwise, go to the eighth step;
Sixth step: the back-end server calculates the gray value of the block using the graying formula Gray = R*0.3 + G*0.59 + B*0.11;
Seventh step: the back-end server judges whether the pixel RGB values of every block of the face image P(U) of the legal user U have been converted; if so, go to the eighth step; otherwise, move to the next pixel block whose RGB values have not yet been converted and go to the sixth step;
Eighth step: after every pixel block of the face image P(U) of the legal user U has been grayed, the back-end server denotes the gray image of each block of P(U) as P_{i,j}(U) (i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N) and enters the identity information ID(U) of the legal user U;
Ninth step: the back-end server saves the gray images P_{i,j}(U) (i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N) of the blocks of the face image P(U) of the legal user U and the identity information ID(U) to a database;
Tenth step: according to the gray images P_{i,j}(U) (i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N) of the blocks of the face image P(U) of the legal user U in the database and the identity information ID(U), the back-end server constructs a face gray-image sample set of the legal user U, the sample set consisting of the gray images of the blocks of the legal user U;
Eleventh step: according to the face gray-image sample set of the legal user U, the back-end server constructs the sample matrix of the face gray image of the legal user U: X = [X_1(U), X_2(U), X_3(U), ..., X_i(U), ..., X_M(U)]^T, where the vector X_i(U) is the gray-image vector of all blocks in the i-th row after the face image of the legal user U is divided into M rows and N columns of blocks, i.e., X_i(U) = [P_{i,1}(U), P_{i,2}(U), P_{i,3}(U), ..., P_{i,N}(U)];
Twelfth step: the back-end server calculates the average face value of the face image blocks corresponding to X_i(U) of the legal user U by the following formula:
Ψ_i = (1/N) * Σ_{j=1}^{N} P_{i,j}(U), i = 1, 2, ..., M (1)
Thirteenth step: the back-end server calculates the gray difference between the face of X_i(U) of the legal user U and the average face by the following formula:
d_i = X_i(U) - Ψ_i, i = 1, 2, ..., M (2)
Fourteenth step: the back-end server constructs the covariance matrix C of the face gray image from the difference faces:
A = (d_1, d_2, ..., d_M) (3)
C = (1/M) * A * A^T (4)
Fifteenth step: the back-end server obtains the eigenvalues and eigenvectors of A^T A by singular value decomposition;
Sixteenth step: based on the eigenvalues and eigenvectors obtained in the fifteenth step, the back-end server determines the eigenvalues and eigenvectors of A A^T;
Seventeenth step: based on the eigenvalues and eigenvectors obtained in the fifteenth step, the back-end server performs standard orthogonalization on the eigenvector corresponding to each eigenvalue λ_i of A^T A to obtain the orthonormal standard eigenvectors V_i;
Eighteenth step: the first K largest eigenvalues and their corresponding eigenvectors are selected according to the contribution rate of the eigenvalues, where the contribution rate refers to the ratio of the sum of the selected eigenvalues to the sum of all eigenvalues; that is, K is chosen so that:
(Σ_{i=1}^{K} λ_i) / (Σ_{i=1}^{M} λ_i) ≥ b (5)
where b is a constant determined by the system;
Nineteenth step: the back-end server selects b = 99%, i.e., it ensures that the orthogonal projection onto the eigenvectors corresponding to the first K largest eigenvalues of the gray-image samples accounts for 99% of the normalized orthogonal eigenvectors of A^T A, and obtains the eigenvectors u_i of the original covariance matrix C that satisfy this condition by mapping the orthonormal eigenvectors V_i back through A; the calculation formula is:
u_i = A * V_i (up to normalization), i = 1, 2, ..., K (6)
Twentieth step: the feature face space of the covariance matrix A A^T of the face gray image, under the condition that the contribution rate exceeds 99%, is then:
w = [u_1, u_2, ..., u_K] (7)
Twenty-first step: the back-end server stores w and secretly transmits the corresponding integers M and N to the front-end face intelligent sensor;
Twenty-second step: end.
2. The intelligent sensing-oriented safe transmission and identification method for face images as claimed in claim 1, wherein the sensing and encryption algorithm for the face image of the user with unknown identity comprises the following steps:
The first step: the front-end face intelligent sensor randomly and intelligently senses the face image P_1 of a user u_Θ with unknown identity and another non-face image P_2;
Second step: the front-end face intelligent sensor randomly selects a number in (0, 1) and denotes it a_0; randomly selects a number in [3.57, 4] and denotes it β; and sets the initial iteration count t to 0;
Third step: the front-end face intelligent sensor randomly selects an image according to the encryption requirement and denotes it P_r;
Fourth step: the front-end face intelligent sensor receives the integers M and N sent by the back-end server and divides P_1, P_2, and P_r, respectively, into M×N blocks of equal size;
Fifth step: according to a_0, β, and t = 0, the front-end face intelligent sensor calculates the chaotic sequence {a_i}, where 0 < a_i < 1, i = 1, 2, ..., M, using the following equation:
a_{i+1} = β * a_i * (1 - a_i) (8)
Sixth step: the front-end face intelligent sensor judges whether t is larger than 15, if yes, the step is switched to a fifteenth step, and if not, the step is switched to a seventh step;
seventh step: i=1;
eighth step: judging whether i is larger than M by the front-end face intelligent sensor, if so, turning to a fourteenth step, and if not, turning to a ninth step;
ninth step: j=1;
tenth step: judging whether j is larger than N by the front-end face intelligent sensor, if yes, turning to a thirteenth step, and if not, turning to an eleventh step;
Eleventh step: the front-end face intelligent sensor calculates the value of the current block of the encrypted image by formula (9) (the formula is given only as an image in the publication);
Twelfth step: j=j+1, go to the tenth step;
thirteenth step: i=i+1, go to eighth step;
fourteenth step: t=t+1, go to the sixth step;
Fifteenth step: define the face encrypted image of the unknown-identity user u_Θ as E, where E(i, j) is the gray value of the face encrypted image of u_Θ at image-coordinate block (i, j), P_1(i, j) is the gray value of the user image P_1 at block (i, j), and P_2(i, j) is the gray value of the other non-face image P_2 at block (i, j);
Sixteenth step: the front-end face intelligent sensor secretly transmits the information ξ = {a_0 || β || P_r} to the back-end server;
Seventeenth step: the front-end face intelligent sensor transmits the face encrypted image E of the unknown-identity user u_Θ to the back-end server;
Eighteenth step: end.
3. The intelligent sensing-oriented safe transmission and identification method for face images as claimed in claim 2, wherein the receiving and decryption algorithm for the face image of the user with unknown identity comprises the following steps:
The first step: the back-end server receives the information ξ = {a_0 || β || P_r} sent by the front-end face intelligent sensor;
Second step: the back-end server receives the face encrypted image E of the unknown-identity user u_Θ sent by the front-end face intelligent sensor;
Third step: according to the information ξ = {a_0 || β || P_r} sent by the front-end face intelligent sensor and the face encrypted image E, the back-end server sets the initial iteration count t to 0 and calculates the chaotic sequence {a_i}, where 0 < a_i < 1, i = 1, 2, ..., M, by the following equation:
a_{i+1} = β * a_i * (1 - a_i) (10)
Fourth step: the back-end server judges whether t is larger than 15, if yes, the process goes to the thirteenth step, and if not, the process goes to the fifth step;
fifth step: i=1;
sixth step: the back-end server judges whether i is larger than M, if so, the step is switched to the twelfth step, and if not, the step is switched to the seventh step;
seventh step: j=1;
eighth step: the back-end server judges whether j is larger than N, if yes, the step is switched to the eleventh step, and if not, the step is switched to the ninth step;
Ninth step: the back-end server performs the block-wise calculation by formula (11) (the formula is given only as an image in the publication);
Tenth step: j=j+1, go to eighth step;
eleventh step: i=i+1, go to the sixth step;
twelfth step: t=t+1, go to the fourth step;
thirteenth step: the back-end server sets i=1;
fourteenth step: the back-end server judges whether i is larger than M, if so, the step is switched to the twentieth step, and if not, the step is switched to the fifteenth step;
fifteenth step: j=1;
sixteenth step: the back-end server judges whether j is larger than N, if yes, the step is switched to nineteenth, and if not, the step is switched to seventeenth;
Seventeenth step: the back-end server performs the block-wise calculation by formula (12) (the formula is given only as an image in the publication);
Eighteenth step: j=j+1, go to the sixteenth step;
nineteenth step: i=i+1, go to the fourteenth step;
Twentieth step: the back-end server stores the decrypted face images and denotes them CP_1 and CP_2, where CP_1(i, j) is the gray value of the decrypted face image CP_1 at image-coordinate block (i, j) and CP_2(i, j) is the gray value of the decrypted other non-face image CP_2 at block (i, j);
Twenty-first step: end.
4. The intelligent sensing-oriented safe transmission and identification method for face images as claimed in claim 3, wherein the security recognition algorithm for the face image of the user with unknown identity comprises the following steps:
The first step: from the decrypted face images, the back-end server selects, as required, the decrypted face image CP_1 of the unknown-identity user to be recognized, or the other non-face image CP_2;
Second step: the back-end server divides the selected face image CP_1, or other non-face image CP_2, to be recognized into M×N blocks of equal size and stores the pixel RGB values, i.e., the R, G, B values, of each block of CP_1;
Third step: following the seventh through twentieth steps of the algorithm in claim 1, the back-end server constructs, for the face image CP_1, or other non-face image CP_2, of the unknown-identity user, the feature face space of its gray image under the condition that the contribution rate exceeds 99%, the result being denoted w_Θ, the feature vectors A_Θ V_Θ corresponding to the eigenvalues, with V_Θ = [V_1^Θ, V_2^Θ, ..., V_M^Θ], and the difference A_Θ V_Θ - Ψ_Θ between those feature vectors and the corresponding average vector, with Ψ_Θ = [Ψ_1^Θ, Ψ_2^Θ, ..., Ψ_M^Θ];
Fourth step: the back-end server calculates the projection of the face image CP_1, or other non-face image CP_2, of the unknown-identity user onto the feature face space under the condition that the contribution rate exceeds 99%; the calculation formula is:
Ω_Θ = w_Θ^T (A_Θ V_Θ - Ψ_Θ) (13)
Fifth step: according to the algorithm in claim 1, the back-end server calculates w, A V_i, and Ψ_i, and computes the projection of the pixel values of the legal user U onto the feature face space under the condition that the contribution rate exceeds 99%; the calculation formula is:
Ω_P(U) = w^T (AV - Ψ) (14)
Sixth step: the back-end server calculates the image threshold θ_2 of the legal user according to formula (15) (the formula is given only as an image in the publication);
Seventh step: the back-end server calculates the Euclidean distances ε_i between Ω_P(U) and the projection of the decrypted image, using the Euclidean norm, according to formula (16) (the formula is given only as an image in the publication);
Eighth step: the back-end server identifies and classifies the face according to the following rules:
1) if ε_i ≥ θ_2 for all i, i = 1, 2, ..., M, the image to be recognized is not a face image;
2) if ε_i < θ_2 for all i, i = 1, 2, ..., M, the image to be recognized is a legal user's gray image;
3) if there is some i, 1 ≤ i ≤ M, such that ε_i ≥ θ_2 and also some i, 1 ≤ i ≤ M, such that ε_i < θ_2, the image to be recognized is not a legal user's gray image;
ninth step: the back-end server stores and displays the face recognition result;
Tenth step: end.
CN201911400109.3A 2019-12-30 2019-12-30 Intelligent sensing-oriented safe transmission and identification method for face images Active CN111144352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911400109.3A CN111144352B (en) 2019-12-30 2019-12-30 Intelligent sensing-oriented safe transmission and identification method for face images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911400109.3A CN111144352B (en) 2019-12-30 2019-12-30 Intelligent sensing-oriented safe transmission and identification method for face images

Publications (2)

Publication Number Publication Date
CN111144352A CN111144352A (en) 2020-05-12
CN111144352B true CN111144352B (en) 2023-05-05

Family

ID=70522151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911400109.3A Active CN111144352B (en) 2019-12-30 2019-12-30 Intelligent sensing-oriented safe transmission and identification method for face images

Country Status (1)

Country Link
CN (1) CN111144352B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152083B (en) * 2023-08-31 2024-04-09 哈尔滨工业大学 Ground penetrating radar road disease image prediction visualization method based on category activation mapping

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886235A (en) * 2014-03-03 2014-06-25 杭州电子科技大学 Face image biological key generating method
CN107862282A (en) * 2017-11-07 2018-03-30 深圳市金城保密技术有限公司 A kind of finger vena identification and safety certifying method and its terminal and system
CN110336776A (en) * 2019-04-28 2019-10-15 杭州电子科技大学 A kind of multi-point cooperative Verification System and method based on user images intelligent acquisition
CN110458091A (en) * 2019-08-08 2019-11-15 北京阿拉丁智慧科技有限公司 Recognition of face 1 based on position screening is than N algorithm optimization method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8023699B2 (en) * 2007-03-09 2011-09-20 Jiris Co., Ltd. Iris recognition system, a method thereof, and an encryption system using the same

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886235A (en) * 2014-03-03 2014-06-25 杭州电子科技大学 Face image biological key generating method
CN107862282A (en) * 2017-11-07 2018-03-30 深圳市金城保密技术有限公司 A kind of finger vena identification and safety certifying method and its terminal and system
CN110336776A (en) * 2019-04-28 2019-10-15 杭州电子科技大学 A kind of multi-point cooperative Verification System and method based on user images intelligent acquisition
CN110458091A (en) * 2019-08-08 2019-11-15 北京阿拉丁智慧科技有限公司 Recognition of face 1 based on position screening is than N algorithm optimization method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
盛家伦; 姚智童; 王云涛; 付绍静. Design and implementation of a WLAN secure communication system based on face recognition. 信息网络安全 (Netinfo Security), 2013(09), full text. *
符艳军; 程咏梅; 董淑福; 王晓东. A network identity authentication system combining face features and cryptographic techniques. 计算机应用研究 (Application Research of Computers), 2010(02), full text. *

Also Published As

Publication number Publication date
CN111144352A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
Panchal et al. A novel approach to fingerprint biometric-based cryptographic key generation and its applications to storage security
Barman et al. Fingerprint-based crypto-biometric system for network security
Leng et al. Dual-key-binding cancelable palmprint cryptosystem for palmprint protection and information security
El-Shafai et al. Efficient and secure cancelable biometric authentication framework based on genetic encryption algorithm
Mariño et al. A crypto-biometric scheme based on iris-templates with fuzzy extractors
JP2016131335A (en) Information processing method, information processing program and information processing device
Falmari et al. Privacy preserving cloud based secure digital locker using Paillier based difference function and chaos based cryptosystem
Vallabhadas et al. Securing multimodal biometric template using local random projection and homomorphic encryption
Jacob et al. Biometric template security using DNA codec based transformation
CN114065169B (en) Privacy protection biometric authentication method and device and electronic equipment
CN111144352B (en) Intelligent sensing-oriented safe transmission and identification method for face images
Helmy et al. A hybrid encryption framework based on Rubik’s cube for cancelable biometric cyber security applications
Shanthini et al. Multimodal biometric-based secured authentication system using steganography
Helmy et al. A novel cancellable biometric recognition system based on Rubik’s cube technique for cyber-security applications
Shrivas et al. A survey on visual cryptography techniques and their applications
Selimović et al. Authentication based on the image encryption using delaunay triangulation and catalan objects
Buhan et al. Secure ad-hoc pairing with biometrics: SAfE
CN111404691A (en) Quantum secret sharing method and system with credible authentication based on quantum walking
Barman et al. Approach to cryptographic key generation from fingerprint biometrics
Mehta et al. An efficient & secure encryption scheme for biometric data using holmes map & singular value decomposition
Alghamdi et al. Bio-chaotic stream cipher-based iris image encryption
Salama et al. Safeguarding images over insecure channel using master key visual cryptopgraphy
Eid et al. A secure multimodal authentication system based on chaos cryptography and fuzzy fusion of iris and face
Marimuthu et al. Dual fingerprints fusion for cryptographic key generation
Ghazali et al. Security performance evaluation of biometric lightweight encryption for fingerprint template protection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant