CN113591782A - Training-based face recognition intelligent safety box application method and system - Google Patents

Info

Publication number
CN113591782A
CN113591782A
Authority
CN
China
Prior art keywords
training
safe
face image
training sample
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110921822.3A
Other languages
Chinese (zh)
Inventor
韩亚东 (Han Yadong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huilang Times Technology Co Ltd
Original Assignee
Beijing Huilang Times Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huilang Times Technology Co Ltd filed Critical Beijing Huilang Times Technology Co Ltd
Priority to CN202110921822.3A priority Critical patent/CN113591782A/en
Publication of CN113591782A publication Critical patent/CN113591782A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroids
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/00174Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00563Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voice patterns
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/00174Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00896Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys specially adapted for particular uses
    • G07C9/00912Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys specially adapted for particular uses for safes, strong-rooms, vaults or the like

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a training-based face recognition intelligent safe application method and system, relating to the technical field of face recognition. The method comprises: collecting face image information of the holder with a face recognition device and defining it as a first training sample; acquiring face image information of non-holders from the Internet and defining it as a second training sample; importing the first training sample and the second training sample into a training model for training to obtain a decision model; when the face recognition device detects a user, collecting the user's face image and matching it against the decision model; sending an instruction to open the intelligent safe when the result value exceeds a first preset value; and sending an instruction prohibiting opening of the intelligent safe when the result value falls below a second preset value. The method achieves high-precision face recognition for intelligent safe applications.

Description

Training-based face recognition intelligent safety box application method and system
Technical Field
The invention relates to the technical field of face recognition, and in particular to a training-based face recognition intelligent safe application method and system.
Background
In modern society, nations, enterprises and families attach ever greater importance to property protection, and intelligent safes play an increasingly important role. A safe can effectively store important property such as account books, cash and gold, providing important support for property protection. At the same time, its own security has become a potential weakness, and many lawbreakers have caused huge property losses through theft and similar means. For this reason, many research institutions and technology companies have effectively combined face recognition technology with intelligent safe applications, protecting property while providing convenience.
However, traditional face recognition technology has certain shortcomings and cannot effectively guarantee the secure application of a face-recognition-based intelligent safe. In particular, face images collected from different angles often differ from one another, and traditional methods do not fully account for these differences, which significantly reduces the accuracy of face recognition. How to establish a more effective intelligent safe application method based on accurate, training-based face recognition, ensuring recognition accuracy and thereby the safe use of the intelligent safe to the greatest extent, is therefore an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide a training-based face recognition intelligent safe application method that achieves high-precision face recognition for intelligent safe applications.
The embodiment of the invention is realized by the following steps:
In a first aspect, an embodiment of the present application provides a training-based face recognition intelligent safe application method, which includes: defining a training model, the training model comprising a support vector machine model; collecting face image information of the safe's holder with a face recognition device on the safe and defining it as a first training sample; collecting face image information of non-holders of the safe from the Internet and defining it as a second training sample; importing the first training sample and the second training sample into the training model for training to obtain a decision model; when the face recognition device of the safe detects a user, collecting the user's face image and matching it against the decision model; sending an instruction to open the intelligent safe when the result value of the decision-model matching exceeds a first preset value; and sending an instruction prohibiting opening of the intelligent safe when the result value of the decision-model matching falls below a second preset value.
In some embodiments of the present invention, the method further comprises repeating the above steps when the result value of the decision-model matching lies between the first preset value and the second preset value.
In some embodiments of the present invention, when the result value of the decision-model matching lies between the first preset value and the second preset value, the method further includes: characterizing the first training sample and the second training sample with a deep hash coding method; calculating the similarity between the characterized first and second training samples by Euclidean distance; saving a first preset number of the first and second training samples, ordered from high to low similarity, as comparison samples; matching the image to be detected against the comparison samples with a multi-group K-means algorithm; sending an instruction to open the safe if the image to be detected is judged to be a face image of the safe's holder; and sending an instruction prohibiting opening of the safe if the image to be detected is judged not to be a face image of the safe's holder.
In some embodiments of the present invention, the deep hash coding method includes classifying, with a loss function, the samples in the first training sample and the second training sample that meet a preset similarity, and excluding the samples that do not; generating binary codes from the samples that meet the preset similarity; driving the binary codes toward the expected discrete values with a regularizer; and quantizing the output binary code representation.
In some embodiments of the present invention, the first training samples among the comparison samples comprise a plurality of positive samples and the second training samples comprise a plurality of negative samples; the Euclidean distance is used to calculate the pairwise similarity among the positive samples and among the negative samples, and equal numbers of positive and negative samples are retained.
In some embodiments of the present invention, the multi-group K-means algorithm comprises: placing the positive samples and the negative samples in a data set and dividing them into a plurality of classification groups, each classification group containing equal numbers of positive and negative samples; randomly selecting a second preset number of initial clustering centers within each classification group; calculating the distance from each sample of a classification group to that group's initial clustering centers and assigning each sample to its nearest clustering center to form classification clusters; taking the mean of all samples in each classification cluster as that cluster's new secondary clustering center; repeating these steps until the secondary clustering centers no longer change; judging the image to be detected to be a face image of the safe's holder if, after clustering, it matches the positive samples; and judging it to be a face image of a non-holder if, after clustering, it matches the negative samples.
In some embodiments of the invention, if more than half of the classification groups judge the image to be detected to be a face image of the safe's holder, the image is finally judged to be a face image of the safe's holder; and if more than half of the classification groups judge it to be a face image of a non-holder, it is finally judged to be a face image of a non-holder.
In a second aspect, an embodiment of the present application provides a training-based face recognition intelligent safe application system, which includes: a model definition module, which defines a training model, the training model comprising a support vector machine model; a first acquisition module, which collects face image information of the safe's holder with a face recognition device on the safe and defines it as a first training sample; a second acquisition module, which acquires face image information of non-holders of the safe from the Internet and defines it as a second training sample; a processing module, which imports the first training sample and the second training sample into the training model for training to obtain a decision model; and a judgment module, which, when the face recognition device of the safe detects a user, collects the user's face image and matches it against the decision model, sends an instruction to open the intelligent safe when the result value of the decision-model matching exceeds a first preset value, and sends an instruction prohibiting opening of the intelligent safe when the result value falls below a second preset value.
In a third aspect, an embodiment of the present application provides an electronic device, which includes at least one processor, at least one memory, and a data bus, wherein the processor and the memory communicate with each other through the data bus, the memory stores program instructions executable by the processor, and the processor calls the program instructions to perform the method described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the above-mentioned method.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects:
The embodiment of the invention relates to a training-based face recognition intelligent safe application method. Traditional face recognition technology suffers from the fact that a human face looks somewhat different from different angles, so a face recognition device on a safe captures the same face differently depending on the viewing angle. To overcome this defect, the method achieves high-precision face recognition for intelligent safe applications using deep hash coding, a support vector machine (SVM), a multi-group K-means algorithm, and related models. The specific implementation is as follows:
S101, defining a training model, wherein the training model comprises a support vector machine model;
In order to improve the accuracy of face recognition, the invention adopts a training-based face recognition method and processes the collected face image information with a support vector machine model. The support vector machine is a generalized linear classifier that performs binary classification of data by supervised learning; its decision boundary is the maximum-margin hyperplane solved from the learning samples. The support vector machine computes the empirical risk with the hinge loss function and adds a regularization term to the solving system to optimize the structural risk, which gives it sparsity and robustness in operation. In addition, because biological information is being identified and judged in the invention, the support vector machine performs non-linear classification via the kernel method.
S201, acquiring face image information of a holder of the safe by using a face recognition device on the safe, and defining the face image information as a first training sample;
S301, collecting face image information of non-holders of the safe from the Internet, and defining it as a second training sample;
Collecting face image information of the safe's holder and of non-holders provides the classification samples for the support vector machine. For example: 60 face images of the safe's holder are collected at different angles and in different postures with the face recognition device on the safe as the first training sample, and 60 face images of non-holders are collected from the Internet and other channels as the second training sample.
S401, importing a first training sample and a second training sample into a training model for training to obtain a decision model;
As shown in fig. 2, training with the training model yields a decision model, from which a decision boundary, a first training sample interval boundary, a second training sample interval boundary, a first training sample support vector, and a second training sample support vector can be obtained. A value beyond the first training sample interval boundary is defined as the first preset value, and a value below the second training sample interval boundary is defined as the second preset value.
S501a, when the face recognition device of the safe box detects a user, collecting the face image of the user, matching the face image based on a decision model, and sending an instruction for opening the intelligent safe box when the result value based on the decision model matching exceeds a first preset value; and when the result value based on the decision model matching is lower than a second preset value, sending an instruction for forbidding opening the intelligent safe.
After the support-vector-machine training step, the image information to be detected is imported into the decision model for matching. When the result value of the decision-model matching exceeds the first preset value, the user is the safe's holder, and an instruction to open the intelligent safe is sent so the safe opens; when the result value falls below the second preset value, the user is not the safe's holder, and an instruction prohibiting opening of the intelligent safe is sent so the safe remains closed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be regarded as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
FIG. 1 is a flow chart of an application method of an intelligent safe based on training face recognition in the invention;
FIG. 2 is a schematic diagram of a decision model according to the present invention;
FIG. 3 is another flow chart of the training-based face recognition intelligent safe application method of the present invention;
FIG. 4 is another flowchart of the training-based face recognition intelligent safe application method of the present invention;
FIG. 5 is a flow chart of an intelligent safe case application system based on training face recognition in the invention;
fig. 6 is a schematic structural diagram of an electronic device according to the present invention.
Reference numerals: 1. model definition module; 11. first training sample support vector; 12. first training sample; 13. first training sample interval boundary; 14. decision boundary; 15. second training sample interval boundary; 16. second training sample support vector; 17. second training sample; 2. first acquisition module; 3. second acquisition module; 4. processing module; 5. judgment module; 6. processor; 7. memory; 8. data bus.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the description of the present application, it should be noted that the terms "upper", "lower", "inner", "outer", and the like indicate orientations or positional relationships based on orientations or positional relationships shown in the drawings or orientations or positional relationships conventionally found in use of products of the application, and are used only for convenience in describing the present application and for simplification of description, but do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present application.
In the description of the present application, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed" and "connected" are to be interpreted broadly, e.g., as fixedly connected, detachably connected, or integrally connected; as mechanically or electrically connected; or as directly connected, indirectly connected through an intermediate medium, or communicating between the interiors of two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the individual features of the embodiments can be combined with one another without conflict.
Example 1
Referring to fig. 1, in the training-based face recognition intelligent safe application method provided in this embodiment of the present application, the defect addressed is that a human face looks somewhat different from different angles, so a face recognition device on a safe captures the same face differently depending on the viewing angle. To overcome this defect, this embodiment uses deep hash coding, a support vector machine (SVM), a multi-group K-means algorithm, and related models to achieve high-precision face recognition for intelligent safe applications. The specific implementation is as follows:
S101, defining a training model, wherein the training model comprises a support vector machine model;
In order to improve the accuracy of face recognition, the invention adopts a training-based face recognition method and processes the collected face image information with a support vector machine model. The support vector machine is a generalized linear classifier that performs binary classification of data by supervised learning; its decision boundary 14 is the maximum-margin hyperplane solved from the learning samples. The support vector machine computes the empirical risk with the hinge loss function and adds a regularization term to the solving system to optimize the structural risk, which gives it sparsity and robustness in operation. In addition, because biological information is being identified and judged in the invention, the support vector machine performs non-linear classification via the kernel method.
S201, acquiring face image information of a holder of the safe by using a face recognition device on the safe, and defining the face image information as a first training sample 12;
S301, collecting face image information of non-holders of the safe from the Internet, and defining it as a second training sample 17;
Collecting face image information of the safe's holder and of non-holders provides the classification samples for the support vector machine. For example: 60 face images of the safe's holder are collected at different angles and in different postures with the face recognition device on the safe as the first training sample 12, and 60 face images of non-holders are collected from the Internet and other channels as the second training sample 17.
S401, importing a first training sample 12 and a second training sample 17 into a training model for training to obtain a decision model;
As shown in fig. 2, training with the training model yields a decision model, in which a decision boundary 14, a first training sample interval boundary 13, a second training sample interval boundary 15, a first training sample support vector 11, and a second training sample support vector 16 can be obtained. A value beyond the first training sample interval boundary 13 is defined as the first preset value, and a value below the second training sample interval boundary 15 is defined as the second preset value.
S501a, when the face recognition device of the safe detects a user, collecting the user's face image and matching it against the decision model; sending an instruction to open the intelligent safe when the result value of the decision-model matching exceeds a first preset value; and sending an instruction prohibiting opening of the intelligent safe when the result value falls below a second preset value.
After the support-vector-machine training step, the image information to be detected is imported into the decision model for matching. When the result value of the decision-model matching exceeds the first preset value, the user is the safe's holder, and an instruction to open the intelligent safe is sent so the safe opens; when the result value falls below the second preset value, the user is not the safe's holder, and an instruction prohibiting opening of the intelligent safe is sent so the safe remains closed.
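As a minimal sketch of this decision flow (an illustration, not part of the patented implementation), the following Python code trains a kernel SVM on hypothetical 128-dimensional face feature vectors and applies the two preset values; all names, dimensions, and threshold values are assumptions. With scikit-learn's SVC, decision_function returns the signed decision value, which equals +1/-1 on the two margin boundaries, so those values play the roles of the first and second preset values here.

```python
import numpy as np
from sklearn import svm

# Hypothetical stand-ins for real face features: X_pos from the holder's
# 60 images (first training sample 12), X_neg from 60 non-holder images
# gathered online (second training sample 17).
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=1.0, size=(60, 128))
X_neg = rng.normal(loc=-1.0, size=(60, 128))
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(60), -np.ones(60)])

clf = svm.SVC(kernel="rbf")  # kernel method for non-linear classification
clf.fit(X, y)                # training yields the decision model

FIRST_PRESET = 1.0    # beyond the first training sample interval boundary 13
SECOND_PRESET = -1.0  # below the second training sample interval boundary 15

def decide(face_features):
    """Map a detected face to an open / prohibit / ambiguous decision."""
    score = clf.decision_function(face_features.reshape(1, -1))[0]
    if score > FIRST_PRESET:
        return "OPEN"      # holder: send the instruction to open the safe
    if score < SECOND_PRESET:
        return "PROHIBIT"  # non-holder: keep the safe closed
    return "AMBIGUOUS"     # between the preset values: repeat S101-S501a
                           # or fall back to the Example 2 procedure
```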
Referring to fig. 3, S501d: the method further comprises repeating steps S101 to S501a when the result value of the decision-model matching lies between the first preset value and the second preset value.
In some embodiments of the present invention, with respect to the accuracy of face recognition, classification errors based on the support vector machine model arise mainly in the classification margin region, where matching errors occur easily. As can be seen in fig. 2, this corresponds to face information to be detected that falls between the first training sample interval boundary 13 and the second training sample interval boundary 15 of the decision model, close to one of the interval boundaries, i.e., the case where the result value of the decision-model matching lies between the first preset value and the second preset value. To avoid such errors, the above steps are repeated: data acquisition continues and the decision model is updated again, so that the matching based on the support vector machine model becomes more accurate.
Example 2
Referring to fig. 4, building on Embodiment 1, this embodiment provides that, when the result value of the decision-model matching lies between the first preset value and the second preset value, the method further includes: characterizing the first training sample 12 and the second training sample 17 respectively with a deep hash coding method; calculating the similarity between the characterized first and second training samples by Euclidean distance; saving a first preset number of the first training samples 12 and second training samples 17, ordered from high to low similarity, as comparison samples; matching the image to be detected against the comparison samples with a multi-group K-means algorithm; sending an instruction to open the safe if the image to be detected is judged to be a face image of the safe's holder; and sending an instruction prohibiting opening of the safe if the image to be detected is judged not to be a face image of the safe's holder.
S501b: the deep hash coding method comprises classifying, with a loss function, the samples in the first training sample 12 and the second training sample 17 that meet a preset similarity, and excluding the samples that do not; generating binary codes from the samples that meet the preset similarity; driving the binary codes toward the expected discrete values with a regularizer; and quantizing the output binary code representation.
In some embodiments of the present invention, the deep hash coding adopts the CNN architecture from the paper "Deep Supervised Hashing for Fast Image Retrieval" and takes the first training sample 12 and the second training sample 17 as training inputs, so that the output for each image approximates a discrete value (e.g., +1/-1). The loss function is set as follows:
Let $\Omega$ denote the RGB space. The codes of similar images should be as close as possible, while the codes of dissimilar images should be far apart; the loss function therefore pulls the codes of similar images together and pushes the codes of dissimilar images away from each other. Specifically, for a pair of images $I_1, I_2 \in \Omega$ with corresponding binary network outputs $b_1, b_2 \in \{+1, -1\}^k$, define $y = 0$ if the two images are similar and $y = 1$ otherwise. The loss for one image pair is then defined as:

$$L(b_1, b_2, y) = \frac{1}{2}(1 - y)\, D_h(b_1, b_2) + \frac{1}{2}\, y \, \max\big(m - D_h(b_1, b_2),\, 0\big)$$

where $D_h(\cdot, \cdot)$ represents the Hamming distance between two binary vectors and $m > 0$ is a margin threshold parameter. The first term penalizes similar images mapped to different binary codes, and the second term penalizes dissimilar images mapped to binary codes whose Hamming distance falls below the margin threshold $m$. Only dissimilar pairs whose distance lies within this radius contribute to the loss (dissimilar images that are already far apart are not useful for training the network). Suppose there are $N$ training image pairs $\{(I_{i,1}, I_{i,2}, y_i)\}_{i=1}^{N}$ randomly sampled from the training images; the overall loss to be minimized is:

$$L = \sum_{i=1}^{N} L(b_{i,1}, b_{i,2}, y_i)$$
the above method encodes the supervisory information of the input image pair from the first training sample 12 and the second training sample 17, and at the same time regularizes the real-valued output to approximate the required discrete values, thereby maximally improving the identifiability of the output space. In addition, in image retrieval, a newly appeared image to be detected can be encoded through an input network, and then the network output is quantized into a binary code representation, so that the image can be easily encoded.
S502b: the first training samples 12 among the comparison samples comprise a plurality of positive samples, and the second training samples 17 comprise a plurality of negative samples; the Euclidean distance is used to calculate the pairwise similarity among the positive samples and among the negative samples, and equal numbers of positive and negative samples are retained.
In some embodiments of the present invention, the Euclidean distance transform is applied to the first training sample 12 and the second training sample 17 after binary-code characterization. Its principle is to convert each foreground pixel of a binary image into the distance from that point to the nearest background point (here white is taken as the foreground color and black as the background color). The pairwise similarity among the positive samples and among the negative samples is calculated, and the 40 positive samples and 40 negative samples with the largest mutual differences are retained respectively.
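One plausible reading of this selection step, sketched in Python (the "mean pairwise distance" criterion and all names are assumptions; the text only states that the 40 most mutually different samples per class are kept):

```python
import numpy as np

def keep_most_distinct(codes, k=40):
    """Keep the k samples with the largest mean Euclidean distance to the rest.

    codes : (n, d) array of binary-coded samples of one class
    """
    diff = codes[:, None, :] - codes[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))  # pairwise Euclidean distances
    spread = dist.mean(axis=1)                # average distance to all others
    return codes[np.argsort(spread)[-k:]]     # the k most mutually different

# comparison_pos = keep_most_distinct(pos_codes, 40)  # positive comparison samples
# comparison_neg = keep_most_distinct(neg_codes, 40)  # negative comparison samples
```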
In some embodiments of the present invention, to improve the calculation accuracy, a multi-group K-means algorithm is adopted. Its principle is to divide the data into K groups, randomly select K objects as initial clustering centers, calculate the distance between each object and each seed clustering center, and assign each object to its nearest clustering center; a clustering center together with the objects assigned to it represents one cluster. Each time samples are assigned, the clustering center of each cluster is recalculated from the objects currently in the cluster. This process repeats until a termination condition is met. The specific implementation is as follows:
The multi-group K-means algorithm comprises the following steps:
S503b: placing a plurality of positive samples and a plurality of negative samples in a data set, and dividing them into a plurality of classification groups, wherein any classification group contains the same number of positive samples and negative samples;
for example, the 40 positive samples and 40 negative samples obtained above are put into a data set and divided into 5 groups, each group containing 8 positive samples and 8 negative samples;
S504b: randomly selecting a second preset number of initial clustering centers from each classification group;
for example, 2 initial clustering centers are randomly chosen for each group;
S505b: calculating the distance from each sample of a classification group to that group's initial clustering centers, and assigning each sample to the clustering center closest to it to form classification clusters;
that is, the distance from each sample to each clustering center is calculated, and each sample is assigned to the nearest clustering center;
S506b: taking the mean of all samples in any classification cluster as that cluster's new secondary clustering center;
that is, for each classification cluster, the mean of all its samples becomes its new clustering center, the secondary clustering center;
S507b: repeating the above steps until the secondary clustering centers no longer change;
S508b: if the image to be detected, after clustering, matches the positive samples, it is judged to be a face image of the safe's holder; and if, after clustering, it matches the negative samples, it is judged to be a face image of a non-holder.
S508c: if more than half of the classification groups judge the image to be detected to be a face image of the safe's holder, it is finally judged to be a face image of the safe's holder; and if more than half of the classification groups judge it to be a face image of a non-holder, it is finally judged to be a face image of a non-holder.
In some embodiments of the present invention, the classification groups may still disagree after the calculation above, and the decision is made by the total count of positive or negative judgments. Under the assumed data above, the specific implementation is: if 3 or more of the 5 groups judge the image to be detected to be a face image of the safe's holder, it is finally judged to be a face image of the safe's holder; if 3 or more of the 5 groups judge it to be a face image of a non-holder, it is finally judged to be a face image of a non-holder.
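Pulling the Example 2 steps together, the following Python sketch implements one assumed reading of the multi-group vote: the 40 positive and 40 negative comparison samples are randomly split into 5 groups of 8 + 8 (S503b), each group is clustered with scikit-learn's KMeans using 2 centers (S504b to S507b), the image to be detected is assigned to its nearest cluster center, a group votes "holder" when that cluster is dominated by positive samples (S508b), and the final decision is the majority vote of S508c. All function and variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def multi_group_kmeans_vote(pos, neg, query, n_groups=5, k=2, seed=0):
    """Majority vote over n_groups independent K-means clusterings.

    pos, neg : (40, d) positive / negative comparison samples
    query    : (d,) binary-coded image to be detected
    """
    rng = np.random.default_rng(seed)
    p_idx = rng.permutation(len(pos)).reshape(n_groups, -1)  # 8 positives/group
    n_idx = rng.permutation(len(neg)).reshape(n_groups, -1)  # 8 negatives/group
    votes = 0
    for g in range(n_groups):
        data = np.vstack([pos[p_idx[g]], neg[n_idx[g]]])
        labels = np.array([1] * p_idx.shape[1] + [0] * n_idx.shape[1])
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(data)
        cluster = km.predict(query.reshape(1, -1))[0]  # nearest cluster center
        # the group votes "holder" if its cluster is mostly positive samples
        votes += int(labels[km.labels_ == cluster].mean() > 0.5)
    return "holder" if votes > n_groups // 2 else "non-holder"
```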
Example 3
Referring to fig. 5, a training-based face recognition intelligent safe application system provided in an embodiment of the present application includes:
the model definition module 1, which defines a training model, the training model comprising a support vector machine model;
the first acquisition module 2, which collects face image information of the safe's holder with a face recognition device on the safe and defines it as a first training sample 12;
the second acquisition module 3, which acquires face image information of non-holders of the safe from the Internet and defines it as a second training sample 17;
the processing module 4, which imports the first training sample 12 and the second training sample 17 into the training model for training to obtain a decision model;
the judgment module 5, which, when the face recognition device of the safe detects a user, collects the user's face image and matches it against the decision model, sends an instruction to open the intelligent safe when the result value of the decision-model matching exceeds a first preset value, and sends an instruction prohibiting opening of the intelligent safe when the result value falls below a second preset value.
Example 4
Referring to fig. 6, an electronic device provided in an embodiment of the present application includes at least one processor 6, at least one memory 7, and a data bus 8, wherein the processor 6 and the memory 7 communicate with each other through the data bus 8; the memory 7 stores program instructions executable by the processor 6, and the processor 6 calls the program instructions to execute the above training-based face recognition intelligent safe application method, for example by implementing the following steps:
S101: defining a training model, the training model comprising a support vector machine model; S201: collecting face image information of the safe's holder with a face recognition device on the safe, and defining it as a first training sample 12; S301: collecting face image information of non-holders of the safe from the Internet, and defining it as a second training sample 17; S401: importing the first training sample 12 and the second training sample 17 into the training model for training to obtain a decision model; S501a: when the face recognition device of the safe detects a user, collecting the user's face image and matching it against the decision model; sending an instruction to open the intelligent safe when the result value of the decision-model matching exceeds a first preset value; and sending an instruction prohibiting opening of the intelligent safe when the result value falls below a second preset value.
Example 5
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by the processor 6, implements the above training-based face recognition intelligent safe application method, for example by implementing the following steps:
S101: defining a training model, the training model comprising a support vector machine model; S201: collecting face image information of the safe's holder with a face recognition device on the safe, and defining it as a first training sample 12; S301: collecting face image information of non-holders of the safe from the Internet, and defining it as a second training sample 17; S401: importing the first training sample 12 and the second training sample 17 into the training model for training to obtain a decision model; S501a: when the face recognition device of the safe detects a user, collecting the user's face image and matching it against the decision model; sending an instruction to open the intelligent safe when the result value of the decision-model matching exceeds a first preset value; and sending an instruction prohibiting opening of the intelligent safe when the result value falls below a second preset value.
The memory 7 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 6 may be an integrated circuit chip having signal processing capabilities. The processor 6 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In the embodiments provided in the present application, it should be understood that the disclosed method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A training-based face recognition intelligent safe application method, characterized by comprising the following steps:
S101: defining a training model, the training model comprising a support vector machine model;
S201: collecting face image information of the safe's holder with a face recognition device on the safe, and defining it as a first training sample;
S301: collecting face image information of non-holders of the safe from the Internet, and defining it as a second training sample;
S401: importing the first training sample and the second training sample into the training model for training to obtain a decision model;
S501a: when the face recognition device of the safe detects a user, collecting the user's face image and matching it against the decision model; sending an instruction to open the intelligent safe when the result value of the decision-model matching exceeds a first preset value; and sending an instruction prohibiting opening of the intelligent safe when the result value of the decision-model matching falls below a second preset value.
2. The training-based face recognition intelligent safe application method as claimed in claim 1, further comprising repeating steps S101 to S501a when the result value of the decision-model matching is between the first preset value and the second preset value.
3. The training-based face recognition intelligent safe application method as claimed in claim 1, wherein, when the result value of the decision-model matching is between the first preset value and the second preset value, the method further comprises: characterizing the first training sample and the second training sample respectively with a deep hash coding method; calculating the similarity of the characterized first and second training samples by Euclidean distance; saving a first preset number of the first and second training samples, ordered from high to low similarity, as comparison samples; matching the image to be detected against the comparison samples with a multi-group K-means algorithm; sending an instruction to open the safe if the image to be detected is judged to be a face image of the safe's holder; and sending an instruction prohibiting opening of the safe if the image to be detected is judged not to be a face image of the safe's holder.
4. The training-based face recognition intelligent safe application method as claimed in claim 3, wherein the deep hash coding method comprises classifying, with a loss function, the samples in the first training sample and the second training sample that meet a preset similarity, and excluding the samples that do not; generating binary codes from the samples that meet the preset similarity; driving the binary codes toward the expected discrete values with a regularizer; and quantizing the output binary code representation.
5. The training-based face recognition intelligent safe application method as claimed in claim 3, wherein the first training samples among the comparison samples comprise a plurality of positive samples and the second training samples comprise a plurality of negative samples, the Euclidean distance is used to calculate the pairwise similarity among the positive samples and among the negative samples, and equal numbers of positive and negative samples are retained.
6. The training-based face recognition intelligent safe application method according to claim 5, wherein
the multi-group K-means algorithm comprises placing the plurality of positive samples and the plurality of negative samples in a data set, and dividing them into a plurality of classification groups, wherein any one of the classification groups contains equal numbers of positive samples and negative samples;
randomly selecting a second preset number of initial clustering centers from each classification group;
calculating the distance from each sample of any one of the classification groups to that group's initial clustering centers, and assigning each sample to the clustering center closest to it to form classification clusters;
taking the mean of all samples in any classification cluster as that cluster's new secondary clustering center;
repeating the above steps until the secondary clustering centers no longer change;
if the image to be detected, after clustering, matches the positive samples, judging it to be a face image of the safe's holder; and if, after clustering, it matches the negative samples, judging it to be a face image of a non-holder.
7. The training-based face recognition intelligent safe application method as claimed in claim 6, further comprising: finally judging the image to be detected to be a face image of the safe's holder if more than half of the classification groups judge it to be a face image of the safe's holder; and finally judging it to be a face image of a non-holder if more than half of the classification groups judge it to be a face image of a non-holder.
8. A training-based face recognition intelligent safe application system, characterized by comprising:
a model definition module, which defines a training model, the training model comprising a support vector machine model;
a first acquisition module, which collects face image information of the safe's holder with a face recognition device on the safe and defines it as a first training sample;
a second acquisition module, which acquires face image information of non-holders of the safe from the Internet and defines it as a second training sample;
a processing module, which imports the first training sample and the second training sample into the training model for training to obtain a decision model; and
a judgment module, which, when the face recognition device of the safe detects a user, collects the user's face image and matches it against the decision model, sends an instruction to open the intelligent safe when the result value of the decision-model matching exceeds a first preset value, and sends an instruction prohibiting opening of the intelligent safe when the result value of the decision-model matching falls below a second preset value.
9. An electronic device, comprising at least one processor, at least one memory, and a data bus, wherein the processor and the memory communicate with each other through the data bus, the memory stores program instructions executable by the processor, and the processor calls the program instructions to perform the method of any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-7.
CN202110921822.3A 2021-08-12 2021-08-12 Training-based face recognition intelligent safety box application method and system Pending CN113591782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110921822.3A CN113591782A (en) 2021-08-12 2021-08-12 Training-based face recognition intelligent safety box application method and system

Publications (1)

Publication Number Publication Date
CN113591782A true CN113591782A (en) 2021-11-02

Family

ID=78257319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110921822.3A Pending CN113591782A (en) 2021-08-12 2021-08-12 Training-based face recognition intelligent safety box application method and system

Country Status (1)

Country Link
CN (1) CN113591782A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102900305A (en) * 2012-07-13 2013-01-30 太仓博天网络科技有限公司 Keyless safe box system based on human face recognition
CN105095884A (en) * 2015-08-31 2015-11-25 桂林电子科技大学 Pedestrian recognition system and pedestrian recognition processing method based on random forest support vector machine
CN106503617A (en) * 2016-09-21 2017-03-15 北京小米移动软件有限公司 Model training method and device
CN207863709U (en) * 2018-01-29 2018-09-14 长沙舍同智能科技有限责任公司 Recognition of face intelligence controlled drug safety cabinet
CN208521345U (en) * 2018-07-13 2019-02-19 湖南创合未来科技股份有限公司 A kind of recognition of face safe cabinet and intelligent secrecy system
CN109727350A (en) * 2018-12-14 2019-05-07 深圳壹账通智能科技有限公司 A kind of Door-access control method and device based on recognition of face
WO2020211387A1 (en) * 2019-04-18 2020-10-22 深圳壹账通智能科技有限公司 Electronic contract displaying method and apparatus, electronic device, and computer readable storage medium
WO2021017261A1 (en) * 2019-08-01 2021-02-04 平安科技(深圳)有限公司 Recognition model training method and apparatus, image recognition method and apparatus, and device and medium
CN110851645A (en) * 2019-11-08 2020-02-28 吉林大学 Image retrieval method based on similarity maintenance under depth metric learning
CN112364803A (en) * 2020-11-20 2021-02-12 深圳龙岗智能视听研究院 Living body recognition auxiliary network and training method, terminal, equipment and storage medium
CN112541458A (en) * 2020-12-21 2021-03-23 中国科学院自动化研究所 Domain-adaptive face recognition method, system and device based on meta-learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
慢行厚积: "Image Retrieval - 2 - Deep Supervised Hashing for Fast Image Retrieval - 1 - Paper Study", pages 160-162, Retrieved from the Internet <URL:https://www.cnblogs.com/wanghui-garcia/p/13710885.html> *

Similar Documents

Publication Publication Date Title
Bhateja et al. Iris recognition based on sparse representation and k-nearest subspace with genetic algorithm
CN106845358B (en) Method and system for recognizing image features of handwritten characters
CN110598019B (en) Repeated image identification method and device
CN110188357B (en) Industry identification method and device for objects
Pan et al. Neighborhood feature line segment for image classification
Wang et al. An improved text classification method for sentiment classification
Wang et al. Novelty detection and online learning for chunk data streams
CN113269010B (en) Training method and related device for human face living body detection model
CN112668482A (en) Face recognition training method and device, computer equipment and storage medium
JP2015125662A (en) Object identification program and device
Al-wajih et al. An enhanced LBP-based technique with various size of sliding window approach for handwritten Arabic digit recognition
Travieso et al. Bimodal biometric verification based on face and lips
WO2022267167A1 (en) Text type intelligent recognition method and apparatus, device, and medium
Ge et al. Deep and discriminative feature learning for fingerprint classification
Suma et al. Analytical study of selected classification algorithms for clinical dataset
Sahoo et al. Indian sign language recognition using skin color detection
CN110414229B (en) Operation command detection method, device, computer equipment and storage medium
CN111859979A (en) Ironic text collaborative recognition method, ironic text collaborative recognition device, ironic text collaborative recognition equipment and computer readable medium
CN113591782A (en) Training-based face recognition intelligent safety box application method and system
CN115080745A (en) Multi-scene text classification method, device, equipment and medium based on artificial intelligence
Sachdeva et al. Categorical classification and deletion of spam images on smartphones using image processing and machine learning
CN114117037A (en) Intention recognition method, device, equipment and storage medium
Zhang et al. Segmentation-based Euler number with multi-levels for image feature description
CN112037174A (en) Chromosome abnormality detection method, device, equipment and computer readable storage medium
Qu et al. Filtering image spam using image semantics and near-duplicate detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211102)