CN113761983B - Method and device for updating human face living body detection model and image acquisition equipment - Google Patents


Publication number
CN113761983B
CN113761983B
Authority
CN
China
Prior art keywords: layer, convolution, living body, data, face living
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010503534.1A
Other languages
Chinese (zh)
Other versions
CN113761983A (en)
Inventor
Pu Shiliang (浦世亮)
Wang Jingjing (王晶晶)
Wang Chunmao (王春茂)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010503534.1A
Publication of CN113761983A
Application granted
Publication of CN113761983B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

The application discloses a method and a device for updating a human face living body detection model, and an image acquisition device, belonging to the technical field of deep learning. A face living body image collected in the current environment is obtained, the current input data of a target layer is obtained while the face living body detection model detects that image, and the characteristic parameters of the target layer are updated online according to that input data, so the face living body detection model can be updated online in real time. The scheme therefore needs neither a large collection of image samples nor labeling information to fine-tune a face living body detection model suited to the current environment, avoiding large labor, time and equipment costs. Because no large sample set is required, the calculation amount is small, and the scheme can run on image acquisition equipment with weak front-end computing capability.

Description

Method and device for updating human face living body detection model and image acquisition equipment
Technical Field
The application relates to the technical field of deep learning, in particular to a method and a device for updating a human face living body detection model and image acquisition equipment.
Background
With the wide application of face recognition technology, the security of identity verification based on it has become increasingly important. Face living body detection is an indispensable module in a face recognition system: by inputting a captured image into a face living body detection model (hereinafter simply referred to as a detection model) to determine whether it shows a living body, malicious actors are prevented from illegally breaking into the face recognition system using a photograph or video of a legitimate user.
The detection model is trained by collecting massive face living body images in one environment and blending in non-living-body images as training data; each image is labeled, and the detection model is trained from the data and labels. Because the acquisition conditions of the training data (scene, illumination, shooting equipment, attack types and so on) are limited and can hardly cover every environment, the trained model is difficult to apply outside its acquisition environment. Its recognition accuracy in other environments cannot be guaranteed, and therefore neither can the security of identity authentication.
In the related art, to make the detection model applicable to a new environment beyond the one where its training data was acquired, a first implementation re-collects a huge number of face living body images in the new environment as training samples, labels them, and retrains a detection model for the new environment. A second implementation migrates the existing detection model to the new environment and fine-tunes it with images and labels collected there, that is, continues training the model on newly collected images so it gradually adapts. However, both implementations re-acquire and re-label training data, incurring high labor, time and equipment costs; even fine-tuning requires a large number of samples and labels. In addition, retraining requires many backward (back-propagation) iterations for supervised learning and is very computationally intensive, so it can only run on backend devices with relatively high processing capacity, not on front-end devices with limited processing capacity.
Disclosure of Invention
The application provides a method, a device and image acquisition equipment for updating a human face living body detection model, which can update the human face living body detection model on line to adapt to the current environment under the conditions of ensuring smaller calculated amount, saving labor cost, time cost and equipment cost. The technical scheme is as follows:
In one aspect, a method for updating a face living body detection model is provided, the method comprising:
acquiring a face living body image acquired in a current environment;
acquiring current input data of a target layer in the process of detecting the human face living body image by the human face living body detection model;
and updating the characteristic parameters of the target layer in the human face living body detection model on line according to the current input data of the target layer.
Optionally, the target layer is a BN layer;
the online updating of the characteristic parameters of the target layer in the face living body detection model according to the current input data of the target layer comprises the following steps:
determining a data mean value and a data variance of the BN layer at the current moment according to the current input data of the BN layer;
and updating the current characteristic parameters of the BN layer according to the data mean value and the data variance of the BN layer at the current moment and the characteristic parameters of the BN layer obtained by the last update.
Optionally, the determining the data mean and the data variance of the BN layer according to the current input data of the BN layer includes:
and determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the data mean value and the data variance of the BN layer which are determined last time.
Optionally, after determining the data mean and the data variance of the BN layer at the current time, the method further includes:
and if the times of determining the data mean value and the data variance of the BN layer reach the designated times, executing the step of updating the current characteristic parameters of the BN layer according to the data mean value and the data variance of the BN layer at the current moment and the characteristic parameters of the BN layer obtained by the last update.
Optionally, the determining, according to the current input data of the BN layer, the data mean and the data variance of the BN layer at the current time includes:
if the number of the cached input data of the BN layer before the current moment reaches the designated number, determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the cached input data of the BN layer.
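The BN-layer update described above, in which the data mean and variance at the current moment are determined from the current input data together with the last-determined statistics, can be sketched as a moving-average rule. The blending weight (`momentum`) is a hypothetical parameter not specified in the claims:

```python
import numpy as np

def update_bn_statistics(x, prev_mean, prev_var, momentum=0.1):
    """Determine the BN layer's data mean/variance at the current moment
    from its current input data and the last-determined statistics.
    `momentum` (the blending weight) is an assumed detail."""
    batch_mean = x.mean(axis=0)   # per-channel mean of the current input
    batch_var = x.var(axis=0)     # per-channel variance of the current input
    new_mean = (1 - momentum) * prev_mean + momentum * batch_mean
    new_var = (1 - momentum) * prev_var + momentum * batch_var
    return new_mean, new_var
```

With `momentum=0`, the statistics never change; with `momentum=1`, only the current input counts, so the weight controls how quickly the model adapts to the current environment.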
Optionally, the target layer is a convolution layer, the convolution layer comprises a first convolution kernel, and the convolution layer is connected with an encoder, and the encoder is connected with a decoder;
the online updating of the characteristic parameters of the target layer in the face living body detection model according to the current input data of the target layer comprises the following steps:
adding a second convolution kernel to the convolution layer;
Processing current input data of the convolution layer according to the first convolution kernel and the second convolution kernel to obtain output data of the convolution layer;
performing dimension compression on the output data of the convolution layer through the encoder to obtain low-dimensional characteristic data;
performing dimension decompression on the low-dimensional characteristic data through the decoder to obtain reconstructed characteristic data;
updating the second convolution kernel according to the output data of the convolution layer and the reconstruction feature data;
and taking the first convolution kernel and the updated second convolution kernel as the updated characteristic parameters of the convolution layer.
Optionally, the updating the second convolution kernel according to the output data of the convolution layer and the reconstruction feature data includes:
determining a difference value between the output data of the convolution layer and the reconstructed feature data;
and updating the second convolution kernel according to the difference value.
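The difference-driven update of the second convolution kernel might be sketched as follows. The patent does not give a concrete update rule, so the scaled-difference step, the function name, and the learning rate below are all illustrative assumptions:

```python
import numpy as np

def update_second_kernel(conv_output, reconstructed, second_kernel, lr=0.1):
    """Update the added second convolution kernel from the difference value
    between the convolution layer's output data and the reconstructed
    feature data. The shrink step is a stand-in for the unspecified rule."""
    diff = conv_output - reconstructed        # difference value
    loss = float(np.mean(diff ** 2))          # scalar reconstruction error
    # Shrink the second kernel in proportion to the error (illustrative only).
    updated = second_kernel - lr * loss * second_kernel
    return updated, loss
```

A perfect reconstruction (zero difference) leaves the second kernel unchanged, matching the intuition that no further adaptation is needed.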
Optionally, after the acquiring the face living body image acquired in the current environment, the method further includes:
and detecting the human face living body image according to a human face living body detection model to obtain a living body detection result.
Optionally, after detecting the face living body image according to the face living body detection model to obtain a living body detection result, the method further includes:
Determining a face recognition result according to the face living body image and the stored effective face living body image;
and determining a security verification result according to the living body detection result and the face recognition result.
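Combining the two results into a security verification result might look like the following; the boolean encoding of both inputs is an assumption:

```python
def security_verification(liveness_passed, recognition_matched):
    """Determine the security verification result from the living body
    detection result and the face recognition result: verification succeeds
    only if the image is a living body AND matches a stored valid face."""
    return bool(liveness_passed and recognition_matched)
```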
In another aspect, there is provided an apparatus for updating a face living body detection model, the apparatus comprising:
the first acquisition module is used for acquiring a face living body image acquired in the current environment;
the second acquisition module is used for acquiring the current input data of a target layer in the process of detecting the human face living body image by the human face living body detection model;
and the updating module is used for updating the characteristic parameters of the target layer in the human face living body detection model on line according to the current input data of the target layer.
Optionally, the target layer is a BN layer;
the updating module comprises:
the first determining unit is used for determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer;
and the first updating unit is used for updating the current characteristic parameters of the BN layer according to the data mean value and the data variance of the BN layer at the current moment and the characteristic parameters of the BN layer obtained by the last updating.
Optionally, the first determining unit includes:
and the first determination subunit is used for determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the data mean value and the data variance of the BN layer which are determined last time.
Optionally, the first determining unit further includes:
and the second determining subunit is used for triggering the first updating unit to execute the characteristic parameters of the BN layer obtained according to the data mean value and the data variance of the BN layer at the current moment and the latest update and update the current characteristic parameters of the BN layer if the times of determining the data mean value and the data variance of the BN layer reach the designated times.
Optionally, the first determining unit includes:
and the third determining subunit is used for determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the cached input data of the BN layer if the number of the cached input data of the BN layer before the current moment reaches the designated number.
Optionally, the target layer is a convolution layer, the convolution layer comprises a first convolution kernel, and the convolution layer is connected with an encoder, and the encoder is connected with a decoder;
The updating module comprises:
an adding unit for adding a second convolution kernel in the convolution layer;
the processing unit is used for processing the current input data of the convolution layer according to the first convolution kernel and the second convolution kernel to obtain output data of the convolution layer;
the coding unit is used for carrying out dimension compression on the output data of the convolution layer through the coder to obtain low-dimensional characteristic data;
the decoding unit is used for performing dimension decompression on the low-dimensional characteristic data through the decoder to obtain reconstructed characteristic data;
a second updating unit, configured to update the second convolution kernel according to the output data of the convolution layer and the reconstruction feature data;
and the third determining unit is used for taking the first convolution kernel and the updated second convolution kernel as the updated characteristic parameters of the convolution layer.
Optionally, the second updating unit includes:
a fourth determining subunit, configured to determine a difference value between the output data of the convolutional layer and the reconstructed feature data;
and the updating subunit is used for updating the second convolution kernel according to the difference value.
Optionally, the apparatus further comprises:
And the detection module is used for detecting the human face living body image according to the human face living body detection model to obtain a living body detection result.
Optionally, the apparatus further comprises:
the recognition module is used for determining a face recognition result according to the face living body image and the stored effective face living body image;
and the determining module is used for determining a security verification result according to the living body detection result and the face recognition result.
On the other hand, an image acquisition device is provided, a human face living body detection model is deployed in the image acquisition device, and the image acquisition device comprises an image acquisition device and a processor;
the image collector is used for collecting a face living body image in the current environment;
the processor is used for acquiring current input data of a target layer in the process of detecting the human face living body image by the human face living body detection model; and updating the characteristic parameters of the target layer in the human face living body detection model on line according to the current input data of the target layer.
Optionally, the image acquisition device further comprises a memory;
the memory is used for storing the effective human face living body image;
the processor is further used for detecting the human face living body image according to the human face living body detection model to obtain a living body detection result;
The processor is further configured to determine a face recognition result according to the face living body image and the valid face living body image, and determine a security verification result according to the living body detection result and the face recognition result.
In another aspect, a computer readable storage medium is provided, in which a computer program is stored, the computer program implementing the steps of the method for updating a human face living body detection model described above when executed by a processor.
In another aspect, a computer program product is provided comprising instructions which, when run on a computer, cause the computer to perform the steps of the method for updating a human face living body detection model described above.
The technical scheme provided by the application has at least the following beneficial effects:
in the application, a face living body image collected in the current environment is obtained, the current input data of a target layer is obtained while the face living body detection model detects that image, and the characteristic parameters of the target layer are updated online according to that input data, so the face living body detection model can be updated online in real time from images acquired in the current environment. Because no large number of image samples needs to be acquired and no labeling information is required, a face living body detection model applicable to the current environment is obtained by fine-tuning, so the scheme avoids large labor, time and equipment costs. Meanwhile, since no large sample set is needed, the calculation amount is small and the scheme can be implemented on front-end equipment with weak processing capacity.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for updating a face living body detection model according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a face living body detection model according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for updating characteristic parameters of a BN layer according to an embodiment of the application;
fig. 4 is a schematic structural diagram of a face living body detection model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a first convolution kernel included in a convolution layer according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a characteristic parameter of a convolution layer with the addition of a second convolution kernel provided by an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a device for updating a face living body detection model according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
With the wide application of face recognition technology, the security of identity verification based on it has become increasingly important. Face living body detection is an indispensable module in a face recognition system: by inputting the captured image into the face living body detection model to detect whether it shows a living human body, malicious actors are prevented from illegally intruding into the face recognition system with photos or videos of legitimate users.
Because the existing human face living body detection model is obtained by training according to training data and labeling information acquired by one environment, if the existing human face living body detection model is directly applied to a new environment or the illumination of the original environment and the like are greatly changed, the detection accuracy of the human face living body detection model cannot be ensured, so that the safety of identity authentication cannot be ensured. The method can update the existing human face living body detection model on line to obtain the human face living body detection model applicable to the current environment.
The method for updating the human face living body detection model can be applied to image acquisition equipment, such as equipment provided with cameras, including mobile phones, access control equipment, payment equipment and the like, so as to update the human face living body detection model online, and can also be applied to background equipment connected with the image acquisition equipment, wherein the background equipment can acquire human face living body images acquired by the image acquisition equipment in real time and update the human face living body detection model online. That is, a face living body detection model is disposed in the image acquisition device, or a face living body detection model is disposed in a background device connected with the image acquisition device, the face living body detection model can be used for detecting face living body images acquired in real time, and the face living body detection model can be updated online according to processing data generated in the detection process, so that the updated model can better detect face living body images acquired later. The scheme can be applied to equipment with strong processing capacity, and also can be applied to front-end equipment with weaker processing capacity, namely, the requirement on equipment is lower.
The method for updating the human face living body detection model provided by the embodiment of the application is explained in detail.
Fig. 1 is a flowchart of a method for updating a face living body detection model, which is provided by an embodiment of the application and is applied to an image acquisition device for introduction. Referring to fig. 1, the method includes the following steps.
Step 101: and acquiring a face living body image acquired in the current environment.
In the embodiment of the application, in face recognition scenes such as face payment, face-based attendance check-in and face unlocking, the image acquisition equipment can acquire a face living body image in the current environment. For example, an access control device equipped with a camera may capture a live face image in order to unlock a door.
In the embodiment of the application, a face living body detection model is deployed in the image acquisition equipment, and after the face living body image acquired in the current environment is acquired, the image acquisition equipment can input the face living body image into the face living body detection model and process the face living body image so as to carry out face living body detection and obtain a living body detection result. The human face living body detection model is updated in real time according to the updating method provided by the embodiment of the application.
It should be noted that, the method for updating the face living body detection model provided by the embodiment of the application can continuously update the deployed face living body detection model while carrying out face living body detection on line, and the earliest deployed face living body detection model can be obtained by training according to training data and labeling information. The training data may include a mass of face live images acquired in other environments and a blended non-face live image, or the training data may include a mass of face live images acquired in the current environment and a blended non-face live image. That is, in the embodiment of the application, the face living body image can be detected according to the face living body detection model updated on line in real time, so as to obtain a living body detection result, and the accuracy of the living body detection result can be improved.
In addition, the face living body detection model in the embodiment of the application can be an AlexNet network model, and can also be other models constructed based on a convolutional neural network, and the following embodiment will take the face living body detection model as an AlexNet network model as an example.
Step 102: and acquiring current input data of a target layer in the process of detecting the human face living body image by the human face living body detection model.
In the embodiment of the application, the image acquisition equipment can acquire the current input data of the target layer in the process of detecting the human face living body image by the human face living body detection model.
It should be noted that the target layer may include one or more processing layers in the face biopsy model (the processing layers may be referred to as a network layer or a computing layer in some embodiments, etc.). For example, the target layer may be a batch normalized (Batch Normalization, BN) layer, a convolutional layer, or the like.
Fig. 2 is a schematic structural diagram of a face living body detection model provided by an embodiment of the present application, where the face living body detection model is an AlexNet network model. Referring to fig. 2, the AlexNet network model includes an input layer, an intermediate layer, and an output layer. The input layer is used for inputting image data of the collected face living body image, the intermediate layer is used for processing the input image data, and the output layer is used for outputting a living body detection result. The intermediate layers may include convolution layers (Conv), BN layers, and fully-connected layers (FC); one BN layer is connected after each convolution layer, and one BN layer is connected after each fully-connected layer except the last fully-connected layer.
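The layer ordering described above (a BN layer after every convolution layer and after every fully-connected layer except the last) can be sketched as a simple enumeration; the layer counts below are illustrative defaults, not taken from the patent:

```python
def build_layer_list(num_conv=5, num_fc=3):
    """Enumerate the AlexNet-style layer order described above."""
    layers = ["input"]
    for i in range(1, num_conv + 1):
        layers += [f"conv{i}", f"bn_conv{i}"]   # BN after each conv layer
    for i in range(1, num_fc + 1):
        layers.append(f"fc{i}")
        if i < num_fc:                          # no BN after the last FC layer
            layers.append(f"bn_fc{i}")
    layers.append("output")
    return layers
```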
In the embodiment of the application, in the forward prediction process according to the human face living body detection model, the image acquisition equipment can acquire the current input data of the BN layer, or acquire the current input data of the convolution layer, or acquire the current input data of the BN layer and the convolution layer.
It should be noted that the human face living body detection model may include a plurality of BN layers or a plurality of convolution layers, and the image capturing apparatus may acquire current input data of each BN layer of the specified characteristic parameters to be updated, or acquire current input data of each convolution layer of the specified characteristic parameters to be updated in the plurality of convolution layers. That is, in the embodiment of the present application, it may be specified to acquire the current input data of all BN layers or all convolution layers in the face living body detection model, or may be specified to acquire the current input data of a part of BN layers or a part of convolution layers.
Step 103: and updating the characteristic parameters of the target layer in the human face living body detection model on line according to the current input data of the target layer.
In the embodiment of the application, the image acquisition equipment can update the characteristic parameters of the target layer in the face living body detection model on line according to the acquired current input data of the target layer so as to obtain the face living body detection model after on-line updating, and carries out face living body detection according to the face living body detection model obtained by on-line updating.
As can be seen from the foregoing, in the embodiment of the present application, the image capturing device may update the face living body detection model by updating the characteristic parameters of the BN layer, or by updating the characteristic parameters of the convolutional layer, or by updating the characteristic parameters of the BN layer and the convolutional layer, that is, there are various implementations of updating the face living body detection model, and three implementations thereof will be described.
In the first implementation manner, the image acquisition equipment updates the characteristic parameters of the BN layer to realize online updating of the human face living body detection model.
In this implementation manner, the target layer requiring updating of the characteristic parameter is the BN layer, and the image capturing apparatus may determine the data mean and the data variance of the BN layer at the current time according to the current input data of the BN layer, and update the current characteristic parameter of the BN layer according to the data mean and the data variance of the BN layer at the current time and the characteristic parameter of the BN layer obtained by the last update.
It should be noted that, in the embodiment of the present application, if updating the characteristic parameters of a plurality of BN layers is specified, the method for updating the characteristic parameters is the same for each BN layer, and an example of updating the characteristic parameters of one of the BN layers will be described below.
In the embodiment of the present application, the input data of the BN layer includes characteristic data of a plurality of channels. Assuming that the characteristic data of the jth channel is x_j, the BN layer processes x_j according to formula (1), and the data of the jth channel output after processing is y_j:

y_j = γ_j · (x_j − μ_j) / √(σ_j² + ε) + β_j    (1)

wherein μ_j and σ_j² are respectively the mean parameter and the variance parameter included in the characteristic parameters of the jth channel of the BN layer at the current moment, γ_j and β_j are respectively the scaling parameter and the translation parameter included in the characteristic parameters of the jth channel of the BN layer, and ε is a small constant added for numerical stability.
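As an illustration (a minimal sketch; `eps` is the usual small stability constant and is an assumption, not quoted from the patent), the per-channel BN transform of formula (1) can be written as:

```python
import math

def bn_channel(x, mean, var, gamma, beta, eps=1e-5):
    # Normalize the j-th channel input with the stored mean/variance
    # parameters, then scale by gamma and shift by beta (formula (1)).
    return gamma * (x - mean) / math.sqrt(var + eps) + beta

# When the input equals the stored mean, the output is just beta:
y = bn_channel(5.0, 5.0, 2.0, gamma=3.0, beta=0.5)  # -> 0.5
```

Each channel of the BN layer carries its own (mean, variance, scale, shift) quadruple, which is what the online update below adjusts.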
In the embodiment of the present application, the image acquisition device can update the mean parameter and the variance parameter included in the characteristic parameters of each channel of the BN layer. Since the mean and variance need to be calculated over a certain sample size, the calculation can be performed over small batches of samples: the image acquisition device may update the characteristic parameters of the BN layer based on the input data of the BN layer over a specified sample size. That is, each time the number of face living body images acquired in the current environment reaches the specified sample size, the characteristic parameters of the BN layer are updated once.
For example, assuming that the specified sample size is N, every time N face living body images in the current environment are acquired, and each acquired image is detected online in real time by the face living body detection model, N sets of input data of the BN layer are obtained, and the image acquisition device may update the mean parameter and the variance parameter included in the characteristic parameters of the BN layer once based on the acquired N sets of input data.
In the embodiment of the application, a plurality of possible implementations of updating the characteristic parameters of the BN layer based on the input data of the BN layer with a specified sample size are provided.
In a first possible implementation, the data mean and data variance of the input data of the BN layer over the specified sample size are calculated online; that is, every time the current input data of the BN layer is obtained from one face living body image, the current data mean and current data variance of each channel of the BN layer are adjusted once according to formulas (2) and (3). In other words, the image acquisition device determines the data mean and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the most recently determined data mean and data variance of the BN layer.
μ_n = μ_{n−1} + (x_n − μ_{n−1}) / n    (2)

σ_n² = [(n − 1)·σ_{n−1}² + (x_n − μ_{n−1})·(x_n − μ_n)] / n    (3)

wherein n represents the number of face living body images acquired in the current environment since the characteristic parameters of the BN layer were last updated, x_n is the current input data of the jth channel of the BN layer, and μ_n and σ_n² respectively represent the data mean and the data variance of the jth channel of the BN layer at the current moment, that is, the mean and the variance of the first n samples, with μ_1 = x_1 and σ_1² = 0. That is, after the characteristic parameters of the BN layer were last updated, when the first face living body image is detected, the calculated data mean of the jth channel of the BN layer is the input data itself and the data variance is 0; when the second face living body image is detected, the calculated data mean of the jth channel of the BN layer is the mean of the input data of the BN layer corresponding to the first two face living body images, and the data variance is the variance of that input data; and so on, when the Nth sample is detected, the data mean and the data variance of the BN layer corresponding to the N face living body images can be determined. In other words, in the process of online acquisition and real-time detection of face living body images, the image acquisition device may determine the data mean and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the most recently determined data mean and data variance of the BN layer.
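As a hedged sketch of formulas (2) and (3) (this uses the standard incremental mean/variance recurrence consistent with μ_1 = x_1 and σ_1 = 0; the patent's exact formulas appear only in its figures):

```python
def update_running_stats(mean, var, x, n):
    # Fold the n-th sample x into the running mean and (population) variance
    # without storing earlier samples; n counts images acquired since the
    # last parameter update (initialization: mean = x_1, var = 0 at n = 1).
    new_mean = mean + (x - mean) / n
    new_var = ((n - 1) * var + (x - mean) * (x - new_mean)) / n
    return new_mean, new_var

mean, var = 3.0, 0.0                                   # state after x_1 = 3.0
mean, var = update_running_stats(mean, var, 5.0, n=2)  # second sample x_2 = 5.0
# mean is now 4.0 and var is 1.0, the mean/variance of [3.0, 5.0]
```

Because only the previous mean/variance and the sample count are carried forward, no earlier BN-layer inputs need to be cached, which is the storage advantage noted below.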
After determining the data mean and variance of each channel of the BN layer at the current time according to the above-described method, if the number of times of determining the data mean and the data variance of the BN layer reaches the specified number of times, the image capturing apparatus may perform the step of determining the current characteristic parameter of the BN layer according to the data mean and the data variance of the BN layer at the current time and the characteristic parameter of the BN layer obtained by the last update. That is, after the characteristic parameters of the BN layer are updated last time, when the number of detected face living images reaches the specified sample size, the characteristic parameters of the BN layer may be updated again. Wherein the specified number of times is equal to the specified sample size.
In the embodiment of the present application, the image capturing apparatus may update the characteristic parameters of the BN layer according to formulas (4) and (5).
μ_new = (1 − α)·μ_old + α·μ_n    (4)

σ²_new = (1 − α)·σ²_old + α·σ_n²    (5)

wherein μ_n and σ_n² are respectively the data mean and the data variance of the jth channel of the BN layer at the current moment, μ_old and σ²_old are respectively the mean parameter and the variance parameter included in the characteristic parameters of the jth channel of the BN layer obtained by the last update, μ_new and σ²_new are respectively the mean parameter and the variance parameter included in the updated current characteristic parameters of the jth channel of the BN layer, and α is the update rate. Therefore, as samples accumulate, the characteristic parameters of each channel of the BN layer can be gradually adjusted according to samples from the current environment, so that the face living body detection model gradually adapts to the current environment.
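Assuming formulas (4) and (5) take the common exponential-moving-average form with update rate α (an assumption; the exact formulas are given in the patent figures), a minimal sketch:

```python
def ema_update(old, current, alpha):
    # Move the stored BN parameter a fraction alpha toward the statistic
    # measured in the current environment (update rate alpha in (0, 1)).
    return (1.0 - alpha) * old + alpha * current

# e.g. stored mean 0.0, current-environment mean 10.0, update rate 0.1:
new_mean = ema_update(0.0, 10.0, 0.1)   # moves 10% toward the new statistic
```

A small α makes the model drift gently toward the current environment instead of jumping to the statistics of one small batch.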
Fig. 3 is a flowchart of a method for updating the characteristic parameters of a BN layer according to an embodiment of the present application. Referring to fig. 3, the image acquisition device acquires a face living body image in the current environment, detects the face living body image through the face living body detection model to obtain a living body detection result, acquires the current input data of the BN layer, adjusts the data mean and the data variance of the BN layer online according to formulas (2) and (3), and judges whether the number of times the data mean and the data variance of the BN layer have been determined reaches the specified number of times. If not, it continues to adjust the data mean and the data variance of the BN layer according to the next face living body image. If the specified number of times is reached, that is, the number of accumulated face living body images reaches the specified sample size, the characteristic parameters of the BN layer may be updated online according to formulas (4) and (5). In this updating mode, the characteristic parameters of the BN layer are continuously updated, gradually producing a face living body detection model applicable to the current environment.
It is noted that in this way of calculating the mean and variance online, the image capturing apparatus does not need to buffer the input data of the BN layer of the specified sample size, but adjusts the mean and variance of the data of the BN layer in real time by adopting an online update way, so that the storage pressure of the image capturing apparatus can be reduced.
In a second possible implementation manner, the image acquisition device caches the input data of the BN layer every time the image acquisition device acquires the input data of the BN layer, and if the number of the cached input data of the BN layer before the current time reaches the specified number, the data mean value and the data variance of the BN layer at the current time are determined according to the current input data of the BN layer and the cached input data of the BN layer.
In the embodiment of the application, the image acquisition equipment can cache the acquired input data of the BN layer corresponding to each face living body image after the characteristic parameters of the BN layer are updated last time, and when the number of the caches reaches the designated number, namely the input data of the BN layer with the designated sample size is cached, the data mean value and the data variance of the input data can be calculated to obtain the data mean value and the data variance of the BN layer at the current moment.
In this possible implementation manner, after determining the data mean value and the data variance of the BN layer at the current time, the image capturing apparatus may update the current characteristic parameter of the BN layer according to the data mean value and the data variance of the BN layer at the current time and the characteristic parameter of the BN layer obtained by the last update, and the implementation manner may refer to the foregoing related description, that is, the update of the characteristic parameter of the BN layer may be implemented according to formulas (4) and (5).
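The cached-batch variant above can be sketched as follows (the class name and the scalar per-sample input are illustrative assumptions; in the model each sample would be a per-channel feature tensor):

```python
class BufferedStats:
    """Cache each BN-layer input (second implementation); once the specified
    number N of inputs is buffered, return the batch mean/variance and reset."""
    def __init__(self, n):
        self.n = n
        self.buf = []

    def observe(self, x):
        self.buf.append(x)
        if len(self.buf) < self.n:
            return None                      # keep caching
        mean = sum(self.buf) / self.n
        var = sum((v - mean) ** 2 for v in self.buf) / self.n
        self.buf.clear()                     # start the next batch
        return mean, var                     # stats at the current moment

s = BufferedStats(3)
s.observe(1.0); s.observe(2.0)
stats = s.observe(3.0)                       # -> (2.0, 2/3)
```

Compared with the first implementation, this trades extra storage (the buffer) for a single exact batch computation.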
In the first implementation manner, since the calculation such as gradient derivation is not required in the process of updating the characteristic parameters of the BN layer by the image capturing apparatus, only the mean value and the variance are calculated, and the parameter amount involved is small, the calculation amount is small.
In a second implementation manner, the image acquisition device updates the characteristic parameters of the convolution layer to realize online updating of the human face living body detection model.
In this implementation, the target layer requiring updating of the characteristic parameters is a convolution layer, and it should be noted that if the characteristic parameters of a plurality of convolution layers are specified to be updated, the method for updating the characteristic parameters is the same for each convolution layer, and the characteristic parameters of one of the convolution layers will be described as an example.
In the embodiment of the application, the convolution layer is connected with an encoder, and the encoder is connected with a decoder, and the original convolution kernel of the convolution layer can be a first convolution kernel. Based on this, a second convolution kernel may be added to the convolution layer based on the first convolution kernel. The image acquisition device can process the input data of the convolution layer according to the first convolution kernel and the second convolution kernel to obtain the output data of the convolution layer. Then, the image acquisition device can conduct dimension compression on the output data of the convolution layer through the encoder to obtain low-dimension characteristic data, and then conduct dimension decompression on the low-dimension characteristic data through the decoder to obtain reconstruction characteristic data. Then, the image acquisition device can update the second convolution kernel according to the output data of the convolution layer and the reconstruction feature data, and the first convolution kernel and the updated second convolution kernel are used as feature parameters after the convolution layer is updated.
Fig. 4 is a schematic structural diagram of a human face living body detection model according to an embodiment of the present application. Referring to fig. 4, the face living body detection model includes an input layer, a convolution layer, a BN layer (not shown), a full connection layer, and an output layer, wherein for one convolution layer Conv, for example Conv3, which needs to update a characteristic parameter, a second convolution kernel, for example, a diagonal line portion, is added on the basis of the original first convolution kernel, and the convolution layer is connected with an encoder, which is connected with a decoder.
For each collected face living body image, the image collecting device can process the input data of the convolution layer according to the first convolution kernel and the second convolution kernel of the convolution layer in the process of forward prediction of the face living body image to obtain the output data of the convolution layer.
Fig. 5 is a schematic diagram of a first convolution kernel included in a convolution layer according to an embodiment of the present application. Referring to fig. 5, assuming that the first convolution kernel is a 3×3 convolution kernel, in the embodiment of the present application, a second convolution kernel, for example a 1×1 convolution kernel, may be added to the convolution layer. Fig. 6 shows the characteristic parameters of a convolution layer to which a second convolution kernel has been added. For the input data of the convolution layer, a convolution operation may be performed with the first convolution kernel to obtain convolution result 1, a convolution operation may be performed with the added second convolution kernel to obtain convolution result 2, and then convolution result 1 and convolution result 2 are added to obtain the output data of the convolution layer.
It should be noted that fig. 5 shows only one form of the first convolution kernel, namely a 3×3 convolution kernel; in practice, the first convolution kernel may be a 5×5 convolution kernel or another convolution kernel, depending on the constructed face living body detection model.
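The two-branch convolution can be sketched for a single channel as follows (a naive NumPy illustration; the identity-kernel values and the zero-padded "same" convolution are assumptions for demonstration, not the patent's exact configuration):

```python
import numpy as np

def conv2d_same(x, k):
    # Naive zero-padded 'same' 2-D cross-correlation on one channel;
    # illustration only, not an efficient implementation.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def two_branch_conv(x, k3, w1):
    # Convolution result 1 (first 3x3 kernel) plus convolution result 2
    # (added 1x1 kernel); on one channel a 1x1 kernel is a per-pixel scale w1.
    return conv2d_same(x, k3) + w1 * x

x = np.arange(9, dtype=float).reshape(3, 3)
k3 = np.zeros((3, 3)); k3[1, 1] = 1.0        # identity 3x3 kernel
y = two_branch_conv(x, k3, 2.0)              # equals x + 2*x = 3*x here
```

Keeping the first kernel frozen and training only the small added kernel is what keeps the update's parameter count low.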
After the output data of the convolution layer is obtained, the image acquisition device can use an encoder to conduct dimension compression on the output data to obtain low-dimension characteristic data, and then use a decoder to conduct dimension decompression on the low-dimension characteristic data to obtain reconstructed characteristic data.
In the embodiment of the present application, the encoder and the decoder may be implemented based on a convolution operation and a deconvolution operation, that is, the encoder performs dimension compression on the output data of the convolution layer through a convolution operation to obtain low-dimensional feature data, and the decoder performs dimension decompression on the low-dimensional feature data through a deconvolution operation to obtain the reconstructed feature data. Alternatively, both the encoder and the decoder may be implemented with fully connected operations, that is, the encoder compresses the output data of the convolution layer to a low dimension through one fully connected operation to obtain the low-dimensional feature data, and the decoder decompresses the low-dimensional feature data back to a high dimension through another fully connected operation to obtain the reconstructed feature data.
It should be noted that the dimension of the reconstructed feature data is the same as the dimension of the output data of the convolution layer.
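A hedged sketch of the fully-connected encoder/decoder variant (the weight shapes and the dimensions d = 16, k = 4 are assumed for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 4                                  # feature dim and smaller code dim
W_enc = rng.standard_normal((k, d)) * 0.1     # encoder: one fully connected map
W_dec = rng.standard_normal((d, k)) * 0.1     # decoder: one fully connected map

def encode(y):
    return W_enc @ y                          # dimension compression to k dims

def decode(z):
    return W_dec @ z                          # dimension decompression to d dims

y = rng.standard_normal(d)                    # stand-in for conv-layer output
y_rec = decode(encode(y))                     # reconstructed feature data
assert y_rec.shape == y.shape                 # same dimension, as noted above
```

The shape check at the end mirrors the requirement that the reconstructed feature data match the dimension of the convolution layer's output.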
In the embodiment of the application, after obtaining the reconstruction feature data, the image acquisition device may update the added second convolution kernel in the convolution layer based on a difference value between the reconstruction feature data and the output data of the convolution layer. That is, the image acquisition apparatus may determine a difference value between the output data of the convolution layer and the reconstruction feature data, and update the second convolution kernel according to the difference value.
Because the output data of the convolution layer is a matrix, the reconstructed feature data is also a matrix with the same dimension, based on the matrix, the image acquisition device can determine the difference value between the output data of the convolution layer and the reconstructed feature data by any method for determining the similarity between the matrices, for example, a method based on a two-norm, an F-norm and the like.
Illustratively, a difference value between the output data of the convolutional layer and the reconstructed feature data is determined based on the F-norm. The image acquisition device may calculate a matrix difference between the output data and the reconstructed feature data, and then take an F-norm of the matrix difference as a difference value between the output data and the reconstructed feature data of the convolution layer.
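For instance, using NumPy's `numpy.linalg.norm`, the F-norm-based difference value described above can be computed as:

```python
import numpy as np

def difference_value(out, rec):
    # Frobenius (F-) norm of the matrix difference between the convolution
    # layer's output data and the reconstructed feature data.
    return np.linalg.norm(out - rec, ord='fro')

A = np.array([[3.0, 4.0], [0.0, 0.0]])
dv = difference_value(A, np.zeros((2, 2)))    # sqrt(3^2 + 4^2) = 5.0
```

A 2-norm-based variant would simply swap the `ord` argument.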
After determining the difference value, the image acquisition device may back-propagate the difference value and update the added second convolution kernel in the convolution layer through a gradient descent algorithm.
Optionally, the image acquisition device may also back-propagate the difference value to update the decoding parameters in the decoder through the gradient descent algorithm, so that the output data of the decoder is as close as possible to the input data of the encoder, thereby ensuring the accuracy of subsequent detection.
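A heavily simplified sketch of this reverse update (assumptions: the added 1×1 kernel is reduced to one scalar weight `w`, the first-kernel branch output `y_base` is held fixed, mirroring the frozen first convolution kernel, and the loss is the squared difference value):

```python
def sgd_step(w, x, y_base, y_rec, lr=0.01):
    # One gradient-descent step on L = (y_out - y_rec)^2 with respect to the
    # added kernel weight w, where y_out = y_base + w * x.
    y_out = y_base + w * x
    grad = 2.0 * (y_out - y_rec) * x          # dL/dw by the chain rule
    return w - lr * grad

w = 0.0
for _ in range(200):                          # repeated images shrink the loss
    w = sgd_step(w, x=1.0, y_base=0.0, y_rec=1.0)
# w has moved from 0 toward 1, reducing (y_out - y_rec)^2
```

In the real model the gradient flows through a whole kernel tensor rather than one scalar, but the update per image has the same one-step form.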
In the embodiment of the present application, the image acquisition device may update the second convolution kernel added to the convolution layer once each time it processes one face living body image; that is, the characteristic parameters of the convolution layer, and hence the face living body detection model, are updated once per image.
In the second implementation manner, the parameter quantity involved in performing the inverse iterative update of the second convolution kernel is small, and the number of involved layers is shallow, so that the calculation quantity is also small.
In a third implementation manner, the image acquisition device may update the face living body detection model by updating characteristic parameters of the BN layer and the convolution layer.
In this implementation manner, the updating manner of the characteristic parameters of the BN layer and the convolutional layer may be implemented with reference to the foregoing related description, which is not repeated herein.
In the above, the method for updating the face living body detection model provided by the embodiment of the present application has been introduced by taking the target layer as the BN layer or the convolution layer as an example. In practical application, different face living body detection models may be obtained based on different deep learning algorithms or deep learning frameworks, and these models may have other types of processing layers; for each type of processing layer in a face living body detection model, a corresponding method for updating the characteristic parameters can be designed according to the parameter types and characteristics of that processing layer, so as to realize online updating of the model. In addition, it should be noted that, compared with methods that retrain or fine-tune the model according to samples and label information, the number of layers involved in this scheme is very shallow, and the number of parameters and the amount of calculation involved are very small, so the scheme can be implemented on front-end devices with weaker processing capability.
In addition, it should be noted that online updating in this method refers to updating the face living body detection model in real time on the device where the model is deployed. That is, after the face living body detection model is deployed on the front-end device, while the model performs living body detection on face living body images collected online by the front-end device or by other image acquisition devices communicating with it, the model is simultaneously updated using those online-collected images. By contrast, offline updating methods retrain or fine-tune the model according to samples and labeling information while the model is separated from the device on which it is to be deployed: a large number of pre-collected and stored samples are required to update the face living body detection model offline, and the updated model is then deployed to the device, after which the deployed model cannot be updated again.
In the embodiment of the present application, the image acquisition device can update the face living body detection model online in real time, so as to gradually obtain a face living body detection model better suited to the current environment. If the face living body detection model is applied to a new environment, the image acquisition device may need to adjust the characteristic parameters to a larger extent. If the application scene of the current environment is the scene where the training data was collected but conditions such as illumination in the scene have changed, the image acquisition device may likewise update the model online by the method introduced above to adapt to the change; in this case, the adjustment to the characteristic parameters may be smaller.
Optionally, the image acquisition device may further store an effective face living body image, and after detecting the acquired face living body image to obtain a living body detection result, the image acquisition device may further determine a face recognition result according to the face living body image and the stored effective face living body image, and determine a security verification result according to the living body detection result and the face recognition result.
For example, assuming the image acquisition device is an access control device of a company, the image acquisition device may store valid living body face images of all personnel of the company. When performing living body detection and face comparison recognition on an acquired living body image, the image acquisition device determines the living body detection result and the face recognition result respectively, and the access control device may unlock the door when the living body detection result indicates that the acquired image is a "living body" and the face recognition result indicates that it is a "valid face".
In summary, in the embodiment of the application, the current input data of the target layer in the face living body image detection process of the face living body detection model can be obtained by obtaining the face living body image acquired in the current environment, and the characteristic parameters of the target layer in the face living body detection model can be updated on line according to the current input data of the target layer, so that the face living body detection model can be updated in real time. Therefore, the face living body detection model can be updated on line in real time according to the face living body image acquired in the current environment. Because a large number of image samples are not required to be acquired, and information is not required to be marked, the human face living body detection model which can be applied to the current environment can be obtained through fine adjustment, and therefore a large amount of labor cost, time cost and equipment cost are not required to be consumed in the scheme. Meanwhile, a large number of samples are not needed, so that the scheme is small in calculation amount and can be implemented on front-end equipment with weak processing capacity.
Fig. 7 is a schematic structural diagram of an apparatus for updating a face living body detection model according to an embodiment of the present application, where the apparatus for updating a face living body detection model may be implemented as part or all of a computer device by software, hardware, or a combination of both. Referring to fig. 7, the apparatus includes: a first acquisition module 701, a second acquisition module 702, and an update module 703.
A first acquiring module 701, configured to acquire a face living body image acquired in a current environment;
a second obtaining module 702, configured to obtain current input data of a target layer in a process of detecting a face living body image by using a face living body detection model;
and the updating module 703 is configured to update the feature parameters of the target layer in the face living body detection model online according to the current input data of the target layer.
Optionally, the target layer is a BN layer;
the update module 703 includes:
the first determining unit is used for determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer;
the first updating unit is used for updating the current characteristic parameters of the BN layer according to the data mean value and the data variance of the BN layer at the current moment and the characteristic parameters of the BN layer obtained by the last updating.
Optionally, the first determining unit includes:
and the first determination subunit is used for determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the data mean value and the data variance of the BN layer which are determined last time.
Optionally, the first determining unit further includes:
and the second determining subunit is used for triggering, if the number of times of determining the data mean and the data variance of the BN layer reaches the specified number of times, the first updating unit to perform the step of updating the current characteristic parameters of the BN layer according to the data mean and the data variance of the BN layer at the current moment and the characteristic parameters of the BN layer obtained by the last update.
Optionally, the first determining unit includes:
and the third determining subunit is used for determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the cached input data of the BN layer if the number of the cached input data of the BN layer before the current moment reaches the designated number.
Optionally, the target layer is a convolution layer, the convolution layer comprises a first convolution kernel, the convolution layer is connected with an encoder, and the encoder is connected with a decoder;
the update module 703 includes:
an adding unit for adding a second convolution kernel in the convolution layer;
the processing unit is used for processing the current input data of the convolution layer according to the first convolution kernel and the second convolution kernel to obtain output data of the convolution layer;
the coding unit is used for carrying out dimension compression on the output data of the convolution layer through the coder to obtain low-dimension characteristic data;
the decoding unit is used for performing dimension decompression on the low-dimensional characteristic data through the decoder to obtain reconstructed characteristic data;
the second updating unit is used for updating the second convolution kernel according to the output data and the reconstruction characteristic data of the convolution layer;
and the third determining unit is used for taking the first convolution kernel and the updated second convolution kernel as the characteristic parameters after the convolution layer is updated.
Optionally, the second updating unit includes:
a fourth determining subunit, configured to determine a difference value between the output data of the convolutional layer and the reconstructed feature data;
and the updating subunit is used for updating the second convolution kernel according to the difference value.
Optionally, the apparatus 700 further comprises:
and the detection module is used for detecting the human face living body image according to the human face living body detection model to obtain a living body detection result.
Optionally, the apparatus 700 further comprises:
the recognition module is used for determining a face recognition result according to the face living body image and the stored effective face living body image;
and the determining module is used for determining a security verification result according to the living body detection result and the face recognition result.
In summary, in the embodiment of the present application, the current input data of the target layer in the face living body image detection process of the face living body detection model can be obtained by obtaining the face living body image collected in the current environment, and the feature parameters of the target layer in the face living body detection model can be updated online according to the current input data of the target layer, so that the face living body detection model can be updated in real time. Therefore, the face living body detection model can be updated on line in real time according to the face living body image acquired in the current environment. Because a large number of image samples are not required to be acquired, and information is not required to be marked, the human face living body detection model which can be applied to the current environment can be obtained through fine adjustment, and therefore a large amount of labor cost, time cost and equipment cost are not required to be consumed in the scheme. Meanwhile, a large number of samples are not needed, so that the scheme is small in calculation amount and can be implemented on front-end equipment with weak processing capacity.
It should be noted that: the device for updating the face living body detection model provided in the above embodiment only uses the division of the above functional modules to illustrate when updating the face living body detection model, in practical application, the above functional allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the device for updating the face living body detection model and the method embodiment for updating the face living body detection model provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the device and the method embodiment are described in the method embodiment, which are not repeated here.
The embodiment of the application provides image acquisition equipment, wherein a human face living body detection model is deployed in the image acquisition equipment, and the image acquisition equipment comprises an image acquisition device and a processor.
The image collector can be used for collecting the face living body image in the current environment.
The processor can be used for acquiring current input data of a target layer in the process of detecting the human face living body image of the human face living body detection model, and updating characteristic parameters of the target layer in the human face living body detection model on line according to the current input data of the target layer.
Optionally, the image acquisition device may further include a memory, which may be used for storing valid face living body images. The processor may also be used for detecting the face living body image according to the face living body detection model to obtain a living body detection result. The processor may further be configured to determine a face recognition result based on the face living body image and the valid face living body images, and to determine a security verification result based on the living body detection result and the face recognition result.
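The security-verification logic described above (combining the living body detection result with the face recognition result) can be sketched as follows; the score conventions, threshold values, and function name are illustrative assumptions, not part of the patent:

```python
def security_verify(liveness_score: float, match_distance: float,
                    live_thresh: float = 0.5, match_thresh: float = 0.6) -> bool:
    """Hypothetical fusion of the two results: verification passes only if
    the image is judged to be a live face AND it matches one of the stored
    valid face living body images."""
    is_live = liveness_score >= live_thresh     # living body detection result
    is_match = match_distance <= match_thresh   # face recognition result
    return is_live and is_match

# A spoofed photo that matches the stored face still fails verification:
print(security_verify(liveness_score=0.2, match_distance=0.3))  # False
print(security_verify(liveness_score=0.9, match_distance=0.3))  # True
```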
In this embodiment of the application, a face living body image acquired in the current environment is obtained, the current input data of the target layer during the face living body detection model's detection of that image is acquired, and the feature parameters of the target layer are updated online according to that input data, so that the model is updated in real time according to face living body images acquired in the current environment. Because a large number of image samples need not be collected and no annotation is required, a face living body detection model suited to the current environment is obtained through fine-tuning alone, so the scheme avoids heavy labor, time, and equipment costs; its small computation also allows it to run on front-end devices with limited processing capability.
It should be noted that the division into the functional modules described above is only used for illustration when the image acquisition device provided in the above embodiment updates the face living body detection model and performs face living body detection. In practical applications, these functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image acquisition device provided in the above embodiment and the method embodiment for updating the face living body detection model belong to the same concept; for its detailed implementation, see the method embodiment, which is not repeated here.
Fig. 8 is a block diagram of a computer device 800 according to an embodiment of the present application. The computer device 800 may be an image capturing device, a background device connected to the image capturing device, or the like.
In general, the computer device 800 includes: a processor 801 and a memory 802.
Processor 801 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 801 may be implemented in at least one hardware form among DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor. The main processor, also referred to as a CPU (Central Processing Unit), is the processor for processing data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 801 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 802 is used to store at least one instruction, which is executed by processor 801 to implement the method of updating a face living body detection model provided by the method embodiments of the present application.
In some embodiments, the computer device 800 may optionally further include: a peripheral interface 803, and at least one peripheral. The processor 801, the memory 802, and the peripheral interface 803 may be connected by a bus or signal line. Individual peripheral devices may be connected to the peripheral device interface 803 by buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 804, a touch display 805, a camera 806, audio circuitry 807, a positioning component 808, and a power supply 809.
Peripheral interface 803 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 801 and the memory 802. In some embodiments, the processor 801, the memory 802, and the peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 804 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 804 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 804 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other computer devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may also include NFC (Near Field Communication) related circuits, which the present application does not limit.
The display 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, it also has the ability to collect touch signals on or above its surface. The touch signals may be input to the processor 801 as control signals for processing. In this case, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 805, disposed on the front panel of the computer device 800; in other embodiments, there may be at least two displays 805, each disposed on a different surface of the computer device 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved or folded surface of the computer device 800. The display 805 may even be set to a non-rectangular irregular shape, i.e., an irregularly shaped screen. The display 805 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the computer device, and the rear camera is disposed on the back of the computer device. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fused shooting functions. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
Audio circuitry 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 801 for processing, or inputting the electric signals to the radio frequency circuit 804 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple, each disposed at a different location of the computer device 800. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the computer device 800 for navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 809 is used to power the various components in the computer device 800. The power supply 809 may be an alternating current, direct current, disposable battery, or rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyroscope sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815, and proximity sensor 816.
The acceleration sensor 811 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the computer device 800. For example, the acceleration sensor 811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 801 may control the touch display screen 805 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 811. Acceleration sensor 811 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 812 may detect a body direction and a rotation angle of the computer device 800, and the gyro sensor 812 may collect a 3D motion of the user on the computer device 800 in cooperation with the acceleration sensor 811. The processor 801 may implement the following functions based on the data collected by the gyro sensor 812: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 813 may be disposed on a side frame of computer device 800 and/or on an underlying layer of touch display 805. When the pressure sensor 813 is disposed on a side frame of the computer device 800, a grip signal of the computer device 800 by a user may be detected, and the processor 801 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at the lower layer of the touch display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 814 is used to collect the user's fingerprint, and the processor 801 identifies the user based on the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the user based on the collected fingerprint. When the user's identity is recognized as trusted, the processor 801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 814 may be disposed on the front, back, or side of the computer device 800. When a physical button or vendor logo is provided on the computer device 800, the fingerprint sensor 814 may be integrated with the physical button or vendor logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch display screen 805 based on the intensity of ambient light collected by the optical sensor 815. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display screen 805 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera module 806 based on the ambient light intensity collected by the optical sensor 815.
A proximity sensor 816, also referred to as a distance sensor, is typically provided on the front panel of the computer device 800. The proximity sensor 816 is used to collect the distance between the user and the front of the computer device 800. In one embodiment, when the proximity sensor 816 detects a gradual decrease in the distance between the user and the front of the computer device 800, the processor 801 controls the touch display 805 to switch from the bright screen state to the off screen state; when the proximity sensor 816 detects that the distance between the user and the front of the computer device 800 gradually increases, the touch display 805 is controlled by the processor 801 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is not limiting and that more or fewer components than shown may be included or that certain components may be combined or that a different arrangement of components may be employed.
In some embodiments, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of updating a face living body detection model in the above embodiments. For example, the computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It is noted that the computer readable storage medium mentioned in the present application may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps to implement the above-described embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The computer instructions may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the steps of the above-described method of updating a face living body detection model.
The above embodiments are not intended to limit the present application, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present application should be included in the scope of the present application.

Claims (9)

1. A method of updating a face living body detection model, the method comprising:
acquiring a face living body image acquired in a current environment;
acquiring current input data of a target layer in the process of detecting the human face living body image by a human face living body detection model, wherein the target layer comprises one or more processing layers in the human face living body detection model;
according to the current input data of the target layer, updating the characteristic parameters of the target layer in the human face living body detection model on line;
the target layer comprises a convolution layer, wherein the convolution layer comprises a first convolution kernel, and the convolution layer is connected with an encoder which is connected with a decoder;
The online updating of the characteristic parameters of the target layer in the face living body detection model according to the current input data of the target layer comprises the following steps:
adding a second convolution kernel to the convolution layer;
performing a convolution operation on the current input data of the convolution layer with the first convolution kernel to obtain a first convolution result, performing a convolution operation on the current input data of the convolution layer with the second convolution kernel to obtain a second convolution result, and adding the first convolution result and the second convolution result to obtain the output data of the convolution layer;
performing dimension compression on the output data of the convolution layer through the encoder to obtain low-dimensional characteristic data;
performing dimension decompression on the low-dimensional characteristic data through the decoder to obtain reconstructed characteristic data;
updating the second convolution kernel according to the output data of the convolution layer and the reconstruction feature data;
and taking the first convolution kernel and the updated second convolution kernel as the updated characteristic parameters of the convolution layer.
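To make the update loop of claim 1 concrete, here is a minimal numerical sketch, not the patented implementation: the convolution is reduced to the 1x1 case (a matrix product per feature vector), the encoder and decoder are fixed linear maps, and the names, shapes, learning rate, and the plain gradient step on the reconstruction error are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
C, D = 8, 3                            # feature channels, low (bottleneck) dimension
W1 = rng.normal(size=(C, C))           # first convolution kernel (pre-trained, frozen)
W2 = np.zeros((C, C))                  # second convolution kernel, added online
E = rng.normal(size=(D, C)) * 0.1      # encoder: dimension compression C -> D
Dec = rng.normal(size=(C, D)) * 0.1    # decoder: dimension decompression D -> C

def layer_output(x, W2):
    """First convolution result plus second convolution result."""
    return W1 @ x + W2 @ x

def recon_loss(x, W2):
    y = layer_output(x, W2)
    y_rec = Dec @ (E @ y)              # reconstructed feature data
    return 0.5 * np.sum((y - y_rec) ** 2)

def update_step(x, W2, lr=1e-2):
    """Update the second kernel from the difference between the layer output
    and its encoder/decoder reconstruction (claims 1 and 6)."""
    y = layer_output(x, W2)
    diff = y - Dec @ (E @ y)           # output minus reconstruction
    # Gradient of 0.5*||diff||^2 with respect to W2, E and Dec held fixed:
    grad = ((np.eye(C) - Dec @ E).T @ diff)[:, None] * x[None, :]
    return W2 - lr * grad

x = rng.normal(size=C)                 # current input data of the convolution layer
loss_before = recon_loss(x, W2)
for _ in range(50):
    W2 = update_step(x, W2)
loss_after = recon_loss(x, W2)
# Updated feature parameters of the convolution layer: (W1, W2)
```

Note the design choice mirrored from the claims: only the added second kernel is adapted online, while the pre-trained first kernel stays frozen, so the model cannot drift arbitrarily far from its trained behavior.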
2. The method of claim 1, wherein the target layer further comprises a batch normalized BN layer;
the online updating of the characteristic parameters of the target layer in the face living body detection model according to the current input data of the target layer comprises the following steps:
determining a data mean value and a data variance of the BN layer at the current moment according to the current input data of the BN layer;
and updating the current characteristic parameters of the BN layer according to the data mean value and the data variance of the BN layer at the current moment and the characteristic parameters of the BN layer obtained by the last update.
3. The method of claim 2, wherein the determining the data mean and the data variance of the BN layer from the current input data of the BN layer comprises:
and determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the data mean value and the data variance of the BN layer which are determined last time.
4. The method of claim 3, wherein after determining the data mean and the data variance of the BN layer at the current time, further comprising:
and if the number of times the data mean value and the data variance of the BN layer have been determined reaches a designated number of times, executing the step of updating the current characteristic parameters of the BN layer according to the data mean value and the data variance of the BN layer at the current moment and the characteristic parameters of the BN layer obtained in the last update.
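The BN-layer update of claims 2 through 4 amounts to maintaining running statistics: blend the current input's statistics with the last-determined ones, and only refresh the layer's feature parameters after a designated number of determinations. The sketch below assumes a simple exponential moving average with an arbitrary momentum; the momentum value, shapes, and refresh interval are illustrative, not specified by the claims:

```python
import numpy as np

def update_bn_stats(x, prev_mean, prev_var, momentum=0.1):
    """Determine the BN layer's mean/variance at the current moment from the
    current input data and the last-determined statistics (claims 2 and 3)."""
    new_mean = (1 - momentum) * prev_mean + momentum * x.mean(axis=0)
    new_var = (1 - momentum) * prev_var + momentum * x.var(axis=0)
    return new_mean, new_var

rng = np.random.default_rng(1)
mean, var = np.zeros(4), np.ones(4)    # statistics carried over from training
REFRESH_EVERY = 10                     # claim 4: the designated number of times
for step in range(1, 101):
    x = rng.normal(loc=2.0, scale=3.0, size=(16, 4))  # current BN-layer input
    mean, var = update_bn_stats(x, mean, var)
    if step % REFRESH_EVERY == 0:
        # Only now are the BN layer's actual feature parameters refreshed
        bn_running_mean, bn_running_var = mean.copy(), var.copy()
```

With momentum 0.1, the running statistics drift toward the current environment's input distribution (mean near 2, variance near 9 in this synthetic stream) without any labeled samples, which is the point of the online update.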
5. The method of claim 2, wherein determining the data mean and the data variance of the BN layer at the current time based on the current input data of the BN layer comprises:
if the number of the cached input data of the BN layer before the current moment reaches a designated number, determining the data mean value and the data variance of the BN layer at the current moment according to the current input data of the BN layer and the cached input data of the BN layer.
6. The method of claim 1, wherein updating the second convolution kernel from the output data of the convolution layer and the reconstructed feature data comprises:
determining a difference value between the output data of the convolution layer and the reconstructed feature data;
and updating the second convolution kernel according to the difference value.
7. An apparatus for updating a face living body detection model, the apparatus comprising:
the first acquisition module is used for acquiring a face living body image acquired in the current environment;
the second acquisition module is used for acquiring current input data of a target layer in the process of detecting the human face living body image by the human face living body detection model, wherein the target layer comprises one or more processing layers in the human face living body detection model;
the updating module is used for updating the characteristic parameters of the target layer in the human face living body detection model on line according to the current input data of the target layer;
The target layer comprises a convolution layer, wherein the convolution layer comprises a first convolution kernel, and the convolution layer is connected with an encoder which is connected with a decoder;
the updating module is specifically configured to:
adding a second convolution kernel to the convolution layer;
performing a convolution operation on the current input data of the convolution layer with the first convolution kernel to obtain a first convolution result, performing a convolution operation on the current input data of the convolution layer with the second convolution kernel to obtain a second convolution result, and adding the first convolution result and the second convolution result to obtain the output data of the convolution layer;
performing dimension compression on the output data of the convolution layer through the encoder to obtain low-dimensional characteristic data;
performing dimension decompression on the low-dimensional characteristic data through the decoder to obtain reconstructed characteristic data;
updating the second convolution kernel according to the output data of the convolution layer and the reconstruction feature data;
and taking the first convolution kernel and the updated second convolution kernel as the updated characteristic parameters of the convolution layer.
8. An image acquisition device is characterized in that a human face living body detection model is deployed in the image acquisition device, and the image acquisition device comprises an image acquisition device and a processor;
The image collector is used for collecting a face living body image in the current environment;
the processor is used for acquiring current input data of a target layer in the process of detecting the human face living body image by the human face living body detection model, and the target layer comprises one or more processing layers in the human face living body detection model; according to the current input data of the target layer, updating the characteristic parameters of the target layer in the human face living body detection model on line;
the target layer comprises a convolution layer, wherein the convolution layer comprises a first convolution kernel, and the convolution layer is connected with an encoder which is connected with a decoder; the processor is specifically configured to:
adding a second convolution kernel to the convolution layer;
performing a convolution operation on the current input data of the convolution layer with the first convolution kernel to obtain a first convolution result, performing a convolution operation on the current input data of the convolution layer with the second convolution kernel to obtain a second convolution result, and adding the first convolution result and the second convolution result to obtain the output data of the convolution layer;
performing dimension compression on the output data of the convolution layer through the encoder to obtain low-dimensional characteristic data;
performing dimension decompression on the low-dimensional characteristic data through the decoder to obtain reconstructed characteristic data;
updating the second convolution kernel according to the output data of the convolution layer and the reconstruction feature data;
and taking the first convolution kernel and the updated second convolution kernel as the updated characteristic parameters of the convolution layer.
9. The image acquisition device of claim 8, further comprising a memory;
the memory is used for storing the effective human face living body image;
the processor is further used for detecting the human face living body image according to the human face living body detection model to obtain a living body detection result;
the processor is further configured to determine a face recognition result according to the face living body image and the valid face living body image, and determine a security verification result according to the living body detection result and the face recognition result.
CN202010503534.1A 2020-06-05 2020-06-05 Method and device for updating human face living body detection model and image acquisition equipment Active CN113761983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010503534.1A CN113761983B (en) 2020-06-05 2020-06-05 Method and device for updating human face living body detection model and image acquisition equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010503534.1A CN113761983B (en) 2020-06-05 2020-06-05 Method and device for updating human face living body detection model and image acquisition equipment

Publications (2)

Publication Number Publication Date
CN113761983A CN113761983A (en) 2021-12-07
CN113761983B true CN113761983B (en) 2023-08-22

Family

ID=78783902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010503534.1A Active CN113761983B (en) 2020-06-05 2020-06-05 Method and device for updating human face living body detection model and image acquisition equipment

Country Status (1)

Country Link
CN (1) CN113761983B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220635A (en) * 2017-06-21 2017-09-29 北京市威富安防科技有限公司 Human face in-vivo detection method based on many fraud modes
CN107609638A (en) * 2017-10-12 2018-01-19 湖北工业大学 A kind of method based on line decoder and interpolation sampling optimization convolutional neural networks
CN107633296A (en) * 2017-10-16 2018-01-26 中国电子科技集团公司第五十四研究所 A kind of convolutional neural networks construction method
CN108334843A (en) * 2018-02-02 2018-07-27 成都国铁电气设备有限公司 A kind of arcing recognition methods based on improvement AlexNet
CN108416427A (en) * 2018-02-22 2018-08-17 重庆信络威科技有限公司 Convolution kernel accumulates data flow, compressed encoding and deep learning algorithm
CN108900848A (en) * 2018-06-12 2018-11-27 福建帝视信息科技有限公司 A kind of video quality Enhancement Method based on adaptive separable convolution
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109903301A (en) * 2019-01-28 2019-06-18 杭州电子科技大学 A kind of image outline detection method based on multi-stage characteristics channel Optimized Coding Based
CN110765923A (en) * 2019-10-18 2020-02-07 腾讯科技(深圳)有限公司 Face living body detection method, device, equipment and storage medium
WO2020037898A1 (en) * 2018-08-23 2020-02-27 平安科技(深圳)有限公司 Face feature point detection method and apparatus, computer device, and storage medium
CN111027400A (en) * 2019-11-15 2020-04-17 烟台市广智微芯智能科技有限责任公司 Living body detection method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10930010B2 (en) * 2018-05-10 2021-02-23 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, system, electronic device, and storage medium
US11004183B2 (en) * 2018-07-10 2021-05-11 The Board Of Trustees Of The Leland Stanford Junior University Un-supervised convolutional neural network for distortion map estimation and correction in MRI
US11169514B2 (en) * 2018-08-27 2021-11-09 Nec Corporation Unsupervised anomaly detection, diagnosis, and correction in multivariate time series data
JP2020057172A (en) * 2018-10-01 2020-04-09 株式会社Preferred Networks Learning device, inference device and trained model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Haixin et al. "Face liveness detection algorithm based on deep learning." Application of Electronic Technique, 2019, Vol. 45, No. 8, pp. 44-47. *

Also Published As

Publication number Publication date
CN113761983A (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN110502954B (en) Video analysis method and device
CN110222789B (en) Image recognition method and storage medium
CN112633306B (en) Method and device for generating countermeasure image
CN110059652B (en) Face image processing method, device and storage medium
CN108288032B (en) Action characteristic acquisition method, device and storage medium
CN109558837B (en) Face key point detection method, device and storage medium
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN108363982B (en) Method and device for determining number of objects
CN109360222B (en) Image segmentation method, device and storage medium
CN108776822B (en) Target area detection method, device, terminal and storage medium
CN113763228B (en) Image processing method, device, electronic equipment and storage medium
CN110827195B (en) Virtual article adding method and device, electronic equipment and storage medium
CN110991457B (en) Two-dimensional code processing method and device, electronic equipment and storage medium
CN111062248A (en) Image detection method, device, electronic equipment and medium
CN110991445B (en) Vertical text recognition method, device, equipment and medium
CN110675473B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN111325701B (en) Image processing method, device and storage medium
CN111598896A (en) Image detection method, device, equipment and storage medium
CN110705438A (en) Gait recognition method, device, equipment and storage medium
CN111931712B (en) Face recognition method, device, snapshot machine and system
CN110737692A (en) Data retrieval method, index database establishment method and device
CN111860064B (en) Video-based target detection method, device, equipment and storage medium
CN110232417B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN111488895B (en) Countermeasure data generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant