CN109344716A - Training method, detection method, device, medium and equipment of living body detection model - Google Patents

Training method, detection method, device, medium and equipment of living body detection model Download PDF

Info

Publication number
CN109344716A
CN109344716A (application CN201811015451.7A)
Authority
CN
China
Prior art keywords
detection model
liveness detection
feature map
living body
texture features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811015451.7A
Other languages
Chinese (zh)
Inventor
胡欢
刘兆祥
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd filed Critical Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201811015451.7A priority Critical patent/CN109344716A/en
Publication of CN109344716A publication Critical patent/CN109344716A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a training method, a detection method, an apparatus, a medium, and a device for a living body detection model. The method comprises: determining, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images; fusing the spectrum feature map, the texture feature map and the motion feature map to obtain a first target feature map; inputting the first target feature map into the living body detection model to obtain a first determination result, thereby completing one training iteration of the living body detection model, wherein the first determination result indicates whether the face is a living face; and when the number of training iterations of the living body detection model is less than a preset number, or when the accuracy of the living body detection model determined from the determination result is less than a preset threshold, updating the living body detection model according to the accuracy. In this way, the application range of the trained living body detection model is effectively broadened while its accuracy is improved.

Description

Training method, detection method, device, medium and equipment for a living body detection model
Technical field
The present disclosure relates to the field of living body detection, and in particular to a training method for a living body detection model, a detection method, an apparatus, a medium, and a device.
Background
Face recognition systems are now increasingly applied in fields such as security and finance, application scenarios that usually demand high-assurance identity authentication. In these application fields, besides verifying that the face of the person being authenticated is correct, it must first be ensured that the person is a legitimate living body.
In the prior art, living body detection is performed in the following ways:
First, recognition relies on dedicated hardware devices such as infrared cameras or depth cameras. Recognition based on infrared features can only identify forgeries made by simple means such as photos, so its applicability is limited; it also depends on extra hardware, which raises the cost.
Second, the person under test performs actions in response to instructions. During this recognition process, the person must make the corresponding action on command or hold still as instructed. Recognition performed in this way therefore gives a poor user experience.
Summary of the invention
To solve the above technical problems, the present disclosure provides a training method for a living body detection model, a detection method, an apparatus, a medium, and a device.
To achieve the above objects, according to a first aspect of the present disclosure, a training method for a living body detection model is provided, the method comprising:
determining, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images;
fusing the spectrum feature map, the texture feature map and the motion feature map to obtain a first target feature map;
inputting the first target feature map into the living body detection model to obtain a first determination result, thereby completing one training iteration of the living body detection model, wherein the first determination result indicates whether the face is a living face; and
when the number of training iterations of the living body detection model is less than a preset number, or when the accuracy of the living body detection model determined from the determination result is less than a preset threshold, updating the living body detection model according to the accuracy.
Optionally, the method further comprises:
after updating the living body detection model, returning to the step of determining, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images, until the number of training iterations of the living body detection model is greater than or equal to the preset number and the accuracy of the living body detection model is greater than or equal to the preset threshold.
Optionally, fusing the spectrum feature map, the texture feature map and the motion feature map to obtain the first target feature map comprises:
inputting the spectrum feature map, the texture feature map and the motion feature map into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model and the first target feature map is the feature map output by the last feature layer of the feature map fusion model.
Optionally, the spectrum feature map is any one of the following:
a Fourier spectrogram, a discrete cosine transform spectrogram, or a Gabor wavelet transform spectrogram;
the texture feature map is any one of the following:
an LBP feature map, a HOG feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SFM feature map.
According to a second aspect of the present disclosure, a living body detection method is provided, the method comprising:
acquiring at least two face images to be tested that correspond to the same target face;
determining, from the at least two face images to be tested, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images to be tested;
fusing the spectrum feature map, the texture feature map and the motion feature map corresponding to the at least two face images to be tested to obtain a second target feature map; and
inputting the second target feature map into a living body detection model to obtain a second determination result indicating whether the target face is a living face, wherein the living body detection model is trained by any training method for a living body detection model provided in the first aspect above.
According to a third aspect of the present disclosure, a training apparatus for a living body detection model is provided, the apparatus comprising:
a first determining module, configured to determine, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images;
a first fusion module, configured to fuse the spectrum feature map, the texture feature map and the motion feature map to obtain a first target feature map;
a second determining module, configured to input the first target feature map into the living body detection model to obtain a first determination result, thereby completing one training iteration of the living body detection model, wherein the first determination result indicates whether the face is a living face; and
an update module, configured to update the living body detection model according to the accuracy when the number of training iterations of the living body detection model is less than a preset number, or when the accuracy of the living body detection model determined from the determination result is less than a preset threshold.
Optionally, after the update module updates the living body detection model, it triggers the first determining module to determine, from at least two face images corresponding to the same face, the spectrum feature map, texture feature map and motion feature map corresponding to the at least two face images, until the number of training iterations of the living body detection model is greater than or equal to the preset number and the accuracy of the living body detection model is greater than or equal to the preset threshold.
Optionally, the second determining module is configured to:
input the spectrum feature map, the texture feature map and the motion feature map into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model and the first target feature map is the feature map output by the last feature layer of the feature map fusion model.
Optionally, the spectrum feature map is any one of the following:
a Fourier spectrogram, a discrete cosine transform spectrogram, or a Gabor wavelet transform spectrogram;
the texture feature map is any one of the following:
an LBP feature map, a HOG feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SFM feature map.
According to a fourth aspect of the present disclosure, a living body detection apparatus is provided, the apparatus comprising:
an acquisition module, configured to acquire at least two face images to be tested that correspond to the same target face;
a third determining module, configured to determine, from the at least two face images to be tested, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images to be tested;
a second fusion module, configured to fuse the spectrum feature map, the texture feature map and the motion feature map corresponding to the at least two face images to be tested to obtain a second target feature map; and
a fourth determining module, configured to input the second target feature map into a living body detection model to obtain a second determination result indicating whether the target face is a living face, wherein the living body detection model is trained by any training method for a living body detection model provided in the first aspect above.
According to a fifth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of any training method for a living body detection model provided in the first aspect above.
According to a sixth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the living body detection method provided in the second aspect above.
According to a seventh aspect of the present disclosure, an electronic device is provided, comprising:
a memory on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of any training method for a living body detection model provided in the first aspect above.
According to an eighth aspect of the present disclosure, an electronic device is provided, comprising:
a memory on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of the living body detection method provided in the second aspect above.
In the above technical solutions, features of the face images in multiple dimensions, namely spectrum features, texture features and motion features, are extracted and fused to obtain a first target feature map that characterizes the comprehensive features of the face images. The living body detection model is then trained on this first target feature map, which effectively broadens the application range of the trained living body detection model while avoiding the limitations of detection based on a single feature, thereby improving the accuracy of the living body detection model. Moreover, because the living body detection model in the present disclosure is trained on the first target feature map, living body detection can be performed with this model without relying on dedicated external hardware or on the cooperation of user actions, which effectively reduces detection cost while improving the user experience.
Other features and advantages of the present disclosure will be described in detail in the detailed description below.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the present disclosure, but do not limit it. In the drawings:
Fig. 1 is a flowchart of a training method for a living body detection model according to an embodiment of the present disclosure;
Fig. 2 is a flowchart of a living body detection method according to an embodiment of the present disclosure;
Fig. 3 is a block diagram of a training apparatus for a living body detection model according to an embodiment of the present disclosure;
Fig. 4 is a block diagram of a living body detection apparatus according to an embodiment of the present disclosure;
Fig. 5 is a block diagram of an electronic device according to an exemplary embodiment;
Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed description of the embodiments
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to describe and explain the present disclosure, not to limit it.
Fig. 1 shows a flowchart of a training method for a living body detection model according to an embodiment of the present disclosure. As shown in Fig. 1, the method comprises:
In S11, a spectrum feature map, a texture feature map and a motion feature map corresponding to at least two face images are determined from the at least two face images corresponding to the same face.
A spectrum feature map and a texture feature map can be extracted from each face image separately, while the motion feature map corresponding to the at least two face images is detected from the changes of the corresponding feature points across the multiple face images of the same face.
In one embodiment, one of the extracted spectrum feature maps may be selected as the spectrum feature map corresponding to the at least two face images, and one of the extracted texture feature maps may be selected as the texture feature map corresponding to the at least two face images. For example, the selection may be random, or it may be based on the clarity of the face images; the present disclosure does not limit this.
In another embodiment, the spectrum feature maps of the at least two face images may be averaged to obtain the spectrum feature map corresponding to the at least two face images, and likewise the texture feature maps may be averaged to obtain the corresponding texture feature map. The averaging may be an arithmetic mean or a weighted mean; the present disclosure does not limit this.
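As a minimal illustration of the averaging variant (assuming the per-image feature maps have already been computed as equally sized numpy arrays; the function name is ours, not the patent's):

```python
import numpy as np

def average_feature_maps(maps, weights=None):
    """Combine per-image feature maps into one map by (weighted) averaging.

    maps: list of equally shaped numpy arrays, one per face image.
    weights: optional per-image weights (e.g. based on image clarity).
    """
    stack = np.stack(maps, axis=0)
    if weights is None:
        return stack.mean(axis=0)              # arithmetic mean
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                            # normalize the weights
    return np.tensordot(w, stack, axes=1)      # weighted mean
```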
Optionally, the spectrum feature map is any one of the following:
a Fourier spectrogram, a discrete cosine transform spectrogram, or a Gabor wavelet transform spectrogram;
the texture feature map is any one of the following:
an LBP (Local Binary Pattern) feature map, a HOG (Histogram of Oriented Gradients) feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SFM (Structure from Motion) feature map.
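For illustration only, a sketch of one possible combination of these choices (Fourier spectrum, LBP texture, Farneback optical flow) using OpenCV and scikit-image; the parameter values are assumptions, and the patent does not prescribe any particular implementation:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def spectrum_map(gray):
    """Log-magnitude Fourier spectrum, low frequencies shifted to the center."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    return np.log1p(np.abs(f))

def texture_map(gray):
    """LBP texture map with 8 neighbors at radius 1."""
    return local_binary_pattern(gray, P=8, R=1, method="uniform")

def motion_map(gray_prev, gray_next):
    """Magnitude of dense optical flow between two consecutive face images."""
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2)
```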
In S12, the spectrum feature map, the texture feature map and the motion feature map are fused to obtain a first target feature map.
In the present disclosure, fusing the spectrum feature map, the texture feature map and the motion feature map yields a first target feature map that can characterize the comprehensive features of the face images.
Among the spectrum feature maps extracted from face images, the spectrum feature map of a living face contains more high-frequency information, whereas the spectrum feature map of a non-living face loses much of its high-frequency information and therefore contains less of it.
Among the texture feature maps extracted from face images, the texture feature map of a living face contains more texture information, whereas the texture feature map of a non-living face loses much of its texture information and appears smoother, so it contains less texture information.
Among the motion feature maps extracted from face images, the motion feature map of a living face exhibits irregular motion vectors, whereas the motion feature map of a non-living face exhibits regular, consistent motion vectors.
The above technical solution fully considers the feature differences between living and non-living face images in multiple dimensions, and also fuses the spectrum feature map, the texture feature map and the motion feature map. A comprehensive feature representation of the face images can thus be derived from these features, and training the living body detection model on the first target feature map expressing these comprehensive features broadens the application range of the living body detection model and also improves the accuracy of living body detection based on it.
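As an illustrative sketch of the high-frequency cue described above (the cutoff and scoring here are our assumptions, not a formula from the patent), one way to score how much spectral energy lies outside the central low-frequency band of a centered magnitude spectrum:

```python
import numpy as np

def high_freq_ratio(spectrum, cutoff=0.25):
    """Fraction of spectral energy outside a central low-frequency box.

    spectrum: centered magnitude spectrum; living faces tend to score
    higher, while printed or replayed faces lose high frequencies.
    """
    h, w = spectrum.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    total = spectrum.sum()
    low = spectrum[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw].sum()
    return float((total - low) / total)
```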
In S13, the first target feature map is input into the living body detection model to obtain a first determination result, thereby completing one training iteration of the living body detection model, wherein the first determination result indicates whether the face is a living face.
The living body detection model can be implemented with a classifier, for example an SVM (Support Vector Machine) classifier or a kNN (k-Nearest Neighbors) classifier; the present disclosure does not limit this.
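A minimal sketch of the SVM variant with scikit-learn, shown here with random stand-in data; in practice each fused first target feature map would be flattened into a vector and paired with its living/non-living label:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64 * 64))   # flattened first target feature maps
y = rng.integers(0, 2, size=200)      # 1 = living face, 0 = non-living face

clf = SVC(kernel="rbf")               # SVM classifier, as named in the text
clf.fit(X, y)
accuracy = clf.score(X, y)            # accuracy compared against the preset threshold
```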
In S14, when the number of training iterations of the living body detection model is less than a preset number, or when the accuracy of the living body detection model determined from the determination result is less than a preset threshold, the living body detection model is updated according to the accuracy.
After one training iteration of the living body detection model is completed, the accuracy of the model for that iteration can be calculated. For example, the accuracy of the living body detection model can be determined from the determination results as follows:
the accuracy of the current iteration is determined from the current determination results and the image labels, where, before the living body detection model is trained on a face image, the image must be labeled in advance as a living face or a non-living face so that the model can be trained against these labels. The average accuracy over multiple iterations is then taken as the accuracy of the living body detection model.
The preset number and the preset threshold can be determined according to the required detection precision of the living body detection model: the higher the required precision, the larger the preset number and the preset threshold. When the number of training iterations is less than the preset number, the model has not yet been trained enough, and training must continue.
When the accuracy of the living body detection model determined from the determination results is less than the preset threshold, the accuracy of the model is insufficient and the error is large, so training must also continue to obtain a more accurate living body detection model. Note that updating the living body detection model according to the accuracy is prior art and is not described in detail here.
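A minimal sketch of the stopping logic of S14, with a stub standing in for one full training iteration (S11-S13 plus the accuracy-based update); the constants and helper are illustrative assumptions, not values from the patent:

```python
import random

PRESET_ITERATIONS = 100    # preset number of training iterations (assumed)
PRESET_THRESHOLD = 0.95    # preset accuracy threshold (assumed)

def train_one_iteration(model):
    """Stub for S11-S13 plus the accuracy-based update of S14."""
    return 0.96 + random.random() * 0.04   # per-iteration accuracy vs. labels

model = object()                           # placeholder for the detection model
iteration, accuracies = 0, []
while True:
    accuracies.append(train_one_iteration(model))
    iteration += 1
    mean_acc = sum(accuracies) / len(accuracies)   # average accuracy so far
    if iteration >= PRESET_ITERATIONS and mean_acc >= PRESET_THRESHOLD:
        break                                      # both conditions satisfied
```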
In the above technical solution, features of the face images in multiple dimensions, namely spectrum features, texture features and motion features, are extracted and fused to obtain a first target feature map that characterizes the comprehensive features of the face images. The living body detection model is then trained on this first target feature map, which effectively broadens the application range of the trained living body detection model while avoiding the limitations of single-feature detection, improving the accuracy of the model. Moreover, because the living body detection model in the present disclosure is trained on the first target feature map, living body detection can be performed with the model without relying on dedicated external hardware or the cooperation of user actions, effectively reducing detection cost while improving the user experience.
Optionally, the method further comprises:
after updating the living body detection model, returning to the step of determining, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images, until the number of training iterations of the living body detection model is greater than or equal to the preset number and the accuracy of the living body detection model is greater than or equal to the preset threshold.
The training samples contain face images of multiple faces. In one embodiment, when returning to step S11, different face images of the same face may be selected, and the corresponding spectrum feature map, texture feature map and motion feature map are determined from those face images. The determined spectrum feature map, texture feature map and motion feature map are then fused to obtain a new first target feature map, on which the living body detection model is trained. The subsequent training steps have been described in detail above and are not repeated here.
In another embodiment, when returning to step S11, multiple face images of a new face may be selected, and the corresponding spectrum feature map, texture feature map and motion feature map are determined from them. These are then fused into a new first target feature map, on which the living body detection model is trained. Again, the subsequent training steps are as described above.
In the above technical solution, the living body detection model is trained by executing steps S11-S14 in a loop. When the number of training iterations of the updated living body detection model is greater than or equal to the preset number and its accuracy is greater than or equal to the preset threshold, both the accuracy and the number of iterations meet the training requirements, and the training process can end. The above technical solution therefore trains the living body detection model quickly and accurately, effectively guaranteeing both the application range of the model and the precision of its detection results, and improving the user experience.
Optionally, an example implementation of step S12, fusing the spectrum feature map, the texture feature map and the motion feature map to obtain the first target feature map, is as follows:
the spectrum feature map, the texture feature map and the motion feature map are input into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model and the first target feature map is the feature map output by the last feature layer of the feature map fusion model.
For example, the feature map fusion model can be trained as follows:
a spectrum feature map, a texture feature map and a motion feature map corresponding to at least two face images of the same face are determined from those images; the way these feature maps are determined has been described in detail above and is not repeated here.
The spectrum feature map, the texture feature map and the motion feature map are input into the feature map fusion model to obtain the determination result of the feature map fusion model. The convolutional neural network may be, for example, VGG, GoogLeNet or ResNet.
When the determination error computed from the determination result and the image label is greater than an error threshold, the feature map fusion model is updated and trained until its determination error is less than or equal to the error threshold. The training of convolutional neural network models is prior art and is not described in detail here. When the spectrum feature map, the texture feature map and the motion feature map are input into the feature map fusion model, the last feature layer of the feature map fusion model is a fully connected layer, whose purpose is to map the features learned by the network to the label space. The output feature map is therefore obtained by fusing the input feature maps, and taking it as the first target feature map allows the comprehensive features of the image to be characterized accurately. Extracting the feature map output by the last feature layer of the feature map fusion model is prior art and is not repeated here.
In the above technical solution, the spectrum feature map, the texture feature map and the motion feature map are fused by the feature map fusion model to obtain a first target feature map characterizing the comprehensive features of the image. Because the feature map fusion model is implemented with a convolutional neural network, the feature maps can be fused efficiently and quickly, which improves the accuracy of the first target feature map and thereby guarantees the accuracy of the living body detection model.
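A minimal sketch of such a feature map fusion model in PyTorch, stacking the three feature maps as input channels; the layer sizes are assumptions, and the patent names VGG, GoogLeNet or ResNet as possible backbones rather than this toy network:

```python
import torch
import torch.nn as nn

class FeatureMapFusionModel(nn.Module):
    """Fuses spectrum, texture and motion feature maps stacked as 3 channels."""

    def __init__(self, feature_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Last feature layer: its output is taken as the first target feature map.
        self.fc = nn.Linear(32 * 8 * 8, feature_dim)
        self.classifier = nn.Linear(feature_dim, 2)  # living / non-living

    def forward(self, x):
        h = self.conv(x).flatten(1)
        fused = self.fc(h)               # first target feature map
        logits = self.classifier(fused)  # determination result for training
        return fused, logits

# Usage: stack the three maps along the channel axis.
maps = torch.randn(1, 3, 64, 64)         # [spectrum, texture, motion]
fused, logits = FeatureMapFusionModel()(maps)
```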
The present disclosure also provides a living body detection method. Fig. 2 shows a flowchart of a living body detection method according to an embodiment of the present disclosure. As shown in Fig. 2, the method comprises:
In S21, at least two face images to be tested corresponding to the same target face are acquired. Static images may be acquired directly, or target images containing the target face may be captured from a video and the face images to be tested then obtained with a face extraction algorithm, as in the sketch after this paragraph. For example, the face extraction algorithm may be the SeetaFace algorithm or the MTCNN algorithm.
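As an illustrative stand-in for the named extraction algorithms (SeetaFace and MTCNN are not sketched here), a frame-grab-and-crop sketch using OpenCV's bundled Haar cascade detector:

```python
import cv2

def faces_from_video(path, max_faces=2):
    """Grab frames from a video and crop detected face regions."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap, crops = cv2.VideoCapture(path), []
    while len(crops) < max_faces:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            crops.append(gray[y:y + h, x:x + w])
    cap.release()
    return crops  # face images to be tested
```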
In S22, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images to be tested are determined from those images;
In S23, the spectrum feature map, the texture feature map and the motion feature map corresponding to the at least two face images to be tested are fused to obtain a second target feature map;
In S24, the second target feature map is input into a living body detection model to obtain a second determination result indicating whether the target face is a living face, wherein the living body detection model is trained by the training method for a living body detection model described above.
The determination of the spectrum feature map, texture feature map and motion feature map corresponding to the at least two face images to be tested, and the fusion of those feature maps, have been described in detail above and are not repeated here.
In the above technical solution, the spectrum feature map, texture feature map and motion feature map corresponding to the face images to be tested are extracted and fused into a second target feature map, which is then input into the living body detection model to determine whether the target face is a living face. Living body detection based on the method provided by the present disclosure therefore requires neither dedicated external hardware (for example, an infrared camera) nor the cooperation of user actions, which effectively reduces detection cost while improving the user experience.
The present disclosure also provides a training apparatus for a living body detection model. Fig. 3 shows a block diagram of a training apparatus for a living body detection model according to an embodiment of the present disclosure. As shown in Fig. 3, the apparatus 10 comprises:
a first determining module 101, configured to determine, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images;
a first fusion module 102, configured to fuse the spectrum feature map, the texture feature map and the motion feature map to obtain a first target feature map;
a second determining module 103, configured to input the first target feature map into the living body detection model to obtain a first determination result, thereby completing one training iteration of the living body detection model, wherein the first determination result indicates whether the face is a living face; and
an update module 104, configured to update the living body detection model according to the accuracy when the number of training iterations of the living body detection model is less than a preset number, or when the accuracy of the living body detection model determined from the determination result is less than a preset threshold.
Optionally, after the update module 104 updates the living body detection model, it triggers the first determining module 101 to determine, from at least two face images corresponding to the same face, the spectrum feature map, texture feature map and motion feature map corresponding to the at least two face images, until the number of training iterations of the living body detection model is greater than or equal to the preset number and the accuracy of the living body detection model is greater than or equal to the preset threshold.
Optionally, the second determining module 103 is configured to:
input the spectrum feature map, the texture feature map and the motion feature map into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model and the first target feature map is the feature map output by the last feature layer of the feature map fusion model.
Optionally, the spectrum feature map is any one of the following:
a Fourier spectrogram, a discrete cosine transform spectrogram, or a Gabor wavelet transform spectrogram;
the texture feature map is any one of the following:
an LBP feature map, a HOG feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SFM feature map.
The present disclosure also provides a living body detection apparatus. Fig. 4 shows a block diagram of a living body detection apparatus according to an embodiment of the present disclosure. As shown in Fig. 4, the apparatus 20 comprises:
an acquisition module 201, configured to acquire at least two face images to be tested that correspond to the same target face;
a third determining module 202, configured to determine, from the at least two face images to be tested, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images to be tested;
a second fusion module 203, configured to fuse the spectrum feature map, the texture feature map and the motion feature map corresponding to the at least two face images to be tested to obtain a second target feature map; and
a fourth determining module 204, configured to input the second target feature map into a living body detection model to obtain a second determination result indicating whether the target face is a living face, wherein the living body detection model is trained by any training method for a living body detection model provided by the present disclosure.
Regarding the apparatuses in the above embodiments, the specific way each module performs its operations has been described in detail in the embodiments of the related methods and is not elaborated here.
Fig. 5 is a block diagram of an electronic device 500 according to an exemplary embodiment. As shown in Fig. 5, the electronic device 500 may comprise a processor 501 and a memory 502, and may further comprise one or more of a multimedia component 503, an input/output (I/O) interface 504, and a communication component 505.
The processor 501 is configured to control the overall operation of the electronic device 500 so as to complete all or part of the steps of the above training method for a living body detection model or of the living body detection method. The memory 502 is configured to store various types of data to support operation on the electronic device 500; such data may include, for example, instructions of any application or method operating on the electronic device 500 as well as application-related data such as contact data, sent and received messages, pictures, audio, and video. The memory 502 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 503 may include a screen and an audio component, where the screen may be, for example, a touch screen and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 502 or sent through the communication component 505. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 504 provides an interface between the processor 501 and other interface modules, such as a keyboard, a mouse, or buttons, where the buttons may be virtual or physical. The communication component 505 is used for wired or wireless communication between the electronic device 500 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G or 4G, or a combination of one or more of them, so the corresponding communication component 505 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the above training method for a living body detection model or the living body detection method.
In a further exemplary embodiment, a computer-readable storage medium including program instructions is also provided, where the program instructions, when executed by a processor, implement the steps of the above training method for a living body detection model or of the living body detection method. For example, the computer-readable storage medium may be the above memory 502 including program instructions, which can be executed by the processor 501 of the electronic device 500 to complete the above training method for a living body detection model or the living body detection method.
Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment. For example, the electronic device 600 may be provided as a server. Referring to Fig. 6, the electronic device 600 includes one or more processors 622 and a memory 632 for storing computer programs executable by the processor 622. The computer program stored in the memory 632 may include one or more modules, each corresponding to a set of instructions. The processor 622 may be configured to execute the computer program to perform the above training method for a living body detection model or the living body detection method.
In addition, the electronic device 600 may further include a power supply component 626 and a communication component 650, where the power supply component 626 may be configured to perform power management of the electronic device 600 and the communication component 650 may be configured to implement communication of the electronic device 600, for example wired or wireless communication. The electronic device 600 may also include an input/output (I/O) interface 658. The electronic device 600 may operate based on an operating system stored in the memory 632, such as Windows Server(TM), Mac OS X(TM), Unix(TM), Linux(TM), and so on.
In a further exemplary embodiment, a computer-readable storage medium including program instructions is also provided, where the program instructions, when executed by a processor, implement the steps of the above training method for a living body detection model or of the living body detection method. For example, the computer-readable storage medium may be the above memory 632 including program instructions, which can be executed by the processor 622 of the electronic device 600 to complete the above training method for a living body detection model or the living body detection method.
Preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. However, the present disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present disclosure, various simple variants of the technical solution of the present disclosure can be made, and these simple variants all belong to the protection scope of the present disclosure.
It should also be noted that the specific technical features described in the above specific embodiments can be combined in any suitable manner, provided there is no contradiction. To avoid unnecessary repetition, the present disclosure does not further describe the various possible combinations.
In addition, the various embodiments of the present disclosure can also be combined arbitrarily; as long as such combinations do not violate the idea of the present disclosure, they should likewise be regarded as content disclosed by the present disclosure.

Claims (11)

1. A training method for a living body detection model, characterized in that the method comprises:
determining, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images;
fusing the spectrum feature map, the texture feature map and the motion feature map to obtain a first target feature map;
inputting the first target feature map into the living body detection model to obtain a first determination result, thereby completing one training iteration of the living body detection model, wherein the first determination result indicates whether the face is a living face; and
when the number of training iterations of the living body detection model is less than a preset number, or when the accuracy of the living body detection model determined from the determination result is less than a preset threshold, updating the living body detection model according to the accuracy.
2. The method according to claim 1, characterized in that the method further comprises:
after updating the living body detection model, returning to the step of determining, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images, until the number of training iterations of the living body detection model is greater than or equal to the preset number and the accuracy of the living body detection model is greater than or equal to the preset threshold.
3. The method according to claim 1, characterized in that fusing the spectrum feature map, the texture feature map and the motion feature map to obtain the first target feature map comprises:
inputting the spectrum feature map, the texture feature map and the motion feature map into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model and the first target feature map is the feature map output by the last feature layer of the feature map fusion model.
4. The method according to any one of claims 1-3, characterized in that the spectrum feature map is any one of the following:
a Fourier spectrogram, a discrete cosine transform spectrogram, or a Gabor wavelet transform spectrogram;
the texture feature map is any one of the following:
an LBP feature map, a HOG feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SFM feature map.
5. A living body detection method, characterized in that the method comprises:
acquiring at least two face images to be tested that correspond to the same target face;
determining, from the at least two face images to be tested, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images to be tested;
fusing the spectrum feature map, the texture feature map and the motion feature map corresponding to the at least two face images to be tested to obtain a second target feature map; and
inputting the second target feature map into a living body detection model to obtain a second determination result indicating whether the target face is a living face, wherein the living body detection model is trained by the method according to any one of claims 1-4.
6. A training apparatus for a living body detection model, characterized in that the apparatus comprises:
a first determining module, configured to determine, from at least two face images corresponding to the same face, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images;
a first fusion module, configured to fuse the spectrum feature map, the texture feature map and the motion feature map to obtain a first target feature map;
a second determining module, configured to input the first target feature map into the living body detection model to obtain a first determination result, thereby completing one training iteration of the living body detection model, wherein the first determination result indicates whether the face is a living face; and
an update module, configured to update the living body detection model according to the accuracy when the number of training iterations of the living body detection model is less than a preset number, or when the accuracy of the living body detection model determined from the determination result is less than a preset threshold.
7. A living body detection apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire at least two face images to be tested that correspond to the same target face;
a third determining module, configured to determine, from the at least two face images to be tested, a spectrum feature map, a texture feature map and a motion feature map corresponding to the at least two face images to be tested;
a second fusion module, configured to fuse the spectrum feature map, the texture feature map and the motion feature map corresponding to the at least two face images to be tested to obtain a second target feature map; and
a fourth determining module, configured to input the second target feature map into a living body detection model to obtain a second determination result indicating whether the target face is a living face, wherein the living body detection model is trained by the method according to any one of claims 1-4.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1-4.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to claim 5.
10. An electronic device, characterized by comprising:
a memory on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1-4.
11. An electronic device, characterized by comprising:
a memory on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of the method according to claim 5.
CN201811015451.7A 2018-08-31 2018-08-31 Training method, detection method, device, medium and equipment of living body detection model Pending CN109344716A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811015451.7A CN109344716A (en) 2018-08-31 2018-08-31 Training method, detection method, device, medium and equipment of living body detection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811015451.7A CN109344716A (en) 2018-08-31 2018-08-31 Training method, detection method, device, medium and equipment of living body detection model

Publications (1)

Publication Number Publication Date
CN109344716A true CN109344716A (en) 2019-02-15

Family

ID=65292046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811015451.7A Pending CN109344716A (en) 2018-08-31 2018-08-31 Training method, detection method, device, medium and equipment of living body detection model

Country Status (1)

Country Link
CN (1) CN109344716A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961433A (en) * 2019-03-29 2019-07-02 北京百度网讯科技有限公司 Product defects detection method, device and computer equipment
CN109977867A (en) * 2019-03-26 2019-07-05 厦门瑞为信息技术有限公司 A kind of infrared biopsy method based on machine learning multiple features fusion
CN110059546A (en) * 2019-03-08 2019-07-26 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on spectrum analysis
CN110222647A (en) * 2019-06-10 2019-09-10 大连民族大学 A kind of human face in-vivo detection method based on convolutional neural networks
CN112560870A (en) * 2020-12-15 2021-03-26 哈尔滨工程大学 Image target identification method used in underwater complex environment
CN112733946A (en) * 2021-01-14 2021-04-30 北京市商汤科技开发有限公司 Training sample generation method and device, electronic equipment and storage medium
CN113422982A (en) * 2021-08-23 2021-09-21 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN113569707A (en) * 2021-07-23 2021-10-29 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
WO2021248733A1 (en) * 2020-06-12 2021-12-16 浙江大学 Live face detection system applying two-branch three-dimensional convolutional model, terminal and storage medium
CN113963427A (en) * 2021-12-22 2022-01-21 浙江工商大学 Method and system for rapid in vivo detection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529447A (en) * 2016-11-03 2017-03-22 河北工业大学 Small-sample face recognition method
CN107895155A (en) * 2017-11-29 2018-04-10 五八有限公司 A kind of face identification method and device
US20180122066A1 (en) * 2016-10-27 2018-05-03 Xerox Corporation System and method for extracting a periodic signal from video
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Biopsy method, computer installation and computer-readable recording medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180122066A1 (en) * 2016-10-27 2018-05-03 Xerox Corporation System and method for extracting a periodic signal from video
CN106529447A (en) * 2016-11-03 2017-03-22 河北工业大学 Small-sample face recognition method
CN107895155A (en) * 2017-11-29 2018-04-10 五八有限公司 A kind of face identification method and device
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Biopsy method, computer installation and computer-readable recording medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHUOWEN HU et al.: "A Polarimetric Thermal Database for Face Recognition Research", Computer Vision Foundation *
党鑫鹏 (Dang Xinpeng): "Face recognition algorithm based on multi-level texture spectrum features and PCA", Journal of Computer Applications *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059546A (en) * 2019-03-08 2019-07-26 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on spectrum analysis
CN109977867A (en) * 2019-03-26 2019-07-05 厦门瑞为信息技术有限公司 A kind of infrared biopsy method based on machine learning multiple features fusion
CN109961433A (en) * 2019-03-29 2019-07-02 北京百度网讯科技有限公司 Product defects detection method, device and computer equipment
CN110222647B (en) * 2019-06-10 2022-05-10 大连民族大学 Face in-vivo detection method based on convolutional neural network
CN110222647A (en) * 2019-06-10 2019-09-10 大连民族大学 A kind of human face in-vivo detection method based on convolutional neural networks
WO2021248733A1 (en) * 2020-06-12 2021-12-16 浙江大学 Live face detection system applying two-branch three-dimensional convolutional model, terminal and storage medium
CN112560870A (en) * 2020-12-15 2021-03-26 哈尔滨工程大学 Image target identification method used in underwater complex environment
CN112560870B (en) * 2020-12-15 2022-04-29 哈尔滨工程大学 Image target identification method used in underwater complex environment
CN112733946A (en) * 2021-01-14 2021-04-30 北京市商汤科技开发有限公司 Training sample generation method and device, electronic equipment and storage medium
CN112733946B (en) * 2021-01-14 2023-09-19 北京市商汤科技开发有限公司 Training sample generation method and device, electronic equipment and storage medium
CN113569707A (en) * 2021-07-23 2021-10-29 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN113422982A (en) * 2021-08-23 2021-09-21 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN113963427B (en) * 2021-12-22 2022-07-26 浙江工商大学 Method and system for rapid in-vivo detection
CN113963427A (en) * 2021-12-22 2022-01-21 浙江工商大学 Method and system for rapid in vivo detection

Similar Documents

Publication Publication Date Title
CN109344716A (en) Training method, detection method, device, medium and equipment of living body detection model
CN110458154B (en) Face recognition method, face recognition device and computer-readable storage medium
CN108009528B (en) Triple Loss-based face authentication method and device, computer equipment and storage medium
CN106897658B (en) Method and device for identifying human face living body
CN107077589B (en) Facial spoofing detection in image-based biometrics
CN111401521B (en) Neural network model training method and device, and image recognition method and device
KR20190072563A (en) Method and apparatus for detecting facial live varnish, and electronic device
CN105160739B (en) Automatic identification equipment, method and access control system
CN108235770A (en) image identification method and cloud system
CN105303179A (en) Fingerprint identification method and fingerprint identification device
CN106874826A (en) Face key point-tracking method and device
CN108647712A (en) Processing method, processing equipment, client and the server of vehicle damage identification
CN108229324A (en) Gesture method for tracing and device, electronic equipment, computer storage media
KR20170026222A (en) Method and device for classifying an object of an image and corresponding computer program product and computer-readable medium
CN109657539B (en) Face value evaluation method and device, readable storage medium and electronic equipment
CN108197585A (en) Recognition algorithms and device
CN109389002A (en) Biopsy method and device
CN109753910A (en) Crucial point extracting method, the training method of model, device, medium and equipment
KR20160132370A (en) Methods of storing a set of biometric data templates and of matching biometrics, biometric matching apparatus and computer program
CN108717520A (en) A kind of pedestrian recognition methods and device again
CN109635021A (en) A kind of data information input method, device and equipment based on human testing
CN106228133A (en) User authentication method and device
CN110427849A (en) Face pose determination method and device, storage medium and electronic equipment
CN111382791A (en) Deep learning task processing method, image recognition task processing method and device
CN111310531A (en) Image classification method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210302

Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

TA01 Transfer of patent application right
CB02 Change of applicant information

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

CB02 Change of applicant information