Summary of the invention
In order to solve the above technical problem, the present disclosure provides a training method for a liveness detection model, a liveness detection method, apparatuses, a medium, and devices.
To achieve the above object, according to a first aspect of the present disclosure, there is provided a training method for a liveness detection model, the method comprising:
determining, according to at least two facial images corresponding to a same face, a spectral feature map, a texture feature map, and a motion feature map corresponding to the at least two facial images;
fusing the spectral feature map, the texture feature map, and the motion feature map to obtain a first target feature map;
inputting the first target feature map into the liveness detection model to obtain a first determination result, thereby completing one training iteration of the liveness detection model, wherein the first determination result indicates whether the face is a live face; and
when the number of training iterations of the liveness detection model is less than a preset number, or when an accuracy of the liveness detection model determined from the determination result is less than a preset threshold, updating the liveness detection model according to the accuracy.
Optionally, the method further comprises:
after updating the liveness detection model, returning to the step of determining, according to the at least two facial images corresponding to the same face, the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images, until the number of training iterations of the liveness detection model is greater than or equal to the preset number and the accuracy of the liveness detection model is greater than or equal to the preset threshold.
Optionally, the fusing of the spectral feature map, the texture feature map, and the motion feature map to obtain the first target feature map comprises:
inputting the spectral feature map, the texture feature map, and the motion feature map into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model, and the first target feature map is the feature map output by the last feature layer in the feature map fusion model.
Optionally, the spectral feature map is any one of:
a Fourier spectrum map, a discrete cosine transform spectrum map, or a Gabor wavelet transform spectrum map;
the texture feature map is any one of:
an LBP feature map, an HOG feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SfM feature map.
According to a second aspect of the present disclosure, there is provided a liveness detection method, the method comprising:
obtaining at least two facial images to be tested corresponding to a same target face;
determining, according to the at least two facial images to be tested, a spectral feature map, a texture feature map, and a motion feature map corresponding to the at least two facial images to be tested;
fusing the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images to be tested to obtain a second target feature map; and
inputting the second target feature map into a liveness detection model to obtain a second determination result, the second determination result indicating whether the target face is a live face, wherein the liveness detection model is trained by the training method of any liveness detection model provided in the first aspect above.
According to a third aspect of the present disclosure, there is provided a training apparatus for a liveness detection model, the apparatus comprising:
a first determining module, configured to determine, according to at least two facial images corresponding to a same face, a spectral feature map, a texture feature map, and a motion feature map corresponding to the at least two facial images;
a first fusion module, configured to fuse the spectral feature map, the texture feature map, and the motion feature map to obtain a first target feature map;
a second determining module, configured to input the first target feature map into the liveness detection model to obtain a first determination result, thereby completing one training iteration of the liveness detection model, wherein the first determination result indicates whether the face is a live face; and
an update module, configured to, when the number of training iterations of the liveness detection model is less than a preset number, or when an accuracy of the liveness detection model determined from the determination result is less than a preset threshold, update the liveness detection model according to the accuracy.
Optionally, after the update module updates the liveness detection model, the first determining module is triggered to determine, according to the at least two facial images corresponding to the same face, the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images, until the number of training iterations of the liveness detection model is greater than or equal to the preset number and the accuracy of the liveness detection model is greater than or equal to the preset threshold.
Optionally, the second determining module is configured to:
input the spectral feature map, the texture feature map, and the motion feature map into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model, and the first target feature map is the feature map output by the last feature layer in the feature map fusion model.
Optionally, the spectral feature map is any one of:
a Fourier spectrum map, a discrete cosine transform spectrum map, or a Gabor wavelet transform spectrum map;
the texture feature map is any one of:
an LBP feature map, an HOG feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SfM feature map.
According to a fourth aspect of the present disclosure, there is provided a liveness detection apparatus, the apparatus comprising:
an obtaining module, configured to obtain at least two facial images to be tested corresponding to a same target face;
a third determining module, configured to determine, according to the at least two facial images to be tested, a spectral feature map, a texture feature map, and a motion feature map corresponding to the at least two facial images to be tested;
a second fusion module, configured to fuse the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images to be tested to obtain a second target feature map; and
a fourth determining module, configured to input the second target feature map into a liveness detection model to obtain a second determination result, the second determination result indicating whether the target face is a live face, wherein the liveness detection model is trained by the training method of any liveness detection model provided in the first aspect above.
According to a fifth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the training method of any liveness detection model provided in the first aspect above.
According to a sixth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the liveness detection method provided in the second aspect above.
According to a seventh aspect of the present disclosure, there is provided an electronic device, comprising:
a memory having a computer program stored thereon; and
a processor, configured to execute the computer program in the memory to implement the steps of the training method of any liveness detection model provided in the first aspect above.
According to an eighth aspect of the present disclosure, there is provided an electronic device, comprising:
a memory having a computer program stored thereon; and
a processor, configured to execute the computer program in the memory to implement the steps of the liveness detection method provided in the second aspect above.
In the above technical solutions, features of the facial images in multiple dimensions, namely the spectral features, the texture features, and the motion features, are extracted and fused to obtain a first target feature map that characterizes the comprehensive features of the facial images. The liveness detection model is then trained based on the first target feature map, which effectively broadens the applicable scope of the trained liveness detection model while avoiding the limitations of detection based on a single feature, thereby improving the accuracy of the liveness detection model. Furthermore, because the liveness detection model in the present disclosure is trained on the first target feature map, liveness detection can be achieved based on the model without relying on special external hardware or on user cooperation with prescribed actions, which effectively reduces detection cost while improving user experience.
Other features and advantages of the present disclosure will be described in detail in the detailed description that follows.
Detailed description of embodiments
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely intended to describe and explain the present disclosure, and are not intended to limit the present disclosure.
Fig. 1 is a flowchart of a training method for a liveness detection model provided according to an embodiment of the present disclosure. As shown in Fig. 1, the method comprises the following steps.
In S11, a spectral feature map, a texture feature map, and a motion feature map corresponding to at least two facial images are determined according to the at least two facial images, which correspond to a same face.
The spectral feature map and the texture feature map may be extracted separately for each facial image, while the motion feature map corresponding to the at least two facial images may be obtained by detecting the changes of corresponding feature points across the multiple facial images of the same face.
In one embodiment, one of the extracted spectral feature maps may be selected as the spectral feature map corresponding to the at least two facial images, and one of the extracted texture feature maps may be selected as the texture feature map corresponding to the at least two facial images. For example, the selection may be random, or may be based on the clarity of the facial images; the present disclosure does not limit this.
In another embodiment, the spectral feature maps of the at least two facial images may be averaged to obtain the spectral feature map corresponding to the at least two facial images, and the texture feature maps of the at least two facial images may be averaged to obtain the texture feature map corresponding to the at least two facial images. The averaging may be an arithmetic mean or a weighted mean; the present disclosure does not limit this.
Optionally, the spectral feature map is any one of:
a Fourier spectrum map, a discrete cosine transform spectrum map, or a Gabor wavelet transform spectrum map;
the texture feature map is any one of:
an LBP (Local Binary Pattern) feature map, an HOG (Histogram of Oriented Gradients) feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SfM (Structure from Motion) feature map. A minimal extraction sketch follows this list.
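The following is a minimal sketch of one possible extractor per feature map family, using NumPy, scikit-image, and OpenCV. It shows the Fourier spectrum map, LBP feature map, and optical flow feature map variants from the list above; the function names and parameter values are illustrative assumptions, not part of the disclosure.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def spectral_map(gray):
    # Log-magnitude Fourier spectrum map; fftshift moves DC to the centre.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    return np.log1p(np.abs(spectrum)).astype(np.float32)

def texture_map(gray):
    # Uniform LBP feature map over 8 neighbours at radius 1.
    return local_binary_pattern(gray, P=8, R=1, method="uniform").astype(np.float32)

def motion_map(gray_prev, gray_next):
    # Dense Farneback optical flow between two images of the same face;
    # the per-pixel flow magnitude serves as the motion feature map.
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2).astype(np.float32)
```

All three functions expect 8-bit grayscale face crops of the same size; the discrete cosine transform, Gabor, HOG, and SfM variants listed above would slot in behind the same interfaces.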
In S12, the spectral feature map, the texture feature map, and the motion feature map are fused to obtain the first target feature map.
In the present disclosure, fusing the spectral feature map, the texture feature map, and the motion feature map yields a first target feature map that can characterize the comprehensive features of the facial images.
Regarding the spectral feature maps extracted from facial images: the spectral feature map corresponding to a live face contains more high-frequency information, whereas for a non-live face the high-frequency information is largely lost, so the corresponding spectral feature map contains less high-frequency information.

Regarding the texture feature maps extracted from facial images: the texture feature map corresponding to a live face contains more texture information, whereas for a non-live face the texture feature information is largely lost and the texture is smoother, so the corresponding texture feature map contains less texture information.

Regarding the motion feature maps extracted from facial images: the motion feature map corresponding to a live face exhibits irregular motion vectors, whereas the motion features in the motion feature map corresponding to a non-live face exhibit regular, consistent vectors.
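As a concrete illustration of the spectral difference, the sketch below measures the share of spectral energy outside a central low-frequency disc of a grayscale face crop; the cutoff radius is an assumed illustrative parameter, not something prescribed by the disclosure.

```python
import numpy as np

def high_freq_ratio(gray, radius_frac=0.25):
    # Fraction of spectral energy outside a central low-frequency disc.
    # Recaptured (photo/screen) faces tend to lose high frequencies, so a
    # live face would usually score higher than a non-live one.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    energy = np.abs(spectrum) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    low = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= (radius_frac * min(h, w)) ** 2
    return float(energy[~low].sum() / energy.sum())
```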
The above technical solution thus fully accounts for the feature differences between live-face images and non-live-face images across multiple dimensions, and at the same time fuses the spectral feature map, the texture feature map, and the motion feature map, so that a comprehensive feature representation of the facial images is obtained from these features. Training the liveness detection model on the first target feature map representing these comprehensive features broadens the applicable scope of the liveness detection model and also improves the accuracy of liveness detection performed on the basis of the model.
In S13, the first target feature map is input into the liveness detection model to obtain the first determination result, thereby completing one training iteration of the liveness detection model, wherein the first determination result indicates whether the face is a live face.

The liveness detection model may be implemented as a classifier, for example an SVM (Support Vector Machine) classifier or a kNN (k-Nearest Neighbor) classifier; the present disclosure does not limit this.
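A minimal sketch of the SVM variant of such a classifier, assuming that fused_maps (an array of first target feature maps), labels (1 for live, 0 for non-live), and new_map already exist; scikit-learn is used here purely for illustration.

```python
from sklearn.svm import SVC

# Flatten each fused feature map into a vector for the classifier.
X_train = fused_maps.reshape(len(fused_maps), -1)
classifier = SVC(kernel="rbf")
classifier.fit(X_train, labels)

# First determination result for one new first target feature map.
first_result = classifier.predict(new_map.reshape(1, -1))[0] == 1
```

Swapping in sklearn.neighbors.KNeighborsClassifier would give the kNN variant with the same fit/predict usage.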
In S14, when the number of training iterations of the liveness detection model is less than the preset number, or when the accuracy of the liveness detection model determined from the determination result is less than the preset threshold, the liveness detection model is updated according to the accuracy.
After one training iteration of the liveness detection model is completed, the accuracy of the model for that iteration can be calculated. For example, the accuracy of the liveness detection model may be determined from the determination results as follows: the accuracy of the current iteration is determined by comparing the current determination result with the image label, where, before the liveness detection model is trained on a facial image, the image must be labeled in advance as corresponding to a live face or a non-live face, so that the liveness detection model can be trained against the image label. The average of the accuracies over multiple training iterations is then taken as the accuracy of the liveness detection model.

The preset number and the preset threshold may be set according to the required detection precision of the liveness detection model: the higher the required precision, the larger the corresponding preset number and preset threshold. When the number of training iterations is less than the preset number, the liveness detection model has not yet been trained for enough iterations, and training must continue. When the accuracy of the liveness detection model determined from the determination results is less than the preset threshold, the accuracy of the model is insufficient and its error is large, so training must likewise continue to obtain a more accurate liveness detection model. It should be noted that updating the liveness detection model according to the accuracy is known in the art and is not described in detail here. A compact sketch of this training loop follows.
In the above technical solutions, features of the facial images in multiple dimensions, namely the spectral features, the texture features, and the motion features, are extracted and fused to obtain a first target feature map that characterizes the comprehensive features of the facial images. The liveness detection model is then trained based on the first target feature map, which effectively broadens the applicable scope of the trained liveness detection model while avoiding the limitations of detection based on a single feature, thereby improving the accuracy of the liveness detection model. Furthermore, because the liveness detection model in the present disclosure is trained on the first target feature map, liveness detection can be achieved based on the model without relying on special external hardware or on user cooperation with prescribed actions, which effectively reduces detection cost while improving user experience.
Optionally, the method further comprises:
after updating the liveness detection model, returning to the step of determining, according to the at least two facial images corresponding to the same face, the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images, until the number of training iterations of the liveness detection model is greater than or equal to the preset number and the accuracy of the liveness detection model is greater than or equal to the preset threshold.
There are facial images corresponding to multiple faces in the training samples. In one embodiment, when returning to step S11, different facial images corresponding to the same face may be selected, and the corresponding spectral feature map, texture feature map, and motion feature map are determined from those facial images. The determined spectral feature map, texture feature map, and motion feature map are then fused to obtain a new first target feature map, and the liveness detection model is trained based on the new first target feature map. The subsequent training steps have been described in detail above and are not repeated here.

In another embodiment, when returning to step S11, multiple facial images corresponding to a new face may be selected, and the corresponding spectral feature map, texture feature map, and motion feature map are determined from those facial images. The determined spectral feature map, texture feature map, and motion feature map are then fused to obtain a new first target feature map, and the liveness detection model is trained based on the new first target feature map. The subsequent training steps have been described in detail above and are not repeated here.
In the above technical solution, the liveness detection model is trained by cyclically executing the above steps (S11 to S14). When the number of training iterations of the updated liveness detection model is greater than or equal to the preset number and the accuracy of the liveness detection model is greater than or equal to the preset threshold, both the accuracy and the number of training iterations of the model satisfy the training requirements, and the training process can be terminated at that point. The above technical solution therefore allows the liveness detection model to be trained quickly and accurately, effectively ensuring both the applicable scope of the model and the precision of its detection results, and improving user experience.
Optionally, an exemplary implementation of step S12, in which the spectral feature map, the texture feature map, and the motion feature map are fused to obtain the first target feature map, is as follows:
the spectral feature map, the texture feature map, and the motion feature map are input into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model, and the first target feature map is the feature map output by the last feature layer in the feature map fusion model.
For example, the feature map fusion model may be trained in the following manner.

According to at least two facial images corresponding to a same face, the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images are determined; the manner of determining these feature maps has been described in detail above and is not repeated here.

The spectral feature map, the texture feature map, and the motion feature map are input into the feature map fusion model to obtain a determination result of the feature map fusion model. The convolutional neural network may be, for example, VGG, GoogLeNet, or ResNet.

When the determination error calculated from the determination result and the image label is greater than an error threshold, the feature map fusion model is updated and training continues, until the determination error of the feature map fusion model is less than or equal to the error threshold. The training of a convolutional neural network model is known in the art and is not described in detail here. When the spectral feature map, the texture feature map, and the motion feature map are input into the feature map fusion model, the last feature layer of the feature map fusion model is a fully connected layer, whose purpose is to map the features learned by the network into the label space; the feature map it outputs is therefore the result of fusing the input feature maps, and taking it as the first target feature map allows the comprehensive features of the image to be accurately characterized. The manner of extracting the feature map output by the last feature layer of the feature map fusion model is known in the art and is not repeated here.
In the above technical solution, the spectral feature map, the texture feature map, and the motion feature map are fused by the feature map fusion model, thereby obtaining a first target feature map that characterizes the comprehensive features of the image. Because the feature map fusion model is implemented as a convolutional neural network, the individual feature maps can be fused efficiently and quickly, which improves the accuracy of the first target feature map and in turn guarantees the accuracy of the liveness detection model. A sketch of such a fusion model is given below.
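As one way to make the fusion model concrete, here is a minimal PyTorch sketch under stated assumptions: it is a small custom network rather than VGG, GoogLeNet, or ResNet; it presumes the three feature maps were resized to a common spatial shape and batched as float tensors; and it reads the feature map output by the last feature layer as the output of the final fully connected layer, per the description above.

```python
import torch
import torch.nn as nn

class FeatureMapFusionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
            nn.Flatten(),
        )
        # Last feature layer: fully connected, mapping learned features
        # toward the label space; its output is the first target feature map.
        self.last_feature_layer = nn.Linear(32 * 8 * 8, 128)
        # Classification head used only while training the fusion model.
        self.classifier = nn.Linear(128, 2)

    def forward(self, spectral, texture, motion):
        # Stack the three (N, H, W) feature maps as input channels.
        x = torch.stack([spectral, texture, motion], dim=1)   # (N, 3, H, W)
        fused = self.last_feature_layer(self.backbone(x))
        return fused, self.classifier(fused)
```

During fusion-model training the classifier output is compared against the image labels until the determination error falls below the error threshold; afterwards only the fused output is kept as the first target feature map.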
The present disclosure also provides a liveness detection method. Fig. 2 is a flowchart of a liveness detection method provided according to an embodiment of the present disclosure. As shown in Fig. 2, the method comprises the following steps.
In S21, at least two facial images to be tested corresponding to a same target face are obtained. Still images may be obtained directly; alternatively, target images containing the target face may be grabbed from a video, and the facial images to be tested then obtained through a face extraction algorithm. For example, the face extraction algorithm may be the SeetaFace algorithm, the MTCNN algorithm, or the like; a sketch of such extraction follows.
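A minimal sketch of grabbing facial images to be tested from a video, assuming the third-party mtcnn package as the face extraction algorithm; the function name and the two-frame default are illustrative.

```python
import cv2
from mtcnn import MTCNN  # pip package "mtcnn", one possible extractor

def grab_face_crops(video_path, n_frames=2):
    # Read frames and crop the first detected face in each; frame selection
    # and error handling are simplified for illustration.
    detector = MTCNN()
    capture = cv2.VideoCapture(video_path)
    crops = []
    while len(crops) < n_frames:
        ok, frame = capture.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MTCNN expects RGB
        faces = detector.detect_faces(rgb)
        if faces:
            x, y, w, h = faces[0]["box"]
            crops.append(rgb[y:y + h, x:x + w])
    capture.release()
    return crops
```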
In S22, a spectral feature map, a texture feature map, and a motion feature map corresponding to the at least two facial images to be tested are determined according to the at least two facial images to be tested.

In S23, the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images to be tested are fused to obtain a second target feature map.

In S24, the second target feature map is input into the liveness detection model to obtain a second determination result, the second determination result indicating whether the target face is a live face, wherein the liveness detection model is trained by the above training method for a liveness detection model.
The manner of determining the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images to be tested, and the manner of fusing the spectral feature map, the texture feature map, and the motion feature map, have been described in detail above and are not repeated here.
In the above technical solution, the spectral feature map, the texture feature map, and the motion feature map corresponding to the facial images to be tested are extracted and fused to obtain the second target feature map, and the second target feature map is then input into the liveness detection model to determine whether the target face is a live face. When liveness detection is performed based on the method provided by the present disclosure, it can therefore be achieved without relying on special external hardware (for example, an infrared camera) or on user cooperation with prescribed actions, effectively reducing detection cost while improving user experience. An end-to-end sketch is given below.
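Finally, a hedged end-to-end sketch of S21 to S24, reusing grab_face_crops and the extractors sketched earlier; fuse() and the trained model's predict() interface remain hypothetical placeholders, and the 128x128 crop size is an arbitrary illustrative choice.

```python
import cv2

def is_live_face(video_path, fuse, model):
    face_a, face_b = grab_face_crops(video_path, n_frames=2)      # S21
    # Resize to a common shape and convert to grayscale for the extractors.
    gray_a = cv2.cvtColor(cv2.resize(face_a, (128, 128)), cv2.COLOR_RGB2GRAY)
    gray_b = cv2.cvtColor(cv2.resize(face_b, (128, 128)), cv2.COLOR_RGB2GRAY)
    second_target = fuse(spectral_map(gray_a),                    # S22 + S23
                         texture_map(gray_a),
                         motion_map(gray_a, gray_b))
    return model.predict(second_target) == 1                      # S24
```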
The present disclosure also provides a training apparatus for a liveness detection model. Fig. 3 is a block diagram of a training apparatus for a liveness detection model provided according to an embodiment of the present disclosure. As shown in Fig. 3, the apparatus 10 comprises:
a first determining module 101, configured to determine, according to at least two facial images corresponding to a same face, a spectral feature map, a texture feature map, and a motion feature map corresponding to the at least two facial images;
a first fusion module 102, configured to fuse the spectral feature map, the texture feature map, and the motion feature map to obtain a first target feature map;
a second determining module 103, configured to input the first target feature map into the liveness detection model to obtain a first determination result, thereby completing one training iteration of the liveness detection model, wherein the first determination result indicates whether the face is a live face; and
an update module 104, configured to, when the number of training iterations of the liveness detection model is less than a preset number, or when an accuracy of the liveness detection model determined from the determination result is less than a preset threshold, update the liveness detection model according to the accuracy.
Optionally, after the update module 104 updates the liveness detection model, the first determining module 101 is triggered to determine, according to the at least two facial images corresponding to the same face, the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images, until the number of training iterations of the liveness detection model is greater than or equal to the preset number and the accuracy of the liveness detection model is greater than or equal to the preset threshold.
Optionally, the second determining module 103 is configured to:
input the spectral feature map, the texture feature map, and the motion feature map into a feature map fusion model to obtain the first target feature map, wherein the feature map fusion model is a convolutional neural network model, and the first target feature map is the feature map output by the last feature layer in the feature map fusion model.
Optionally, the spectral feature map is any one of:
a Fourier spectrum map, a discrete cosine transform spectrum map, or a Gabor wavelet transform spectrum map;
the texture feature map is any one of:
an LBP feature map, an HOG feature map, a histogram contrast feature map, or an image blur feature map; and
the motion feature map is an optical flow feature map or an SfM feature map.
The present disclosure also provides a liveness detection apparatus. Fig. 4 is a block diagram of a liveness detection apparatus provided according to an embodiment of the present disclosure. As shown in Fig. 4, the apparatus 20 comprises:
an obtaining module 201, configured to obtain at least two facial images to be tested corresponding to a same target face;
a third determining module 202, configured to determine, according to the at least two facial images to be tested, a spectral feature map, a texture feature map, and a motion feature map corresponding to the at least two facial images to be tested;
a second fusion module 203, configured to fuse the spectral feature map, the texture feature map, and the motion feature map corresponding to the at least two facial images to be tested to obtain a second target feature map; and
a fourth determining module 204, configured to input the second target feature map into a liveness detection model to obtain a second determination result, the second determination result indicating whether the target face is a live face, wherein the liveness detection model is trained by the training method of any liveness detection model provided by the present disclosure.
With regard to the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related methods and is not elaborated here.
Fig. 5 is a block diagram of an electronic device 500 according to an exemplary embodiment. As shown in Fig. 5, the electronic device 500 may include a processor 501 and a memory 502, and may further include one or more of a multimedia component 503, an input/output (I/O) interface 504, and a communication component 505.
The processor 501 is configured to control the overall operation of the electronic device 500, so as to complete all or part of the steps of the above training method for a liveness detection model or of the above liveness detection method. The memory 502 is configured to store various types of data to support operation on the electronic device 500; such data may include, for example, instructions for any application or method operated on the electronic device 500, as well as application-related data such as contact data, transmitted and received messages, pictures, audio, and video. The memory 502 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.

The multimedia component 503 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is configured to output and/or input audio signals. For example, the audio component may include a microphone configured to receive external audio signals; the received audio signals may be further stored in the memory 502 or transmitted through the communication component 505. The audio component further includes at least one speaker configured to output audio signals. The I/O interface 504 provides an interface between the processor 501 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual buttons or physical buttons. The communication component 505 is configured to perform wired or wireless communication between the electronic device 500 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; the corresponding communication component 505 may accordingly include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the above training method for a liveness detection model or the above liveness detection method.
In another exemplary embodiment, there is also provided a computer-readable storage medium including program instructions which, when executed by a processor, implement the steps of the above training method for a liveness detection model or of the above liveness detection method. For example, the computer-readable storage medium may be the above memory 502 including program instructions, which can be executed by the processor 501 of the electronic device 500 to complete the above training method for a liveness detection model or the above liveness detection method.
Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment. For example, the electronic device 600 may be provided as a server. Referring to Fig. 6, the electronic device 600 includes one or more processors 622, and a memory 632 for storing computer programs executable by the processor 622. The computer programs stored in the memory 632 may include one or more modules, each of which corresponds to a set of instructions. Further, the processor 622 may be configured to execute the computer programs to perform the above training method for a liveness detection model or the above liveness detection method.

The electronic device 600 may further include a power component 626 and a communication component 650. The power component 626 may be configured to perform power management of the electronic device 600, and the communication component 650 may be configured to implement communication of the electronic device 600, for example wired or wireless communication. In addition, the electronic device 600 may further include an input/output (I/O) interface 658. The electronic device 600 may operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, or Linux™.
In another exemplary embodiment, there is also provided a computer-readable storage medium including program instructions which, when executed by a processor, implement the steps of the above training method for a liveness detection model or of the above liveness detection method. For example, the computer-readable storage medium may be the above memory 632 including program instructions, which can be executed by the processor 622 of the electronic device 600 to complete the above training method for a liveness detection model or the above liveness detection method.
Preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details in the above embodiments. Within the scope of the technical concept of the present disclosure, various simple variations can be made to the technical solutions of the present disclosure, and these simple variations all fall within the protection scope of the present disclosure.

It should further be noted that the specific technical features described in the above specific embodiments may be combined in any suitable manner, provided there is no contradiction. To avoid unnecessary repetition, the present disclosure does not further describe the various possible combinations.

In addition, the various different embodiments of the present disclosure may also be combined arbitrarily, and such combinations should likewise be regarded as content disclosed by the present disclosure, as long as they do not depart from the idea of the present disclosure.