CN108171204A - Detection method and device - Google Patents
- Publication number
- CN108171204A (application CN201810045159.3A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- camera
- image
- distance
- human face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose a detection method and device. One specific embodiment of the method includes: continuously adjusting the focal length of a camera while shooting a target object to generate an image sequence; for each pixel of the face region in the captured images, determining the image in the sequence in which that pixel is sharpest as the target image of the pixel, determining the focal length at which the camera shot the target image, and determining, based on that focal length, the distance between the camera and the physical location on the target object corresponding to the pixel; and determining, based on the distances between the camera and the physical locations corresponding to the pixels in the face region, whether the target object is a live body. This embodiment improves the flexibility of face liveness detection.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, in particular to the field of Internet technology, and more particularly to a detection method and device.
Background
With the development of computer technology, face recognition technology is being applied more and more widely. In general, face recognition requires a video camera or camera to capture images or a video stream containing a face, automatically detect and track the face in the images, and then perform related operations on the detected face, for example face liveness detection.
Existing face liveness detection methods usually require multiple cameras to capture images in order to generate a face depth map for liveness recognition.
Summary of the invention
Embodiments of the present application propose a detection method and device.
In a first aspect, an embodiment of the present application provides a detection method for a mobile device equipped with a camera of adjustable focal length. The method includes: continuously adjusting the focal length of the camera while shooting a target object to generate an image sequence; for each pixel of the face region in the captured images, determining the image in the sequence in which that pixel is sharpest as the target image of the pixel, determining the focal length at which the camera shot the target image, and determining, based on that focal length, the distance between the camera and the physical location on the target object corresponding to the pixel; and determining, based on the distances between the camera and the physical locations corresponding to the pixels in the face region, whether the target object is a live body.
In some embodiments, after continuously adjusting the focal length of the camera to shoot the target object and generating the image sequence, the method further includes: extracting any image from the image sequence, performing face detection on the extracted image, determining the position of the face region in the extracted image, and taking this position as the position of the face region in every image of the sequence.
In some embodiments, determining whether the target object is a live body based on the distances between the camera and the physical locations corresponding to the pixels in the face region includes: generating a face depth map based on those distances; and inputting the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, where the liveness detection model is used to detect whether the face in a face depth map is the face of a live body.
In some embodiments, determining whether the target object is a live body includes: in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are all consistent, determining that the target object is not a live body.
In some embodiments, determining whether the target object is a live body further includes: in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are inconsistent, determining that the target object is a live body.
In some embodiments, determining whether the target object is a live body includes: in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are inconsistent, generating a face depth map based on those distances; and inputting the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, where the liveness detection model is used to detect whether the face in a face depth map is the face of a live body.
In a second aspect, an embodiment of the present application provides a detection device for a mobile device equipped with a camera of adjustable focal length. The device includes: a shooting unit configured to continuously adjust the focal length of the camera while shooting a target object, generating an image sequence; a first determination unit configured to, for each pixel of the face region in the captured images, determine the image in the sequence in which that pixel is sharpest as the target image of the pixel, determine the focal length at which the camera shot the target image, and determine, based on that focal length, the distance between the camera and the physical location on the target object corresponding to the pixel; and a second determination unit configured to determine, based on the distances between the camera and the physical locations corresponding to the pixels in the face region, whether the target object is a live body.
In some embodiments, the device further includes: a third determination unit configured to extract any image from the image sequence, perform face detection on the extracted image, determine the position of the face region in the extracted image, and take this position as the position of the face region in every image of the sequence.
In some embodiments, the second determination unit includes: a first generation module configured to generate a face depth map based on the distances between the camera and the physical locations corresponding to the pixels in the face region; and a first input module configured to input the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, where the liveness detection model is used to detect whether the face in a face depth map is the face of a live body.
In some embodiments, the second determination unit is further configured to: in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are consistent, determine that the target object is not a live body.
In some embodiments, the second determination unit is further configured to: in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are inconsistent, determine that the target object is a live body.
In some embodiments, the second determination unit includes: a second generation module configured to, in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are inconsistent, generate a face depth map based on those distances; and a second input module configured to input the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, where the liveness detection model is used to detect whether the face in a face depth map is the face of a live body.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the detection method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the method of any embodiment of the detection method.
According to the detection method and device provided by the embodiments of the present application, the focal length of the camera is continuously adjusted while shooting a target object to generate an image sequence; then, for each pixel of the face region in the images, the image in the sequence in which that pixel is sharpest is determined as the target image of the pixel, the focal length at which the camera shot the target image is determined, and the distance between the camera and the physical location on the target object corresponding to the pixel is determined based on that focal length; finally, whether the target object is a live body is determined based on the distances between the camera and the physical locations corresponding to the pixels in the face region. Images thus need to be captured with only a single camera of adjustable focal length rather than with multiple cameras, which improves the flexibility of face liveness detection.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flow chart of one embodiment of the detection method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the detection method according to the present application;
Fig. 4 is a structural diagram of one embodiment of the detection device according to the present application;
Fig. 5 is a structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which the detection method or detection device of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include mobile devices 101, 102, 103, a network 104 and a server 105. The network 104 is the medium providing communication links between the mobile devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber optic cables. A user may use the mobile devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. The mobile devices 101, 102, 103 may be equipped with cameras and have various communication client applications installed, such as web browser applications, shopping applications, search applications, instant messaging tools, email clients and social platform software.
The mobile devices 101, 102, 103 may be various electronic devices equipped with a camera and a display screen, including but not limited to smart phones, tablet computers and laptop computers.
The server 105 may be a server providing various services, such as a server performing user login authentication. The server may analyze and otherwise process information (such as distances and images) sent by the mobile devices 101, 102, 103, and return the processing results to the mobile devices 101, 102, 103.
It should be noted that the detection method provided by the embodiments of the present application is generally executed by the mobile devices 101, 102, 103; accordingly, the detection device is generally provided in the mobile devices 101, 102, 103. It should be pointed out that the mobile devices 101, 102, 103 may also not interact with the server 105 during or after face liveness detection, in which case the server 105 and the network 104 may be absent from the exemplary system architecture 100.
It should be understood that the numbers of mobile devices, networks and servers in Fig. 1 are merely illustrative. Any number of mobile devices, networks and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the detection method for a mobile device according to the present application is shown. The mobile device is equipped with a camera of adjustable focal length, and the detection method includes the following steps:
Step 201: continuously adjust the focal length of the camera while shooting a target object, generating an image sequence.
In this embodiment, the electronic device on which the detection method runs (for example the mobile devices 101, 102, 103 shown in Fig. 1) may continuously adjust the focal length of the camera while shooting the target object, generating an image sequence. The target object may be a human face, a human body presenting a face, a head presenting a face, and so on. The image sequence may consist of multiple images (for example 256) arranged in shooting order, and each image in the sequence may correspond to one focal length. In practice, the electronic device may store the correspondence between each image and its focal length.
It should be noted that, while the focal length is being continuously adjusted to shoot the target object, the elapsed time is very short, so the positions of the target object and the camera can be considered fixed; the position of the face region in each image of the sequence can therefore also be considered fixed.
Step 202: for each pixel of the face region in the captured images, determine the image in the sequence in which that pixel is sharpest as the target image of the pixel, determine the focal length at which the camera shot the target image, and determine, based on the focal length, the distance between the camera and the physical location on the target object corresponding to the pixel.
In this embodiment, for each pixel of the face region in the captured images, the electronic device may first determine the image in the sequence in which that pixel is sharpest as the target image of the pixel, then determine the focal length at which the camera shot the target image, and determine, based on the focal length, the distance between the camera and the physical location on the target object (for example the nose, eyes, mouth, eyebrows or chin) corresponding to the pixel. Here, the electronic device may perform face detection on the captured images during or after shooting, using various existing face detection methods, to determine the position of the face region in the images. The electronic device may perform face detection on one image of the sequence (for example the 1st, the 100th or the last), or on multiple images of the sequence.
For each pixel of the face region, the electronic device may determine the target image of the pixel as follows:
First, the electronic device may determine the RGB (Red Green Blue) value of the pixel in each image. In practice, an RGB value may indicate brightness and may be represented by integers (i.e. three integers). Normally each of R, G and B has 256 levels of brightness, represented by the numbers 0 to 255. Here, the electronic device may process the three integers (for example take their maximum or their average), and use the processed value as the definition value of the pixel.
Second, the RGB values of the adjacent pixels (for example 8) of the pixel in each image may be determined. It should be noted that there may be multiple adjacent pixels; the electronic device may apply the same processing (for example taking the maximum or the average) to the RGB value of each adjacent pixel (the values of the three channels) to obtain the definition value of that adjacent pixel. The definition values of the adjacent pixels may then be processed (for example averaged, or the maximum taken), and the processed value determined as the adjacent-pixel definition value of the pixel.
Third, for each image, the electronic device determines the difference between the definition value of the pixel and the adjacent-pixel definition value of the pixel, and determines the image in the sequence with the maximum difference as the target image of the pixel. In practice, the greater the difference, the sharper the pixel; therefore, the image in which the pixel is sharpest is the target image of the pixel.
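The three steps above can be sketched with NumPy as follows. This is a minimal reading of the scheme, assuming the per-pixel definition value is the average of the three channels and interpreting "maximum difference" as the largest absolute contrast between a pixel and the mean of its 8 neighbours:

```python
import numpy as np

def target_image_indices(stack):
    """For each pixel, index of the image in the focal stack where it is sharpest.

    stack: float array of shape (n_images, H, W, 3) holding RGB values.
    """
    defn = stack.mean(axis=-1)  # (n, H, W): per-pixel definition values
    # Sum of the 8 neighbours of every pixel, via edge-padded shifts.
    padded = np.pad(defn, ((0, 0), (1, 1), (1, 1)), mode="edge")
    neigh_sum = np.zeros_like(defn)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh_sum += padded[:, 1 + dy:1 + dy + defn.shape[1],
                                   1 + dx:1 + dx + defn.shape[2]]
    # Local contrast: difference between the pixel and its neighbour average.
    contrast = np.abs(defn - neigh_sum / 8.0)
    return contrast.argmax(axis=0)  # (H, W): target-image index per pixel
```

In practice such a sharpness map would be computed only over the detected face region, and other local sharpness measures (e.g. Laplacian energy) could replace the neighbour-difference measure described in the text.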
It should be noted that, for each pixel of the face region in the captured images, after the target image of the pixel is determined, since each image corresponds to one focal length and the electronic device may store the correspondence between each captured image and its focal length, the electronic device may directly determine the focal length at which the target image was shot. In addition, based on the principle of optical imaging, the electronic device may determine, from the focal length corresponding to the target image, the distance between the camera and the physical location on the target object corresponding to the pixel. The principle of optical imaging is a widely studied and applied known technology and is not described in detail here.
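The patent leaves the focal-length-to-distance mapping to the "principle of optical imaging". One illustrative reading, under a thin-lens assumption with a fixed lens-to-sensor distance during the sweep, would be:

```python
def object_distance(f, v):
    """Object distance u from the thin-lens equation 1/f = 1/v + 1/u.

    f: focal length at which the pixel was sharpest;
    v: lens-to-sensor distance, assumed fixed during the sweep.
    Both in the same unit; requires v > f. The thin-lens model and the
    fixed-sensor-distance assumption are illustrative, not from the patent.
    """
    if v <= f:
        raise ValueError("sensor distance must exceed the focal length")
    return f * v / (v - f)
```

For example, with f = 50 mm and v = 51 mm, the pixel's physical location lies about 2550 mm from the lens; sweeping f while v stays fixed thus maps each sharpest-focus focal length to one object distance.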
In some optional implementations of this embodiment, after generating the image sequence, the electronic device may extract any image from the sequence, perform face detection on the extracted image, determine the position of the face region in the extracted image, and take this position as the position of the face region in every image of the sequence.
Step 203: determine whether the target object is a live body based on the distances between the camera and the physical locations corresponding to the pixels in the face region.
In this embodiment, the electronic device may use various approaches to determine, based on the distances between the camera and the physical locations corresponding to the pixels in the face region, whether the target object is a live body. As an example, in response to determining that these distances satisfy a preset condition (for example the distances are all identical, or the difference between the maximum and the minimum distance is less than a preset value), it may be determined that the target object is not a live body; if the distances do not satisfy the preset condition, it may be determined that the target object is a live body.
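The example condition above can be sketched as a flatness test: a printed photo or a screen is (nearly) planar, so the per-pixel distances are (nearly) equal, while a real face has depth relief. The threshold value below is an illustrative assumption, not a value from the patent:

```python
import numpy as np

def is_live(distances, flatness_threshold=0.01):
    """Coarse liveness test over the face region's per-pixel distances.

    distances: iterable or array of camera-to-surface distances (same unit
    as flatness_threshold). Returns False when the surface is too flat to
    be a live face.
    """
    d = np.asarray(distances, dtype=float)
    spread = d.max() - d.min()  # max-min distance difference
    return bool(spread > flatness_threshold)
```

A more robust variant could fit a plane to the distances and threshold the residual, which would also reject a flat photo held at an angle to the camera.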
In some optional implementations of this embodiment, the electronic device may determine whether the target object is a live body as follows:
First, a face depth map may be generated based on the distances between the camera and the physical locations corresponding to the pixels in the face region. A face depth map is a kind of depth image, also called a range image: an image whose pixel values are the distances (depths) from the camera to the physical locations on the target object corresponding to the pixels in the face region. A face depth map directly reflects the geometry of the face surface. In practice, a face depth map may be converted into point cloud data through coordinate transformation.
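Turning the per-pixel distances into such a depth image can be sketched as follows; the 8-bit normalization is an illustrative choice, since the patent only requires that pixel values encode the distances:

```python
import numpy as np

def distances_to_depth_map(distances):
    """Normalize per-pixel distances of the face region into an 8-bit depth map.

    distances: float array of shape (H, W) with camera-to-surface distances.
    """
    d = np.asarray(distances, dtype=float)
    span = d.max() - d.min()
    if span == 0:
        return np.zeros(d.shape, dtype=np.uint8)  # perfectly flat target
    scaled = (d - d.min()) / span * 255.0         # map [min, max] -> [0, 255]
    return scaled.round().astype(np.uint8)
```

The resulting array has the same height and width as the face region and can be fed to a classifier as a single-channel image.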
Second, the face depth map may be input into a pre-trained liveness detection model to obtain a liveness detection result, where the liveness detection model is used to detect whether the face in a face depth map is the face of a live body. The liveness detection model may be obtained by training an existing model capable of classification (for example a convolutional neural network (CNN) or a support vector machine (SVM)) in a supervised manner on training samples. The training samples may include a large number of depth images, each with an annotation characterizing whether it shows a live face. In practice, the depth images may be used as the input of the model and their annotations as the output, the model may be trained using machine learning methods, and the trained model determined as the liveness detection model.
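The supervised training loop described above can be illustrated with a toy classifier. The patent calls for a CNN or SVM; the nearest-centroid rule and the hand-crafted depth features below are dependency-free stand-ins used only to show the fit/predict shape of the procedure:

```python
import numpy as np

def depth_features(depth_map):
    """Tiny hand-crafted feature vector for a face depth map
    (an illustrative stand-in for the learned features of a CNN)."""
    d = np.asarray(depth_map, dtype=float)
    return np.array([d.std(), d.max() - d.min()])

class NearestCentroidLiveness:
    """Toy supervised classifier over labelled depth maps."""
    def fit(self, depth_maps, labels):
        X = np.array([depth_features(m) for m in depth_maps])
        y = np.asarray(labels)
        # One feature-space centroid per class label (0 = spoof, 1 = live).
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self

    def predict(self, depth_map):
        x = depth_features(depth_map)
        return min(self.centroids_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))
```

A production system would instead train a CNN on many annotated depth images, but the interface is the same: fit on labelled depth maps, then predict on the depth map produced in Step 203.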
In some optional implementations of this embodiment, in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are consistent, the electronic device may determine that the target object is not a live body.
In some optional implementations of this embodiment, in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are inconsistent, the electronic device may determine that the target object is a live body.
In some optional implementations of this embodiment, in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are inconsistent, a face depth map may be generated based on those distances. The face depth map may then be input into a pre-trained liveness detection model to obtain a liveness detection result, where the liveness detection model is used to detect whether the face in a face depth map is the face of a live body.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the detection method according to this embodiment. In the application scenario of Fig. 3, a mobile device 301 equipped with a camera of adjustable focal length first continuously adjusts the focal length of the camera while shooting a target object 302, generating an image sequence of 256 images. Then, for each pixel of the face region in the captured images, the mobile device 301 determines the image in the sequence in which that pixel is sharpest as the target image of the pixel, determines the focal length at which the camera shot the target image, and determines, based on the focal length, the distance between the camera and the physical location on the target object corresponding to the pixel. Finally, the mobile device determines whether the target object is a live body based on the distances between the camera and the physical locations corresponding to the pixels in the face region.
In the method provided by the above embodiment of the present application, the focal length of the camera is continuously adjusted while shooting a target object to generate an image sequence; then, for each pixel of the face region in the images, the image in the sequence in which that pixel is sharpest is determined as the target image of the pixel, the focal length at which the camera shot the target image is determined, and the distance between the camera and the physical location on the target object corresponding to the pixel is determined based on that focal length; finally, whether the target object is a live body is determined based on these distances. Images thus need to be captured with only a single camera of adjustable focal length rather than with multiple cameras, which improves the flexibility of face liveness detection.
With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the present application provides one embodiment of a detection device. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device may specifically be applied in various mobile devices equipped with a camera of adjustable focal length.
As shown in Fig. 4, the detection device 400 of this embodiment includes: a shooting unit 401 configured to continuously adjust the focal length of the camera while shooting a target object, generating an image sequence; a first determination unit 402 configured to, for each pixel of the face region in the captured images, determine the image in the sequence in which that pixel is sharpest as the target image of the pixel, determine the focal length at which the camera shot the target image, and determine, based on the focal length, the distance between the camera and the physical location on the target object corresponding to the pixel; and a second determination unit 403 configured to determine, based on the distances between the camera and the physical locations corresponding to the pixels in the face region, whether the target object is a live body.
In some optional implementations of this embodiment, the detection device 400 may further include a third determination unit (not shown), which may be configured to extract any image from the image sequence, perform face detection on the extracted image, determine the position of the face region in the extracted image, and take this position as the position of the face region in every image of the sequence.
In some optional implementations of this embodiment, the second determination unit 403 may include a first generation module and a first input module (not shown). The first generation module may be configured to generate a face depth map based on the distances between the camera and the physical locations corresponding to the pixels in the face region. The first input module may be configured to input the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, where the liveness detection model is used to detect whether the face in a face depth map is the face of a live body.
In some optional implementations of this embodiment, the second determination unit 403 may be further configured to, in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are consistent, determine that the target object is not a live body.
In some optional implementations of this embodiment, the second determination unit 403 may be further configured to, in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are inconsistent, determine that the target object is a live body.
In some optional implementations of this embodiment, the second determination unit 403 may include a second generation module and a second input module (not shown). The second generation module may be configured to, in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are inconsistent, generate a face depth map based on those distances. The second input module may be configured to input the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, where the liveness detection model is used to detect whether the face in a face depth map is the face of a live body.
In the apparatus provided by the above embodiment of the present application, the shooting unit 401 continuously adjusts the focal length of the camera while shooting the target object, to generate an image sequence. Then, for each pixel of the face region in the images, the first determination unit 402 determines the image in the sequence in which that pixel is sharpest as the target image of the pixel, determines the focal length at which the camera captured the target image, and, based on that focal length, determines the distance between the camera and the physical location of the target object corresponding to the pixel. Finally, the second determination unit 403 determines whether the target object is a living body based on the distances between the camera and the physical locations corresponding to the pixels in the face region. Images thus need to be collected only with a single camera of adjustable focal length rather than with multiple cameras, which improves the flexibility of face liveness detection.
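The per-pixel "sharpest image in the sequence" selection amounts to a classic depth-from-focus step. A minimal sketch under assumed choices (grayscale focal stack; a discrete-Laplacian magnitude as the sharpness measure, which the patent does not prescribe):

```python
import numpy as np

def laplacian_sharpness(stack: np.ndarray) -> np.ndarray:
    """Per-pixel sharpness for each image in a focal stack.

    `stack` has shape (n_images, H, W).  The magnitude of a discrete
    Laplacian is used as the sharpness measure; this particular
    measure is an assumption -- the patent only requires picking the
    image in which each pixel is sharpest.
    """
    lap = (np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1) +
           np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2) -
           4.0 * stack)
    return np.abs(lap)

def per_pixel_target_index(stack: np.ndarray) -> np.ndarray:
    """For every pixel, the index of the stack image in which it is
    sharpest (i.e., that pixel's 'target image')."""
    return laplacian_sharpness(stack).argmax(axis=0)

# Two-image stack: a bright spot is in focus only in the second image,
# so that pixel's target image has index 1.
stack = np.zeros((2, 5, 5))
stack[1, 2, 2] = 1.0
print(per_pixel_target_index(stack)[2, 2])  # -> 1
```

Since each image in the sequence was captured at a known focal length, the resulting index map converts directly into a focal length, and hence a distance, per pixel.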
Referring now to Fig. 5, a structural diagram of a computer system 500 suitable for implementing the electronic device of the embodiments of the present application is shown. The electronic device shown in Fig. 5 is merely an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present application.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, the ROM 502 and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a touch screen, a touch pad, etc.; an output portion 507 including a liquid crystal display (LCD), a speaker, etc.; a storage portion 508 including a hard disk, etc.; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A driver 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a semiconductor memory, is mounted on the driver 510 as needed, so that a computer program read therefrom may be installed in the storage portion 508 as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-mentioned functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by, or in connection with, an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The flow charts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to various embodiments of the present application. In this regard, each box in a flow chart or block diagram may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the accompanying drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in a block diagram and/or flow chart, and any combination of boxes in a block diagram and/or flow chart, may be implemented by a dedicated hardware-based system executing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software or by means of hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising a shooting unit, a first determination unit and a second determination unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the shooting unit may also be described as "a unit that continuously adjusts the focal length of the camera while shooting the target object".
In another aspect, the present application further provides a computer-readable medium. The computer-readable medium may be included in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the apparatus, the apparatus is caused to: continuously adjust the focal length of the camera while shooting a target object, to generate an image sequence; for each pixel of the face region in the captured images, determine the image in the sequence in which that pixel is sharpest as the target image of the pixel, determine the focal length at which the camera captured the target image, and determine, based on the focal length, the distance between the camera and the physical location of the target object corresponding to the pixel; and determine whether the target object is a living body based on the distances between the camera and the physical locations corresponding to the pixels in the face region.
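The step of determining the distance from the focal length can be illustrated with the thin-lens equation 1/f = 1/u + 1/v. The fixed lens-to-sensor distance v and the example numbers below are assumptions for illustration; the patent does not specify a particular optical model:

```python
def object_distance(f_mm: float, v_mm: float) -> float:
    """Distance u (same unit as the inputs) of an in-focus object.

    From the thin-lens equation 1/f = 1/u + 1/v, where f is the focal
    length at which the pixel appeared sharpest and v is the (assumed
    fixed) lens-to-sensor distance:  u = f * v / (v - f).
    """
    if v_mm <= f_mm:
        raise ValueError("v must exceed f; otherwise focus is at or beyond infinity")
    return f_mm * v_mm / (v_mm - f_mm)

# Example: sensor plane 4.1 mm behind the lens; a pixel was sharpest
# at f = 4.0 mm, so its physical location is about 164 mm from the camera.
print(round(object_distance(4.0, 4.1)))  # -> 164
```

Sweeping f while v stays fixed moves the in-focus plane through the scene, which is what lets the per-pixel sharpest focal length be converted into a per-pixel distance.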
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept — for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (14)
1. A detection method for a mobile device, the mobile device being equipped with a camera of adjustable focal length, the method comprising:
continuously adjusting the focal length of the camera while shooting a target object, to generate an image sequence;
for each pixel of the face region in the captured images, determining the image in the image sequence in which the pixel is sharpest as the target image of the pixel, determining the focal length at which the camera captured the target image, and determining, based on the focal length, the distance between the camera and the physical location of the target object corresponding to the pixel; and
determining whether the target object is a living body based on the distances between the camera and the physical locations corresponding to the pixels in the face region.
2. The detection method according to claim 1, wherein, after continuously adjusting the focal length of the camera while shooting the target object to generate the image sequence, the method further comprises:
extracting an image from the image sequence, performing face detection on the extracted image, determining the position of the face region in the extracted image, and taking the determined position as the position of the face region in each image of the image sequence.
3. The detection method according to claim 1, wherein determining whether the target object is a living body based on the distances between the camera and the physical locations corresponding to the pixels in the face region comprises:
generating a face depth map based on the distances between the camera and the physical locations corresponding to the pixels in the face region; and
inputting the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, wherein the liveness detection model is used to detect whether the face represented by a face depth map is the face of a living body.
4. The detection method according to claim 1, wherein determining whether the target object is a living body based on the distances between the camera and the physical locations corresponding to the pixels in the face region comprises:
in response to determining that the physical locations corresponding to the pixels in the face region are all at the same distance from the camera, determining that the target object is not a living body.
5. The detection method according to claim 4, wherein determining whether the target object is a living body based on the distances between the camera and the physical locations corresponding to the pixels in the face region further comprises:
in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are not all the same, determining that the target object is a living body.
6. The detection method according to claim 4, wherein determining whether the target object is a living body based on the distances between the camera and the physical locations corresponding to the pixels in the face region comprises:
in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are not all the same, generating a face depth map based on those distances; and
inputting the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, wherein the liveness detection model is used to detect whether the face represented by a face depth map is the face of a living body.
7. A detection apparatus for a mobile device, the mobile device being equipped with a camera of adjustable focal length, the apparatus comprising:
a shooting unit, configured to continuously adjust the focal length of the camera while shooting a target object, to generate an image sequence;
a first determination unit, configured to, for each pixel of the face region in the captured images, determine the image in the image sequence in which the pixel is sharpest as the target image of the pixel, determine the focal length at which the camera captured the target image, and determine, based on the focal length, the distance between the camera and the physical location of the target object corresponding to the pixel; and
a second determination unit, configured to determine whether the target object is a living body based on the distances between the camera and the physical locations corresponding to the pixels in the face region.
8. The detection apparatus according to claim 7, wherein the apparatus further comprises:
a third determination unit, configured to extract an image from the image sequence, perform face detection on the extracted image, determine the position of the face region in the extracted image, and take the determined position as the position of the face region in each image of the image sequence.
9. The detection apparatus according to claim 7, wherein the second determination unit comprises:
a first generation module, configured to generate a face depth map based on the distances between the camera and the physical locations corresponding to the pixels in the face region; and
a first input module, configured to input the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, wherein the liveness detection model is used to detect whether the face represented by a face depth map is the face of a living body.
10. The detection apparatus according to claim 7, wherein the second determination unit is further configured to:
in response to determining that the physical locations corresponding to the pixels in the face region are all at the same distance from the camera, determine that the target object is not a living body.
11. The detection apparatus according to claim 10, wherein the second determination unit is further configured to:
in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are not all the same, determine that the target object is a living body.
12. The detection apparatus according to claim 10, wherein the second determination unit comprises:
a second generation module, configured to, in response to determining that the distances between the camera and the physical locations corresponding to the pixels in the face region are not all the same, generate a face depth map based on those distances; and
a second input module, configured to input the face depth map into a pre-trained liveness detection model to obtain a liveness detection result, wherein the liveness detection model is used to detect whether the face represented by a face depth map is the face of a living body.
13. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-6.
14. A computer-readable storage medium, on which a computer program is stored, wherein, when the program is executed by a processor, the method according to any one of claims 1-6 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810045159.3A CN108171204B (en) | 2018-01-17 | 2018-01-17 | Detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810045159.3A CN108171204B (en) | 2018-01-17 | 2018-01-17 | Detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108171204A true CN108171204A (en) | 2018-06-15 |
CN108171204B CN108171204B (en) | 2019-09-17 |
Family
ID=62514585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810045159.3A Active CN108171204B (en) | 2018-01-17 | 2018-01-17 | Detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108171204B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109376694A (en) * | 2018-11-23 | 2019-02-22 | Chongqing Zhongke Yuncong Technology Co., Ltd. | Real-time face liveness detection method based on image processing |
CN109635770A (en) * | 2018-12-20 | 2019-04-16 | Shanghai Jinsheng Communication Technology Co., Ltd. | Liveness detection method and apparatus, storage medium and electronic device |
CN109684924A (en) * | 2018-11-21 | 2019-04-26 | Shenzhen Orbbec Co., Ltd. | Face liveness detection method and device |
CN110349310A (en) * | 2019-07-03 | 2019-10-18 | Yuan Chuangke Holding Group Co., Ltd. | Production-reminder cloud platform service system for industrial park enterprises |
CN110866473A (en) * | 2019-11-04 | 2020-03-06 | Zhejiang Dahua Technology Co., Ltd. | Target object tracking detection method and device, storage medium and electronic device |
WO2023060756A1 (en) * | 2021-10-13 | 2023-04-20 | Shenzhen Qianhai WeBank Co., Ltd. | Face anti-spoofing detection method and device, and readable storage medium and computer program product |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104639927A (en) * | 2013-11-11 | 2015-05-20 | Institute for Information Industry | Method for shooting stereoscopic images and electronic device |
CN105335722A (en) * | 2015-10-30 | 2016-02-17 | SenseTime Group Limited | Detection system and detection method based on depth image information |
CN106998459A (en) * | 2017-03-15 | 2017-08-01 | Henan Normal University | Single-camera stereoscopic image generation method based on continuous zooming |
CN107480601A (en) * | 2017-07-20 | 2017-12-15 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Detection method and related product |
CN107491775A (en) * | 2017-10-13 | 2017-12-19 | Ricoh Image Technology (Shanghai) Co., Ltd. | Face liveness detection method and apparatus, storage medium and device |
2018-01-17 — CN application CN201810045159.3A filed — granted as patent CN108171204B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104639927A (en) * | 2013-11-11 | 2015-05-20 | Institute for Information Industry | Method for shooting stereoscopic images and electronic device |
CN105335722A (en) * | 2015-10-30 | 2016-02-17 | SenseTime Group Limited | Detection system and detection method based on depth image information |
CN106998459A (en) * | 2017-03-15 | 2017-08-01 | Henan Normal University | Single-camera stereoscopic image generation method based on continuous zooming |
CN107480601A (en) * | 2017-07-20 | 2017-12-15 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Detection method and related product |
CN107491775A (en) * | 2017-10-13 | 2017-12-19 | Ricoh Image Technology (Shanghai) Co., Ltd. | Face liveness detection method and apparatus, storage medium and device |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109684924A (en) * | 2018-11-21 | 2019-04-26 | Shenzhen Orbbec Co., Ltd. | Face liveness detection method and device |
CN109684924B (en) * | 2018-11-21 | 2022-01-14 | Orbbec Inc. | Face living body detection method and device |
CN109376694A (en) * | 2018-11-23 | 2019-02-22 | Chongqing Zhongke Yuncong Technology Co., Ltd. | Real-time face liveness detection method based on image processing |
CN109635770A (en) * | 2018-12-20 | 2019-04-16 | Shanghai Jinsheng Communication Technology Co., Ltd. | Liveness detection method and apparatus, storage medium and electronic device |
CN110349310A (en) * | 2019-07-03 | 2019-10-18 | Yuan Chuangke Holding Group Co., Ltd. | Production-reminder cloud platform service system for industrial park enterprises |
CN110866473A (en) * | 2019-11-04 | 2020-03-06 | Zhejiang Dahua Technology Co., Ltd. | Target object tracking detection method and device, storage medium and electronic device |
CN110866473B (en) * | 2019-11-04 | 2022-11-18 | Zhejiang Dahua Technology Co., Ltd. | Target object tracking detection method and device, storage medium and electronic device |
WO2023060756A1 (en) * | 2021-10-13 | 2023-04-20 | Shenzhen Qianhai WeBank Co., Ltd. | Face anti-spoofing detection method and device, and readable storage medium and computer program product |
Also Published As
Publication number | Publication date |
---|---|
CN108171204B (en) | 2019-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108171204B (en) | Detection method and device | |
CN107909065 (en) | Method and device for detecting face occlusion | |
CN109308681 (en) | Image processing method and device | |
CN108133201B (en) | Face attribute recognition method and device | |
CN107491771 (en) | Face detection method and device | |
CN109191514 (en) | Method and apparatus for generating a depth detection model | |
CN110348419B (en) | Method and device for taking photographs | |
CN108197618 (en) | Method and apparatus for generating a face detection model | |
CN108494778 (en) | Identity authentication method and device | |
CN109086719 (en) | Method and apparatus for outputting data | |
CN108491809 (en) | Method and apparatus for generating a model for generating near-infrared images | |
CN108363995 (en) | Method and apparatus for generating data | |
CN108280413 (en) | Face recognition method and device | |
CN108154547 (en) | Image generation method and device | |
CN108062544 (en) | Method and apparatus for face liveness detection | |
CN108171206 (en) | Information generation method and device | |
CN109344752 (en) | Method and apparatus for processing mouth images | |
CN109241934 (en) | Method and apparatus for generating information | |
CN110472460 (en) | Face image processing method and device | |
CN108171208 (en) | Information acquisition method and device | |
CN108182746 (en) | Control system, method and apparatus | |
CN109345580 (en) | Method and apparatus for processing images | |
CN108460366 (en) | Identity authentication method and device | |
CN108462832 (en) | Method and device for obtaining images | |
CN110110666 (en) | Object detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |