WO2020158158A1 - Authentication device - Google Patents

Authentication device

Info

Publication number
WO2020158158A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
authentication
target person
image
authentication target
Prior art date
Application number
PCT/JP2019/046890
Other languages
French (fr)
Japanese (ja)
Inventor
Fumiya Nagai (永井文也)
Original Assignee
Mitsumi Electric Co., Ltd. (ミツミ電機株式会社)
Fumiya Nagai (永井文也)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsumi Electric Co., Ltd. (ミツミ電機株式会社) and Fumiya Nagai (永井文也)
Priority to CN201980090988.6A, published as CN113383367A
Publication of WO2020158158A1

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/117: Identification of persons
    • A61B5/1171: Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T7/593: Depth or shape recovery from multiple images from stereo images

Definitions

  • The present invention generally relates to an authentication device that performs three-dimensional face authentication of an authentication target person. More specifically, it relates to an authentication device that uses biometric information of the face of the authentication target person, which becomes observable when the face is irradiated with light in a specific wavelength band, as feature points for generating three-dimensional information of the face, and performs three-dimensional face authentication based on that information.
  • Biometric authentication has the merit of not burdening the user: there is no risk of forgetting a password or ID, which is a problem with password and ID authentication, nor of theft or loss, which is a problem with physical keys and ID cards.
  • Owing to the recent miniaturization and improved performance of camera modules, cameras have been installed in a wide variety of devices.
  • Face authentication technology, which verifies a person's identity by comparing a captured face image against the person's registered face image, has accordingly come into wide use.
  • Face authentication, executed by photographing the face of the authentication target person, is advantageous in that the person needs no special operation for authentication and bears no burden.
  • Among face authentication methods, three-dimensional face authentication is often used because of its high authentication accuracy and its resistance to impersonation by others.
  • In three-dimensional face authentication, a stereo camera using a plurality of optical systems, an image sensor capable of acquiring image-plane phase-difference information, or the like is used to capture images of the face of the authentication target person; three-dimensional information of the face is generated from these images, and authentication is performed using the generated three-dimensional information.
  • For example, an authentication device is known in which the face of the authentication target person is photographed using an imaging system that includes two optical systems, the distance from the imaging system to each facial feature point of the authentication target person (for example, the forehead, the bridge of the nose between the eyes, the nose, and the mouth) is calculated, three-dimensional information of the face is generated based on those distances, and three-dimensional face authentication of the authentication target person is thereby executed.
  • In such three-dimensional face authentication, parts where a boundary (edge) is conspicuous, such as the eyes, nose, and mouth, are used as feature points.
  • Parts where no boundary is conspicuous, by contrast, cannot be used as feature points.
  • Since many of the parts that can be used as feature points, such as the eyes, nose, and mouth, lie near the centerline of the face, three-dimensional information near the centerline of the face can be generated relatively accurately, whereas regions far from the centerline offer few usable feature points.
  • To address this, a method is also known in which the target site is irradiated with light of a fixed pattern (for example, a grid pattern or a dot pattern) from a projector, and the target site illuminated with the fixed pattern is photographed (see, for example, Patent Document 2).
  • In this method, the distance to the target site is calculated by irradiating the target site with the fixed pattern of light and analyzing the distortion and shift of the pattern projected onto it.
  • In such a method, however, a projector for irradiating the target site with the fixed pattern of light is required, and the configuration of the authentication device becomes large.
  • The present invention has been made in view of the above conventional problems. Its object is to provide an authentication device that can use the biometric information of the face of the authentication target person that becomes observable by irradiating the face with light in a specific wavelength band as feature points for generating three-dimensional information of the face, and that can thereby execute three-dimensional face authentication of the authentication target person.
  • (1) An authentication device comprising: a light source for irradiating the face of the authentication target person with light in a specific wavelength band; a first imaging system for capturing an image of the face irradiated with the light in the specific wavelength band and acquiring a first face image of the authentication target person; a second imaging system for capturing an image of the same irradiated face and acquiring a second face image of the authentication target person; a feature point extraction unit for extracting a plurality of feature points of the face of the authentication target person in each of the first face image and the second face image; a three-dimensional information generation unit for generating three-dimensional information of the face based on the plurality of feature points extracted by the feature point extraction unit in each of the two face images; and an authentication unit configured to perform three-dimensional face authentication of the authentication target person using the three-dimensional information generated by the three-dimensional information generation unit, wherein the plurality of feature points extracted in each of the first face image and the second face image include biometric information of the face of the authentication target person that becomes observable when the face is irradiated with the light in the specific wavelength band.
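The claimed arrangement above is a pipeline: two NIR-illuminated face images go to feature extraction, paired feature points go to three-dimensional information generation, and the result goes to authentication. The following Python sketch only illustrates that data flow; all names (`FacePoint`, `extract_feature_points`, and so on) are invented stand-ins, and the real extraction, 3-D generation, and matching steps are replaced by toys:

```python
from dataclasses import dataclass

@dataclass
class FacePoint:
    x: float  # pixel coordinate of a feature point in a face image
    y: float

def extract_feature_points(face_image):
    """Stand-in for the feature point extraction unit: return the biometric
    feature points (eyes, nose, mouth, and NIR-visible veins) of an image.
    Here the toy 'image' already carries its points."""
    return face_image["features"]

def generate_3d(points1, points2):
    """Stand-in for the three-dimensional information generation unit:
    pair up corresponding feature points from the two face images."""
    return list(zip(points1, points2))

def authenticate(face_3d, registered_3d, tol=1.0):
    """Stand-in for the authentication unit: compare the generated 3-D
    information against registered authentication information."""
    if len(face_3d) != len(registered_3d):
        return False
    return all(abs(a[0].x - b[0].x) <= tol and abs(a[0].y - b[0].y) <= tol
               for a, b in zip(face_3d, registered_3d))

# Toy "first face image" and "second face image" under NIR illumination.
img1 = {"features": [FacePoint(10, 20), FacePoint(30, 22)]}
img2 = {"features": [FacePoint(12, 20), FacePoint(33, 22)]}

p1 = extract_feature_points(img1)
p2 = extract_feature_points(img2)
face_3d = generate_3d(p1, p2)
print(authenticate(face_3d, face_3d))  # matching against itself -> True
```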
  • (2) The authentication device according to (1), wherein each of the first imaging system and the second imaging system substantially blocks light outside the wavelength band corresponding to the specific wavelength band of the light emitted from the light source.
  • (3) In the above authentication device, the biometric information of the face of the authentication target person that becomes observable by irradiating the face with the light in the specific wavelength band is a vein of the face of the authentication target person.
  • (4) In the above authentication device, the biometric information of the face of the authentication target person that becomes observable by irradiating the face with the light in the specific wavelength band is the melanin pigment of the face of the authentication target person.
  • (5) The authentication device according to any one of (1) to (4), wherein the first imaging system includes a first optical system for forming a first optical image of the face of the authentication target person and a first image sensor for capturing the first optical image and acquiring the first face image; the second imaging system includes a second optical system for forming a second optical image of the face and a second image sensor for capturing the second optical image and acquiring the second face image; and the three-dimensional information generation unit uses the plurality of feature points of the face extracted by the feature point extraction unit in each of the first face image and the second face image to calculate an image magnification ratio between the magnification of the first optical image and the magnification of the second optical image, and is configured to generate the three-dimensional information of the face of the authentication target person based on the calculated image magnification ratio.
  • (6) The authentication device according to (5), wherein the first imaging system and the second imaging system are configured so that the optical axis of the first optical system of the first imaging system and the optical axis of the second optical system of the second imaging system are parallel but do not coincide.
  • (7) The authentication device according to any one of (1) to (4), wherein the three-dimensional information generation unit is configured to generate the three-dimensional information of the face based on a translational parallax between the plurality of feature points of the face of the authentication target person in the first face image and the corresponding plurality of feature points in the second face image.
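For the variant that generates three-dimensional information from translational parallax between corresponding feature points (parallel, non-coincident optical axes), depth recovery reduces to classical stereo triangulation. A minimal sketch under an idealized pinhole-camera assumption; the focal length in pixels and the baseline used below are illustrative values, not from this document:

```python
def depth_from_parallax(f_px, baseline_m, x1_px, x2_px):
    """Depth a of a feature point from translational parallax, assuming an
    idealized pinhole stereo pair with parallel optical axes separated by
    `baseline_m`: a = f * P / d, where d = x1 - x2 is the disparity (px)."""
    disparity = x1_px - x2_px
    if disparity <= 0:
        raise ValueError("feature must lie in front of both cameras")
    return f_px * baseline_m / disparity

# A vein feature seen at x = 320 px in the first face image and x = 300 px
# in the second, with f = 800 px and a 40 mm baseline (illustrative numbers):
a = depth_from_parallax(800.0, 0.040, 320.0, 300.0)
print(round(a, 3))  # 1.6  (metres: 800 * 0.040 / 20)
```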
  • In the present invention, the biometric information of the face of the authentication target person that becomes observable by irradiating the face with light in a specific wavelength band is used as feature points for generating the three-dimensional information of the face.
  • Therefore, even without using a projector that irradiates the authentication target person with a fixed pattern of light, the authentication device of the present invention can calculate the distance to parts of the face, such as the cheeks and forehead, that have few irregularities and few conventionally usable feature points. The system configuration of the authentication device can therefore be simplified. As a result, compared with the case of using a projector that irradiates the face with a fixed pattern of light, the authentication device can be made smaller, and its power consumption and cost can be reduced.
  • FIG. 1 is a block diagram schematically showing an authentication device according to the first embodiment of the present invention.
  • FIG. 2 is a schematic diagram for explaining a first face image acquired by the first imaging system shown in FIG. 1 or a second face image acquired by the second imaging system.
  • FIG. 3 is a flowchart showing an authentication method executed by the authentication device shown in FIG.
  • FIG. 4 is a flowchart showing an authentication method executed by the authentication device according to the second embodiment of the present invention.
  • FIG. 5 is a schematic diagram for explaining a first face image acquired by the first imaging system, or a second face image acquired by the second imaging system, of the authentication device according to the third embodiment of the present invention.
  • the authentication device 1 is configured to perform three-dimensional face authentication of the authentication target person 100 by capturing the face of the authentication target person 100.
  • As shown in FIG. 1, the authentication device 1 includes: a control unit 2 that controls the authentication device 1; a light source LS that irradiates the face of the authentication target person 100 with light L in a specific wavelength band; a first imaging system IS1 for capturing the face irradiated with the light L in the specific wavelength band and acquiring a first face image of the authentication target person 100; a second imaging system IS2 for capturing the same irradiated face and acquiring a second face image of the authentication target person 100; a feature point extraction unit 3 for extracting a plurality of feature points of the face of the authentication target person 100 in each of the first face image and the second face image; a three-dimensional information generation unit 4 for generating three-dimensional information of the face of the authentication target person 100 based on the feature points extracted by the feature point extraction unit 3; an authentication information storage unit 5 storing the authentication information necessary for three-dimensional face authentication of the authentication target person 100; an authentication unit 6 configured to execute three-dimensional face authentication of the authentication target person 100 using the three-dimensional information generated by the three-dimensional information generation unit 4; a display unit 8, such as a liquid crystal panel, for displaying arbitrary information to the user; a communication unit 9 for communicating with external devices; and a data bus 10 for exchanging data and instructions.
  • The first imaging system IS1 has a first optical system OS1 for forming a first optical image of the face of the authentication subject 100, and the second imaging system IS2 has a second optical system OS2 for forming a second optical image of that face.
  • The first optical system OS1 of the first imaging system IS1 and the second optical system OS2 of the second imaging system IS2 are configured and arranged so that the change of the magnification m1 of the first optical image with the distance a to an arbitrary position (distance measurement target) on the face of the authentication subject 100 differs from the change of the magnification m2 of the second optical image with that same distance a.
  • the first optical system OS1 and the second optical system OS2 are configured and arranged so that at least one of the following three conditions is satisfied.
  • (First condition) The focal length f1 of the first optical system OS1 and the focal length f2 of the second optical system OS2 differ from each other (f1 ≠ f2).
  • (Second condition) The distance EP1 from the exit pupil of the first optical system OS1 to the image formation position of the first optical image when the distance measurement target is at infinity and the distance EP2 from the exit pupil of the second optical system OS2 to the image formation position of the second optical image when the distance measurement target is at infinity differ from each other (EP1 ≠ EP2).
  • (Third condition) A difference (depth parallax) D in the depth direction (the optical axis direction) exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D ≠ 0).
  • Even when at least one of these three conditions is satisfied, there are configurations in which the image magnification ratio MR, which is the ratio of the magnification m1 of the first optical image to the magnification m2 of the second optical image, is not established as a function of the distance a to the distance measurement target. The first optical system OS1 and the second optical system OS2 are therefore configured to further satisfy a fourth condition: the image magnification ratio MR must be established as a function of the distance a to the distance measurement target.
  • In the authentication device 1 of the present embodiment, such a first optical system OS1 and second optical system OS2 are used, so that the distance a to each of a plurality of locations on the face of the authentication subject 100 can be calculated based on the image magnification ratio MR, that is, the ratio of the magnification m1 of the first optical image to the magnification m2 of the second optical image.
  • The authentication device 1 generates the three-dimensional information of the face of the authentication target person 100 based on the distances a calculated in this way, and executes three-dimensional face authentication of the authentication target person 100 based on that three-dimensional information.
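How a distance can be read off from the image magnification ratio alone is easiest to see in a toy thin-lens model. The sketch below assumes only the first condition (f1 ≠ f2) and the simple thin-lens magnification m = f/(a - f); it ignores exit-pupil distances and depth parallax, so it is a simplification of the optics described here, not the device's actual formula:

```python
def magnification(f, a):
    """Thin-lens lateral magnification (magnitude) of an object at distance a."""
    return f / (a - f)

def distance_from_mr(f1, f2, mr):
    """Invert MR(a) = m1/m2 = f1*(a - f2) / (f2*(a - f1)) for the object
    distance a; requires f1 != f2 so that MR actually varies with a."""
    return f1 * f2 * (mr - 1.0) / (mr * f2 - f1)

f1, f2 = 0.004, 0.006   # 4 mm and 6 mm focal lengths (illustrative values)
a_true = 0.50           # a feature point 0.5 m from the device
mr = magnification(f1, a_true) / magnification(f2, a_true)
a_est = distance_from_mr(f1, f2, mr)
print(round(a_est, 6))  # 0.5
```

Because the two magnifications fall off at different rates with distance, their ratio pins down a uniquely, which is exactly why MR must be a function of a (the fourth condition).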
  • As described above, the authentication device 1 of the present embodiment includes the light source LS for irradiating the face of the authentication target person 100 with the light L in the specific wavelength band.
  • Therefore, in addition to the biometric information 110 of the face that can be observed in normal photographing (for example, eyes, nose, mouth, and ears; see FIG. 2), such as photographing under sunlight or under white-light illumination, the authentication device 1 of the present embodiment can use, as feature points for distance measurement, the biometric information 120 of the face that becomes observable when the face is irradiated with the light L in the specific wavelength band (in this embodiment, the veins of the face of the authentication target person 100).
  • the control unit 2 exchanges various data and various instructions with each component via the data bus 10 and controls the authentication device 1.
  • The control unit 2 includes a processor for executing arithmetic processing and a memory storing the data, programs, modules, and the like necessary for controlling the authentication device 1; the processor of the control unit 2 controls the authentication device 1 using the data, programs, and modules stored in the memory.
  • the processor of the control unit 2 can provide a desired function by using each component of the authentication device 1.
  • the processor of the control unit 2 uses the three-dimensional information generation unit 4 to determine the face of the authentication target person 100 based on the plurality of feature points of the face of the authentication target person 100 extracted by the feature point extraction unit 3. Processing for generating three-dimensional information can be executed.
  • The processor of the control unit 2 is an arithmetic unit that executes arithmetic processing such as signal operations based on computer-readable instructions, for example one or more microprocessors, microcomputers, microcontrollers, digital signal processors (DSPs), central processing units (CPUs), memory control units (MCUs), graphics processing units (GPUs), state machines, logic circuits, application-specific integrated circuits (ASICs), or a combination thereof.
  • The processor of the control unit 2 is configured to fetch the computer-readable instructions (e.g., data, programs, and modules) stored in the memory of the control unit 2 and to perform signal manipulation and control.
  • The memory of the control unit 2 is a removable or non-removable computer-readable medium including a volatile storage medium (e.g., RAM, SRAM, DRAM), a non-volatile storage medium (e.g., ROM, EPROM, EEPROM, flash memory, hard disk), or a combination thereof. The memory also stores parameters, determined by the configuration and arrangement of the first imaging system IS1 and the second imaging system IS2, that are used in the calculation (described later) of the distance a to each of a plurality of locations on the face of the authentication subject 100.
  • the light source LS is configured to irradiate the face of the authentication subject 100 with the light L in the specific wavelength band.
  • the light source LS is configured and arranged so as to irradiate the entire region of the face of the authentication subject 100 with the light L of the specific wavelength band substantially uniformly.
  • The light source LS is not particularly limited as long as it can emit light in the predetermined wavelength band; for example, an LED that emits light in that band can be used as the light source LS.
  • the light L in the specific wavelength band emitted from the light source LS to the authentication target person 100 is near infrared light (light having a wavelength of about 700 to about 2500 nm).
  • Reduced (deoxygenated) hemoglobin flowing through a person's veins has a high absorption rate for light in the near-infrared band.
  • Therefore, when the face is photographed under near-infrared illumination, the veins of the face of the authentication target person 100 appear dark in the obtained first face image and second face image.
  • As a result, in addition to the biometric information 110 of the face of the authentication target person 100 that can be observed by normal imaging (for example, eyes, nose, mouth, and ears), the authentication device 1 of the present embodiment can observe the veins of the face of the authentication target person 100 (biometric information 120).
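Because vein pixels absorb the NIR illumination, they can be separated from the surrounding skin by their darkness alone. A toy Python illustration (the intensity grid and the fixed threshold are invented for illustration; a real device would need adaptive thresholding and noise handling):

```python
# Toy 8-bit NIR intensity image (rows of pixels). Deoxygenated hemoglobin
# absorbs NIR strongly, so vein pixels come out darker than skin.
nir_face = [
    [200, 198, 90, 201, 199],
    [197, 202, 85, 198, 200],
    [201, 88, 92, 199, 198],
]

def vein_mask(image, threshold=120):
    """Mark pixels darker than `threshold` as vein candidates (1),
    and everything else as skin (0)."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

mask = vein_mask(nir_face)
for row in mask:
    print(row)
# [0, 0, 1, 0, 0]
# [0, 0, 1, 0, 0]
# [0, 1, 1, 0, 0]
```

The 1-pixels trace the vein's path, and it is these pixels that serve as the extra feature points 120 in regions such as the cheeks where no eye/nose/mouth edges exist.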
  • The first imaging system IS1 includes: the first optical system OS1 for forming a first optical image of the face of the authentication subject 100; a first image sensor S1 for capturing the first optical image formed by the first optical system OS1 and acquiring the first face image; and a first bandpass filter F1, arranged between the first optical system OS1 and the first image sensor S1, that passes only light in the wavelength band corresponding to the specific wavelength band of the light L emitted by the light source LS and substantially blocks light in other wavelength bands.
  • Similarly, the second imaging system IS2 includes: the second optical system OS2 for forming a second optical image of the face of the authentication subject 100; a second image sensor S2 for capturing the second optical image and acquiring the second face image; and a second bandpass filter F2, arranged between the second optical system OS2 and the second image sensor S2, that passes only light in the wavelength band corresponding to the specific wavelength band of the light L emitted by the light source LS and substantially blocks light in other wavelength bands.
  • the first imaging system IS1 has the first bandpass filter F1 and the second imaging system IS2 has the second bandpass filter F2.
  • The first bandpass filter F1 is used to emphasize, in the first face image acquired by the first imaging system IS1, the biometric information 120 of the face of the authentication target person 100 that becomes observable when the face is irradiated with the light L in the specific wavelength band.
  • Likewise, the second bandpass filter F2 is used to emphasize that biometric information 120 in the second face image acquired by the second imaging system IS2.
  • However, the first imaging system IS1 need not have the first bandpass filter F1, and the second imaging system IS2 need not have the second bandpass filter F2; a mode in which the first bandpass filter F1 and/or the second bandpass filter F2 is omitted is also within the scope of the present invention.
  • In the illustrated mode, the first bandpass filter F1 is arranged between the first optical system OS1 and the first image sensor S1, and the second bandpass filter F2 is arranged between the second optical system OS2 and the second image sensor S2, but the present invention is not limited to this.
  • A mode in which the first bandpass filter F1 is mounted on the imaging surface of the first image sensor S1 so that the two are integrated, and/or the second bandpass filter F2 is mounted on the imaging surface of the second image sensor S2 so that the two are integrated, is also within the scope of the present invention.
  • In the illustrated mode, the first image sensor S1, the first optical system OS1, and the first bandpass filter F1 that form the first imaging system IS1 are provided in one housing, and the second image sensor S2, the second optical system OS2, and the second bandpass filter F2 that form the second imaging system IS2 are provided in another housing, but the present invention is not limited to this.
  • A mode in which the first optical system OS1, the second optical system OS2, the first image sensor S1, the second image sensor S2, the first bandpass filter F1, and the second bandpass filter F2 are all provided in the same housing is also within the scope of the present invention.
  • As described above, the first optical system OS1 and the second optical system OS2 are configured and arranged so as to satisfy at least one of the first to third conditions described above, together with the fourth condition. Therefore, in the authentication device 1 of the present invention, the change of the magnification m1 of the first optical image formed by the first optical system OS1 with the distance a to an arbitrary position (distance measurement target) on the face of the authentication subject 100 differs from the change of the magnification m2 of the second optical image formed by the second optical system OS2 with that distance.
  • The image magnification ratio MR, which is the ratio of the magnification m1 of the first optical image to the magnification m2 of the second optical image obtained with such a configuration of the first optical system OS1 and the second optical system OS2, is used to calculate the distance a to an arbitrary part of the face of the authentication subject 100.
  • In the present embodiment, the optical axis of the first optical system OS1 and the optical axis of the second optical system OS2 are parallel but do not coincide; the second optical system OS2 is shifted by a separation distance P in the direction perpendicular to the optical axis of the first optical system OS1.
  • Each of the first image sensor S1 and the second image sensor S2 may be a color image sensor, such as a CMOS or CCD image sensor having a color filter (for example, an RGB primary-color filter or a CMY complementary-color filter) arranged in an arbitrary pattern such as a Bayer array, or a monochrome image sensor without such a color filter.
  • Accordingly, the first face image obtained by the first image sensor S1 and the second face image obtained by the second image sensor S2 are color or monochrome luminance information of the face of the authentication subject 100.
  • When the first bandpass filter F1 is used as in the illustrated mode, however, the light that passes through the first bandpass filter F1 and reaches the imaging surface of the first image sensor S1 is limited to the wavelength band corresponding to the specific wavelength band, so a color filter serves no purpose; the first image sensor S1 is therefore preferably a monochrome image sensor.
  • For the same reason, when the second bandpass filter F2 is used, the second image sensor S2 is preferably a monochrome image sensor.
  • The first optical system OS1 forms the first optical image of the face of the authentication target person 100 on the imaging surface of the first image sensor S1, and the first image sensor S1 acquires the first face image containing that first optical image.
  • The acquired first face image is sent to the control unit 2 and the feature point extraction unit 3 via the data bus 10.
  • Similarly, the second optical system OS2 forms the second optical image of the face of the authentication target person 100 on the imaging surface of the second image sensor S2, and the second image sensor S2 acquires the second face image containing that second optical image.
  • The acquired second face image is sent to the control unit 2 and the feature point extraction unit 3 via the data bus 10.
  • FIG. 2 shows the outline of the first face image or the second face image.
  • The face of the authentication target person 100, irradiated with the light L in the specific wavelength band from the light source LS (near-infrared light in the present embodiment), is photographed by the first imaging system IS1 and the second imaging system IS2, whereby the first face image and the second face image are acquired.
  • Each of these face images contains both the biometric information 110 of the face of the authentication target person 100 that can be observed even in normal photographing (eyes, nose, mouth, and the like) and the biometric information 120 of the face that becomes observable under the light L in the specific wavelength band (in this embodiment, the veins of the face of the authentication target person 100).
  • the first face image and the second face image sent to the feature point extraction unit 3 are used to acquire a plurality of feature points of the face of the authentication target person 100 in each of the first face image and the second face image.
  • the first face image and the second face image sent to the control unit 2 are used for image display by the display unit 8 and communication of image signals by the communication unit 9.
  • The feature point extraction unit 3 has the function of extracting a plurality of feature points of the face of the authentication target person 100 in each of the first face image received from the first imaging system IS1 and the second face image received from the second imaging system IS2. Specifically, the feature point extraction unit 3 first performs a filtering process such as Canny edge detection on the first face image, and extracts the plurality of pieces of biometric information 110 and 120 of the face of the authentication target person 100 in the first face image as a plurality of feature points of the face of the authentication target person 100 in the first face image.
  • The plurality of feature points of the face of the authentication target person 100 extracted in each of the first face image and the second face image therefore include both the biometric information 110 observable in normal photographing (eyes, nose, mouth, etc.) and the biometric information 120 of the face (in this embodiment, the veins of the face of the authentication target person 100).
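The filtering step described above can be sketched as follows. This is a minimal illustrative stand-in, not the patent's implementation: a plain gradient-magnitude threshold instead of full Canny (which adds non-maximum suppression and hysteresis), and the synthetic image, threshold, and function name are assumptions.

```python
import numpy as np

def extract_feature_points(face_image, grad_thresh=0.5):
    """Extract candidate facial feature points as strong-gradient pixels.

    Stand-in for the Canny-style filtering in the text: pixels whose
    intensity gradient magnitude exceeds a threshold are treated as
    feature-point candidates (edges of eyes, nose, mouth, and, in a
    near-infrared image, vein boundaries).
    """
    img = face_image.astype(float)
    gy, gx = np.gradient(img)            # central-difference gradients
    magnitude = np.hypot(gx, gy)         # gradient magnitude per pixel
    ys, xs = np.nonzero(magnitude > grad_thresh)
    return np.column_stack([xs, ys])     # (x, y) pixel coordinates

# Tiny synthetic "face image": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
points = extract_feature_points(img)
print(points.shape[1])  # 2 — each row is one (x, y) feature point
```

A real device would follow this with sub-pixel refinement and descriptor matching; the sketch only shows where candidate feature points come from.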
  • Next, the feature point extraction unit 3 executes a corresponding feature point detection process for detecting the plurality of feature points of the face of the authentication target person 100 in the second face image that correspond to the extracted plurality of feature points of the face of the authentication target person 100 in the first face image.
  • In the corresponding feature point detection process, the feature point extraction unit 3 uses the parameters regarding the characteristics and arrangement of the first imaging system IS1 and the second imaging system IS2 stored in the memory of the control unit 2 to derive the epipolar line in the second face image corresponding to each of the extracted plurality of feature points of the face of the authentication target person 100 in the first face image, and searches along the derived epipolar line in the second face image to detect the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image.
  • To derive the epipolar lines, any corresponding feature point detection algorithm known in the art (e.g., the 8-point algorithm, the Tsai algorithm, etc.) can be used.
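The epipolar-line search described above can be sketched with a fundamental matrix F relating the two imaging systems. The matrix values, point coordinates, and pixel tolerance below are illustrative assumptions (this F encodes a pure horizontal-translation stereo pair), not the patent's actual parameters.

```python
import numpy as np

def epipolar_line(F, p1):
    """Epipolar line l' = F @ p1_h in the second image for a point p1 in the first.

    F is the 3x3 fundamental matrix encoding the characteristics and
    arrangement of the two imaging systems. The returned (a, b, c)
    satisfies a*x' + b*y' + c = 0 for the corresponding point (x', y').
    """
    p1_h = np.array([p1[0], p1[1], 1.0])  # homogeneous coordinates
    return F @ p1_h

def candidates_on_line(line, candidates, tol=1.0):
    """Keep candidate points in the second image lying within `tol` pixels
    of the epipolar line, i.e. restrict the correspondence search to the
    derived line as the text describes."""
    a, b, c = line
    dists = np.abs(candidates @ np.array([a, b]) + c) / np.hypot(a, b)
    return candidates[dists < tol]

# Fundamental matrix of a pure horizontal-translation stereo pair:
# corresponding points share the same y coordinate.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
line = epipolar_line(F, (10.0, 4.0))
cands = np.array([[7.0, 4.0], [9.0, 12.0]])
print(candidates_on_line(line, cands))  # only the point with y == 4 survives
```

In practice the surviving candidates would then be ranked by a local similarity measure; the sketch shows only the epipolar restriction itself.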
  • The information about the plurality of feature points of the face of the authentication target person 100 in the first face image extracted by the feature point extraction unit 3 and the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image is transmitted to the three-dimensional information generation unit 4.
  • the three-dimensional information generation unit 4 has a function of generating three-dimensional information of the face of the authentication target person 100 based on the plurality of feature points of the face of the authentication target person 100 extracted by the feature point extraction unit 3.
  • When the three-dimensional information generation unit 4 receives, from the feature point extraction unit 3, the plurality of feature points of the face of the authentication target person 100 in the first face image and the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image, it calculates the image magnification ratio MR between the magnification m1 of the first optical image and the magnification m2 of the second optical image, and generates the three-dimensional information of the face of the authentication target person 100 based on the calculated image magnification ratio MR.
  • Specifically, upon receiving from the feature point extraction unit 3 the plurality of feature points of the face of the authentication target person 100 in the first face image and the corresponding plurality of feature points in the second face image, the three-dimensional information generation unit 4 measures the distances between the plurality of feature points of the face of the authentication target person 100 in the first face image, thereby acquiring the sizes Y_FD1 of a plurality of locations of the first optical image of the face of the authentication target person 100.
  • By selecting a plurality of combinations of the plurality of feature points of the face of the authentication target person 100 in the first face image used to acquire the sizes Y_FD1, the three-dimensional information generation unit 4 can acquire the sizes Y_FD1 of the plurality of locations of the first optical image of the authentication target person 100.
  • the three-dimensional information generation unit 4 selects feature points adjacent to each other in the height direction from a plurality of feature points of the face of the authentication target person 100 in the first face image, and measures the distance between them. As a result, the image height of an arbitrary portion of the first optical image can be acquired. Similarly, the three-dimensional information generation unit 4 selects feature points adjacent in the width direction from the plurality of feature points of the face of the authentication target person 100 in the first face image, and measures the distance between them. Thereby, the image width of an arbitrary part of the first optical image can be acquired.
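Acquiring the sizes Y_FD by measuring distances between feature points might look like the following sketch; the feature-point coordinates and the function name are hypothetical, and every pair is used, matching the option of covering all combinations of feature points.

```python
import numpy as np

def pairwise_sizes(points):
    """Sizes Y_FD between all pairs of feature points, in pixels.

    Mirrors how the three-dimensional information generation unit measures
    distances between feature points in a face image to obtain the sizes
    (image heights / image widths) of portions of the optical image.
    """
    pts = np.asarray(points, dtype=float)
    sizes = {}
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            sizes[(i, j)] = float(np.linalg.norm(pts[i] - pts[j]))
    return sizes

# Hypothetical feature points (eye corner, nostril, vein branch) in pixels.
pts1 = [(10.0, 10.0), (10.0, 22.0), (19.0, 10.0)]
print(pairwise_sizes(pts1))  # {(0, 1): 12.0, (0, 2): 9.0, (1, 2): 15.0}
```

The same function applied to the corresponding feature points in the second face image yields the sizes Y_FD2 used to form the magnification ratio.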
  • The selection of combinations of the plurality of feature points of the face of the authentication target person 100 in the first face image by the three-dimensional information generation unit 4 may be performed so as to cover all combinations of the feature points of the face of the authentication target person 100 in the first face image, or so as to cover enough combinations of feature points to accurately generate the three-dimensional information of the face of the authentication target person 100.
  • After acquiring the sizes Y_FD1 of the plurality of locations of the first optical image of the face of the authentication target person 100, the three-dimensional information generation unit 4 acquires the sizes Y_FD2 of the corresponding plurality of locations of the second optical image, by the same method as used to acquire the sizes Y_FD1, based on the plurality of feature points of the face of the authentication target person 100 in the corresponding second face image.
  • The ratio between the size Y_FD1 of an arbitrary portion of the first optical image and the size Y_FD2 of the corresponding portion of the second optical image acquired by the three-dimensional information generation unit 4 is equal to the image magnification ratio MR between the magnification m1 of the first optical image and the magnification m2 of the second optical image.
  • The three-dimensional information generation unit 4 calculates the distance a to an arbitrary location (distance measurement target) on the face of the authentication target person 100 based on the image magnification ratio MR (= m2/m1) between the magnification m1 of the first optical image and the magnification m2 of the second optical image at that location. Specifically, the three-dimensional information generation unit 4 calculates the distance a to an arbitrary location (distance measurement target) on the face of the authentication target person 100 using the following formula (1).
  • In formula (1), a is the distance from the distance measurement target to the front principal point of the first optical system OS1 of the first imaging system IS1, f1 is the focal length of the first optical system OS1, f2 is the focal length of the second optical system OS2, EP1 is the distance from the exit pupil of the first optical system OS1 to the image formation position of the first optical image when the distance measurement target is at infinity, EP2 is the distance from the exit pupil of the second optical system OS2 to the image formation position of the second optical image when the distance measurement target is at infinity, and D is the depth parallax between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2.
  • K in the above formula (1) is a coefficient represented by the following formula (2), and is a fixed value determined by the configuration and arrangement of the first imaging system IS1 and the second imaging system IS2.
  • In the above formula (2), FD1 is the distance from the front principal point of the first optical system OS1 to the distance measurement target when the first optical image is in best focus on the imaging surface of the first image sensor S1, and FD2 is the distance from the front principal point of the second optical system OS2 to the distance measurement target when the second optical image is in best focus on the imaging surface of the second image sensor S2.
  • The parameters used in the above formulas (1) and (2), except for the image magnification ratio MR, are fixed values determined when the first imaging system IS1 and the second imaging system IS2 are configured, and are stored in the memory of the control unit 2.
  • The three-dimensional information generation unit 4 can therefore calculate, from the above formula (1), the distance a to an arbitrary location (distance measurement target) on the face of the authentication target person 100 by using these parameters stored in the memory of the control unit 2 together with the image magnification ratio MR.
  • Here, f1, f2, EP1, EP2, D, and K are fixed values determined by the configurations and arrangements of the first imaging system IS1 and the second imaging system IS2.
  • Accordingly, once the image magnification ratio MR of the magnification m1 of the first optical image to the magnification m2 of the second optical image is obtained from the ratio between the size Y_FD1 of an arbitrary portion of the first optical image and the size Y_FD2 of the corresponding portion of the second optical image acquired by the three-dimensional information generation unit 4, the distance a from the arbitrary location (distance measurement target) on the face of the authentication target person 100 to the front principal point of the first optical system OS1 can be calculated.
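Since formula (1) itself appears only as a figure in the patent, the following is a simplified pinhole-model sketch of the idea, ignoring the exit-pupil terms EP1, EP2 and the coefficient K: with magnifications m1 = f1/a and m2 = f2/(a + D), solving MR = m2/m1 for a gives a = MR*f1*D / (f2 - MR*f1). All numeric values are illustrative, not taken from the patent.

```python
def distance_from_magnification_ratio(y_fd1, y_fd2, f1, f2, D):
    """Distance a from the image magnification ratio MR = m2/m1.

    Simplified pinhole sketch (NOT the patent's exact formula (1)):
    with image size y = f * Y / a, the magnifications are m1 = f1 / a
    and m2 = f2 / (a + D), where D is the depth parallax between the
    two front principal points.  Solving MR = m2 / m1 for a gives
    a = MR * f1 * D / (f2 - MR * f1).
    """
    MR = y_fd2 / y_fd1          # magnification ratio from measured sizes
    return MR * f1 * D / (f2 - MR * f1)

# Same focal lengths, second system 100 mm deeper: a feature 500 mm away
# appears 500/600 times as large in the second image.
f = 4.0          # mm (assumed focal length)
a_true = 500.0   # mm
D = 100.0        # mm
y1 = f * 50.0 / a_true          # hypothetical 50 mm facial feature
y2 = f * 50.0 / (a_true + D)
print(round(distance_from_magnification_ratio(y1, y2, f, f, D), 6))  # 500.0
```

The sketch recovers the assumed distance exactly because the same model generated the sizes; the patent's formula additionally corrects for exit-pupil positions and best-focus distances via K.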
  • In this way, the three-dimensional information generation unit 4 calculates the image magnification ratio MR from the ratio between the size Y_FD1 of each of the plurality of locations of the first optical image and the size Y_FD2 of the corresponding location of the second optical image, and thereby calculates the distance a to each of the corresponding plurality of locations on the face of the authentication target person 100.
  • The three-dimensional information generation unit 4 then generates the three-dimensional information of the face of the authentication target person 100 based on the calculated distances a to each of the plurality of locations on the face of the authentication target person 100. Specifically, when the three-dimensional information generation unit 4 has calculated the distance a to each of the plurality of locations on the face of the authentication target person 100, it generates a three-dimensional mesh and texture of the face of the authentication target person 100 based on those distances, thereby three-dimensionally modeling the face of the authentication target person 100.
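Once a distance a is known for each feature point, turning the points into 3D face coordinates (the input to the mesh and texture generation) can be sketched by pinhole back-projection. The focal length, pixel pitch, and coordinates below are assumed values for illustration.

```python
import numpy as np

def back_project(points_px, distances, f, pixel_pitch):
    """Turn per-feature-point distances a into 3D face points.

    Pinhole back-projection: X = (u * p / f) * a, Y = (v * p / f) * a,
    Z = a, with (u, v) image-centred pixel coordinates, p the pixel
    pitch and f the focal length (all assumed camera parameters).
    The resulting point cloud is what a mesh/texture step would consume.
    """
    pts = np.asarray(points_px, dtype=float)
    a = np.asarray(distances, dtype=float)
    xy = pts * pixel_pitch / f * a[:, None]   # lateral position at depth a
    return np.column_stack([xy, a])           # one (X, Y, Z) row per feature

cloud = back_project([(100, -50), (0, 0)], [480.0, 500.0],
                     f=4.0, pixel_pitch=0.002)
print(cloud.shape)  # (2, 3)
```

A feature point on the optical axis (u = v = 0) maps straight to (0, 0, a), which is a quick sanity check on the geometry.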
  • At this time, in addition to the biometric information 110 of the face of the authentication target person 100 that can be observed by normal imaging (for example, eyes, nose, mouth, ears, etc.), the biometric information 120 of the face of the authentication target person 100 (in this embodiment, the veins of the face of the authentication target person 100) is used as feature points to form the three-dimensional model of the face of the authentication target person 100. Compared with the case where only the biometric information 110 of the face of the authentication target person 100 observable by normal imaging is used as feature points, the three-dimensional information generation unit 4 can therefore generate the three-dimensional information of the face of the authentication target person 100 more accurately.
  • In particular, since the biometric information 120 of the face of the authentication target person 100 (in the present embodiment, the veins of the face of the authentication target person 100) is also present at locations other than near the center line of the face of the authentication target person 100, the three-dimensional information generation unit 4 can generate particularly accurate three-dimensional information of the face of the authentication target person 100 at locations other than near the center line of the face of the authentication target person 100.
  • The authentication information storage unit 5 is an arbitrary non-volatile recording medium (for example, a hard disk or a flash memory) that stores the authentication information necessary for executing the three-dimensional face authentication of the authentication target person 100.
  • The administrator or the like of the authentication device 1 of the present invention acquires in advance the three-dimensional information of the face of a person who is authorized for authentication, by imaging that person using the authentication device 1 of the present invention or an imaging device having an equivalent function, and registers it in the authentication information storage unit 5 as the authentication information.
  • the authentication information storage unit 5 is provided inside the authentication device 1, but the present invention is not limited to this.
  • The authentication information storage unit 5 may be an external server or an external storage device connected to the authentication device 1 via various wired or wireless networks such as the Internet, a local area network (LAN), or a wide area network (WAN).
  • When the authentication information storage unit 5 is an external server or an external storage device, one or more authentication information storage units 5 may be shared by a plurality of authentication devices 1.
  • In this case, the authentication device 1 communicates with the externally provided authentication information storage unit 5 by using the communication unit 9 each time it performs the three-dimensional face authentication of the authentication target person 100, and thereby performs the three-dimensional face authentication of the authentication target person 100.
  • the authentication unit 6 is configured to be able to execute the three-dimensional face authentication of the authentication target person 100 by using the three-dimensional information of the face of the authentication target person 100 generated by the three-dimensional information generation unit 4.
  • Specifically, the authentication unit 6 collates the three-dimensional information of the face of the authentication target person 100 generated by the three-dimensional information generation unit 4 with the authentication information stored in the authentication information storage unit 5, and thereby performs the three-dimensional face authentication of the authentication target person 100.
  • For example, the authentication unit 6 may determine that the three-dimensional face authentication has succeeded if any one of a plurality of elements derived from the three-dimensional information of the face of the authentication target person 100 (such as the height of the nose, the depth of the eye sockets, and the positions and shapes of the veins) matches the corresponding element derived from the three-dimensional face information registered in advance in the authentication information storage unit 5 as the authentication information. Alternatively, the authentication unit 6 may determine that the three-dimensional face authentication has succeeded only if all of the plurality of elements match the corresponding elements derived from the three-dimensional face information registered in advance in the authentication information storage unit 5 as the authentication information.
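The "any one element matches" versus "all elements match" policies can be sketched as follows; the element names, numeric values, and tolerance are purely illustrative assumptions, not taken from the patent.

```python
def face_auth(probe, enrolled, tol=0.05, require_all=False):
    """Match elements derived from 3D face information.

    probe / enrolled map element names (nose height, eye-socket depth,
    vein position, ...) to measured values; an element matches when the
    relative difference is within tol.  require_all selects between the
    two policies in the text: succeed when all elements match, or when
    any single element matches.
    """
    matches = [
        abs(probe[k] - enrolled[k]) <= tol * abs(enrolled[k])
        for k in enrolled
    ]
    return all(matches) if require_all else any(matches)

enrolled = {"nose_height_mm": 24.0, "eye_depth_mm": 11.0, "vein_x_mm": 31.5}
probe    = {"nose_height_mm": 24.2, "eye_depth_mm": 13.0, "vein_x_mm": 31.4}
print(face_auth(probe, enrolled))                    # True: one element suffices
print(face_auth(probe, enrolled, require_all=True))  # False: eye depth differs
```

The strict `require_all` policy corresponds to a higher security level; as the text notes later, an administrator could expose this choice through the operation unit.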
  • the result (judgment result) of the three-dimensional face authentication of the authentication subject 100 by the authentication unit 6 is sent to the control unit 2 via the data bus 10.
  • the control unit 2 transmits the received result of the three-dimensional face authentication to an external device (for example, a door unlock device, a terminal that provides an arbitrary application, etc.) via the communication unit 9.
  • The external device can execute processing according to the received authentication result. For example, when the external device receives a result indicating that the three-dimensional face authentication of the authentication target person 100 has succeeded, it releases a physical lock such as a door lock or a software lock, or permits an arbitrary application to start. Conversely, when it receives a result indicating that the three-dimensional face authentication of the authentication target person 100 has failed, it maintains the physical lock such as the door lock or the software lock, or does not permit the application to start.
  • the operation unit 7 is used by a user, an administrator, or the like of the authentication device 1 to execute an operation.
  • the operation unit 7 is not particularly limited as long as the user of the authentication device 1 can perform the operation, and for example, a mouse, a keyboard, a numeric keypad, a button, a dial, a lever, a touch panel, or the like can be used as the operation unit 7.
  • the operation unit 7 transmits a signal according to the operation of the user of the authentication device 1 to the processor of the control unit 2.
  • the administrator or the like of the authentication device 1 can use the operation unit 7 to set the security level of the authentication device 1.
  • the communication unit 9 has a function of inputting data to the authentication device 1 or outputting data from the authentication device 1 to an external device by wire communication or wireless communication.
  • the communication unit 9 may be configured to be connectable to a network such as the Internet.
  • the authentication device 1 can communicate with an external device such as a web server or a data server provided outside by using the communication unit 9.
  • As described above, the authentication device 1 of the present embodiment can use, as feature points for distance measurement, not only the biometric information 110 of the face of the authentication target person 100 that is observable by normal photographing (for example, eyes, nose, mouth, ears, etc.) but also the biometric information 120 of the face of the authentication target person 100 that can be observed by irradiating the face with the light L in the specific wavelength band (in this embodiment, the veins of the face of the authentication target person 100). Therefore, the number of feature points that can be used to acquire the three-dimensional information of the face of the authentication target person 100 increases, the three-dimensional information of the face of the authentication target person 100 can be acquired more accurately, and the accuracy of the three-dimensional face authentication of the authentication target person 100 can be improved.
  • Furthermore, the biometric information 120 of the face of the authentication target person 100 (the veins of the face of the authentication target person 100), which is present at many locations other than near the center line of the face of the authentication target person 100, can be used as feature points for generating the three-dimensional information of the face of the authentication target person 100. Therefore, according to the authentication device 1 of the present embodiment, even without using a projector that irradiates the authentication target person 100 with a fixed pattern of light, it is possible to calculate the distance to locations with few irregularities and few usable feature points, such as the cheeks and forehead of the face of the authentication target person 100. The system configuration of the authentication device 1 can therefore be simplified. As a result, compared with the case where a projector that irradiates the face of the authentication target person with a fixed pattern of light is used, the authentication device 1 can be downsized, its power consumption can be reduced, and its cost can be reduced.
  • the light source LS is configured to irradiate the face of the authentication subject 100 with near infrared light, but the present invention is not limited to this.
  • the light source LS may be configured to irradiate the face of the authentication target person 100 with light in an arbitrary wavelength band that allows the veins of the face of the authentication target person 100 to be observed.
  • In the present embodiment, the veins of the face of the authentication target person 100 are cited as the biometric information 120 of the face of the authentication target person 100, but the present invention is not limited to this. Any kind of biometric information 120 of the face of the authentication target person 100 that can be observed by irradiating the face of the authentication target person 100 with light L in a specific wavelength band can be used as feature points for generating the three-dimensional information of the face of the authentication target person 100.
  • FIG. 3 is a flowchart showing an authentication method executed by the authentication device shown in FIG.
  • the authentication method S100 shown in FIG. 3 is started by the authentication target person 100 using the operation unit 7 to execute an operation for executing the three-dimensional face authentication of the authentication target person 100.
  • In step S101, under the control of the processor of the control unit 2, the light source LS irradiates the face of the authentication target person 100 with the light L in the specific wavelength band (near-infrared light in this embodiment).
  • In step S102, the first image sensor S1 of the first imaging system IS1 captures the first optical image of the face of the authentication target person 100 formed by the first optical system OS1, and the first face image is acquired.
  • the first face image is sent to the control unit 2 and the feature point extraction unit 3 via the data bus 10.
  • In step S103, the second image sensor S2 of the second imaging system IS2 captures the second optical image of the face of the authentication target person 100 formed by the second optical system OS2, and the second face image is acquired.
  • the second face image is sent to the control unit 2 and the feature point extraction unit 3 via the data bus 10.
  • step S102 and step S103 may be executed simultaneously or separately.
  • Since the three-dimensional information of the face of the authentication target person 100 can be generated more accurately when the face of the authentication target person 100 in the same state is photographed by the first imaging system IS1 and the second imaging system IS2, it is preferable that step S102 and step S103 are executed simultaneously.
  • In step S104, the feature point extraction unit 3 performs a filtering process such as Canny edge detection on the first face image, and extracts the plurality of pieces of biometric information 110 and 120 of the first optical image of the face of the authentication target person 100 in the first face image as the plurality of feature points of the face of the authentication target person 100 in the first face image.
  • That is, the feature point extraction unit 3 extracts, as the plurality of feature points, both the biometric information 110 of the face of the authentication target person 100 that can be observed even in normal imaging, such as eyes, nose, and mouth, and the biometric information 120 of the face of the authentication target person 100 that can be observed by irradiating the face of the authentication target person 100 with the light L of the specific wavelength band (in this embodiment, the veins of the face of the authentication target person 100).
  • Next, the feature point extraction unit 3 uses the parameters related to the characteristics and arrangement of the first imaging system IS1 and the second imaging system IS2 stored in the memory of the control unit 2 to derive an epipolar line in the second face image corresponding to each of the extracted plurality of feature points of the face of the authentication target person 100 in the first face image, and searches along the derived epipolar line in the second face image to detect the plurality of feature points of the face of the authentication target person 100 in the second face image corresponding to the extracted plurality of feature points of the face of the authentication target person 100 in the first face image. After that, information about the plurality of feature points of the face of the authentication target person 100 in each of the first face image and the second face image extracted by the feature point extraction unit 3 is transmitted to the three-dimensional information generation unit 4.
  • In step S105, the three-dimensional information generation unit 4 calculates the sizes (image widths or image heights) Y_FD1 of a plurality of portions of the first optical image of the face of the authentication target person 100 based on the plurality of feature points of the face of the authentication target person 100 in the first face image extracted by the feature point extraction unit 3. Then, in step S106, the three-dimensional information generation unit 4 calculates the respective sizes Y_FD2 of the corresponding plurality of portions of the second optical image based on the plurality of feature points of the face of the authentication target person 100 in the second face image corresponding to the plurality of feature points of the face of the authentication target person 100 in the first face image used to acquire the sizes Y_FD1. In step S107, the three-dimensional information generation unit 4 calculates the image magnification ratio MR from the ratio between the sizes Y_FD1 and the corresponding sizes Y_FD2.
  • step S108 the three-dimensional information generation unit 4 calculates (identifies) the distance a to each of the plurality of portions of the face of the authentication subject 100 based on the calculated image magnification ratio MR. Then, in step S109, the three-dimensional information generation unit 4 generates three-dimensional information of the face of the authentication target person 100 based on the distances a to each of the plurality of portions of the face of the authentication target person 100.
  • In step S110, the authentication unit 6 executes the three-dimensional face authentication of the authentication target person 100 by comparing the three-dimensional information of the face of the authentication target person 100 generated by the three-dimensional information generation unit 4 with the three-dimensional face information included in the authentication information registered in advance in the authentication information storage unit 5.
  • the result of the three-dimensional face authentication of the authentication target person 100 by the authentication unit 6 is transmitted to the control unit 2.
  • the control unit 2 transmits the received authentication result to any external device via the communication unit 9, and the authentication method S100 ends. Thereby, an arbitrary external device can execute the processing according to the authentication result.
  • The authentication device 1 of the present embodiment is the same as the authentication device 1 of the first embodiment except that the first imaging system IS1 and the second imaging system IS2 are configured and arranged so as to have the same configuration and characteristics, and that the three-dimensional information generation unit 4 calculates the distance a to each of the plurality of feature points of the face of the authentication target person 100 based on the translational parallax between the plurality of feature points of the face of the authentication target person 100 in the first face image and the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image.
  • the first image pickup system IS1 and the second image pickup system IS2 are configured and arranged so as to have the same configuration and characteristics.
  • The first imaging system IS1 and the second imaging system IS2 are arranged so that the optical axis of the first optical system OS1 of the first imaging system IS1 and the optical axis of the second optical system OS2 of the second imaging system IS2 are parallel but do not coincide: the second optical system OS2 is shifted by the separation distance P in the direction perpendicular to the optical axis direction of the first optical system OS1.
  • Therefore, the only difference between the first face image and the second face image is the translational parallax (parallax in the direction perpendicular to the optical axis direction of the first optical system OS1) caused by the separation distance P between the optical axis of the first optical system OS1 of the first imaging system IS1 and the optical axis of the second optical system OS2 of the second imaging system IS2.
  • In the present embodiment, the three-dimensional information generation unit 4 calculates the distance a to each of the plurality of feature points of the face of the authentication target person 100 based on the translational parallax between the plurality of feature points of the face of the authentication target person 100 in the first face image and the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image, and generates the three-dimensional information of the face of the authentication target person 100 based on the calculated distances a to each of the plurality of feature points of the face of the authentication target person 100.
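For this second-embodiment arrangement, the distance calculation can be sketched with the standard stereo relation; the focal length, baseline P, pixel pitch, and pixel coordinates below are assumed values for illustration.

```python
def distance_from_translational_parallax(x1_px, x2_px, f, baseline_P, pixel_pitch):
    """Distance a from the translational parallax of two identical cameras.

    When the two imaging systems have the same characteristics and
    parallel optical axes separated by P, a feature point shifts between
    the images only perpendicular to the axes, and the standard stereo
    relation a = f * P / d holds, with d the parallax converted to
    physical units on the sensor.
    """
    d = abs(x1_px - x2_px) * pixel_pitch   # parallax on the sensor, in mm
    return f * baseline_P / d

# f = 4 mm, baseline P = 60 mm, 2 um pixels, 100-pixel parallax:
a = distance_from_translational_parallax(620, 520, f=4.0,
                                         baseline_P=60.0, pixel_pitch=0.002)
print(round(a, 6))  # 1200.0 (mm)
```

Note how the closer a feature point is, the larger its parallax d, so near facial features (nose tip) shift more between the two face images than far ones (ears).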
  • FIG. 4 is a flowchart showing an authentication method executed by the authentication device according to the second embodiment of the present invention.
  • Steps S201 to S204 in the authentication method S200 shown in FIG. 4 are the same as steps S101 to S104 of the authentication method S100 executed by the authentication device 1 of the first embodiment described in detail with reference to FIG. 3, and therefore their description is omitted.
  • In step S205, the three-dimensional information generation unit 4 calculates the translational parallax between the plurality of feature points of the face of the authentication target person 100 in the first face image extracted by the feature point extraction unit 3 in step S204 and the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image.
  • the three-dimensional information generation unit 4 calculates the distance a to each of the plurality of feature points on the face of the authentication target person 100 based on the calculated translation parallax.
  • In step S206, as in step S109 of the authentication method S100 executed by the authentication device 1 of the first embodiment, the three-dimensional information generation unit 4 generates the three-dimensional information of the face of the authentication target person 100 based on the distances a to each of the plurality of feature points of the face of the authentication target person 100.
  • In step S207, as in step S110 of the authentication method S100 executed by the authentication device 1 of the first embodiment, the authentication unit 6 executes the three-dimensional face authentication of the authentication target person 100 by comparing the three-dimensional information of the face of the authentication target person 100 generated by the three-dimensional information generation unit 4 with the three-dimensional face information included in the authentication information registered in advance in the authentication information storage unit 5. After that, the result of the three-dimensional face authentication of the authentication target person 100 by the authentication unit 6 is transmitted to the control unit 2. The control unit 2 transmits the received authentication result to an external device via the communication unit 9, and the authentication method S200 ends.
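The parallax-to-distance calculation of steps S205 and S206 follows the standard rectified-stereo relation a = f·B/d. The sketch below assumes a rectified stereo pair with baseline B and a focal length expressed in pixels; these symbols and the numerical values are illustrative assumptions, not figures from the patent.

```python
def distance_from_parallax(disparity_px: float, baseline_m: float,
                           focal_length_px: float) -> float:
    """Distance a to a facial feature point from its translational
    parallax (disparity, in pixels) between the first and second
    face images: a = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px


# Illustrative values: 50 mm baseline, 1000 px focal length,
# 100 px disparity gives a distance of 0.5 m.
a = distance_from_parallax(disparity_px=100.0, baseline_m=0.05,
                           focal_length_px=1000.0)
```

Repeating this calculation for every matched feature point yields the per-point distances a from which the three-dimensional face information is assembled.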
  • FIG. 5 is a schematic diagram for explaining a first face image acquired by the first imaging system or a second face image acquired by the second imaging system of the authentication device according to the third embodiment of the present invention.
  • the authentication device 1 of the third embodiment is the same as the authentication device 1 of the first and second embodiments, except that the configuration of the light source LS is changed and the biometric information 120 is the melanin pigment of the face of the authentication target person 100.
  • the light source LS is configured to irradiate the face of the authentication target person 100 with light L in the ultraviolet band (for example, light L having a wavelength of 350 to 400 nm). Since the melanin pigment present in a human face has a high absorptance for light in this short wavelength band, when the face of the authentication target person 100 irradiated with the ultraviolet light from the light source LS is imaged using the first imaging system IS1 and the second imaging system IS2, the melanin pigment on the face of the authentication target person 100 appears black in the obtained first face image and second face image.
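As a rough illustration of how melanin pigment that appears black under ultraviolet illumination could be turned into additional feature points, the sketch below simply thresholds a grayscale face image for dark pixels. The threshold value and the use of NumPy are assumptions for illustration only; the patent does not specify a detection algorithm.

```python
import numpy as np


def melanin_feature_points(gray_face: np.ndarray, threshold: int = 60):
    """Return (row, col) coordinates of pixels darker than `threshold`,
    as candidate feature points from melanin pigment, which appears
    black in a face image captured under ultraviolet illumination."""
    rows, cols = np.nonzero(gray_face < threshold)
    return list(zip(rows.tolist(), cols.tolist()))


# Tiny synthetic image: one dark "melanin" pixel in a bright skin region.
img = np.full((4, 4), 200, dtype=np.uint8)
img[2, 1] = 30
points = melanin_feature_points(img)  # → [(2, 1)]
```

Each detected coordinate would then be matched between the first and second face images, just like the eyes, nose, and mouth, to obtain its distance a.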
  • thereby, in addition to the biometric information 110 (for example, eyes, nose, mouth, and ears) of the face of the authentication target person 100 that can be observed by ordinary imaging, the authentication device 1 of the present embodiment can observe the melanin pigment (biometric information 120) on the face of the authentication target person 100, which becomes observable by irradiating the face of the authentication target person 100 with the light L in the specific wavelength band.
  • note that the melanin pigment on the face of the authentication target person 100 used as the biometric information 120 may change with the aging and physical condition of the authentication target person 100. Therefore, if the melanin pigment of the face were used directly as an element of the face authentication of the authentication target person 100, the accuracy of the face authentication would decrease depending on the aging and physical condition of the authentication target person 100. For this reason, general face authentication of the authentication target person 100 does not use the melanin pigment of the face, which can change with the aging or physical condition of the authentication target person 100.
  • in the authentication device 1 of the present embodiment, by contrast, the melanin pigment on the face of the authentication target person 100 is used only as feature points for generating the three-dimensional information of the face of the authentication target person 100; the melanin pigment itself is not used for authentication of the authentication target person 100. Even if the position or shape of the melanin pigment on the face changes due to the aging or physical condition of the authentication target person 100, the three-dimensional shape of the face of the authentication target person 100 does not change.
  • in this way, the authentication device 1 of the present embodiment is characterized in that the melanin pigment of the face of the authentication target person 100, which is not used in general face authentication, is used to generate the three-dimensional information of the face of the authentication target person 100. As a result, the number of feature points that can be used to acquire the three-dimensional information of the face of the authentication target person 100 increases, and the three-dimensional information can be obtained more accurately. Consequently, the accuracy of the three-dimensional face authentication of the authentication target person 100 can be improved.
  • in the present embodiment, the light source LS is configured to irradiate the face of the authentication target person 100 with light L in the ultraviolet band (for example, light L having a wavelength of 350 to 400 nm), but the present invention is not limited to this. The light source LS may be configured to irradiate the face of the authentication target person 100 with light in any wavelength band that allows the melanin pigment of the face of the authentication target person 100 to be observed.
  • although the melanin pigment of the face, which can change with the aging and physical condition of the authentication target person 100, has been given as an example of the biometric information 120 of the face that becomes observable by irradiating the face of the authentication target person 100 with the light L in the specific wavelength band, the present invention is not limited to this. The authentication device 1 of the present embodiment can use, as feature points for generating the three-dimensional information of the face of the authentication target person 100, a wide variety of biometric information that, like the melanin pigment on the face, becomes observable by irradiating the face with the light L in the specific wavelength band, can change with the aging and physical condition of the authentication target person 100, and is therefore not used in general face authentication.
  • the authentication device of the present invention has been described above based on the illustrated embodiment, but the present invention is not limited to this.
  • Each configuration of the present invention can be replaced with any configuration capable of exhibiting the same function, or any configuration can be added to each configuration of the present invention.
  • each component of the authentication device may be realized by hardware, software, or a combination thereof.
  • in the above embodiments, the authentication device has been described as including the first imaging system and the second imaging system, but the present invention is not limited to this. An authentication device having any number of additional imaging systems in addition to the first imaging system and the second imaging system is also within the scope of the present invention.
  • the authentication device of the present invention described in detail with reference to each embodiment can be used in any system that needs to execute three-dimensional face authentication of a person to be authenticated.
  • for example, the authentication device of the present invention can be used in a system that performs three-dimensional face authentication of the authentication target person by photographing the face of the authentication target person and thereby unlocks the front door of a house, the lock of a car, the lock of a computer, or the like.
  • according to the authentication device of the present invention, the biometric information of the face of the authentication target person that becomes observable by irradiating the face with light in a specific wavelength band can be used as feature points for generating the three-dimensional information of the face. Further, even without using a projector that irradiates the authentication target person with a fixed pattern of light, the distance to parts such as the cheeks and forehead, which have few irregularities and few usable feature points, can be calculated. The system configuration of the authentication device can therefore be simplified, and the authentication device can be made smaller, lower in power consumption, and lower in cost than one using such a projector. Therefore, the present invention has industrial applicability.


Abstract

An authentication device 1 is provided with: a light source LS for irradiating the face of an authentication subject 100 with light L of a specific wavelength band; a first imaging system IS1 and a second imaging system IS2 for acquiring a first face image and a second face image of the authentication subject 100; a feature point extraction unit 3 for extracting a plurality of feature points of the face of the authentication subject 100 on the basis of the first and second face images; a three-dimensional information generation unit 4 for generating three-dimensional information of the face of the authentication subject 100 on the basis of the plurality of feature points of the face of the authentication subject 100; and an authentication unit 6 configured so as to be able to execute three-dimensional face authentication of the authentication subject 100. The plurality of feature points of the face of the authentication subject 100 include biological information of the face of the authentication subject 100 that becomes observable by irradiation of the face of the authentication subject 100 with the light L of the specific wavelength band.

Description

Authentication device
 The present invention generally relates to an authentication device that executes three-dimensional face authentication of an authentication target person, and more specifically to an authentication device that uses biometric information of the face of the authentication target person, which becomes observable by irradiating the face with light in a specific wavelength band, as feature points for generating three-dimensional information of the face, and thereby executes three-dimensional face authentication of the authentication target person.
 Conventionally, in various devices such as mobile phones, smartphones, notebook computers, and laptop computers, authentication by password and ID, authentication by physical key or ID card, and biometric authentication techniques such as face authentication, fingerprint authentication, vein authentication, voiceprint authentication, iris authentication, and handprint authentication have been used for identity verification. In particular, biometric authentication places no burden on the user, because it is free from the forgetting of passwords and IDs that is a problem with password-and-ID authentication, and from the theft and loss that are problems with physical keys and ID cards.
 Among various biometric authentication technologies, as camera modules have become smaller and higher in performance in recent years and have come to be mounted in a variety of devices, face authentication technology, which verifies identity by photographing the face of the authentication target person and comparing the captured face image with a face image of the person registered in advance, has come into wide use. Face authentication performed by photographing the face of the authentication target person is excellent in that it requires no authentication action by the authentication target person and imposes no burden on the authentication target person. Among various face authentication techniques, three-dimensional face authentication is often used from the viewpoint of high authentication accuracy and prevention of impersonation by others. In three-dimensional face authentication, the face of the authentication target person is photographed using a stereo camera with a plurality of optical systems, an image sensor capable of acquiring image-plane phase-difference information, or the like, three-dimensional information of the face is generated, and authentication is executed using the generated three-dimensional information.
 For example, Patent Document 1 discloses an authentication device that photographs the face of the authentication target person using an imaging system including two optical systems, calculates the distances from the imaging system to feature points of the face of the authentication target person (for example, the forehead, the base of the nose bridge between the eyes, the nose, and the mouth), generates three-dimensional information of the face based on the distances from the imaging system to each of these feature points, and thereby executes three-dimensional face authentication of the authentication target person.
 In order to generate three-dimensional information of the face of the authentication target person, it is necessary to acquire the distances from the imaging system to many feature points of the face. In general, when generating three-dimensional information of a face, parts with conspicuous boundaries (edges), such as the eyes, nose, and mouth, are used as feature points, while parts with inconspicuous boundaries, such as the forehead and cheeks, cannot be used as feature points. In particular, since parts usable as feature points, such as the eyes, nose, and mouth, are concentrated near the centerline of the face, three-dimensional information near the centerline can be generated relatively accurately; elsewhere, however, there are few usable feature points, and it is difficult to accurately generate three-dimensional information away from the centerline. As a result, the amount of three-dimensional face information available for the three-dimensional face authentication of the authentication target person is small, and the accuracy of the three-dimensional face authentication cannot be improved.
 To calculate the distance to parts such as the cheeks and forehead of the face, which have few irregularities and few usable feature points, a projector that irradiates the target part with a fixed pattern of light (for example, a grid pattern or a dot pattern) is used, and the target part irradiated with the pattern is imaged (see, for example, Patent Document 2). In such a pattern-projection distance measurement method, the distance to the target part is calculated by irradiating it with a fixed pattern of light and analyzing the distortion and shift of the pattern projected onto it. With this configuration, the distance to a target part with few irregularities and few usable feature points can be calculated. However, adopting the pattern-projection distance measurement method requires a projector for irradiating the target part with the fixed pattern of light, which makes the configuration of the authentication device large.
Patent Document 1: JP 2006-221422 A
Patent Document 2: JP 2013-190394 A
 The present invention has been made in view of the above conventional problems, and an object thereof is to provide an authentication device that can use biometric information of the face of the authentication target person, which becomes observable by irradiating the face with light in a specific wavelength band, as feature points for generating three-dimensional information of the face, and can thereby execute three-dimensional face authentication of the authentication target person.
Such an object is achieved by the present invention of the following (1) to (7).
(1) A light source for irradiating the face of the authentication subject with light in a specific wavelength band,
A first imaging system for capturing an image of the face of the authentication target person irradiated with light of the specific wavelength band, and acquiring a first face image of the authentication target person;
A second imaging system for capturing an image of the face of the authentication target person irradiated with light in the specific wavelength band and acquiring a second face image of the authentication target person;
A feature point extraction unit for extracting a plurality of feature points of the face of the authentication target person in each of the first face image and the second face image;
A three-dimensional information generation unit for generating three-dimensional information of the face of the authentication target person based on the plurality of feature points of the face of the authentication target person in each of the first face image and the second face image extracted by the feature point extraction unit; and
An authentication unit configured to be able to execute three-dimensional face authentication of the authentication target person using the three-dimensional information of the face of the authentication target person generated by the three-dimensional information generation unit,
wherein the plurality of feature points of the face of the authentication target person in each of the first face image and the second face image extracted by the feature point extraction unit include biometric information of the face of the authentication target person that becomes observable by irradiating the face of the authentication target person with the light in the specific wavelength band.
 (2) The authentication device according to (1) above, wherein each of the first imaging system and the second imaging system has a bandpass filter that substantially blocks light outside the wavelength band corresponding to the specific wavelength band of the light emitted from the light source.
 (3) The authentication device according to (1) or (2) above, wherein the biometric information of the face of the authentication target person that becomes observable by irradiating the face of the authentication target person with the light in the specific wavelength band is a vein of the face of the authentication target person.
 (4) The authentication device according to (1) or (2) above, wherein the biometric information of the face of the authentication target person that becomes observable by irradiating the face of the authentication target person with the light in the specific wavelength band is the melanin pigment of the face of the authentication target person.
 (5) The authentication device according to any one of (1) to (4) above, wherein the first imaging system has a first optical system for forming a first optical image of the face of the authentication target person and a first image sensor for capturing the first optical image to acquire the first face image,
the second imaging system has a second optical system for forming a second optical image of the face of the authentication target person and a second image sensor for capturing the second optical image to acquire the second face image, and
the three-dimensional information generation unit is configured to calculate an image magnification ratio between the magnification of the first optical image and the magnification of the second optical image using the plurality of feature points of the face of the authentication target person in each of the first face image and the second face image extracted by the feature point extraction unit, and to generate the three-dimensional information of the face of the authentication target person based on the calculated image magnification ratio.
 (6) The authentication device according to (5) above, wherein the first imaging system and the second imaging system are configured such that the optical axis of the first optical system of the first imaging system and the optical axis of the second optical system of the second imaging system are parallel but do not coincide.
 (7) The authentication device according to any one of (1) to (4) above, wherein the three-dimensional information generation unit is configured to generate the three-dimensional information of the face based on the translational parallax between the plurality of feature points of the face of the authentication target person in the first face image and the corresponding plurality of feature points of the face of the authentication target person in the second face image.
 According to the authentication device of the present invention, biometric information of the face of the authentication target person that becomes observable by irradiating the face with light in a specific wavelength band can be used as feature points for generating three-dimensional information of the face. As a result, the number of feature points that can be used to generate the three-dimensional information of the face increases and the three-dimensional information can be generated more accurately, so the accuracy of the three-dimensional face authentication of the authentication target person can be improved.
 Further, in the authentication device of the present invention, even without using a projector that irradiates the authentication target person with a fixed pattern of light, the distance to parts such as the cheeks and forehead of the face, which have few irregularities and few usable feature points, can be calculated. The system configuration of the authentication device can therefore be simplified. This makes it possible to realize a smaller, lower-power, and lower-cost authentication device than one that uses a projector to irradiate the face of the authentication target person with a fixed pattern of light.
FIG. 1 is a block diagram schematically showing an authentication device according to the first embodiment of the present invention. FIG. 2 is a schematic diagram for explaining a first face image acquired by the first imaging system shown in FIG. 1 or a second face image acquired by the second imaging system. FIG. 3 is a flowchart showing an authentication method executed by the authentication device shown in FIG. 1. FIG. 4 is a flowchart showing an authentication method executed by an authentication device according to the second embodiment of the present invention. FIG. 5 is a schematic diagram for explaining a first face image acquired by the first imaging system or a second face image acquired by the second imaging system of an authentication device according to the third embodiment of the present invention.
<First Embodiment>
First, the authentication device according to the first embodiment of the present invention will be described in detail with reference to FIGS. 1 and 2. In each drawing, the same or corresponding elements are given the same reference numerals. FIG. 1 is a block diagram schematically showing an authentication device according to the first embodiment of the present invention. FIG. 2 is a schematic diagram for explaining a first face image acquired by the first imaging system shown in FIG. 1 or a second face image acquired by the second imaging system.
 The authentication device 1 is configured to execute three-dimensional face authentication of the authentication target person 100 by imaging the face of the authentication target person 100. The authentication device 1 includes: a control unit 2 that controls the authentication device 1; a light source LS for irradiating the face of the authentication target person 100 with light L in a specific wavelength band; a first imaging system IS1 for imaging the face of the authentication target person 100 irradiated with the light L in the specific wavelength band to acquire a first face image of the authentication target person 100; a second imaging system IS2 for imaging the face of the authentication target person 100 irradiated with the light L in the specific wavelength band to acquire a second face image of the authentication target person 100; a feature point extraction unit 3 for extracting a plurality of feature points of the face of the authentication target person 100 in each of the first face image and the second face image; a three-dimensional information generation unit 4 for generating three-dimensional information of the face of the authentication target person 100 based on the plurality of feature points extracted by the feature point extraction unit 3; an authentication information storage unit 5 that stores authentication information necessary for the three-dimensional face authentication of the authentication target person 100; an authentication unit 6 configured to be able to execute the three-dimensional face authentication of the authentication target person 100 using the three-dimensional information of the face generated by the three-dimensional information generation unit 4; an operation unit 7 for inputting operations by the user; a display unit 8, such as a liquid crystal panel, for displaying arbitrary information; a communication unit 9 for executing communication with external devices; and a data bus 10 for exchanging data and instructions between the components of the authentication device 1.
 The first imaging system IS1 has a first optical system OS1 for forming a first optical image of the face of the authentication target person 100, and the second imaging system IS2 has a second optical system OS2 for forming a second optical image of the face of the authentication target person 100. In the authentication device 1 of the present embodiment, the first optical system OS1 and the second optical system OS2 are configured and arranged such that the change in the magnification m1 of the first optical image according to the distance a to an arbitrary part (distance measurement target) of the face of the authentication target person 100 differs from the change in the magnification m2 of the second optical image according to the same distance a.
 The condition under which the change in the magnification m1 of the first optical image according to the distance a to the distance measurement target differs from the change in the magnification m2 of the second optical image according to that distance a is that the first optical system OS1 and the second optical system OS2 are configured and arranged so that at least one of the following three conditions is satisfied.
 (First condition) The focal length f1 of the first optical system OS1 and the focal length f2 of the second optical system OS2 differ from each other (f1 ≠ f2).
 (Second condition) The distance EP1 from the exit pupil of the first optical system OS1 to the image formation position of the first optical image when the distance measurement target is at infinity and the distance EP2 from the exit pupil of the second optical system OS2 to the image formation position of the second optical image when the distance measurement target is at infinity differ from each other (EP1 ≠ EP2).
 (Third condition) A difference (depth parallax) D in the depth direction (optical axis direction) exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D ≠ 0).
 In addition, even when at least one of the first to third conditions above is satisfied, in the special case where f1 ≠ f2, EP1 ≠ EP2, D = 0, f1 = EP1, and f2 = EP2 all hold, the image magnification ratio MR, which is the ratio of the magnification m1 of the first optical image to the magnification m2 of the second optical image, does not hold as a function of the distance a to the distance measurement target. Therefore, the first optical system OS1 and the second optical system OS2 are configured to further satisfy a fourth condition: that the image magnification ratio MR holds as a function of the distance a to the distance measurement target.
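The first to fourth conditions above can be captured in a short check. The function below is an illustrative sketch, not part of the patent: it tests whether at least one of the three conditions holds while excluding the special degenerate case in which MR stops being a function of the distance a.

```python
# Illustrative sketch (not from the patent text): checks whether two optical
# systems satisfy at least one of the first to third conditions while avoiding
# the degenerate case (f1 != f2, EP1 != EP2, D == 0, f1 == EP1, f2 == EP2)
# in which the image magnification ratio MR does not hold as a function of a.

def magnification_change_differs(f1, f2, ep1, ep2, d, tol=1e-9):
    """Return True if the magnification changes differ with distance AND the
    image magnification ratio MR remains usable as a function of distance a."""
    cond1 = abs(f1 - f2) > tol      # first condition: f1 != f2
    cond2 = abs(ep1 - ep2) > tol    # second condition: EP1 != EP2
    cond3 = abs(d) > tol            # third condition: depth parallax D != 0
    if not (cond1 or cond2 or cond3):
        return False
    # special case excluded by the fourth condition: MR degenerates
    degenerate = (cond1 and cond2 and abs(d) <= tol
                  and abs(f1 - ep1) <= tol and abs(f2 - ep2) <= tol)
    return not degenerate
```

The numeric tolerance `tol` is an implementation choice for comparing real-valued optical parameters; the patent states the conditions as exact equalities and inequalities.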
 The principle by which, when the first optical system OS1 and the second optical system OS2 satisfy the above conditions, the change in the magnification m1 of the first optical image according to the distance a to an arbitrary part of the face of the authentication target person 100 differs from the change in the magnification m2 of the second optical image according to that distance a, and the principle for calculating the distance a to an arbitrary part of the face of the authentication target person 100 based on the image magnification ratio MR, which is the ratio of the magnification m1 of the first optical image to the magnification m2 of the second optical image, are described in detail in Japanese Patent Application No. 2017-241896, previously filed by the present inventors, and detailed description is therefore omitted here. The entire disclosure of Japanese Patent Application No. 2017-241896 is incorporated herein by reference.
 In the authentication device 1 of the present embodiment, such a first optical system OS1 and second optical system OS2 are used, making it possible to calculate the distance a to each of a plurality of parts of the face of the authentication target person 100 based on the image magnification ratio MR, which is the ratio of the magnification m1 of the first optical image to the magnification m2 of the second optical image. The authentication device 1 generates the three-dimensional information of the face of the authentication target person 100 based on the distances a to the plurality of parts of the face calculated from the image magnification ratio MR, and executes the three-dimensional face authentication of the authentication target person 100 based on that three-dimensional information.
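Since the exact MR(a) formula is deferred to JP 2017-241896, the distance recovery step can only be sketched here under an assumption: MR is a monotonic function of a (the fourth condition guarantees MR is a function of a), so a measured ratio can be inverted numerically. The model `mr_model` below is a hypothetical stand-in, not the patent's formula.

```python
# Hypothetical sketch: recover the distance a from a measured image
# magnification ratio by bisection, assuming mr_model(a) is monotonic in a.
# mr_model is a toy stand-in for the MR(a) relation given in JP 2017-241896.

def invert_mr(mr_measured, mr_model, a_min=0.1, a_max=10.0, iters=60):
    """Bisection inversion of a monotonic MR(a) model over [a_min, a_max]."""
    lo, hi = a_min, a_max
    increasing = mr_model(a_max) > mr_model(a_min)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # keep the half-interval that still brackets mr_measured
        if (mr_model(mid) < mr_measured) == increasing:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with the toy (decreasing) model MR(a) = (a + 2)/(a + 1), a measured ratio of 1.25 inverts to a distance of a = 3. Repeating this for each matched feature point pair yields the per-part distances from which the 3D face information is built.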
 Further, the authentication device 1 of the present embodiment includes the light source LS for irradiating the face of the authentication target person 100 with the light L in the specific wavelength band. Therefore, in addition to the biometric information 110 of the face of the authentication target person 100 observable by normal imaging, such as imaging under sunlight or imaging while illuminating the face with white light (for example, the eyes, nose, mouth, and ears; see FIG. 2), the authentication device 1 of the present embodiment can use, as feature points for distance measurement, the biometric information 120 of the face of the authentication target person 100 that becomes observable by irradiating the face with the light L in the specific wavelength band (for example, in the present embodiment, the veins of the face of the authentication target person 100).
 Each component of the authentication device 1 will now be described in detail. The control unit 2 exchanges various data and instructions with each component via the data bus 10 and controls the authentication device 1. The control unit 2 includes a processor for executing arithmetic processing and a memory storing the data, programs, modules, and the like necessary for controlling the authentication device 1; the processor of the control unit 2 executes the control of the authentication device 1 by using the data, programs, modules, and the like stored in the memory. The processor of the control unit 2 can also provide a desired function by using each component of the authentication device 1. For example, by using the three-dimensional information generation unit 4, the processor of the control unit 2 can execute processing for generating the three-dimensional information of the face of the authentication target person 100 based on the plurality of feature points of the face extracted by the feature point extraction unit 3.
 The processor of the control unit 2 is an arithmetic unit that executes arithmetic processing such as signal manipulation based on computer-readable instructions, for example, one or more microprocessors, microcomputers, microcontrollers, digital signal processors (DSPs), central processing units (CPUs), memory control units (MCUs), graphics processing units (GPUs), state machines, logic circuits, application-specific integrated circuits (ASICs), or a combination thereof. In particular, the processor of the control unit 2 is configured to fetch computer-readable instructions (for example, data, programs, and modules) stored in the memory of the control unit 2 and to execute signal manipulation and control.
 The memory of the control unit 2 is a removable or non-removable computer-readable medium including a volatile storage medium (for example, RAM, SRAM, or DRAM), a non-volatile storage medium (for example, ROM, EPROM, EEPROM, flash memory, or a hard disk), or a combination thereof. The memory of the control unit 2 also stores parameters that are determined by the configuration and arrangement of the first imaging system IS1 and the second imaging system IS2 and that are used in the calculation, described later, of the distance a to each of a plurality of parts of the face of the authentication target person 100.
 The light source LS is configured to irradiate the face of the authentication target person 100 with the light L in the specific wavelength band, and is configured and arranged so as to irradiate the entire face of the authentication target person 100 substantially uniformly with the light L in the specific wavelength band. The light source LS is not particularly limited as long as it can emit light in the predetermined wavelength band; for example, an LED capable of emitting light in the predetermined wavelength band can be used as the light source LS.
 In the present embodiment, the light L in the specific wavelength band emitted from the light source LS toward the authentication target person 100 is near-infrared light (light with a wavelength of about 700 to about 2500 nm). Since the reduced hemoglobin flowing through human veins has a high absorption rate for light in the near-infrared band, when the face of the authentication target person 100 illuminated with the near-infrared light from the light source LS is imaged by the first imaging system IS1 and the second imaging system IS2, the veins of the face of the authentication target person 100 appear dark in the resulting first face image and second face image. By utilizing this phenomenon, the authentication device 1 of the present embodiment can observe, in addition to the biometric information 110 of the face observable by normal imaging (for example, the eyes, nose, mouth, and ears), the veins of the face of the authentication target person 100 (biometric information 120), which become observable by irradiating the face with the light L in the specific wavelength band.
 The first imaging system IS1 includes: the first optical system OS1 for forming the first optical image of the face of the authentication target person 100; a first image sensor S1 for capturing the first optical image formed by the first optical system OS1 and acquiring the first face image of the authentication target person 100; and a first band-pass filter F1, arranged between the first optical system OS1 and the first image sensor S1, that passes only light in a wavelength band corresponding to the specific wavelength band of the light L emitted by the light source LS and substantially blocks light in other wavelength bands.
 Similarly, the second imaging system IS2 includes: the second optical system OS2 for forming the second optical image of the face of the authentication target person 100; a second image sensor S2 for capturing the second optical image formed by the second optical system OS2 and acquiring the second face image of the authentication target person 100; and a second band-pass filter F2, arranged between the second optical system OS2 and the second image sensor S2, that passes only light in a wavelength band corresponding to the specific wavelength band of the light L emitted by the light source LS and substantially blocks light in other wavelength bands.
 In the illustrated embodiment, the first imaging system IS1 has the first band-pass filter F1 and the second imaging system IS2 has the second band-pass filter F2, but the present invention is not limited to this. The first band-pass filter F1 is used to emphasize, in the first face image acquired by the first imaging system IS1, the biometric information 120 of the face of the authentication target person 100 that becomes observable by irradiating the face with the light L in the specific wavelength band. Similarly, the second band-pass filter F2 is used to emphasize, in the second face image acquired by the second imaging system IS2, the biometric information 120 of the face of the authentication target person 100 that becomes observable by irradiating the face with the light L in the specific wavelength band. Therefore, when the biometric information 120 can be sufficiently observed in the first face image without the first band-pass filter F1, the first imaging system IS1 need not have the first band-pass filter F1. Similarly, when the biometric information 120 can be sufficiently observed in the second face image without the second band-pass filter F2, the second imaging system IS2 need not have the second band-pass filter F2. An embodiment in which the first imaging system IS1 does not have the first band-pass filter F1 and/or the second imaging system IS2 does not have the second band-pass filter F2 is also within the scope of the present invention.
 Further, in the illustrated embodiment, the first band-pass filter F1 is arranged between the first optical system OS1 and the first image sensor S1, and the second band-pass filter F2 is arranged between the second optical system OS2 and the second image sensor S2, but the present invention is not limited to this. An embodiment in which the first band-pass filter F1 is mounted on the imaging surface of the first image sensor S1 so that the two are integrated, and/or the second band-pass filter F2 is mounted on the imaging surface of the second image sensor S2 so that the two are integrated, is also within the scope of the present invention.
 Further, in the illustrated embodiment, the first image sensor S1, the first optical system OS1, and the first band-pass filter F1 constituting the first imaging system IS1 are provided in one housing, and the second image sensor S2, the second optical system OS2, and the second band-pass filter F2 constituting the second imaging system IS2 are provided in a separate housing, but the present invention is not limited to this. An embodiment in which the first optical system OS1, the second optical system OS2, the first image sensor S1, the second image sensor S2, the first band-pass filter F1, and the second band-pass filter F2 are all provided in the same housing is also within the scope of the present invention.
 As described above, the first optical system OS1 and the second optical system OS2 are configured and arranged so as to satisfy at least one of the first to third conditions described above, as well as the fourth condition. Therefore, in the authentication device 1 of the present invention, the change in the magnification m1 of the first optical image formed by the first optical system OS1 according to the distance a to an arbitrary part (distance measurement target) of the face of the authentication target person 100 differs from the change in the magnification m2 of the second optical image formed by the second optical system OS2 according to that distance a. The image magnification ratio MR, which is the ratio of the magnification m1 of the first optical image to the magnification m2 of the second optical image obtained by such a configuration of the first optical system OS1 and the second optical system OS2, is used to calculate the distance a to an arbitrary part of the face of the authentication target person 100.
 Also, as illustrated, the optical axis of the first optical system OS1 and the optical axis of the second optical system OS2 are parallel but do not coincide. Furthermore, the second optical system OS2 is arranged shifted by a separation distance P in a direction perpendicular to the optical axis direction of the first optical system OS1.
 Each of the first image sensor S1 and the second image sensor S2 may be a color image sensor, such as a CMOS or CCD image sensor having color filters (for example, RGB primary-color filters or CMY complementary-color filters) arranged in an arbitrary pattern such as a Bayer array, or may be a monochrome image sensor without such color filters. The first face image obtained by the first image sensor S1 and the second face image obtained by the second image sensor S2 are color or monochrome luminance information of the face of the authentication target person 100. When the first band-pass filter F1 is used as in the illustrated embodiment, the wavelength band of the light that passes through the first band-pass filter F1 and reaches the imaging surface of the first image sensor S1 is limited by the first band-pass filter F1, so there is no need to use a color image sensor as the first image sensor S1; the first image sensor S1 is therefore preferably a monochrome image sensor in this case. For the same reason, when the second band-pass filter F2 is used, the second image sensor S2 is preferably a monochrome image sensor.
 The first optical system OS1 forms the first optical image of the face of the authentication target person 100 on the imaging surface of the first image sensor S1, and the first image sensor S1 acquires the first face image containing the first optical image of the face of the authentication target person 100. The acquired first face image is sent via the data bus 10 to the control unit 2 and the feature point extraction unit 3. Similarly, the second optical system OS2 forms the second optical image of the face of the authentication target person 100 on the imaging surface of the second image sensor S2, and the second image sensor S2 acquires the second face image containing the second optical image of the face of the authentication target person 100. The acquired second face image is sent via the data bus 10 to the control unit 2 and the feature point extraction unit 3.
 FIG. 2 schematically shows the first face image or the second face image obtained by imaging, with the first imaging system IS1 or the second imaging system IS2, the face of the authentication target person 100 illuminated by the near-infrared light from the light source LS. As described above, the face of the authentication target person 100 illuminated with the light L in the specific wavelength band from the light source LS (near-infrared light in the present embodiment) is imaged by the first imaging system IS1 and the second imaging system IS2, and the first face image and the second face image are acquired. Therefore, in the first face image and the second face image, in addition to the biometric information 110 of the face of the authentication target person 100 observable even by normal imaging (the eyes, nose, mouth, and the like), it is possible to observe the biometric information 120 of the face of the authentication target person 100 (in the present embodiment, the veins of the face of the authentication target person 100), which becomes observable by irradiating the face with the light L in the specific wavelength band.
 The first face image and the second face image sent to the feature point extraction unit 3 are used to acquire a plurality of feature points of the face of the authentication target person 100 in each of the first face image and the second face image. Meanwhile, the first face image and the second face image sent to the control unit 2 are used for image display by the display unit 8 and for communication of image signals by the communication unit 9.
 The feature point extraction unit 3 has a function of extracting a plurality of feature points of the face of the authentication target person 100 in each of the first face image received from the first imaging system IS1 and the second face image received from the second imaging system IS2. Specifically, the feature point extraction unit 3 first applies filter processing such as Canny edge detection to the first face image and extracts the plurality of pieces of biometric information 110, 120 of the face of the authentication target person 100 in the first face image as a plurality of feature points of the face of the authentication target person 100 in the first face image. Therefore, in the authentication device 1 of the present embodiment, the plurality of feature points of the face of the authentication target person 100 extracted by the feature point extraction unit 3 in each of the first face image and the second face image include both the biometric information 110 observable even by normal imaging, such as the eyes, nose, and mouth, and the biometric information 120 that becomes observable by irradiating the face of the authentication target person 100 with the light L in the specific wavelength band (in the present embodiment, the veins of the face of the authentication target person 100).
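The filtering step above can be sketched in miniature. The function below is a deliberately simplified stand-in for the Canny-style processing named in the text: it computes a forward-difference gradient magnitude and thresholds it, which is enough to show how both bright facial features and dark vein patterns produce strong edge responses (a full Canny pipeline would add smoothing, non-maximum suppression, and hysteresis).

```python
# Simplified stand-in for the Canny-style filter step (illustration only):
# forward-difference gradients + threshold, yielding a boolean edge mask.
import numpy as np

def edge_feature_mask(image, threshold):
    """Return a boolean mask of pixels with strong luminance gradients."""
    img = np.asarray(image, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal gradient
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical gradient
    return np.hypot(gx, gy) > threshold
```

Applied to the near-infrared face image, the True pixels of such a mask would be candidates for the feature points carrying the biometric information 110 and 120.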
 Next, the feature point extraction unit 3 executes corresponding-feature-point detection processing for detecting the plurality of feature points of the face of the authentication target person 100 in the second face image that respectively correspond to the extracted plurality of feature points of the face of the authentication target person 100 in the first face image. In the corresponding-feature-point detection processing, the feature point extraction unit 3 uses the parameters concerning the characteristics and arrangement of the first imaging system IS1 and the second imaging system IS2 stored in the memory of the control unit 2 to derive the epipolar lines in the second face image that respectively correspond to the extracted feature points in the first face image, and detects the corresponding feature points in the second face image by searching along the derived epipolar lines. To detect the corresponding feature points in the second face image using the epipolar lines, the feature point extraction unit 3 can use any corresponding-feature-point detection algorithm known in the art (for example, the eight-point algorithm or Tsai's algorithm).
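The epipolar-line search above can be illustrated with a minimal sketch. Everything below is an assumption-laden toy, not the patent's implementation: the fundamental matrix `F` models two parallel cameras separated purely horizontally (consistent with the parallel, shifted optical axes described in this embodiment), for which epipolar lines are horizontal, and the "search" simply picks the candidate point closest to the line.

```python
# Illustrative epipolar-line search: for a feature point x1 in the first face
# image, the epipolar line in the second image is l = F @ x1, and the
# corresponding feature point is the candidate nearest to that line.
import numpy as np

def find_correspondence(x1, candidates, F):
    """x1 and candidates are homogeneous 2D points (3-vectors).
    Returns the index of the candidate closest to the epipolar line of x1."""
    line = F @ x1                              # line coefficients (a, b, c)
    norm = np.hypot(line[0], line[1])          # for point-line distance
    dists = [abs(float(line @ x2)) / norm for x2 in candidates]
    return int(np.argmin(dists))

# Fundamental matrix for a pure horizontal translation between two identical
# parallel cameras (a hypothetical stand-in for the stored system parameters):
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
```

With this `F`, a point at image row y = 50 in the first image maps to the horizontal epipolar line y = 50 in the second image, so only candidates near that row are considered matches.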
 Information on the plurality of feature points of the face of the authentication target person 100 in the first face image extracted by the feature point extraction unit 3 and on the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image (for example, the coordinate values of the biometric information 110 and 120) is transmitted to the three-dimensional information generation unit 4.
 The three-dimensional information generation unit 4 has a function of generating the three-dimensional information of the face of the authentication target person 100 based on the plurality of feature points of the face of the authentication target person 100 extracted by the feature point extraction unit 3. Upon receiving from the feature point extraction unit 3 the information on the plurality of feature points of the face of the authentication target person 100 in the first face image and the corresponding plurality of feature points in the second face image, the three-dimensional information generation unit 4 uses these feature points to calculate the image magnification ratio MR between the magnification m1 of the first optical image and the magnification m2 of the second optical image, and generates the three-dimensional information of the face of the authentication target person 100 based on the calculated image magnification ratio MR.
 Specifically, upon receiving from the feature point extraction unit 3 the information about the plurality of feature points of the face of the authentication subject 100 in the first face image and the corresponding feature points in the second face image, the three-dimensional information generation unit 4 measures the separation distances between feature points of the face in the first face image, thereby obtaining the sizes YFD1 of a plurality of portions of the first optical image of the face of the authentication subject 100. In doing so, the three-dimensional information generation unit 4 can obtain the sizes YFD1 of the plurality of portions of the first optical image by selecting a plurality of combinations of the feature points in the first face image used to obtain each size YFD1.
 For example, the three-dimensional information generation unit 4 can obtain the image height of an arbitrary portion of the first optical image by selecting, from the plurality of feature points of the face of the authentication subject 100 in the first face image, feature points adjacent to each other in the height direction and measuring the distance between them. Similarly, the three-dimensional information generation unit 4 can obtain the image width of an arbitrary portion of the first optical image by selecting feature points adjacent to each other in the width direction and measuring the distance between them.
 The selection of combinations of feature points in the first face image by the three-dimensional information generation unit 4 may be performed so as to cover all combinations of the feature points of the face of the authentication subject 100 in the first face image, or so as to cover only as many combinations of feature points as are sufficient to accurately generate the three-dimensional information of the face of the authentication subject 100.
 After obtaining the sizes YFD1 of the plurality of portions of the first optical image of the face of the authentication subject 100, the three-dimensional information generation unit 4 obtains, by the same method used to obtain the sizes YFD1, the sizes YFD2 of the corresponding plurality of portions of the second optical image, based on the corresponding plurality of feature points of the face of the authentication subject 100 in the second face image.
 The ratio between the size YFD1 of an arbitrary portion of the first optical image obtained by the three-dimensional information generation unit 4 and the size YFD2 of the corresponding portion of the second optical image corresponds to the image magnification ratio MR (m2/m1) between the magnification m1 of the first optical image and the magnification m2 of the second optical image. Therefore, this ratio is used as the image magnification ratio MR for calculating the distance a to the corresponding portion of the face of the authentication subject 100.
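The size and ratio computations described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the feature-point names and pixel coordinates are hypothetical, and each "size" Y is simply the pixel separation between a selected pair of feature points in one face image.

```python
import math

def feature_sizes(points, pairs):
    """Pixel separation (size Y) for each selected pair of facial feature
    points. `points` maps a feature-point id to its (x, y) coordinate in
    one face image; `pairs` lists the selected id pairs."""
    return [math.dist(points[i], points[j]) for i, j in pairs]

def magnification_ratios(points1, points2, pairs):
    """Image magnification ratio MR = YFD2 / YFD1 for each measured
    location, from corresponding feature points in the two face images."""
    y1 = feature_sizes(points1, pairs)
    y2 = feature_sizes(points2, pairs)
    return [b / a for a, b in zip(y1, y2)]

# Hypothetical feature-point coordinates (pixels), for illustration only.
pts1 = {"eye_l": (100.0, 120.0), "eye_r": (180.0, 120.0), "nose": (140.0, 180.0)}
pts2 = {"eye_l": (105.0, 126.0), "eye_r": (189.0, 126.0), "nose": (147.0, 189.0)}
pairs = [("eye_l", "eye_r"), ("eye_l", "nose")]

print(magnification_ratios(pts1, pts2, pairs))  # one MR (= m2/m1) per location
```

Because the second image here is a uniformly scaled copy of the first, every pair yields the same ratio; with real optics the ratio varies from location to location with the distance a, which is what makes it usable for ranging.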
 After calculating the image magnification ratio MR, the three-dimensional information generation unit 4 calculates the distance a to an arbitrary portion (the distance measurement target) of the face of the authentication subject 100 based on the image magnification ratio MR (m2/m1) between the magnification m1 of the first optical image and the magnification m2 of the second optical image at that portion. Specifically, the three-dimensional information generation unit 4 calculates the distance a to the arbitrary portion (the distance measurement target) of the face of the authentication subject 100 using the following formula (1).
[Formula (1): rendered as an image (JPOXMLDOC01-appb-M000001) in the original publication.]
 Here, a is the distance from the distance measurement target to the front principal point of the first optical system OS1 of the first imaging system IS1, f1 is the focal length of the first optical system OS1, f2 is the focal length of the second optical system OS2, EP1 is the distance from the exit pupil of the first optical system OS1 to the image formation position of the first optical image when the distance measurement target is at infinity, EP2 is the distance from the exit pupil of the second optical system OS2 to the image formation position of the second optical image when the distance measurement target is at infinity, and D is the depth disparity between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2.
 Further, K in the above formula (1) is a coefficient expressed by the following formula (2), and is a fixed value determined by the configurations and arrangements of the first imaging system IS1 and the second imaging system IS2.
[Formula (2): rendered as an image (JPOXMLDOC01-appb-M000002) in the original publication.]
 Here, aFD1 is the distance from the front principal point of the first optical system OS1 to the distance measurement target when the first optical image is in best focus on the imaging surface of the first image sensor S1, and aFD2 is the distance from the front principal point of the second optical system OS2 to the distance measurement target when the second optical image is in best focus on the imaging surface of the second image sensor S2. Except for the image magnification ratio MR, the parameters used in the above formulas (1) and (2) are fixed values determined when the first imaging system IS1 and the second imaging system IS2 are configured, and are stored in the memory of the control unit 2. Using these parameters stored in the memory of the control unit 2 together with the image magnification ratio MR, the three-dimensional information generation unit 4 can calculate, from formula (1), the distance a to an arbitrary portion (the distance measurement target) of the face of the authentication subject 100.
 In the above formula (1), f1, f2, EP1, EP2, D, and K are fixed values determined by the configurations and arrangements of the first imaging system IS1 and the second imaging system IS2. Therefore, the distance a from an arbitrary portion (the distance measurement target) of the face of the authentication subject 100 to the front principal point of the first optical system OS1 can be calculated based on the image magnification ratio MR between the magnification m1 of the first optical image and the magnification m2 of the second optical image, which is obtained from the ratio between the size YFD1 of that portion of the first optical image and the size YFD2 of the corresponding portion of the second optical image.
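Formula (1) itself appears only as an image in the source, so it cannot be reproduced here. As an illustration of the underlying principle only, the following sketch inverts a deliberately simplified thin-lens model (m1 = f1/a, m2 = f2/(a + D), with the exit-pupil terms EP1, EP2 and the coefficient K of the patent's exact formula ignored). It is not the patent's formula (1), and all numeric parameters are assumed values.

```python
def distance_from_mr(mr, f1, f2, d):
    """Distance a to the measured facial location under the simplified model
        MR = m2/m1 = f2*a / (f1*(a + D))  =>  a = MR*f1*D / (f2 - MR*f1).
    All quantities are in consistent units (e.g. millimetres)."""
    denom = f2 - mr * f1
    if denom == 0:
        # Degenerate geometry: the ratio carries no depth information.
        raise ValueError("image magnification ratio carries no depth information")
    return mr * f1 * d / denom

# Assumed parameters: equal focal lengths f = 4 mm, depth disparity D = 10 mm,
# true distance 400 mm. Forward model gives MR, inversion recovers the distance.
f = 4.0
true_a = 400.0
mr = (f * true_a) / (f * (true_a + 10.0))   # MR = a / (a + D) when f1 = f2
print(distance_from_mr(mr, f, f, 10.0))     # recovers approximately 400.0
```

The design point this illustrates is the same as in the patent: because the two imaging systems see the face from depths offset by D, the ratio of the two magnifications varies monotonically with the distance a, so a can be solved for from MR and fixed calibration constants alone, without pattern projection.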
 The derivation of the above formula (1) for calculating the distance a to the distance measurement target based on the image magnification ratio MR is described in detail in the above-mentioned Japanese Patent Application No. 2017-241896 already filed by the present inventors, and its description is therefore omitted here.
 The three-dimensional information generation unit 4 uses the ratio between the size YFD1 of each of the plurality of portions of the first optical image and the size YFD2 of the corresponding portion of the second optical image as the image magnification ratio MR (m2/m1) between the magnification m1 of the first optical image and the magnification m2 of the second optical image, and calculates, using the above formula (1), the distance a to each of the plurality of portions of the face of the authentication subject 100.
 Thereafter, the three-dimensional information generation unit 4 generates the three-dimensional information of the face of the authentication subject 100 based on the calculated distances a to each of the plurality of portions of the face. Specifically, having calculated the distance a to each of the plurality of portions of the face of the authentication subject 100, the three-dimensional information generation unit 4 generates, based on these distances, a three-dimensional grid (three-dimensional mesh) and a texture of the face of the authentication subject 100, thereby constructing a three-dimensional model of the face.
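The step from per-feature distances a to mesh vertices can be sketched as a pinhole back-projection. This is an editorial illustration, not the patent's method: the focal length in pixels, principal point, and feature positions are all assumed values, and the subsequent meshing and texturing are left to standard triangulation.

```python
def back_project(u, v, a, f, cx, cy):
    """Map a pixel feature point (u, v) with measured distance a into camera
    coordinates via a pinhole model. The focal length f (pixels) and the
    principal point (cx, cy) are assumed calibration values."""
    z = a
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return (x, y, z)

# Hypothetical feature points: pixel position plus the distance a obtained
# for that location from formula (1).
features = [
    ((320.0, 240.0), 400.0),   # nose tip (closer to the camera)
    ((250.0, 200.0), 415.0),   # left-cheek vein branch
    ((390.0, 200.0), 415.0),   # right-cheek vein branch
]
cloud = [back_project(u, v, a, f=800.0, cx=320.0, cy=240.0)
         for (u, v), a in features]
print(cloud)  # vertices of the 3-D grid; meshing and texturing would follow
```

Because the vein feature points (biometric information 120) populate the cheeks and forehead, the resulting vertex cloud is dense away from the facial center line, which is exactly where a model built from ordinary landmarks alone would be sparse.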
 As described above, in the authentication device 1 of the present embodiment, in addition to the biometric information 110 of the face of the authentication subject 100 that can be observed by ordinary imaging (for example, the eyes, nose, mouth, and ears), the biometric information 120 of the face that becomes observable by irradiating the face with light L in a specific wavelength band (in the present embodiment, the veins of the face of the authentication subject 100) can be used as feature points for distance measurement. Therefore, in the authentication device 1 of the present embodiment, a larger amount of information is available for three-dimensionally modeling the face of the authentication subject 100 than in the case where only the biometric information 110 observable by ordinary imaging is used as feature points. The three-dimensional information generation unit 4 can therefore generate the three-dimensional information of the face of the authentication subject 100 more accurately.
 In particular, while a large amount of biometric information 110 exists near the center line of the face of the authentication subject 100, little biometric information 110 exists elsewhere on the face. Therefore, when the face of the authentication subject 100 is three-dimensionally modeled using only the biometric information 110 observable by ordinary imaging as feature points, the three-dimensional information at locations other than near the center line of the face becomes insufficient. On the other hand, as can be seen from FIG. 2, the biometric information 120 of the face of the authentication subject 100 (in the present embodiment, the veins of the face) is also abundant at locations other than near the center line of the face. Therefore, the three-dimensional information generation unit 4 can generate highly accurate three-dimensional information of the face of the authentication subject 100, particularly at locations other than near the center line of the face.
 The authentication information storage unit 5 is an arbitrary non-volatile recording medium (for example, a hard disk or a flash memory) that stores the authentication information required to execute the three-dimensional face authentication of the authentication subject 100. The administrator or the like of the authentication device 1 of the present invention images, in advance, a person authorized for authentication using the authentication device 1 of the present invention or an imaging device having equivalent functions, thereby acquiring the three-dimensional information of that person's face, and registers it in the authentication information storage unit 5 as the authentication information.
 In the illustrated embodiment, the authentication information storage unit 5 is provided inside the authentication device 1, but the present invention is not limited to this. For example, the authentication information storage unit 5 may be an external server or an external storage device connected to the authentication device 1 via various wired or wireless networks such as the Internet, a local area network (LAN), or a wide area network (WAN). When the authentication information storage unit 5 is an external server or an external storage device, one or more authentication information storage units 5 may be shared among a plurality of authentication devices 1. In this case, each time three-dimensional face authentication is executed for an authentication subject 100, the authentication device 1 communicates with the externally provided authentication information storage unit 5 using the communication unit 9 to execute the three-dimensional face authentication of the authentication subject 100.
 The authentication unit 6 is configured to be able to execute the three-dimensional face authentication of the authentication subject 100 using the three-dimensional information of the face generated by the three-dimensional information generation unit 4. The authentication unit 6 collates the three-dimensional information of the face of the authentication subject 100 generated by the three-dimensional information generation unit 4 against the authentication information stored in the authentication information storage unit 5, and executes the three-dimensional face authentication of the authentication subject 100.
 The authentication unit 6 may determine that the three-dimensional face authentication has succeeded if any one of a plurality of elements derived from the three-dimensional information of the face of the authentication subject 100, such as the height of the nose, the depth of the eye sockets, and the positions and shapes of the veins, matches the corresponding element derived from the three-dimensional face information registered in advance in the authentication information storage unit 5 as the authentication information; alternatively, it may determine that the three-dimensional face authentication has succeeded only when all of the plurality of elements match the corresponding elements derived from the registered three-dimensional face information.
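The two matching policies described above (any one element matches, or all elements must match) can be sketched as follows. The element names, values, and tolerances are hypothetical; the patent does not specify a comparison metric, so a simple absolute-difference tolerance stands in for it here.

```python
def match_element(candidate, registered, tol):
    """A single derived element matches if it is within tolerance."""
    return abs(candidate - registered) <= tol

def authenticate(candidate, registered, tolerances, require_all):
    """Compare elements derived from the generated 3-D face information
    (e.g. nose height, eye-socket depth) with the registered values.
    require_all=True is the stricter policy (all elements must match);
    require_all=False accepts a single matching element."""
    checks = [match_element(candidate[k], registered[k], tolerances[k])
              for k in registered]
    return all(checks) if require_all else any(checks)

# Hypothetical measurements in millimetres.
registered = {"nose_height": 24.0, "eye_depth": 11.0}
tolerances = {"nose_height": 1.0, "eye_depth": 1.0}
candidate  = {"nose_height": 24.4, "eye_depth": 13.0}

print(authenticate(candidate, registered, tolerances, require_all=False))  # True
print(authenticate(candidate, registered, tolerances, require_all=True))   # False
```

Choosing between the two policies is effectively the security-level setting mentioned elsewhere in this document: the all-elements policy lowers false acceptances at the cost of more false rejections.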
 The result (determination result) of the three-dimensional face authentication of the authentication subject 100 by the authentication unit 6 is sent to the control unit 2 via the data bus 10. The control unit 2 transmits the received result of the three-dimensional face authentication to an external device (for example, a door unlocking device or a terminal providing an arbitrary application) via the communication unit 9. The external device can thereby execute processing according to the received authentication result. For example, when the external device receives a result indicating that the three-dimensional face authentication of the authentication subject 100 has succeeded, it releases a physical lock such as a door lock or a software lock, or permits the launch of an arbitrary application; when it receives a result indicating that the three-dimensional face authentication of the authentication subject 100 has failed, it maintains the physical lock or software lock, or does not permit the launch of the application.
 The operation unit 7 is used by the user, administrator, or the like of the authentication device 1 to perform operations. The operation unit 7 is not particularly limited as long as the user of the authentication device 1 can perform operations with it; for example, a mouse, a keyboard, a numeric keypad, buttons, dials, levers, or a touch panel can be used as the operation unit 7. The operation unit 7 transmits signals corresponding to the operations of the user of the authentication device 1 to the processor of the control unit 2. For example, as described above, the administrator or the like of the authentication device 1 can set the security level of the authentication device 1 using the operation unit 7.
 The communication unit 9 has a function of inputting data to the authentication device 1 or outputting data from the authentication device 1 to an external device by wired or wireless communication. The communication unit 9 may be configured to be connectable to a network such as the Internet. In this case, the authentication device 1 can use the communication unit 9 to communicate with external devices such as externally provided web servers and data servers.
 As described above, the authentication device 1 of the present embodiment can use, as feature points for distance measurement, not only the biometric information 110 of the face of the authentication subject 100 observable by ordinary imaging (for example, the eyes, nose, mouth, and ears) but also the biometric information 120 of the face that becomes observable by irradiating the face with light L in a specific wavelength band (in the present embodiment, the veins of the face of the authentication subject 100). Therefore, the number of feature points available for acquiring the three-dimensional information of the face of the authentication subject 100 increases, the three-dimensional information of the face can be acquired more accurately, and the accuracy of the three-dimensional face authentication of the authentication subject 100 can be improved.
 In addition, in the authentication device 1 of the present embodiment, the biometric information 120 of the face of the authentication subject 100 (the veins of the face), which is abundant even at locations other than near the center line of the face, can be used as feature points for generating the three-dimensional information of the face. Therefore, according to the authentication device 1 of the present embodiment, the distance to locations with little unevenness and few usable feature points, such as the cheeks and forehead, can be calculated without using a projector that irradiates the authentication subject 100 with a fixed pattern of light. The system configuration of the authentication device 1 can therefore be kept simple. As a result, compared with the case of using a projector that irradiates a fixed pattern of light onto the face of the authentication subject, reductions in the size, power consumption, and cost of the authentication device 1 can be achieved.
 In the above description, the light source LS is configured to irradiate the face of the authentication subject 100 with near-infrared light, but the present invention is not limited to this. The light source LS may be configured to irradiate the face of the authentication subject 100 with light in any wavelength band that makes the veins of the face of the authentication subject 100 observable.
 Further, in the present embodiment, the veins of the face of the authentication subject 100 are cited as the biometric information 120 that becomes observable by irradiating the face with light L in a specific wavelength band, but the present invention is not limited to this. Any kind of biometric information 120 of the face of the authentication subject 100 that becomes observable by irradiating the face with light L in a specific wavelength band can be used as feature points for generating the three-dimensional information of the face of the authentication subject 100.
 (Authentication method)
 Next, with reference to FIG. 3, the authentication method S100 executed by the authentication device 1 of the present embodiment will be described in detail. FIG. 3 is a flowchart showing the authentication method executed by the authentication device shown in FIG. 1.
 The authentication method S100 shown in FIG. 3 is started when the authentication subject 100 uses the operation unit 7 to perform an operation for executing the three-dimensional face authentication of the authentication subject 100. In step S101, under the control of the processor of the control unit 2, the light source LS irradiates the face of the authentication subject 100 with light L in the specific wavelength band (in the present embodiment, near-infrared light). In step S102, the first image sensor S1 of the first imaging system IS1 captures the first optical image of the face of the authentication subject 100 formed by the first optical system OS1, and the first face image is acquired. The first face image is sent to the control unit 2 and the feature point extraction unit 3 via the data bus 10.
 Meanwhile, in step S103, the second image sensor S2 of the second imaging system IS2 captures the second optical image of the face of the authentication subject 100 formed by the second optical system OS2, and the second face image is acquired. The second face image is sent to the control unit 2 and the feature point extraction unit 3 via the data bus 10. Steps S102 and S103 may be executed simultaneously or separately. However, since the three-dimensional information of the face of the authentication subject 100 can be generated more accurately when the face in the same state is imaged by both the first imaging system IS1 and the second imaging system IS2, steps S102 and S103 are preferably executed simultaneously.
 After steps S102 and S103, in step S104, the feature point extraction unit 3 applies filtering such as a Canny filter to the first face image, and the plurality of pieces of biometric information 110 and 120 of the first optical image of the face of the authentication subject 100 in the first face image are extracted as the plurality of feature points of the face of the authentication subject 100 in the first face image.
 At this time, the feature point extraction unit 3 extracts, as the plurality of feature points of the face of the authentication subject 100 in the first face image, both the biometric information 110 of the face observable even by ordinary imaging, such as the eyes, nose, and mouth, and the biometric information 120 that becomes observable by irradiating the face with light L in the specific wavelength band (in the present embodiment, the veins of the face of the authentication subject 100). Next, using the parameters relating to the characteristics and arrangements of the first imaging system IS1 and the second imaging system IS2 stored in the memory of the control unit 2, the feature point extraction unit 3 derives the epipolar lines in the second face image respectively corresponding to the extracted feature points of the face of the authentication subject 100 in the first face image, and detects the corresponding feature points of the face in the second face image by searching along the derived epipolar lines. Thereafter, the information about the plurality of feature points of the face of the authentication subject 100 in each of the first face image and the second face image extracted by the feature point extraction unit 3 is transmitted to the three-dimensional information generation unit 4.
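The epipolar correspondence search can be sketched as a one-dimensional template match. This is an editorial illustration only: it assumes rectified imaging systems, in which the epipolar line of a feature point is simply a scanline (in general the line must be derived from the calibration parameters, as the text describes), and it uses a sum-of-absolute-differences score over small hypothetical intensity patches.

```python
def sad(p, q):
    """Sum of absolute differences between two equal-sized patches."""
    return sum(abs(a - b) for a, b in zip(p, q))

def search_epipolar_line(patch, candidates):
    """Return the position along the epipolar line in the second face image
    whose patch best matches the feature-point patch from the first image.
    `candidates` maps a position on the line to its patch."""
    return min(candidates, key=lambda c: sad(patch, candidates[c]))

# Toy 3x3 patches flattened to 9 intensity values.
ref = [10, 20, 10, 20, 90, 20, 10, 20, 10]        # a vein crossing in image 1
candidates = {
    101: [12, 18, 11, 22, 40, 19, 12, 18, 11],
    134: [10, 21, 10, 19, 88, 20, 11, 20, 10],     # near-identical appearance
    170: [50, 50, 50, 50, 50, 50, 50, 50, 50],
}
print(search_epipolar_line(ref, candidates))       # best-matching position: 134
```

Restricting the search to the epipolar line is what keeps the correspondence problem tractable: instead of scanning the whole second face image for each feature point, only a single line of candidate positions needs to be scored.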
 In step S105, the three-dimensional information generation unit 4 calculates the sizes (image widths or image heights) YFD1 of a plurality of portions of the first optical image of the face of the authentication subject 100, based on the plurality of feature points of the face in the first face image extracted by the feature point extraction unit 3. Then, in step S106, the three-dimensional information generation unit 4 calculates the sizes YFD2 of the corresponding portions of the second optical image, based on the feature points of the face in the second face image corresponding to those used to obtain the sizes YFD1.
 工程S107において、3次元情報生成部4によって、第1の光学像の複数の箇所のそれぞれのサイズYFD1および第2の光学像の対応する箇所のサイズYFD2から、MR=YFD2/YFD1に基づいて、第1の光学像の複数の箇所のそれぞれの倍率m1と第2の光学像の対応する箇所の倍率m2との像倍比MRが算出される。 In step S107, from the size YFD1 of each of the plurality of portions of the first optical image and the size YFD2 of the corresponding portion of the second optical image, the three-dimensional information generation unit 4 calculates, based on MR = YFD2/YFD1, the image magnification ratio MR between the magnification m1 of each of the plurality of portions of the first optical image and the magnification m2 of the corresponding portion of the second optical image.
 次に、工程S108において、3次元情報生成部4によって、算出した像倍比MRに基づいて、認証対象者100の顔の複数の箇所のそれぞれまでの距離aが算出(特定)される。その後、工程S109において、3次元情報生成部4によって、認証対象者100の顔の複数の箇所のそれぞれまでの距離aに基づいて、認証対象者100の顔の3次元情報が生成される。 Next, in step S108, the three-dimensional information generation unit 4 calculates (identifies) the distance a to each of the plurality of portions of the face of the authentication subject 100 based on the calculated image magnification ratio MR. Then, in step S109, the three-dimensional information generation unit 4 generates three-dimensional information of the face of the authentication target person 100 based on the distances a to each of the plurality of portions of the face of the authentication target person 100.
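The closed-form relation between the image magnification ratio MR and the distance a is not reproduced in this passage, so the following is only a sketch under a thin-lens model: each optical system is treated as a thin lens with magnification m = f/(a − f), and MR = m2/m1 is inverted for a. The focal lengths f1, f2 and the axial offset d of the second lens are assumed, illustrative parameters, not values from the patent.

```python
def magnification(f, a):
    """Thin-lens magnification of an object at distance a from a lens of focal length f."""
    return f / (a - f)

def distance_from_mr(mr, f1, f2, d=0.0):
    """Invert MR = m2/m1 for the object distance a under the thin-lens model,
    where m1 = f1/(a - f1) and m2 = f2/(a + d - f2)."""
    return f1 * (mr * (f2 - d) - f2) / (mr * f1 - f2)

# Object 1 m away, seen by lenses of focal length 10 mm and 20 mm (no axial offset).
mr = magnification(0.02, 1.0) / magnification(0.01, 1.0)
a = distance_from_mr(mr, 0.01, 0.02)  # recovers 1.0 m
```

The inversion requires the two imaging systems to have different characteristics (here, different focal lengths); if f1 = f2 and d = 0, MR is 1 at every distance and carries no depth information, which is why the second embodiment switches to translational parallax instead.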
 工程S110において、認証部6によって、3次元情報生成部4が生成した認証対象者100の顔の3次元情報と、認証情報記憶部5に事前に登録されている認証情報に含まれる顔の3次元情報とを比較することにより、認証対象者100の3次元顔認証が実行される。その後、認証部6による認証対象者100の3次元顔認証の結果が、制御部2に送信される。制御部2は、受信した認証結果を、通信部9を介して、任意の外部デバイスに送信し、認証方法S100が終了する。これにより、任意の外部デバイスは、認証結果に応じた処理を実行することができる。 In step S110, the authentication unit 6 executes the three-dimensional face authentication of the authentication target person 100 by comparing the three-dimensional information of the face of the authentication target person 100 generated by the three-dimensional information generation unit 4 with the three-dimensional information of the face included in the authentication information registered in advance in the authentication information storage unit 5. After that, the result of the three-dimensional face authentication of the authentication target person 100 by the authentication unit 6 is transmitted to the control unit 2. The control unit 2 transmits the received authentication result to an arbitrary external device via the communication unit 9, and the authentication method S100 ends. Thereby, the arbitrary external device can execute processing according to the authentication result.
 <第2実施形態>
 次に、本発明の第2実施形態に係る認証装置1について詳述する。以下、第2実施形態の認証装置1について、第1実施形態の認証装置1との相違点を中心に説明し、同様の事項については、その説明を省略する。本実施形態の認証装置1は、第1の撮像系IS1および第2の撮像系IS2が同じ構成および特性を有するよう構成および配置されている点、さらに、3次元情報生成部4が、第1の顔画像中の認証対象者100の顔の複数の特徴点と、対応する第2の顔画像中の認証対象者100の顔の複数の特徴点との間の並進視差に基づいて、認証対象者100の顔の複数の特徴点のそれぞれまでの距離aを算出するよう構成されている点を除き、第1実施形態の認証装置1と同様である。
<Second Embodiment>
Next, the authentication device 1 according to the second embodiment of the present invention will be described in detail. Hereinafter, the authentication device 1 of the second embodiment will be described focusing on the differences from the authentication device 1 of the first embodiment, and the description of the same items will be omitted. The authentication device 1 of the present embodiment is the same as the authentication device 1 of the first embodiment except that the first imaging system IS1 and the second imaging system IS2 are configured and arranged so as to have the same configuration and characteristics, and that the three-dimensional information generation unit 4 is configured to calculate the distance a to each of the plurality of feature points of the face of the authentication target person 100 based on the translational parallax between the plurality of feature points of the face of the authentication target person 100 in the first face image and the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image.
 本実施形態では、第1の撮像系IS1および第2の撮像系IS2は、互いに同じ構成および特性を有するよう構成および配置されている。一方、図1を参照して詳述した第1実施形態と同様に、第1の撮像系IS1および第2の撮像系IS2は、第1の撮像系IS1の第1の光学系OS1の光軸と、第2の撮像系IS2の第2の光学系OS2の光軸とは、平行であるが、一致していない。さらに、第2の光学系OS2は、第1の光学系OS1の光軸方向に対して垂直な方向に離間距離Pだけシフトして配置されている。 In the present embodiment, the first imaging system IS1 and the second imaging system IS2 are configured and arranged so as to have the same configuration and characteristics. On the other hand, as in the first embodiment described in detail with reference to FIG. 1, the optical axis of the first optical system OS1 of the first imaging system IS1 and the optical axis of the second optical system OS2 of the second imaging system IS2 are parallel but do not coincide. Further, the second optical system OS2 is arranged so as to be shifted by the separation distance P in the direction perpendicular to the optical axis direction of the first optical system OS1.
 したがって、第1の撮像系IS1によって認証対象者100の顔を撮像することによって取得される第1の顔画像と、第2の撮像系IS2によって認証対象者100の顔を撮像することによって取得される第2の顔画像との相違は、第1の撮像系IS1の第1の光学系OS1の光軸と、第2の撮像系IS2の第2の光学系OS2の光軸との間の離間距離Pに起因する並進視差(第1の光学系OS1の光軸方向に対して垂直な方向の視差)のみとなる。 Therefore, the only difference between the first face image acquired by imaging the face of the authentication target person 100 with the first imaging system IS1 and the second face image acquired by imaging the face of the authentication target person 100 with the second imaging system IS2 is the translational parallax (parallax in the direction perpendicular to the optical axis direction of the first optical system OS1) caused by the separation distance P between the optical axis of the first optical system OS1 of the first imaging system IS1 and the optical axis of the second optical system OS2 of the second imaging system IS2.
 したがって、本実施形態では、3次元情報生成部4は、第1の顔画像中の認証対象者100の顔の複数の特徴点と、対応する第2の顔画像中の認証対象者100の顔の複数の特徴点との間の並進視差に基づいて、認証対象者100の顔の複数の特徴点のそれぞれまでの距離aを算出し、算出された認証対象者100の顔の複数の特徴点のそれぞれまでの距離aに基づいて、認証対象者100の顔の3次元情報を生成する。 Therefore, in the present embodiment, the three-dimensional information generation unit 4 calculates the distance a to each of the plurality of feature points of the face of the authentication target person 100 based on the translational parallax between the plurality of feature points of the face of the authentication target person 100 in the first face image and the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image, and generates the three-dimensional information of the face of the authentication target person 100 based on the calculated distances a to the plurality of feature points.
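Because the two imaging systems are identical and their optical axes are parallel with baseline P, this translational parallax is the classical stereo disparity, for which depth follows the standard pinhole relation a = f·P/d. The patent does not spell this formula out here, so the sketch below, with the focal length expressed in pixels, is an assumption based on standard stereo geometry, and the numeric values are illustrative:

```python
def distance_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole-stereo depth a = f * P / d (f in pixels, P in meters, d in pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f_px * baseline_m / disparity_px

# A feature shifted 100 px between the two face images, f = 1000 px, P = 5 cm.
a = distance_from_disparity(1000.0, 0.05, 100.0)  # 0.5 m
```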
 このような構成によっても、前述した第1実施形態の認証装置1と同様の効果および作用を提供することができる。 With such a configuration, it is possible to provide the same effects and actions as those of the authentication device 1 of the first embodiment described above.
 (認証方法)
 図4には、本実施形態の認証装置1によって実行される認証方法S200について詳述する。図4は、本発明の第2実施形態に係る認証装置によって実行される認証方法を示すフローチャートである。
(Authentication method)
Referring to FIG. 4, the authentication method S200 executed by the authentication device 1 of the present embodiment will be described in detail. FIG. 4 is a flowchart showing the authentication method executed by the authentication device according to the second embodiment of the present invention.
 図4に示す認証方法S200における工程S201~工程S204は、図2を参照して詳述した第1実施形態の認証装置1によって実行される認証方法S100の工程S101~工程S104と同一であるので、説明を省略する。 Steps S201 to S204 of the authentication method S200 shown in FIG. 4 are the same as steps S101 to S104 of the authentication method S100 executed by the authentication device 1 of the first embodiment described in detail with reference to FIG. 2, and therefore their description is omitted.
 工程S205において、3次元情報生成部4によって、工程S204において特徴点抽出部3によって抽出された第1の顔画像中の認証対象者100の顔の複数の特徴点と、対応する第2の顔画像中の認証対象者100の顔の複数の特徴点との間の並進視差が算出される。3次元情報生成部4は、算出された並進視差に基づいて、認証対象者100の顔の複数の特徴点のそれぞれまでの距離aを算出する。 In step S205, the three-dimensional information generation unit 4 calculates the translational parallax between the plurality of feature points of the face of the authentication target person 100 in the first face image extracted by the feature point extraction unit 3 in step S204 and the corresponding plurality of feature points of the face of the authentication target person 100 in the second face image. The three-dimensional information generation unit 4 then calculates the distance a to each of the plurality of feature points of the face of the authentication target person 100 based on the calculated translational parallax.
 その後、工程S206において、第1実施形態の認証装置1によって実行される認証方法S100の工程S109と同様に、3次元情報生成部4によって、認証対象者100の顔の複数の特徴点のそれぞれまでの距離aに基づいて、認証対象者100の顔の3次元情報が生成される。 After that, in step S206, as in step S109 of the authentication method S100 executed by the authentication device 1 of the first embodiment, the three-dimensional information generation unit 4 generates the three-dimensional information of the face of the authentication target person 100 based on the distances a to the plurality of feature points of the face of the authentication target person 100.
 工程S207において、第1実施形態の認証装置1によって実行される認証方法S100の工程S110と同様に、認証部6によって、3次元情報生成部4が生成した認証対象者100の顔の3次元情報と、認証情報記憶部5に事前に登録されている認証情報に含まれる顔の3次元情報とを比較することにより、認証対象者100の3次元顔認証が実行される。その後、認証部6による認証対象者100の3次元顔認証の結果が、制御部2に送信される。制御部2は、受信した認証結果を、通信部9を介して、任意の外部デバイスに送信し、認証方法S200が終了する。 In step S207, as in step S110 of the authentication method S100 executed by the authentication device 1 of the first embodiment, the authentication unit 6 executes the three-dimensional face authentication of the authentication target person 100 by comparing the three-dimensional information of the face of the authentication target person 100 generated by the three-dimensional information generation unit 4 with the three-dimensional information of the face included in the authentication information registered in advance in the authentication information storage unit 5. After that, the result of the three-dimensional face authentication of the authentication target person 100 by the authentication unit 6 is transmitted to the control unit 2. The control unit 2 transmits the received authentication result to an arbitrary external device via the communication unit 9, and the authentication method S200 ends.
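The patent does not describe how the generated three-dimensional face information is compared with the enrolled information, so the following is a purely hypothetical comparison scheme: a mean landmark-to-landmark residual after removing the translation between the two point sets, accepted against an assumed threshold.

```python
import numpy as np

def face_match_score(enrolled, probe):
    """Mean 3-D landmark residual (meters) after removing the centroid offset.
    Both arrays are (N, 3) with landmarks in the same order."""
    e = enrolled - enrolled.mean(axis=0)
    p = probe - probe.mean(axis=0)
    return float(np.linalg.norm(e - p, axis=1).mean())

def authenticate(enrolled, probe, threshold=0.005):
    """Accept when the residual is below the (assumed) 5 mm threshold."""
    return face_match_score(enrolled, probe) < threshold

enrolled = np.array([[0.00, 0.00, 0.00],   # three toy landmarks, in meters
                     [0.10, 0.00, 0.00],
                     [0.00, 0.10, 0.02]])
same_face = enrolled + 0.5    # same shape at a different position -> accepted
other_face = enrolled * 2.0   # different shape -> rejected
```

A production matcher would also have to handle head rotation (for example, with a rigid alignment step before scoring) and unordered landmark sets; this sketch assumes the feature points arrive pre-aligned and in matching order.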
 <第3実施形態>
 次に、図5を参照して、本発明の第3実施形態に係る認証装置1について詳述する。図5は、本発明の第3実施形態に係る認証装置の第1の撮像系によって取得される第1の顔画像または第2の撮像系によって取得される第2の顔画像を説明するための概略図である。
<Third Embodiment>
Next, the authentication device 1 according to the third embodiment of the present invention will be described in detail with reference to FIG. FIG. 5 is a view for explaining a first face image acquired by the first imaging system or a second face image acquired by the second imaging system of the authentication device according to the third embodiment of the present invention. It is a schematic diagram.
 以下、第3実施形態の認証装置1について、第1実施形態および第2実施形態の認証装置1との相違点を中心に説明し、同様の事項については、その説明を省略する。本実施形態の認証装置1は、光源LSの構成が変更されている点および生体情報120が認証対象者100の顔のメラニン色素である点を除き、第1実施形態および第2実施形態の認証装置1と同様である。 Hereinafter, the authentication device 1 of the third embodiment will be described focusing on the differences from the authentication devices 1 of the first and second embodiments, and the description of the same items will be omitted. The authentication device 1 of the present embodiment is the same as the authentication devices 1 of the first and second embodiments except that the configuration of the light source LS is changed and that the biometric information 120 is the melanin pigment of the face of the authentication target person 100.
 本実施形態の認証装置1では、光源LSは、認証対象者100の顔に対して紫外線帯域の光L(例えば、波長350~400nmの光L)を照射するよう構成されている。人の顔等に存在するメラニン色素は、短波長帯の光の吸収率が高いため、光源LSからの紫外線帯域の光が照射された状態の認証対象者100の顔を、第1の撮像系IS1および第2の撮像系IS2を用いて撮像することにより、得られる第1の顔画像および第2の顔画像において、認証対象者100の顔のメラニン色素が黒く写る。本実施形態の認証装置1は、このような現象を利用することにより、通常の撮影によって観測可能な認証対象者100の顔の生体情報110(例えば、眼、鼻、口、耳等)に加え、認証対象者100の顔に特定波長帯域の光Lを照射することにより観測可能となる認証対象者100の顔のメラニン色素(生体情報120)を観測することができる。 In the authentication device 1 of the present embodiment, the light source LS is configured to irradiate the face of the authentication target person 100 with light L in the ultraviolet band (for example, light L having a wavelength of 350 to 400 nm). Since the melanin pigment present in a human face or the like has a high absorptance for light in the short-wavelength band, when the face of the authentication target person 100 irradiated with the ultraviolet-band light from the light source LS is imaged with the first imaging system IS1 and the second imaging system IS2, the melanin pigment of the face of the authentication target person 100 appears black in the obtained first face image and second face image. By utilizing this phenomenon, the authentication device 1 of the present embodiment can observe, in addition to the biometric information 110 of the face of the authentication target person 100 that is observable in normal imaging (for example, eyes, nose, mouth, ears, and the like), the melanin pigment of the face of the authentication target person 100 (biometric information 120) that becomes observable by irradiating the face of the authentication target person 100 with the light L in the specific wavelength band.
 本実施形態の認証装置1において、生体情報120として利用される認証対象者100の顔のメラニン色素は、認証対象者100の加齢や体調に応じて変化し得る。そのため、認証対象者100の顔のメラニン色素自体を認証対象者100の顔認証のための要素として直接利用してしまうと、認証対象者100の加齢や体調によって認証対象者の顔認証の精度が低下してしまう。このような理由により、認証対象者100の一般的な顔認証には、認証対象者100の加齢や体調に応じて変化し得る顔のメラニン色素が用いられることはない。 In the authentication device 1 of the present embodiment, the melanin pigment of the face of the authentication target person 100 used as the biometric information 120 may change according to the aging or physical condition of the authentication target person 100. Therefore, if the melanin pigment of the face of the authentication target person 100 itself were directly used as an element for the face authentication of the authentication target person 100, the accuracy of the face authentication would decrease with the aging or physical condition of the authentication target person 100. For this reason, general face authentication of the authentication target person 100 does not use the melanin pigment of the face, which may change according to the aging or physical condition of the authentication target person 100.
 しかしながら、本実施形態の認証装置1では、認証対象者100の顔のメラニン色素は、認証対象者100の顔の3次元情報を生成するための特徴点として利用されるにすぎず、認証対象者100の顔のメラニン色素自体が、認証対象者100の認証に用いられるわけではない。認証対象者100の加齢や体調によって認証対象者100の顔のメラニン色素の位置や形状が変化したとしても、認証対象者100の顔の3次元形状は変化しない。本実施形態の認証装置1は、認証対象者100の一般的な顔認証では用いられない認証対象者100の顔のメラニン色素を、認証対象者100の顔の3次元情報を生成するための特徴点として利用する。そのため、本実施形態の認証装置1では、認証対象者100の顔の3次元情報を取得するために利用可能な特徴点の数が増加し、より正確に認証対象者100の顔の3次元情報を取得することができる。この結果、認証対象者100の3次元顔認証の精度を向上させることができる。 However, in the authentication device 1 of the present embodiment, the melanin pigment of the face of the authentication target person 100 is merely used as feature points for generating the three-dimensional information of the face of the authentication target person 100; the melanin pigment itself is not used for the authentication of the authentication target person 100. Even if the position or shape of the melanin pigment of the face of the authentication target person 100 changes due to the aging or physical condition of the authentication target person 100, the three-dimensional shape of the face of the authentication target person 100 does not change. The authentication device 1 of the present embodiment uses the melanin pigment of the face of the authentication target person 100, which is not used in general face authentication, as feature points for generating the three-dimensional information of the face of the authentication target person 100. Therefore, in the authentication device 1 of the present embodiment, the number of feature points available for acquiring the three-dimensional information of the face of the authentication target person 100 increases, and the three-dimensional information of the face of the authentication target person 100 can be acquired more accurately. As a result, the accuracy of the three-dimensional face authentication of the authentication target person 100 can be improved.
 また、上述の説明では、光源LSは、認証対象者100の顔に対して紫外線帯域の光L(例えば、波長350~400nmの光L)を照射するよう構成されているが、本発明はこれに限られない。光源LSは、認証対象者100の顔のメラニン色素を観察可能とする任意の波長帯域の光を認証対象者100の顔に対して照射するよう構成されていてもよい。 Further, in the above description, the light source LS is configured to irradiate the face of the authentication target person 100 with light L in the ultraviolet band (for example, light L having a wavelength of 350 to 400 nm), but the present invention is not limited to this. The light source LS may be configured to irradiate the face of the authentication target person 100 with light in an arbitrary wavelength band that allows the melanin pigment of the face of the authentication target person 100 to be observed.
 また、本実施形態では、認証対象者100の顔に特定波長帯域の光Lを照射することにより観測可能となる認証対象者100の顔の生体情報120として、認証対象者100の加齢や体調に応じて変化し得る認証対象者100の顔のメラニン色素を挙げたが、本発明はこれに限られない。本実施形態の認証装置1は、顔のメラニン色素のように、認証対象者100の顔に特定波長帯域の光Lを照射することにより観測可能となり、かつ、認証対象者100の加齢や体調に応じて変化し得る、認証対象者100の一般的な顔認証では用いられない幅広い種類の生体情報を、認証対象者100の顔の3次元情報を生成するための特徴点として利用することができる。 Further, in the present embodiment, the melanin pigment of the face of the authentication target person 100, which may change according to the aging or physical condition of the authentication target person 100, has been cited as the biometric information 120 of the face that becomes observable by irradiating the face of the authentication target person 100 with the light L in the specific wavelength band, but the present invention is not limited to this. Like the melanin pigment of the face, a wide variety of biometric information that becomes observable by irradiating the face of the authentication target person 100 with the light L in the specific wavelength band, that may change according to the aging or physical condition of the authentication target person 100, and that is not used in general face authentication of the authentication target person 100 can be used by the authentication device 1 of the present embodiment as feature points for generating the three-dimensional information of the face of the authentication target person 100.
 このような構成によっても、前述した第1実施形態および第2実施形態の認証装置1と同様の効果および作用を提供することができる。 Even with such a configuration, it is possible to provide the same effects and actions as those of the authentication device 1 according to the above-described first and second embodiments.
 以上、本発明の認証装置を図示の実施形態に基づいて説明したが、本発明はこれに限定されるものではない。本発明の各構成は、同様の機能を発揮し得る任意のものと置換することができ、あるいは、本発明の各構成に任意の構成のものを付加することができる。 The authentication device of the present invention has been described above based on the illustrated embodiment, but the present invention is not limited to this. Each configuration of the present invention can be replaced with any configuration capable of exhibiting the same function, or any configuration can be added to each configuration of the present invention.
 本発明の属する分野および技術における当業者であれば、本発明の原理、考え方、および範囲から有意に逸脱することなく、記述された本発明の認証装置の構成の変更を実行可能であろうし、変更された構成を有する認証装置もまた、本発明の範囲内である。例えば、第1実施形態から第3実施形態の認証装置を任意に組み合わせた態様も、本発明の範囲内である。 Those skilled in the art and technology to which the present invention pertains can modify the configuration of the described authentication device of the present invention without significantly departing from the principle, concept, and scope of the present invention, and authentication devices having such modified configurations are also within the scope of the present invention. For example, an aspect in which the authentication devices of the first to third embodiments are arbitrarily combined is also within the scope of the present invention.
 また、図1に示された認証装置のコンポーネントの数や種類は、説明のための例示にすぎず、本発明は必ずしもこれに限られない。本発明の原理および意図から逸脱しない範囲において、任意のコンポーネントが追加若しくは組み合わされ、または任意のコンポーネントが削除された態様も、本発明の範囲内である。また、認証装置の各コンポーネントは、ハードウェア的に実現されていてもよいし、ソフトウェア的に実現されていてもよいし、これらの組み合わせによって実現されていてもよい。 Also, the number and types of components of the authentication device shown in FIG. 1 are merely examples for explanation, and the present invention is not necessarily limited to this. Aspects in which arbitrary components are added or combined or arbitrary components are deleted without departing from the principle and intent of the present invention are also within the scope of the present invention. Further, each component of the authentication device may be realized by hardware, software, or a combination thereof.
 例えば、各実施形態において、認証装置は、第1の撮像系と、第2の撮像系と、を備えているものとして詳述されたが、本発明はこれに限られない。認証装置が第1の撮像系および第2の撮像系に加え、任意の数の追加的な撮像系を有しているような態様も、本発明の範囲内である。 For example, in each of the embodiments, the authentication device has been described in detail as including the first image pickup system and the second image pickup system, but the present invention is not limited to this. It is also within the scope of the present invention that the authentication device has any number of additional image pickup systems in addition to the first image pickup system and the second image pickup system.
 また、図3に示された認証方法S100および図4に示された認証方法S200の工程の数や種類は、説明のための例示にすぎず、本発明は必ずしもこれに限られない。本発明の原理および意図から逸脱しない範囲において、任意の工程が、任意の目的で追加若しくは組み合わされ、または、任意の工程が削除される態様も、本発明の範囲内である。 The number and types of steps of the authentication method S100 shown in FIG. 3 and the authentication method S200 shown in FIG. 4 are merely examples for explanation, and the present invention is not necessarily limited to this. An embodiment in which any step is added or combined for any purpose or any step is deleted without departing from the principle and intent of the present invention is also within the scope of the present invention.
 また、各実施形態を参照して詳述された本発明の認証装置は、認証対象者の3次元顔認証を実行する必要がある任意のシステムにおいて利用することが可能である。例えば、認証対象者の顔を撮影することにより認証対象者の3次元顔認証を行い、家の玄関のロック、車のロック、コンピューターのロック等を解除するためのシステムにおいて、本発明の認証装置を用いることができる。 Further, the authentication device of the present invention described in detail with reference to the embodiments can be used in any system that needs to execute three-dimensional face authentication of a person to be authenticated. For example, the authentication device of the present invention can be used in a system that performs three-dimensional face authentication of the person to be authenticated by imaging the person's face and then unlocks the front door of a house, a car, a computer, or the like.
 本発明の認証装置によれば、認証対象者の顔に特定波長帯域の光を照射することにより観測可能となる認証対象者の顔の生体情報を、認証対象者の顔の3次元情報を生成するための特徴点として利用することができる。そのため、認証対象者の顔の3次元情報を生成するために利用可能な特徴点の数が増加し、より正確に認証対象者の顔の3次元情報を生成することができるので、認証対象者の3次元顔認証の精度を向上させることができる。また、本発明の認証装置では、一定パターンの光を認証対象者に照射するプロジェクターを用いなくとも、認証対象者の顔の頬や額のような凹凸が少なく、利用可能な特徴点が少ない箇所までの距離を算出することができる。そのため、認証装置のシステム構成をシンプルにすることができる。これにより、一定パターンの光を認証対象者の顔に照射するプロジェクターを用いた場合と比較して、認証装置の小型化、低消費電力化、および低コスト化を実現することができる。したがって、本発明は、産業上の利用可能性を有する。 According to the authentication device of the present invention, the biometric information of the face of the person to be authenticated that becomes observable by irradiating the face with light in a specific wavelength band can be used as feature points for generating the three-dimensional information of the face. Therefore, the number of feature points available for generating the three-dimensional information of the face increases, the three-dimensional information can be generated more accurately, and the accuracy of the three-dimensional face authentication can thus be improved. Further, the authentication device of the present invention can calculate the distance to portions of the face with little relief and few usable feature points, such as the cheeks and forehead, without using a projector that irradiates the person to be authenticated with light of a fixed pattern. Therefore, the system configuration of the authentication device can be simplified, and, compared with the case of using such a projector, the authentication device can be made smaller, lower in power consumption, and lower in cost. Therefore, the present invention has industrial applicability.

Claims (7)

  1.  認証対象者の顔に対して特定波長帯域の光を照射するための光源と、
     前記特定波長帯域の光が照射された前記認証対象者の前記顔を撮像し、前記認証対象者の第1の顔画像を取得するための第1の撮像系と、
     前記特定波長帯域の光が照射された前記認証対象者の前記顔を撮像し、前記認証対象者の第2の顔画像を取得するための第2の撮像系と、
     前記第1の顔画像および前記第2の顔画像のそれぞれにおける前記認証対象者の前記顔の複数の特徴点を抽出するための特徴点抽出部と、
     前記特徴点抽出部が抽出した前記第1の顔画像および前記第2の顔画像のそれぞれにおける前記認証対象者の前記顔の前記複数の特徴点に基づいて、前記認証対象者の前記顔の3次元情報を生成するための3次元情報生成部と、
     前記3次元情報生成部が生成した前記認証対象者の前記顔の前記3次元情報を用いて、前記認証対象者の3次元顔認証を実行可能に構成された認証部と、を備え、
     前記特徴点抽出部によって抽出される前記第1の顔画像および前記第2の顔画像のそれぞれにおける前記認証対象者の前記顔の前記複数の特徴点は、前記認証対象者の前記顔に前記特定波長帯域の前記光を照射することにより観測可能となる前記認証対象者の前記顔の生体情報を含むことを特徴とする認証装置。
    A light source for irradiating the face of the authentication target with light in a specific wavelength band,
    A first imaging system for capturing an image of the face of the authentication target person irradiated with light of the specific wavelength band, and acquiring a first face image of the authentication target person;
    A second imaging system for capturing an image of the face of the authentication target person irradiated with light in the specific wavelength band and acquiring a second face image of the authentication target person;
    A feature point extraction unit for extracting a plurality of feature points of the face of the authentication target person in each of the first face image and the second face image;
    a three-dimensional information generation unit for generating three-dimensional information of the face of the authentication target person based on the plurality of feature points of the face of the authentication target person in each of the first face image and the second face image extracted by the feature point extraction unit; and
    An authentication unit configured to be able to perform three-dimensional face authentication of the authentication target person using the three-dimensional information of the face of the authentication target person generated by the three-dimensional information generation unit,
    wherein the plurality of feature points of the face of the authentication target person in each of the first face image and the second face image extracted by the feature point extraction unit include biometric information of the face of the authentication target person that becomes observable by irradiating the face of the authentication target person with the light in the specific wavelength band.
  2.  前記第1の撮像系および前記第2の撮像系のそれぞれは、前記光源から照射される前記光の前記特定波長帯域に対応する波長帯域以外の光を実質的に遮断するバンドパスフィルターを有している請求項1に記載の認証装置。 Each of the first imaging system and the second imaging system has a bandpass filter that substantially blocks light other than a wavelength band corresponding to the specific wavelength band of the light emitted from the light source. The authentication device according to claim 1.
  3.  前記認証対象者の前記顔に前記特定波長帯域の前記光を照射することにより観測可能となる前記認証対象者の前記顔の前記生体情報は、前記認証対象者の前記顔の静脈である請求項1または2に記載の認証装置。 The biometric information of the face of the authentication target person that becomes observable by irradiating the face of the authentication target person with the light in the specific wavelength band is a vein of the face of the authentication target person. The authentication device according to claim 1 or 2.
  4.  前記認証対象者の前記顔に前記特定波長帯域の前記光を照射することにより観測可能となる前記認証対象者の前記顔の前記生体情報は、前記認証対象者の前記顔のメラニン色素である請求項1または2に記載の認証装置。 The biometric information of the face of the authentication target person that becomes observable by irradiating the face of the authentication target person with the light in the specific wavelength band is a melanin pigment of the face of the authentication target person. The authentication device according to claim 1 or 2.
  5.  前記第1の撮像系は、前記認証対象者の前記顔の第1の光学像を形成するための第1の光学系と、前記第1の光学系によって形成された前記第1の光学像を撮像し、前記第1の顔画像を取得するための第1の撮像素子とを有し、
     前記第2の撮像系は、前記認証対象者の前記顔の第2の光学像を形成するための第2の光学系と、前記第2の光学系によって形成された前記第2の光学像を撮像し、前記第2の顔画像を取得するための第2の撮像素子とを有し、
     前記3次元情報生成部は、前記特徴点抽出部が抽出した前記第1の顔画像および前記第2の顔画像のそれぞれにおける前記認証対象者の前記顔の前記複数の特徴点を用いて前記第1の光学像の前記倍率と前記第2の光学像の前記倍率との像倍比を算出し、算出された前記像倍比に基づいて、前記認証対象者の前記顔の前記3次元情報を生成するよう構成されている請求項1ないし4のいずれかに記載の認証装置。
    The first imaging system includes a first optical system for forming a first optical image of the face of the authentication target person, and a first image sensor for capturing the first optical image formed by the first optical system and acquiring the first face image;
    the second imaging system includes a second optical system for forming a second optical image of the face of the authentication target person, and a second image sensor for capturing the second optical image formed by the second optical system and acquiring the second face image; and
    the three-dimensional information generation unit is configured to calculate an image magnification ratio between the magnification of the first optical image and the magnification of the second optical image using the plurality of feature points of the face of the authentication target person in each of the first face image and the second face image extracted by the feature point extraction unit, and to generate the three-dimensional information of the face of the authentication target person based on the calculated image magnification ratio. The authentication device according to any one of claims 1 to 4.
  6.  前記第1の撮像系および前記第2の撮像系は、前記第1の撮像系の前記第1の光学系の光軸と、前記第2の撮像系の前記第2の光学系の光軸とが、平行であるが、一致しないよう、構成されている請求項5に記載の認証装置。 The first imaging system and the second imaging system are configured such that the optical axis of the first optical system of the first imaging system and the optical axis of the second optical system of the second imaging system are parallel but do not coincide. The authentication device according to claim 5.
  7.  前記3次元情報生成部は、前記第1の顔画像中の前記認証対象者の前記顔の前記複数の特徴点と、対応する前記第2の顔画像中の前記認証対象者の前記顔の前記複数の特徴点との間の並進視差に基づいて、前記顔の前記3次元情報を生成するよう構成されている請求項1ないし4のいずれかに記載の認証装置。 The three-dimensional information generation unit is configured to generate the three-dimensional information of the face based on translational parallax between the plurality of feature points of the face of the authentication target person in the first face image and the corresponding plurality of feature points of the face of the authentication target person in the second face image. The authentication device according to any one of claims 1 to 4.
PCT/JP2019/046890 2019-02-01 2019-11-29 Authentication device WO2020158158A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980090988.6A CN113383367A (en) 2019-02-01 2019-11-29 Authentication device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019017469A JP2020126371A (en) 2019-02-01 2019-02-01 Authentication device
JP2019-017469 2019-02-01

Publications (1)

Publication Number Publication Date
WO2020158158A1 true WO2020158158A1 (en) 2020-08-06

Family

ID=71841275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/046890 WO2020158158A1 (en) 2019-02-01 2019-11-29 Authentication device

Country Status (3)

Country Link
JP (1) JP2020126371A (en)
CN (1) CN113383367A (en)
WO (1) WO2020158158A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03200007A (en) * 1989-12-28 1991-09-02 Nippon Telegr & Teleph Corp <Ntt> Stereoscopic measuring instrument
JPH11242745A (en) * 1998-02-25 1999-09-07 Victor Co Of Japan Ltd Method for measuring and processing facial image
JP2001141422A (en) * 1999-11-10 2001-05-25 Fuji Photo Film Co Ltd Image pickup device and image processor
JP2008123312A (en) * 2006-11-14 2008-05-29 Matsushita Electric Ind Co Ltd Vein image collation device, and personal identification device and personal identification system using the same
JP2008217358A (en) * 2007-03-02 2008-09-18 Ricoh Co Ltd Biometric authentication device, and authentication method using biometric authentication device
WO2009107470A1 (en) * 2008-02-27 2009-09-03 日本電気株式会社 Mole identifying device, and personal authentication device, method, and program
WO2018051681A1 (en) * 2016-09-13 2018-03-22 株式会社デンソー Line-of-sight measurement device

Also Published As

Publication number Publication date
JP2020126371A (en) 2020-08-20
CN113383367A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
JP7157303B2 (en) Authentication device
US9672406B2 (en) Touchless fingerprinting acquisition and processing application for mobile devices
KR101720957B1 (en) 4d photographing apparatus checking finger vein and fingerprint at the same time
US8493178B2 (en) Forged face detecting method and apparatus thereof
KR102538405B1 (en) Biometric authentication system, biometric authentication method and program
JP6769626B2 (en) Multi-faceted stereoscopic imaging device that authenticates fingerprints and finger veins at the same time
JP2007135149A (en) Mobile portable terminal
KR20170078729A (en) Systems and methods for spoof detection in iris based biometric systems
JP6443842B2 (en) Face detection device, face detection system, and face detection method
CN110678871A (en) Face authentication device and face authentication method
JP2020129175A (en) Three-dimensional information generation device, biometric authentication device, and three-dimensional image generation device
KR20140053647A (en) 3d face recognition system and method for face recognition of thterof
KR20150069799A (en) Method for certifying face and apparatus thereof
JP2009015518A (en) Eye image photographing device and authentication device
WO2009110323A1 (en) Living body judgment system, method for judging living body and program for judging living body
KR101053253B1 (en) Apparatus and method for face recognition using 3D information
KR20130133676A (en) Method and apparatus for user authentication using face recognition througth camera
KR101919138B1 (en) Method and apparatus for remote multi biometric
WO2020158158A1 (en) Authentication device
JP2007164401A (en) Solid body registration device, solid body authentication device, solid body authentication system and solid body authentication method
Zhong et al. VeinDeep: Smartphone unlock using vein patterns
JP2004126738A (en) Personal authentication device and authentication method using three-dimensional measurement
KR101792011B1 (en) Multifaceted photographing apparatus checking finger vein and fingerprint at the same time
US11544961B2 (en) Passive three-dimensional face imaging based on macro-structure and micro-structure image sizing
JP2007004534A (en) Face-discriminating method and apparatus, and face authenticating apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19912410

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19912410

Country of ref document: EP

Kind code of ref document: A1