WO2017043314A1 - Guidance acquisition device, guidance acquisition method, and program - Google Patents
Guidance acquisition device, guidance acquisition method, and program
- Publication number
- WO2017043314A1 (PCT/JP2016/074639)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- face image
- guidance
- difference
- face
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/10—Movable barriers with registering means
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/20—Individual registration on entry or exit involving the use of a pass
- G07C9/22—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
- G07C9/25—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
- G07C9/253—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition visually
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/20—Individual registration on entry or exit involving the use of a pass
- G07C9/22—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
- G07C9/25—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
- G07C9/257—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition electronically
Definitions
- The present invention relates to a guidance acquisition device, a guidance acquisition method, and a program.
- Face authentication is a method of identity verification used, for example, in entry/exit management for a predetermined area.
- In face authentication, a face image of the person to be authenticated is captured and compared with a previously captured face image to determine whether the person is who they claim to be.
- The face collation apparatus described in Patent Literature 1 includes an imaging unit that captures a face image of a subject, a light direction detection unit that detects the direction of the brightest light in front of the imaging unit, and a guidance unit that guides the subject to stand in the direction of the light based on the detected direction.
- According to Japanese Patent Laid-Open No. 2004-26883, an appropriate face image can reportedly be captured by guiding the subject to an appropriate shooting position even when the face collation apparatus is installed outdoors or in a place where external light is incident.
- An object of the present invention is to provide a guidance acquisition device, a guidance acquisition method, and a program that can solve the above-described problems.
- The present invention provides a guidance acquisition device including: a data acquisition unit that acquires face image data; an imaging unit that captures a face image; a difference detection unit that, based on at least one of the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit, detects a difference, or a candidate for a difference, between the face image indicated by the acquired face image data and the captured face image; a guidance acquisition unit that acquires guidance based on the difference or difference candidate detected by the difference detection unit; and an output control unit that controls an output unit so that the output unit outputs the guidance acquired by the guidance acquisition unit.
- The present invention also provides a guidance acquisition method performed by a guidance acquisition device including a correspondence information storage unit that prestores correspondence information indicating the correspondence between differences, between the face image indicated by face image data and a face image obtained by imaging, and guidance to the subject regarding those differences. The method includes: a data acquisition step of acquiring the face image data; an imaging step of capturing the face image; a difference detection step of detecting a difference between the face image indicated by the face image data acquired in the data acquisition step and the face image captured in the imaging step; a guidance acquisition step of acquiring the guidance associated by the correspondence information with the difference detected in the difference detection step; and an output control step of controlling an output unit so that the output unit outputs the guidance acquired in the guidance acquisition step.
- The present invention also provides a program that causes a computer, having an imaging unit and a correspondence information storage unit that stores correspondence information indicating the correspondence between differences, between the face image indicated by face image data and a face image obtained by imaging, and guidance to the subject regarding those differences, to execute: a data acquisition step of acquiring the face image data; an imaging step of capturing a face image with the imaging unit; a difference detection step of detecting a difference between the face image indicated by the face image data acquired in the data acquisition step and the face image captured in the imaging step; a guidance acquisition step of acquiring the guidance associated with the detected difference in the correspondence information; and an output control step of controlling an output unit so that the output unit outputs the acquired guidance.
- According to the present invention, it is possible to increase the likelihood that the person to be authenticated can grasp how to respond when a device that performs face authentication cannot perform face authentication appropriately.
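The claimed sequence of steps can be sketched as a minimal pipeline. This is an illustrative sketch only: the function names, the stub detectors, and the guidance strings are assumptions, not the patent's implementation.

```python
def run_guidance_pipeline(acquire_data, capture_face, detect_difference,
                          correspondence, output):
    """Minimal sketch of the claimed method: acquire the registered face
    data, capture a live face image, detect a difference between the two,
    look up the guidance associated with that difference, and hand the
    guidance to the output unit."""
    registered = acquire_data()                           # data acquisition step
    captured = capture_face()                             # imaging step
    difference = detect_difference(registered, captured)  # difference detection step
    guidance = correspondence.get(difference)             # guidance acquisition step
    output(guidance)                                      # output control step
    return guidance

# Stubbed usage with a toy difference and a one-row correspondence table:
result = run_guidance_pipeline(
    acquire_data=lambda: "passport_face",
    capture_face=lambda: "captured_face",
    detect_difference=lambda a, b: "distance (far)",
    correspondence={"distance (far)": "Please approach."},
    output=print,
)
print(result)  # Please approach.
```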
- FIG. 5 is an explanatory diagram illustrating an example of an image captured by the imaging unit when the distance between the subject and the imaging unit is long in the embodiment.
- FIG. 6 is an explanatory diagram illustrating an example of an image captured by the imaging unit when the position of the subject is shifted in the horizontal direction in the embodiment. Also included are a flowchart showing an example of the procedure of processing performed by the face authentication device of the embodiment, and a schematic block diagram showing the minimum configuration of the guidance acquisition device according to the present invention.
- FIG. 1 is a schematic block diagram showing a functional configuration of a face authentication apparatus according to an embodiment of the present invention.
- The face authentication device 100 includes a data acquisition unit 110, an imaging unit 120, an output unit 130, a storage unit 170, and a control unit 180.
- The output unit 130 includes a display unit 131 and an audio output unit 132.
- The storage unit 170 includes a correspondence information storage unit 171.
- The control unit 180 includes a difference detection unit 181, a guidance acquisition unit 182, a face authentication unit 183, and an output control unit 184.
- The face authentication device 100 is a device that performs identity verification by face authentication. Below, the case where the face authentication device 100 performs identity verification at immigration control at an airport or the like is described as an example. However, the application of the face authentication device 100 is not limited to immigration control.
- For example, the face authentication device 100 may be a device that confirms, by face authentication, whether a person is authorized to enter or leave a specific facility. In this case, the face authentication device 100 may perform face authentication by reading face image data from an identification card (for example, an ID card) of the person to be authenticated and comparing it with a captured image.
- Alternatively, the face authentication device 100 may store the face image data of authorized persons in advance.
- The face authentication device 100 corresponds to an example of a guidance acquisition device, and outputs guidance when face authentication fails.
- A face authentication error here means that a face authentication result cannot be obtained, that is, the face authentication device 100 cannot determine whether the person to be authenticated is the same person as the person indicated by the face image data.
- The guidance output by the face authentication device 100 is information indicating how the person to be authenticated should respond to a face authentication error.
- The face authentication device 100 is realized, for example, by a computer equipped with a camera or the like executing a program. Alternatively, the face authentication device 100 may be configured using dedicated hardware.
- The data acquisition unit 110 acquires face image data.
- For example, the data acquisition unit 110 includes a passport reader device and reads face image data registered in advance in an IC (Integrated Circuit) chip embedded in a passport.
- The data acquisition unit 110 also acquires, from the IC chip embedded in the passport, attribute data indicating attributes, such as nationality, gender, and age, of the person to be authenticated (the person to be photographed by the imaging unit 120; hereinafter simply referred to as the subject).
- An attribute of the person to be authenticated here is an innate property of that person (that is, a property determined at birth).
- Examples include the nationality, gender, and age of the person to be authenticated, as described above.
- Note that the data acquisition unit 110 may acquire at least one of the face image data and the attribute data from a source other than the passport.
- For example, when the face authentication device 100 is a device that confirms, by face authentication, whether a person is authorized to enter or leave a specific facility, the data acquisition unit 110 may acquire, from an identification card (for example, an ID card) of the person to be authenticated, face image data indicating the face image of that person and attribute data indicating the attributes of that person.
- Alternatively, the storage unit 170 may store the face image data and the attribute data in advance (before the face authentication device 100 performs face authentication), and the data acquisition unit 110 may read out the face image data and the attribute data of the person to be authenticated from the storage unit 170. Note that it is not essential for the data acquisition unit 110 to acquire the attribute data of the person to be authenticated.
- The data acquisition unit 110 may acquire at least the face image data.
- The imaging unit 120 is equipped with a camera and captures images.
- In particular, the imaging unit 120 repeatedly captures the face image of the person to be authenticated.
- Repeated capture here may be any form of capture that repeatedly yields a face image.
- For example, the imaging unit 120 may capture a moving image, or may capture a still image at predetermined intervals (for example, every second).
- The output unit 130 outputs information.
- In particular, the output unit 130 outputs guidance.
- The guidance output here may be the output of a signal indicating the guidance.
- Alternatively, the guidance output here may be the presentation of the guidance in a manner the person to be authenticated can grasp, such as displaying the guidance or outputting it as voice.
- The display unit 131 has a display screen such as a liquid crystal panel or an LED (Light Emitting Diode) panel and displays various images.
- In particular, the display unit 131 displays the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120 (the face image obtained by imaging by the imaging unit 120).
- The display unit 131 also displays guidance.
- The display of guidance by the display unit 131 corresponds to an example of guidance output.
- FIG. 2 is an explanatory diagram illustrating an example of a display screen image of the display unit 131 while the face authentication device 100 is executing face authentication.
- The image P11 shows an example of the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110).
- This passport face image is an image used for face authentication.
- Because the display unit 131 displays the passport face image, the subject (person to be authenticated) can match his or her facial expression and position to those at the time the passport face image was taken, which reduces the possibility of a face authentication error.
- The image P12 shows an example of an image obtained by horizontally flipping the image captured by the imaging unit 120.
- Because the display unit 131 displays a horizontally flipped version of the latest image captured by the imaging unit 120, the subject can intuitively tell whether he or she is shifted to the left or right of the position of the imaging unit 120 (the imaging range of the imaging unit 120). Note that the image P12 may be displayed as-is, without horizontally flipping the latest image captured by the imaging unit 120.
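The mirror-style preview described above amounts to reversing the horizontal axis of each captured frame. A minimal sketch, assuming frames are NumPy arrays as typically produced by camera libraries such as OpenCV:

```python
import numpy as np

def mirror_for_display(frame: np.ndarray) -> np.ndarray:
    """Horizontally flip a captured frame so the on-screen preview behaves
    like a mirror, letting the subject intuitively judge whether they are
    shifted left or right relative to the camera."""
    return frame[:, ::-1]  # reverse the column (x) axis only

# Example: a toy 1x3 single-channel "frame"
frame = np.array([[1, 2, 3]])
print(mirror_for_display(frame).tolist())  # [[3, 2, 1]]
```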
- The display unit 131 also displays the message “A face is being photographed” in the area A11. By displaying this message, the display unit 131 lets the subject know that the face authentication device 100 is executing face authentication.
- FIG. 3 is an explanatory diagram illustrating an example of a display screen image of the display unit 131 when the face authentication performed by the face authentication device 100 results in an error.
- The image P21 shows an example of the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110).
- The image P22 shows an example of the image, among those captured by the imaging unit 120, that the face authentication device 100 used for face authentication. Both images P21 and P22 are images used by the face authentication device 100 for face authentication.
- Because the display unit 131 displays the two images used for face authentication, the person to be authenticated can estimate the cause of the error by comparing them.
- The image P23 shows an example of an image obtained by horizontally flipping the latest image captured by the imaging unit 120.
- Because the display unit 131 displays the passport face image and the latest image captured by the imaging unit 120, the person to be authenticated can find the difference between the two images and act to reduce that difference, which reduces the possibility that the face authentication performed by the face authentication device 100 results in an error. Note that the image P23 may be displayed as-is, without horizontally flipping the latest captured image.
- The display unit 131 also displays the message “Please approach” in the area A21.
- This message corresponds to an example of guidance, and indicates a response method: the subject should approach the camera of the imaging unit 120.
- The display unit 131 also displays an arrow B21.
- This arrow B21 likewise corresponds to an example of guidance and, like the message in the area A21, indicates that the subject should approach the camera of the imaging unit 120.
- Because the display unit 131 presents guidance as an icon such as an arrow, subjects who use different languages can still grasp the guidance.
- For example, because the display unit 131 indicates the direction the subject should move with an arrow, subjects who use different languages can grasp the direction of movement.
- the audio output unit 132 includes a speaker and outputs sound.
- In particular, the audio output unit 132 outputs the guidance by voice.
- The method by which the face authentication device 100 outputs guidance is not limited to display or voice output.
- For example, the output unit 130 may be configured as a device separate from the face authentication device 100, and the face authentication device 100 may output (transmit) a signal indicating the guidance to the output unit 130.
- the storage unit 170 is configured using a storage device provided in the face authentication apparatus 100 and stores various types of information.
- the correspondence information storage unit 171 stores correspondence information.
- The correspondence information stored in the correspondence information storage unit 171 is information indicating the correspondence between differences, between the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110) and the face image captured by the imaging unit 120, and the guidance to the subject regarding those differences.
- FIG. 4 is an explanatory diagram illustrating an example of correspondence information stored in the correspondence information storage unit 171.
- The correspondence information shown in the figure has a tabular data structure in which each row contains a difference column and a response method column.
- The difference column stores information indicating a difference between the passport face image and the face image captured by the imaging unit 120.
- The response method column stores information indicating how the subject (person to be authenticated) should respond to the difference indicated in the difference column.
- The information stored in the response method column corresponds to an example of guidance to the subject.
- In the correspondence information, each row associates a difference, between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120, with the guidance to the subject regarding that difference.
- For example, the correspondence information in FIG. 4 shows the response method “Please make a neutral expression” for the difference “expression”.
- The correspondence information in FIG. 4 also shows the response method “Please inform the staff” for cases where it is difficult to immediately resolve the difference between the passport face image and the face image captured by the imaging unit 120. For example, if there is no beard in the passport face image but there is a beard in the captured face image, it is considered difficult to shave the beard on the spot. The correspondence information in FIG. 4 therefore shows the response method “Please inform the staff” for the difference “beard”. When the face authentication device 100 outputs (for example, displays) this response method, the person to be authenticated can recognize that he or she should contact the staff and ask how to proceed.
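The correspondence information of FIG. 4 can be modeled as a simple lookup table. This is an illustrative sketch: the keys and guidance strings are assumptions based on the examples above, not the patent's exact table.

```python
# Hypothetical encoding of the FIG. 4 correspondence information:
# each detected difference maps to guidance for the subject.
CORRESPONDENCE = {
    "expression": "Please make a neutral expression.",
    "distance (far)": "Please approach.",
    "beard": "Please inform the staff.",  # hard to resolve on the spot
}

def get_guidance(difference: str) -> str:
    """Return the guidance associated with a detected difference,
    falling back to staff assistance for unlisted differences."""
    return CORRESPONDENCE.get(difference, "Please inform the staff.")

print(get_guidance("distance (far)"))  # Please approach.
```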
- The correspondence information stored in the correspondence information storage unit 171 is not limited to information indicating the subject's response method as the guidance to the subject.
- For example, the correspondence information storage unit 171 may store correspondence information in which information indicating the difference itself, between the passport face image and the face image captured by the imaging unit 120, serves as the guidance to the subject.
- In this case, the information indicating the difference between the passport face image and the captured face image corresponds to an example of the guidance.
- For example, the display unit 131 may display the passport face image together with the guidance “The facial expression is different”. In this case, the subject can grasp and carry out the response of looking at the passport face image and making the same expression as in that image.
- The control unit 180 controls each unit of the face authentication device 100 to perform various processes.
- The control unit 180 is realized, for example, by a CPU (Central Processing Unit) provided in the face authentication device 100 reading out a program from the storage unit 170 and executing it.
- The difference detection unit 181 detects, based on at least one of the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120, a difference, or a candidate for a difference, between the face image indicated by the acquired face image data and the captured face image.
- For example, the difference detection unit 181 may detect a difference candidate based on the face image indicated by the face image data acquired by the data acquisition unit 110. For example, when the passport face image acquired by the data acquisition unit 110 shows the person wearing glasses, the difference detection unit 181 may detect whether or not glasses are worn as a difference candidate. When the difference detection unit 181 detects this candidate, the display unit 131 can, for example, display the passport face image with the glasses highlighted, so that the subject notices that he or she should wear glasses. When a subject who had not been wearing glasses puts them on, the possibility that the face authentication performed by the face authentication device 100 results in an error is reduced.
- In this case, the difference detection unit 181 can detect a difference candidate even while the imaging unit 120 is not capturing the subject's face. The display unit 131 can therefore display the suggestion to wear glasses at an early stage, for example before the subject stands within the imaging range of the imaging unit 120.
- Alternatively, the difference detection unit 181 may detect, based on the face image captured by the imaging unit 120, a difference, or a difference candidate, between the face image indicated by the face image data acquired by the data acquisition unit 110 and the captured face image. For example, when the face image captured by the imaging unit 120 shows the person wearing a mask, the difference detection unit 181 may detect whether or not a mask is worn as a difference candidate. When the difference detection unit 181 detects this candidate, the display unit 131 can, for example, display the message “Please remove the mask”, so that the subject notices that he or she should remove the mask. When the subject removes the mask, the possibility that the face authentication performed by the face authentication device 100 results in an error is reduced.
- In this case, the difference detection unit 181 can detect a difference candidate even before the data acquisition unit 110 has acquired the face image data. The display unit 131 can therefore display the suggestion to remove the mask at an early stage, for example before the subject places the passport on the passport reader device. Note that the difference detection unit 181 may detect difference candidates other than the presence or absence of a mask, for example that the face in the image captured by the imaging unit 120 is turned sideways.
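As described above, a difference candidate can be flagged from whichever image is available. A minimal sketch, assuming simple attribute sets such as {"glasses"} or {"mask"} produced by some upstream detector (the detector itself, and these attribute names, are placeholders):

```python
def detect_difference_candidates(passport_attrs=None, captured_attrs=None):
    """Flag difference *candidates* from whichever image is available.
    passport_attrs / captured_attrs are sets of detected attributes;
    None means that image has not been acquired or captured yet."""
    candidates = []
    # From the registered image only: glasses in the passport photo
    # suggest the subject should also wear glasses when photographed.
    if passport_attrs is not None and "glasses" in passport_attrs:
        candidates.append("glasses")
    # From the captured image only: a mask prevents comparison regardless
    # of what the registered image shows.
    if captured_attrs is not None and "mask" in captured_attrs:
        candidates.append("mask")
    return candidates

print(detect_difference_candidates(captured_attrs={"mask"}))  # ['mask']
```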
- The difference detection unit 181 may also detect a difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120. In particular, when the face authentication performed by the face authentication unit 183 results in an error, the difference detection unit 181 detects, among the differences between the two face images, a difference that is a cause of the face authentication error. Differences detected by the difference detection unit 181 are described with reference to FIGS. 5 and 6.
- FIG. 5 is an explanatory diagram illustrating an example of an image captured by the imaging unit 120 when the distance between the subject and the imaging unit 120 is long.
- The figure shows an example of a display screen image of the display unit 131 (an image the display unit 131 displays on its display screen).
- The image P31 shows an example of the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110).
- The image P32 shows an example of an image obtained by horizontally flipping the image captured by the imaging unit 120.
- The imaging unit 120 captures an image of the subject including the subject's face (image P33).
- The difference detection unit 181 detects the positions of the eyes in the image captured by the imaging unit 120 and extracts the face portion from that image. The difference detection unit 181 then determines whether the interval between the left eye and the right eye (the length of the arrow B31) is smaller than an interval threshold, and whether the size of the face portion is smaller than a face portion threshold.
- The interval threshold and the face portion threshold are both predetermined thresholds (for example, constants) set as determination thresholds for deciding whether the distance between the subject and the imaging unit 120 is long.
- The size of the face portion may be the area of the face portion image, its vertical length, its horizontal width, or a combination of these.
- the difference that the distance between the subject and the imaging unit 120 is long corresponds to the difference “distance (far)” in the example of FIG. 4.
- the method by which the difference detection unit 181 detects the difference that the distance between the subject and the imaging unit 120 is long is not limited to the above method.
- for example, the difference detection unit 181 may detect this difference based on only one of the two measures: the interval between the left eye and the right eye, or the size of the face portion.
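- as a rough illustration, the threshold comparison described above could be sketched as follows. The threshold values, the landmark coordinates, and the function name are hypothetical placeholders, not values taken from the patent; the patent only requires that the eye interval and the face-portion size each be compared against a predetermined determination threshold.

```python
# Hypothetical determination thresholds (the patent calls these predetermined
# constants; these particular values are illustrative only).
EYE_INTERVAL_THRESHOLD = 60    # pixels: interval threshold
FACE_AREA_THRESHOLD = 20000    # pixels^2: face portion threshold

def subject_too_far(left_eye_x, right_eye_x, face_width, face_height):
    """Return True when both measures indicate a long subject-camera distance."""
    eye_interval = abs(right_eye_x - left_eye_x)  # length of arrow B31
    face_area = face_width * face_height          # one possible "size of the face portion"
    return eye_interval < EYE_INTERVAL_THRESHOLD and face_area < FACE_AREA_THRESHOLD

print(subject_too_far(100, 140, 120, 150))  # interval 40 and area 18000 are both small -> True
```

As noted above, an implementation could equally use only one of the two measures, or use the height or width of the face portion instead of its area.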
- FIG. 6 is an explanatory diagram illustrating an example of an image captured by the imaging unit 120 when the position of the subject is shifted in the horizontal direction.
- the figure shows an example of a display screen image of the display unit 131.
- the image P41 shows an example of a passport face image (a face image acquired by the data acquisition unit 110).
- the image P42 shows an example of an image obtained by horizontally inverting the image captured by the imaging unit 120.
- the imaging unit 120 captures an image of the subject including the face image (image P43) of the subject.
- the x axis is a horizontal coordinate axis in the image P42.
- the y axis is a vertical coordinate axis in the image P42.
- the difference detection unit 181 detects the positions of the eyes in the image captured by the imaging unit 120, and extracts a face portion from that image. Then, the difference detection unit 181 determines whether the x coordinate of the left eye is larger than the left eye coordinate right shift threshold, whether the x coordinate of the right eye is larger than the right eye coordinate right shift threshold, and whether the x coordinate of the face portion (for example, the average of the left end x coordinate and the right end x coordinate of the region extracted as the face portion) is larger than the face portion coordinate right shift threshold.
- the left eye coordinate right shift threshold, the right eye coordinate right shift threshold, and the face portion coordinate right shift threshold are all predetermined thresholds (for example, constants) set as determination thresholds for determining whether the position of the subject is shifted to the right with respect to the photographing unit 120 as viewed from the subject.
- when these determinations are all affirmative, the difference detection unit 181 determines (detects) that the position of the subject is shifted to the right with respect to the photographing unit 120 as viewed from the subject.
- similarly, when the x coordinate of the left eye, the x coordinate of the right eye, and the x coordinate of the face portion are all smaller than the corresponding left shift thresholds, the difference detection unit 181 determines (detects) that the position of the subject is shifted to the left with respect to the photographing unit 120 as viewed from the subject.
- the left eye coordinate left shift threshold, the right eye coordinate left shift threshold, and the face portion coordinate left shift threshold are all predetermined thresholds (for example, constants) set as determination thresholds for determining whether the position of the subject is shifted to the left with respect to the photographing unit 120 as viewed from the subject.
- when it is detected that the position of the subject is shifted to the right, the output unit 130 outputs (for example, displays) the coping method “Please move to the left”.
- the difference that the position of the subject is shifted to the left with respect to the photographing unit 120 as viewed from the subject corresponds to the deviation “left” of the difference “positional deviation (left/right)” in the example of FIG. 4.
- in that case, the output unit 130 outputs (for example, displays) the coping method “Please move to the right”.
- the method for the difference detection unit 181 to detect the difference that the position of the subject is displaced in the horizontal direction is not limited to the above method.
- the difference detection unit 181 may, instead of or in addition to one or more of the left eye x coordinate, the right eye x coordinate, and the x coordinate of the face portion, detect the difference that the position of the subject is shifted laterally (shifted to the left or shifted to the right) based on one or more of the x coordinate of the left ear, the x coordinate of the right ear, the x coordinate of the nose, and the x coordinate of the mouth.
- the difference detection unit 181 may use, as the x coordinate of the face portion, the x coordinate of the center of the portion, the x coordinate of the left end of the portion, or the x coordinate of the right end of the portion.
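- the lateral-shift determination described above could be sketched as follows. For brevity this sketch uses one shared right shift threshold and one shared left shift threshold for all landmarks, whereas the patent allows a separate threshold per landmark; the threshold values and the function name are hypothetical.

```python
# Hypothetical shared determination thresholds (illustrative values only).
RIGHT_SHIFT_THRESHOLD = 400  # x coordinates above this suggest a rightward shift
LEFT_SHIFT_THRESHOLD = 240   # x coordinates below this suggest a leftward shift

def lateral_shift(left_eye_x, right_eye_x, face_left_x, face_right_x):
    """Classify the subject's lateral position as seen from the subject."""
    face_x = (face_left_x + face_right_x) / 2  # average of face-portion edges
    xs = (left_eye_x, right_eye_x, face_x)
    if all(x > RIGHT_SHIFT_THRESHOLD for x in xs):
        return "right"     # coping method: "Please move to the left"
    if all(x < LEFT_SHIFT_THRESHOLD for x in xs):
        return "left"      # coping method: "Please move to the right"
    return "centered"

print(lateral_shift(420, 480, 380, 540))  # all coordinates large -> "right"
```

As noted above, the ear, nose, or mouth x coordinates could be used in place of (or in addition to) the eye and face-portion coordinates.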
- the guidance acquisition unit 182 acquires guidance based on the difference or the difference candidate detected by the difference detection unit 181. For example, when the difference detection unit 181 detects that the passport face image is a face image wearing glasses as described above, the guidance acquisition unit 182 may acquire information indicating a process of displaying the passport face image with the glasses emphasized. In order for the guidance acquisition unit 182 to acquire this information, the guidance acquisition unit 182 or the storage unit 170 stores in advance information that associates the difference candidate that the passport face image is a face image wearing glasses (a difference candidate concerning the presence or absence of glasses) with the process of displaying the passport face image with the glasses emphasized. It should be noted that displaying the passport face image with the glasses highlighted corresponds to an example of guidance that suggests that the subject wear glasses.
- when the difference detection unit 181 detects that the face image photographed by the photographing unit 120 is a face image wearing a mask as described above, the guidance acquisition unit 182 may acquire a message such as “Please remove the mask”.
- in this case, the difference detection unit 181 or the storage unit 170 stores in advance information that associates the difference candidate that the face image captured by the imaging unit 120 is a face image wearing a mask (a difference candidate concerning the presence or absence of a mask) with the message “Please remove the mask”.
- the message “Please remove the mask” corresponds to an example of guidance that suggests that the subject remove the mask.
- similarly, the guidance acquisition unit 182 may acquire a message such as “Please turn to the front”.
- the guidance acquisition unit 182 may acquire information indicating processing such as indicating the camera position to the subject.
- the guidance acquisition unit 182 acquires the guidance associated by the correspondence information with the difference detected by the difference detection unit 181.
- the correspondence information storage unit 171 stores correspondence information in which a difference between a passport image and an image captured by the photographing unit 120 is associated with a message as a guidance.
- the guidance acquisition unit 182 acquires guidance by reading the message serving as guidance from the correspondence information.
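- the correspondence-information lookup described above can be sketched as a simple mapping from detected differences to guidance messages. The difference keys, the messages other than those quoted in this description, and the function name are illustrative assumptions.

```python
# Sketch of the correspondence information held by the correspondence
# relationship information storage unit 171: each detected difference is
# associated with a guidance message (entries are illustrative).
CORRESPONDENCE_INFO = {
    "mask (worn)": "Please remove the mask.",
    "orientation (not frontal)": "Please turn to the front.",
    "distance (far)": "Please move closer to the camera.",
    "positional deviation (left)": "Please move to the right.",
    "positional deviation (right)": "Please move to the left.",
}

def acquire_guidance(difference):
    """Read the guidance message associated with a detected difference."""
    return CORRESPONDENCE_INFO.get(difference)  # None when no entry exists

print(acquire_guidance("mask (worn)"))  # -> Please remove the mask.
```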
- the guidance acquisition unit 182 may also acquire guidance by methods other than reading it from the correspondence information.
- the guidance acquisition unit 182 may read the message from the correspondence information and acquire the guidance by translating the message into the language used by the person to be authenticated.
- for example, the data acquisition unit 110 may read attribute data indicating the nationality of the person to be authenticated from the passport, and the guidance acquisition unit 182 may acquire language guidance according to the nationality by translating the message into a language corresponding to the read nationality.
- the guidance acquisition unit 182 may acquire guidance according to the attribute indicated by the attribute data acquired by the data acquisition unit 110.
- alternatively, the correspondence information storage unit 171 may store correspondence information containing messages serving as guidance in each of a plurality of languages, and the guidance acquisition unit 182 may read from the correspondence information a message in a language according to the nationality read by the data acquisition unit 110.
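- the per-language variant described above could be sketched as follows. The nationality codes, the language mapping, and the fallback to English are assumptions for illustration; the patent only states that a message in a language according to the nationality is read from the correspondence information.

```python
# Sketch of correspondence information holding one message per language
# (entries are illustrative).
MESSAGES = {
    "mask (worn)": {
        "en": "Please remove the mask.",
        "ja": "マスクを外してください。",
    },
}
# Hypothetical mapping from passport nationality codes to languages.
NATIONALITY_TO_LANGUAGE = {"JPN": "ja", "USA": "en", "GBR": "en"}

def guidance_for(difference, nationality):
    """Select the guidance message in a language according to nationality."""
    language = NATIONALITY_TO_LANGUAGE.get(nationality, "en")  # assumed default
    return MESSAGES[difference][language]

print(guidance_for("mask (worn)", "JPN"))
```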
- the face authentication unit 183 performs face authentication using the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120. With the face authentication, the face authentication unit 183 confirms the identity of the person to be authenticated. That is, the face authentication unit 183 determines whether the person of the face image indicated by the face image data acquired by the data acquisition unit 110 and the person indicated by the face image captured by the image capturing unit 120 are the same person. Note that the face authentication unit 183 may be configured as a part of the face authentication device 100, or may be configured as a device different from the face authentication device 100.
- the output control unit 184 controls the output unit 130 so that the output unit 130 outputs the guidance acquired by the guidance acquisition unit 182.
- the output control unit 184 may control the output unit 130 so that the output unit 130 outputs a glasses wearing instruction that prompts the subject to wear glasses.
- when the subject wears glasses in accordance with the glasses wearing instruction, the possibility that the face authentication performed by the face authentication unit 183 results in an error can be reduced.
- the output control unit 184 causes the display unit 131 to display the face image indicated by the face image data acquired by the data acquisition unit 110. Further, the output control unit 184 may control the output unit 130 so that the output unit 130 outputs a condition matching instruction that prompts the subject to match the conditions under which the photographing unit 120 photographs the face image with the conditions indicated by the face image displayed on the display unit 131. For example, in the case of the glasses wearing instruction, the output control unit 184 may cause the display unit 131 to display the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110). When the display unit 131 displays the passport face image, the subject can check the content of the condition matching instruction with reference to the face image.
- the output control unit 184 may control the output unit 130 so that the output unit 130 outputs the difference information indicating the difference detected by the difference detection unit 181.
- for example, the output control unit 184 may control the display unit 131 so that the display unit 131 displays the difference information “The facial expression is different”.
- the person to be photographed can grasp the difference between the face image of the passport and the face image photographed by the photographing unit 120.
- by grasping the difference, the subject is more likely to grasp a method for dealing with the difference (for example, a method for reducing the difference).
- the output control unit 184 may cause the display unit 131 to display difference information that indicates a differing portion between the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110) and the face image captured by the photographing unit 120, shown in either or both of those face images.
- for example, when the difference concerns the presence or absence of glasses, the display unit 131 may, under the control of the output control unit 184, enclose the glasses in the passport face image with a broken line, or enclose the glasses in the face image captured by the imaging unit 120 with a broken line.
- thereby, the subject can more reliably grasp the difference between the passport face image and the face image captured by the photographing unit 120, which increases the possibility of grasping a method for dealing with the difference (for example, a method for reducing the difference).
- the output control unit 184 may also cause the display unit 131 to display an image including the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110), the latest face image captured by the imaging unit 120, and the display of the guidance acquired by the guidance acquisition unit 182.
- the image P21 in FIG. 3 corresponds to an example of a passport face image.
- the image P23 in FIG. 3 corresponds to an example of the latest face image photographed by the photographing unit 120.
- the display of the message in the area A21 in FIG. 3 corresponds to an example of the display of the guidance acquired by the guidance acquisition unit 182.
- the output control unit 184 may cause the display unit 131 to display an image including the face image that was captured by the imaging unit 120 and for which the face authentication by the face authentication unit 183 resulted in an error, the latest face image captured by the imaging unit 120, and the display of the guidance acquired by the guidance acquisition unit 182.
- the image P22 in FIG. 3 corresponds to an example of a face image that was photographed by the photographing unit 120 and for which the face authentication by the face authentication unit 183 resulted in an error.
- the image P23 in FIG. 3 corresponds to an example of the latest face image photographed by the photographing unit 120.
- the display of the message in the area A21 in FIG. 3 corresponds to an example of the display of the guidance acquired by the guidance acquisition unit 182.
- thereby, the subject can check whether the cause of the error in the face image that was photographed by the photographing unit 120 and for which the face authentication resulted in an error has been resolved in the latest face image photographed by the photographing unit 120.
- the output control unit 184 may cause the display unit 131 to display an image including the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110), the face image for which the face authentication by the face authentication unit 183 resulted in an error, the latest face image captured by the imaging unit 120, and the display of the guidance acquired by the guidance acquisition unit 182.
- the image P21 in FIG. 3 corresponds to an example of a passport face image.
- the image P22 in FIG. 3 corresponds to an example of a face image that was photographed by the photographing unit 120 and for which the face authentication by the face authentication unit 183 resulted in an error.
- the image P23 in FIG. 3 corresponds to an example of the latest face image photographed by the photographing unit 120.
- the display of the message in the area A21 in FIG. 3 corresponds to an example of the display of the guidance acquired by the guidance acquisition unit 182.
- the output control unit 184 may display the guidance acquired by the guidance acquisition unit 182 on the display unit 131 as character information.
- the subject can read the guidance display based on the text information and grasp the guidance.
- the display of the message in the area A21 in FIG. 3 corresponds to an example of display with the text information of the guidance acquired by the guidance acquisition unit 182.
- the output control unit 184 may display guidance to the subject on the display unit 131 with icons.
- the display of arrow B21 in FIG. 3 corresponds to an example of icon display of guidance to the subject.
- the correspondence relationship information storage unit 171 may store correspondence information that associates a difference in face position between the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110) and the face image captured by the imaging unit 120 with guidance indicating the direction in which the subject should move the face in order to reduce that difference.
- when the difference detection unit 181 detects a difference in face position between the passport face image and the face image captured by the imaging unit 120, the output control unit 184 may cause the display unit 131 to display guidance indicating, with an arrow, the direction in which the subject should move (in particular, move the face).
- the display of the arrow B21 in FIG. 3 corresponds to an example of guidance display in which the direction in which the subject moves the face is indicated by an arrow.
- the output control unit 184 may cause the voice output unit 132 to output the guidance at least by voice.
- the subject can grasp the guidance even when the display screen of the display unit 131 is not viewed or when the display screen of the display unit 131 cannot be viewed due to visual impairment or the like.
- the output control unit 184 may cause the output unit 130 to output an alarm notifying the subject that the output unit 130 is outputting guidance.
- the output control unit 184 may display the guidance on the display unit 131 and cause the voice output unit 132 to output a voice message “Please check the guidance”. Thereby, the possibility that the photographed person may fail to grasp the guidance (for example, the possibility of overlooking the guidance display) can be reduced.
- FIG. 7 is a flowchart illustrating an example of a processing procedure performed by the face authentication apparatus 100.
- the face authentication device 100 starts the process of FIG. 7.
- the data acquisition unit 110 reads data from the IC chip embedded in the passport (step S101).
- the data acquisition unit 110 reads image data of a face image (passport face image) registered in advance from the IC chip.
- the photographing unit 120 photographs a person to be authenticated (photographed person) with a camera and acquires a face image (step S102). Then, the face authentication unit 183 performs face authentication using the face image indicated by the image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120 (step S103). Then, the face authentication unit 183 determines whether the result of face authentication has been obtained or whether the face authentication has failed (step S104).
- when it is determined that the result of the face authentication has been obtained (step S104: YES), the face authentication apparatus 100 performs processing based on the authentication result (step S111). For example, when the authentication succeeds (that is, when it is determined that the person to be authenticated is the same person as the one described in the passport), the face authentication apparatus 100 displays a message indicating that entry is permitted. In addition, the face authentication device 100 opens the gate door to allow the person to be authenticated to pass through the gate. On the other hand, when the authentication fails (that is, when it is determined that the person to be authenticated is a different person from the one described in the passport), the face authentication apparatus 100 displays a message prompting the person to be authenticated to contact a staff member. After step S111, the process of FIG. 7 ends.
- when it is determined in step S104 that the face authentication has failed (step S104: NO), the difference detection unit 181 detects a difference between the face image obtained from the passport in step S101 and the face image captured by the image capturing unit 120 in step S102 (step S121). Then, the guidance acquisition unit 182 acquires the guidance indicating the method of coping with the difference (step S122).
- the correspondence relationship information storage unit 171 stores the correspondence relationship information in which the difference between the face images is associated with the guidance indicating the correspondence method for the difference.
- the guidance acquisition unit 182 reads the guidance associated with the difference obtained in step S121 from the correspondence information.
- the output unit 130 outputs the guidance obtained in step S122 under the control of the output control unit 184 (step S123). After step S123, the process returns to step S102.
- when face authentication has failed a predetermined number of times or more, the face authentication apparatus 100 may perform a predetermined process such as displaying a message prompting the person to be authenticated to contact a staff member. Specifically, the face authentication apparatus 100 counts the number of times it has been determined in step S104 that the face authentication failed, and determines whether that number is equal to or greater than a predetermined number. When it is, the output control unit 184, for example, causes the display unit 131 to display a message prompting the person to be authenticated to contact a staff member. Thereby, the face authentication apparatus 100 can cope with a case where it is difficult to eliminate the cause of a face authentication error.
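- the control flow of FIG. 7, including the retry limit described above, can be sketched as follows. The retry count, the message strings, and the callable stand-ins for the units of the face authentication apparatus 100 are assumptions for illustration.

```python
MAX_FAILURES = 3  # the "predetermined number" of failures (assumed value)

def authentication_loop(passport_face, capture, authenticate,
                        detect_difference, acquire_guidance, output):
    """Sketch of FIG. 7: retry capture with guidance until authentication
    yields a result, escalating to a staff member after repeated failures."""
    failures = 0
    while True:
        captured = capture()                       # step S102: photograph the subject
        if authenticate(passport_face, captured):  # steps S103-S104
            return "authenticated"                 # step S111 (success path)
        failures += 1
        if failures >= MAX_FAILURES:
            output("Please contact a staff member.")
            return "escalated"
        difference = detect_difference(passport_face, captured)  # step S121
        output(acquire_guidance(difference))       # steps S122-S123, then retry
```

A usage example: if the first capture shows a mask and the second succeeds, the loop outputs one guidance message and then returns "authenticated".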
- the data acquisition unit 110 acquires face image data.
- the photographing unit 120 photographs a face image of the person to be authenticated.
- the correspondence relationship information storage unit 171 stores, in advance, correspondence information indicating the correspondence between differences between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120, and guidance to the subject regarding those differences.
- the difference detection unit 181 detects a difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120.
- the guidance acquisition unit 182 acquires the guidance associated by the correspondence information with the difference detected by the difference detection unit 181.
- the output control unit 184 controls the output unit 130 so that the output unit 130 outputs the guidance acquired by the guidance acquisition unit 182.
- the data acquisition unit 110 acquires face image data from the passport. Thereby, it is not necessary to prepare face image data exclusively for processing of the face authentication apparatus 100, and in this respect, the burden on the administrator of the face authentication apparatus 100 can be reduced. Further, the storage unit 170 does not need to store face image data in advance. In this respect, the storage capacity of the storage unit 170 can be small, and the manufacturing cost of the face authentication apparatus 100 can be reduced.
- the data acquisition unit 110 acquires attribute data indicating the attributes of the subject from the passport. Then, the guidance acquisition unit 182 acquires guidance according to the attribute indicated by the attribute data acquired by the data acquisition unit 110. Thereby, the face authentication apparatus 100 can output appropriate guidance according to the attribute of the subject.
- the data acquisition unit 110 acquires attribute data indicating the nationality of the subject from the passport.
- the guidance acquisition unit 182 acquires guidance in a language according to the nationality indicated by the attribute data. Thereby, the face authentication apparatus 100 can output guidance in a language that the subject can understand.
- the output control unit 184 controls the output unit 130 so that the output unit 130 outputs a glasses wearing instruction that prompts the subject to wear glasses.
- when the subject wears glasses in accordance with the glasses wearing instruction, the difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120 is reduced, and the possibility that the face authentication performed by the face authentication unit 183 results in an error can be reduced.
- the output control unit 184 may cause the output unit 130 to output a glasses wearing instruction that prompts the subject to wear glasses.
- the glasses wearing instruction corresponds to an example of guidance.
- the output control unit 184 causes the display unit 131 to display the face image indicated by the face image data acquired by the data acquisition unit 110. Further, the output control unit 184 controls the output unit 130 so that the output unit 130 outputs a condition matching instruction that prompts the subject to match the conditions under which the photographing unit 120 captures the face image with the conditions indicated by the face image displayed on the display unit 131.
- when the subject, in accordance with the condition matching instruction, matches the conditions under which the imaging unit 120 captures the face image with the conditions indicated by the face image displayed on the display unit 131, the possibility that the face authentication performed by the face authentication unit 183 results in an error can be reduced.
- the output control unit 184 controls the output unit 130 so that the output unit 130 outputs the difference information indicating the difference detected by the difference detection unit 181.
- the subject can grasp, with reference to the difference information, the difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120.
- the subject can increase the possibility of grasping a method for dealing with the difference (for example, a method for reducing the difference).
- the output control unit 184 may also cause the display unit 131 to display difference information that indicates a differing portion between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the photographing unit 120, shown in either or both of those face images. For example, if the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110) is not wearing glasses while the face image captured by the imaging unit 120 is wearing glasses, the glasses in the face image captured by the imaging unit 120 may be highlighted with a broken line.
- the person to be photographed can more surely understand the difference between the face image of the passport and the face image photographed by the photographing unit 120.
- the person to be photographed can increase the possibility of grasping a method for dealing with the difference (for example, a method for reducing the difference).
- when the face authentication performed by the face authentication unit 183 results in an error, the difference detection unit 181 may estimate (detect) a difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the photographing unit 120 that is a cause of the face authentication error.
- then, the output control unit 184 controls the output unit 130 so that the output unit 130 outputs a coping method according to the difference, whereby the subject can grasp a method of coping with the cause of the face authentication error (for example, a coping method that reduces the difference that caused the error).
- the output control unit 184 causes the display unit 131 to display an image including the face image indicated by the face image data acquired by the data acquisition unit 110, the latest face image captured by the imaging unit 120, and the display of the guidance acquired by the guidance acquisition unit 182.
- thereby, the subject can grasp a method of coping with the difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120.
- in particular, since the display unit 131 displays the face image indicated by the face image data acquired by the data acquisition unit 110 and the latest face image captured by the imaging unit 120, the subject can grasp the difference between the two face images, which increases the possibility of grasping a method of coping with the difference.
- the output control unit 184 causes the display unit 131 to display an image including the face image that was captured by the imaging unit 120 and for which the face authentication by the face authentication unit 183 resulted in an error, the latest face image captured by the imaging unit 120, and the display of the guidance acquired by the guidance acquisition unit 182.
- thereby, the subject can check whether the cause of the error in the face image that was photographed by the photographing unit 120 and for which the face authentication resulted in an error has been resolved in the latest face image photographed by the photographing unit 120.
- the output control unit 184 also causes the display unit 131 to display an image including the face image indicated by the face image data acquired by the data acquisition unit 110, the face image that was captured by the imaging unit 120 and for which the face authentication by the face authentication unit 183 resulted in an error, the latest face image captured by the imaging unit 120, and the display of the guidance acquired by the guidance acquisition unit 182.
- since the display unit 131 displays the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image that was captured by the imaging unit 120 and for which the face authentication resulted in an error, the subject can confirm the difference between these two face images. By confirming the difference, the subject is expected to be able to accurately grasp the guidance displayed on the display unit 131.
- the output control unit 184 causes the display unit 131 to display the guidance acquired by the guidance acquisition unit 182 as character information.
- the subject can read the guidance display based on the text information and grasp the guidance.
- the output control unit 184 causes the display unit 131 to display guidance to the subject as an icon.
- the display unit 131 displays the guidance by displaying an icon such as an arrow, so that various subjects who have different languages can grasp the guidance.
- the correspondence information storage unit 171 stores correspondence information that associates a difference in face position between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120 with guidance indicating the direction in which the subject should move the face in order to reduce that difference.
- when the difference detection unit 181 detects a difference in face position between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120, guidance indicating the direction with an arrow is displayed on the display unit 131.
- various subjects who use different languages can grasp the moving direction.
- the output control unit 184 causes the voice output unit 132 to output the guidance at least by voice.
- the subject can grasp the guidance even when the display screen of the display unit 131 is not viewed or when the display screen of the display unit 131 cannot be viewed due to visual impairment or the like.
- The output control unit 184 causes the output unit 130 to output an alert indicating that guidance is about to be output. This can reduce the possibility that the subject misses the guidance.
- FIG. 8 is a schematic block diagram showing the minimum configuration of the guidance acquisition apparatus according to the present invention.
- The guidance acquisition device 10 shown in FIG. 8 includes a data acquisition unit 11, a photographing unit 12, a correspondence relationship information storage unit 13, a difference detection unit 14, a guidance acquisition unit 15, and an output control unit 16.
- the data acquisition unit 11 acquires face image data.
- the photographing unit 12 photographs a face image.
- The correspondence relationship information storage unit 13 stores in advance correspondence relationship information indicating the correspondence between differences between the face image indicated by the face image data acquired by the data acquisition unit 11 and the face image captured by the photographing unit 12, and guidance to the subject regarding those differences.
- the difference detection unit 14 detects a difference between the face image indicated by the face image data acquired by the data acquisition unit 11 and the face image captured by the imaging unit 12.
- The guidance acquisition unit 15 acquires the guidance associated, in the correspondence information, with the difference detected by the difference detection unit 14.
- the output control unit 16 controls the output unit so that the output unit outputs the guidance acquired by the guidance acquisition unit 15.
- This increases the possibility that the person to be authenticated can grasp how to respond when face authentication cannot be performed appropriately.
- The control unit 180 (the difference detection unit 14, the guidance acquisition unit 15, and the output control unit 16) in the above-described embodiment can be realized by a CPU reading and executing a program.
- a program for realizing this function may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed.
- the “computer system” includes an OS and hardware such as peripheral devices.
- the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system.
- the “computer-readable recording medium” may also include something that dynamically holds a program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and something that holds the program for a certain period of time, such as volatile memory inside a computer system serving as a server or client in that case.
- the program may be one for realizing part of the functions described above, or one that can realize the functions described above in combination with a program already recorded in the computer system.
- a guidance acquisition device comprising:
- Appendix 3 The guidance acquisition device according to Appendix 1 or Appendix 2, wherein the data acquisition unit acquires face image data from a passport.
- Appendix 4 The guidance acquisition device according to Appendix 3, wherein the data acquisition unit acquires attribute data indicating an attribute of the subject from the passport, and the guidance acquisition unit acquires guidance corresponding to the attribute indicated by the attribute data acquired by the data acquisition unit.
- The guidance acquisition device according to any one of Appendices 1 to 5, wherein, when the face image indicated by the face image data acquired by the data acquisition unit is an image wearing glasses, the output control unit controls the output unit so that the output unit outputs a glasses wearing instruction prompting the subject to wear glasses.
- The output control unit controls the output unit so that the output unit displays the face image indicated by the face image data acquired by the data acquisition unit, and further controls the output unit so that the output unit outputs a condition matching instruction prompting the subject to match the conditions under which the imaging unit captures a face image to the conditions indicated by the face image displayed by the output unit.
- The output control unit controls the output unit so that the output unit outputs difference information indicating the difference detected by the difference detection unit.
- The guidance acquisition device according to Appendix 8, wherein the output control unit controls the output unit so that the output unit displays difference information in at least one of the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit, at the portion in which the two face images differ.
- The guidance acquisition device according to any one of Appendices 1 to 9, wherein, when the face authentication performed by the face authentication unit results in an error, the difference detection unit detects a difference between the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit, namely the difference that caused the face authentication error.
- The guidance acquisition device according to any one of Appendices 1 to 10, wherein the imaging unit repeats imaging, and the output control unit controls the output unit so that the output unit displays an image including the face image indicated by the face image data acquired by the data acquisition unit, the latest face image captured by the imaging unit, and a display of the guidance acquired by the guidance acquisition unit.
- The guidance acquisition device according to Appendix 10, wherein the imaging unit repeats imaging, and the output control unit controls the output unit so that the output unit displays an image including the face image that was captured by the imaging unit and used for the face authentication that resulted in an error, the latest face image captured by the imaging unit, and a display of the guidance acquired by the guidance acquisition unit.
- The guidance acquisition device according to Appendix 10, wherein the imaging unit repeats imaging, and the output control unit controls the output unit so that the output unit displays an image including the face image indicated by the face image data acquired by the data acquisition unit, the face image used for the face authentication that resulted in an error, the latest face image captured by the imaging unit, and the display of the guidance acquired by the guidance acquisition unit.
- The guidance acquisition device according to Appendix 15, wherein the correspondence information storage unit stores correspondence information that associates a difference in face position between the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the photographing unit with guidance indicating the direction in which the subject should move his or her face, and, when the difference detection unit detects such a difference in face position, the output control unit controls the output unit so that the output unit displays guidance indicating the direction with an arrow.
- A guidance acquisition method performed by a guidance acquisition device including a correspondence relationship information storage unit that stores in advance correspondence relationship information indicating the correspondence between differences between a face image represented by face image data and a face image obtained by shooting, and guidance to the subject regarding those differences, the method comprising: a data acquisition step of acquiring the face image data; a shooting step of capturing the face image; a difference detection step of detecting a difference between the face image indicated by the face image data acquired in the data acquisition step and the face image photographed in the shooting step; a guidance acquisition step of acquiring the guidance associated, in the correspondence information, with the difference detected in the difference detection step; and an output control step of controlling an output unit so that the output unit outputs the guidance acquired in the guidance acquisition step.
- According to the present invention, it is possible to increase the possibility that the person to be authenticated can grasp how to respond when a device that performs face authentication cannot perform face authentication appropriately.
Abstract
Description
Here, several techniques have been proposed for improving the accuracy of face authentication. For example, the face verification device described in Patent Document 1 includes imaging means for capturing a face image of a subject, light direction detection means for detecting the direction of the brightest light in front of the imaging means, and guidance means for guiding the subject to stand in the direction of the detected light. According to Patent Document 1, this allows the subject to be guided to an appropriate imaging position and an appropriate face image to be captured even when the face verification device is installed outdoors or in a place exposed to outside light.
an imaging unit that captures a face image;
a difference detection unit that detects, based on at least one of the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit, a difference or a candidate difference between the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit;
a guidance acquisition unit that acquires guidance based on the difference or candidate difference detected by the difference detection unit; and
an output control unit that controls an output unit so that the output unit outputs the guidance acquired by the guidance acquisition unit.
The present invention provides a guidance acquisition device comprising the above.
The present invention also provides a guidance acquisition method including:
a data acquisition step of acquiring the face image data;
an imaging step of capturing the face image;
a difference detection step of detecting a difference between the face image indicated by the face image data acquired in the data acquisition step and the face image captured in the imaging step;
a guidance acquisition step of acquiring the guidance associated, in the correspondence information, with the difference detected in the difference detection step; and
an output control step of controlling an output unit so that the output unit outputs the guidance acquired in the guidance acquisition step.
The present invention also provides a program for causing a computer to execute:
a data acquisition step of acquiring the face image data;
an imaging step of capturing a face image with the imaging unit;
a difference detection step of detecting a difference between the face image indicated by the face image data acquired in the data acquisition step and the face image captured in the imaging step;
a guidance acquisition step of acquiring the guidance associated, in the correspondence information, with the difference detected in the difference detection step; and
an output control step of controlling an output unit so that the output unit outputs the guidance acquired in the guidance acquisition step.
FIG. 1 is a schematic block diagram showing the functional configuration of a face authentication device according to an embodiment of the present invention. In the figure, the face authentication device 100 includes a data acquisition unit 110, an imaging unit 120, an output unit 130, a storage unit 170, and a control unit 180.
The output unit 130 includes a display unit 131 and an audio output unit 132.
The storage unit 170 includes a correspondence information storage unit 171.
The control unit 180 includes a difference detection unit 181, a guidance acquisition unit 182, a face authentication unit 183, and an output control unit 184.
However, the application of the face authentication device 100 is not limited to immigration control. For example, the face authentication device 100 may be a device that uses face authentication to check whether a person is authorized to enter or leave a specific facility. In this case, the face authentication device 100 may perform face authentication by reading face image data from an identification card (for example, an ID card) held by the person to be authenticated and comparing it with the captured image. Alternatively, the face authentication device 100 may store the face image data of authorized persons in advance.
The face authentication device 100 is realized, for example, by a computer equipped with a camera or the like executing a program. Alternatively, the face authentication device 100 may be configured using dedicated hardware.
The data acquisition unit 110 also acquires, from the IC chip embedded in the passport, attribute data indicating attributes such as the nationality, gender, and age of the person to be authenticated (the person photographed by the imaging unit 120; hereinafter simply referred to as the subject), in addition to the face image data. The attributes of the person to be authenticated here are that person's innate properties (that is, properties determined at birth). Examples of such attributes include, as mentioned above, the nationality, gender, and age of the person to be authenticated.
For example, when the face authentication device 100 is a device that uses face authentication to check whether a person is authorized to enter or leave a specific facility, the data acquisition unit 110 may read the face image data of the person to be authenticated (face image data indicating that person's face image) and the attribute data of the person to be authenticated (attribute data indicating that person's attributes) from an identification card (for example, an ID card) held by that person.
Alternatively, the storage unit 170 may store the face image data and the attribute data in advance (before the face authentication device 100 performs face authentication), and the data acquisition unit 110 may read the face image data and attribute data of the person to be authenticated from the storage unit 170.
Note that it is not essential for the data acquisition unit 110 to acquire the attribute data of the person to be authenticated. The data acquisition unit 110 only needs to acquire at least the face image data.
The output of guidance here may be the output of a signal indicating the guidance. Alternatively, the output of guidance here may be the presentation of the guidance in a form that the person to be authenticated can grasp, such as displaying the guidance or outputting it as audio.
The display unit 131 has a display screen such as a liquid crystal panel or an LED (light-emitting diode) panel, and displays various images. In particular, the display unit 131 displays the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120 (the face image obtained by the imaging of the imaging unit 120). The display unit 131 also displays guidance.
The display of guidance by the display unit 131 corresponds to an example of the output of guidance.
The display unit 131 also displays the message "Your face is being photographed" in area A11. By displaying this message, the display unit 131 allows the subject to grasp that the face authentication device 100 is performing face authentication.
Note that the image P23 may be the latest image captured by the imaging unit 120 displayed as-is, without being flipped horizontally.
The display unit 131 also displays an arrow B21. This arrow B21 also corresponds to an example of guidance and, like the message in area A21, indicates the response of the subject moving closer to the camera of the imaging unit 120.
Note that the method by which the face authentication device 100 outputs guidance is not limited to displaying the guidance or outputting it as audio. For example, the output unit 130 may be configured as a device separate from the face authentication device 100, and the face authentication device 100 may output (transmit) a signal indicating the guidance to the output unit 130.
The correspondence information storage unit 171 stores correspondence information. The correspondence information stored in the correspondence information storage unit 171 is information indicating the correspondence between differences between the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110) and the face image captured by the imaging unit 120, and guidance to the subject regarding those differences.
The correspondence information storage unit 171 shown in the figure has a tabular data structure, with each row containing a difference column and a response method column.
The difference column stores information indicating a difference between the passport face image and the face image captured by the imaging unit 120. The response method column stores information indicating how the subject (the person to be authenticated) should respond to the difference indicated in the difference column. The information stored in the response method column corresponds to an example of guidance to the subject.
In this way, the correspondence information associates, row by row, differences between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120 with guidance to the subject regarding those differences.
For example, if the passport face image has no beard but the face image captured by the imaging unit 120 does, responding by shaving on the spot would be difficult. The correspondence information in FIG. 4 therefore indicates "Please notify a staff member" as the response method when the difference is "beard". By outputting (for example, displaying) this response method, the face authentication device 100 lets the person to be authenticated realize that they should contact a staff member and ask how to proceed.
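The correspondence information described above is essentially a lookup from a detected difference to a response-method message. A minimal sketch follows; the table entries and the fallback message are illustrative assumptions, not the patent's actual table:

```python
# Minimal sketch of the correspondence information storage unit:
# each detected difference maps to a guidance message for the subject.
# The entries below are illustrative assumptions, not the actual table.
CORRESPONDENCE_INFO = {
    "beard": "Please notify a staff member",
    "mask": "Please remove your mask",
    "too_far": "Please move closer to the camera",
    "expression": "Your facial expression is different",
}

def acquire_guidance(difference: str) -> str:
    """Return the guidance associated with a detected difference."""
    # Fall back to contacting staff when no specific response is stored.
    return CORRESPONDENCE_INFO.get(difference, "Please notify a staff member")

print(acquire_guidance("mask"))     # Please remove your mask
print(acquire_guidance("unknown"))  # Please notify a staff member
```

The dictionary plays the role of the tabular difference/response-method columns; a real implementation could equally back it with a database table.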
For example, the display unit 131 may display the passport face image together with guidance saying "Your facial expression is different". In this case, the subject can look at the passport face image and grasp and carry out a response such as making the same facial expression as in that image.
The difference detection unit 181 detects, based on at least one of the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120, a difference or a candidate difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120.
When the difference detection unit 181 detects such a candidate, the display unit 131 can, for example, display the passport face image with the glasses highlighted, so that the subject notices that glasses should be worn. When a subject who was not wearing glasses puts them on, the possibility that the face authentication performed by the face authentication device 100 results in an error can be reduced.
In this case, the difference detection unit 181 can also detect a candidate difference even when the imaging unit 120 has not captured the subject's face. The display unit 131 can therefore display the suggestion to wear glasses at an early stage, for example before the subject stands within the imaging range of the imaging unit 120.
Also in this case, the difference detection unit 181 can detect a candidate difference even when the data acquisition unit 110 has not acquired the face image data. The display unit 131 can therefore display the suggestion to remove a mask at an early stage, for example before the subject places the passport on the passport reader.
Note that the difference detection unit 181 may detect candidate differences other than whether a mask is worn, such as detecting that the face image captured by the imaging unit 120 is facing sideways.
The difference detection unit 181 then determines whether the distance between the left eye and the right eye (the length of arrow B31) is smaller than an interval threshold, and whether the size of the face region is smaller than a face region threshold. Here, the interval threshold and the face region threshold are both predetermined thresholds (for example, constants) set for determining whether the subject is far from the imaging unit 120. The size of the face region here may be any of the area of the face region image, the height of the face region image, or the width of the face region image, or a combination of these.
When it determines that at least one of the two conditions holds, namely that the distance between the left eye and the right eye is smaller than the interval threshold or that the size of the face region is smaller than the face region threshold, the difference detection unit 181 determines (detects) that the subject is far from the imaging unit 120.
Note that the method by which the difference detection unit 181 detects the difference that the subject is far from the imaging unit 120 is not limited to the above. For example, the difference detection unit 181 may detect this difference based on only one of the distance between the left eye and the right eye and the size of the face region.
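The distance check just described reduces to two threshold comparisons joined by OR. A sketch, with the threshold values chosen purely for illustration (the patent leaves them as unspecified constants):

```python
def is_too_far(eye_distance_px: float, face_size_px: float,
               eye_threshold: float = 60.0, face_threshold: float = 150.0) -> bool:
    """Detect the 'subject is far from the camera' difference.

    The subject is judged too far when the left-right eye spacing OR the
    face-region size falls below its threshold, matching the either-or
    rule in the description. The threshold values are illustrative
    constants, not values from the patent.
    """
    return eye_distance_px < eye_threshold or face_size_px < face_threshold

# Narrow eye spacing flags the subject even when the face region is large.
print(is_too_far(eye_distance_px=40.0, face_size_px=200.0))  # True
print(is_too_far(eye_distance_px=80.0, face_size_px=200.0))  # False
```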
The difference detection unit 181 then determines whether the x coordinate of the left eye is larger than a left-eye rightward-shift threshold, whether the x coordinate of the right eye is larger than a right-eye rightward-shift threshold, and whether the x coordinate of the face region (for example, the average of the x coordinates of the left and right edges of the region extracted as the face region) is larger than a face-region rightward-shift threshold.
Here, the left-eye rightward-shift threshold, the right-eye rightward-shift threshold, and the face-region rightward-shift threshold are all predetermined thresholds (for example, constants) set for determining whether the subject's position is shifted to the right relative to the imaging unit 120 as seen from the subject.
Here, the left-eye leftward-shift threshold, the right-eye leftward-shift threshold, and the face-region leftward-shift threshold are all predetermined thresholds (for example, constants) set for determining whether the subject's position is shifted to the left relative to the imaging unit 120 as seen from the subject.
In this case, the output unit 130 outputs (for example, displays) the response method "Please move to the left".
The difference that the subject's position is shifted to the left relative to the imaging unit 120 as seen from the subject corresponds to positional shift (left) among the "positional shifts (left/right)" in the example of FIG. 4.
In this case, the output unit 130 outputs (for example, displays) the response method "Please move to the right".
For example, the difference detection unit 181 may detect the difference that the subject's position is shifted horizontally (to the left or to the right) based on one or more of the x coordinates of the left ear, the right ear, the nose, and the mouth, instead of, or in addition to one or more of, the x coordinates of the left eye, the right eye, and the face region.
Note that the difference detection unit 181 may use, as the x coordinate of a facial part, the x coordinate of the center of that part, the x coordinate of its left edge, or the x coordinate of its right edge.
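The lateral-shift check compares feature x coordinates against shift thresholds. The sketch below simplifies the description by using one shared rightward threshold and one shared leftward threshold instead of per-feature constants; the values and the simplification are assumptions for illustration:

```python
def lateral_shift(left_eye_x: float, right_eye_x: float, face_x: float,
                  right_threshold: float = 400.0,
                  left_threshold: float = 240.0) -> str:
    """Detect whether the subject is shifted left or right of the camera.

    A rightward shift is detected when the left-eye, right-eye, and
    face-region x coordinates all exceed the rightward-shift threshold;
    a leftward shift when they all fall below the leftward-shift
    threshold. Thresholds are illustrative, not from the patent.
    """
    coords = (left_eye_x, right_eye_x, face_x)
    if all(x > right_threshold for x in coords):
        return "right"     # guidance: "Please move to the left"
    if all(x < left_threshold for x in coords):
        return "left"      # guidance: "Please move to the right"
    return "centered"

print(lateral_shift(450.0, 500.0, 475.0))  # right
print(lateral_shift(100.0, 150.0, 125.0))  # left
print(lateral_shift(300.0, 350.0, 325.0))  # centered
```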
For example, when the difference detection unit 181 detects, as described above, that the passport face image is a face image wearing glasses, the guidance acquisition unit 182 may acquire information indicating the process of displaying the passport face image with the glasses highlighted. For this purpose, the difference detection unit 181 or the storage unit 170 stores in advance information that associates the candidate difference that the passport face image is a face image wearing glasses (the candidate difference of whether glasses are worn) with the process of displaying the passport face image with the glasses highlighted.
Note that displaying the passport face image with the glasses highlighted corresponds to an example of guidance suggesting that the subject wear glasses.
Note that the message "Please remove your mask" corresponds to an example of guidance suggesting that the subject remove a mask.
For example, the correspondence information storage unit 171 stores correspondence information that associates differences between the passport image and the image captured by the imaging unit 120 with messages as guidance. The guidance acquisition unit 182 then acquires the guidance by reading out the message as guidance from the correspondence information.
Further, for example, the data acquisition unit 110 may read attribute data indicating the nationality of the person to be authenticated from the passport, and the message may be translated into the language corresponding to the read nationality to obtain guidance in that language. In this way, the guidance acquisition unit 182 may acquire guidance corresponding to the attribute indicated by the attribute data acquired by the data acquisition unit 110.
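Attribute-dependent guidance can be sketched as selecting a message language from the nationality read out of the passport. The message translations, the nationality-to-language mapping, and the English fallback are all illustrative assumptions:

```python
# Sketch of attribute-dependent guidance: pick the message language from
# the nationality in the passport's attribute data. The translations and
# the nationality-to-language mapping are illustrative assumptions.
MESSAGES = {
    "en": "Please remove your mask",
    "ja": "マスクを外してください",
    "fr": "Veuillez retirer votre masque",
}
NATIONALITY_TO_LANGUAGE = {"JPN": "ja", "FRA": "fr", "USA": "en"}

def guidance_for(nationality: str) -> str:
    """Return the mask-removal guidance in the subject's likely language."""
    lang = NATIONALITY_TO_LANGUAGE.get(nationality, "en")  # default to English
    return MESSAGES[lang]

print(guidance_for("JPN"))  # マスクを外してください
print(guidance_for("BRA"))  # Please remove your mask
```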
Note that the face authentication unit 183 may be configured as part of the face authentication device 100, or as a device separate from the face authentication device 100.
The output control unit 184 controls the output unit 130 so that the output unit 130 outputs the guidance acquired by the guidance acquisition unit 182.
For example, in the glasses wearing instruction example above, the output control unit 184 may cause the display unit 131 to display the passport face image (the face image indicated by the face image data acquired by the data acquisition unit 110).
By displaying the passport face image on the display unit 131, the subject can refer to that face image to confirm the content of the condition matching instruction.
This allows the subject to grasp the difference between the passport face image and the face image captured by the imaging unit 120. By grasping the difference, the subject is more likely to be able to grasp how to respond to it (for example, how to reduce the difference).
For example, when the passport face image is not wearing glasses but the face image captured by the imaging unit 120 is, the display unit 131 may, under the control of the output control unit 184, perform highlighted display such as drawing the glasses with a dashed line on the passport face image, or enclosing the glasses in the face image captured by the imaging unit 120 with a dashed line.
This allows the subject to grasp the difference between the passport face image and the face image captured by the imaging unit 120 more reliably, increasing the possibility of grasping how to respond to the difference (for example, how to reduce it).
Image P21 in FIG. 3 corresponds to an example of the passport face image. Image P23 in FIG. 3 corresponds to an example of the latest face image captured by the imaging unit 120. The display of the message in area A21 in FIG. 3 corresponds to an example of the display of the guidance acquired by the guidance acquisition unit 182.
Image P22 in FIG. 3 corresponds to an example of a face image that was captured by the imaging unit 120 and used by the face authentication unit 183 for face authentication that resulted in an error. Image P23 in FIG. 3 corresponds to an example of the latest face image captured by the imaging unit 120. The display of the message in area A21 in FIG. 3 corresponds to an example of the display of the guidance acquired by the guidance acquisition unit 182.
This allows the subject to confirm whether the cause of the error in the face image that was captured by the imaging unit 120 and used by the face authentication unit 183 for face authentication has been resolved in the latest face image captured by the imaging unit 120.
Image P21 in FIG. 3 corresponds to an example of the passport face image. Image P22 in FIG. 3 corresponds to an example of a face image that was captured by the imaging unit 120 and used by the face authentication unit 183 for face authentication that resulted in an error. Image P23 in FIG. 3 corresponds to an example of the latest face image captured by the imaging unit 120. The display of the message in area A21 in FIG. 3 corresponds to an example of the display of the guidance acquired by the guidance acquisition unit 182.
The display of the message in area A21 in FIG. 3 corresponds to an example of displaying the guidance acquired by the guidance acquisition unit 182 as text.
The display of arrow B21 in FIG. 3 corresponds to an example of guidance indicating with an arrow the direction in which the subject should move his or her face.
This allows the subject to grasp the guidance even when not looking at the display screen of the display unit 131, or when unable to see it due to visual impairment or the like.
In the processing of FIG. 7, the data acquisition unit 110 reads data from the IC chip embedded in the passport (step S101). In particular, the data acquisition unit 110 reads, from the IC chip, the image data of the pre-registered face image (the passport face image).
The face authentication unit 183 then performs face authentication using the face image indicated by the image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120 (step S103).
The face authentication unit 183 then determines whether a face authentication result was obtained or the face authentication resulted in an error (step S104).
For example, when the authentication succeeds (that is, when the person to be authenticated is determined to be the same as the person described in the passport), the face authentication device 100 displays a message indicating that entry is permitted. The face authentication device 100 also opens the gate door so that the person to be authenticated can pass through the gate. On the other hand, when the authentication fails (that is, when the person to be authenticated is determined to differ from the person described in the passport), the face authentication device 100 displays a message prompting the person to be authenticated to contact a staff member.
After step S111, the processing of FIG. 7 ends.
The guidance acquisition unit 182 then acquires guidance indicating how to respond to the difference obtained in step S121 (step S122). For example, as described above, the correspondence information storage unit 171 stores correspondence information in which differences between face images are associated with guidance indicating how to respond to them, and the guidance acquisition unit 182 reads, from the correspondence information, the guidance associated with the difference obtained in step S121.
The output unit 130 then outputs the guidance obtained in step S122 under the control of the output control unit 184 (step S123).
After step S123, the processing returns to step S102.
Specifically, the face authentication unit 183 counts the number of times it has determined in step S104 that the face authentication resulted in an error, and determines whether that number has reached a predetermined number. When it determines that the number of error determinations in step S104 has reached the predetermined number, the output control unit 184, for example, causes the display unit 131 to display a message prompting the person to be authenticated to contact a staff member.
This allows the face authentication device 100 to handle cases in which the face authentication error is difficult to resolve.
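The overall flow of FIG. 7, including the retry limit, can be sketched as a loop: authenticate, and on error detect the difference, output its guidance, and retry, giving up after a fixed number of errors. All function names, the dependency-injection style, and the retry limit of three are illustrative assumptions:

```python
# Sketch of the FIG. 7 flow: authenticate, on error output guidance and
# retry, and escalate to a staff member after repeated errors. The
# callable parameters and MAX_ERRORS are illustrative assumptions.
MAX_ERRORS = 3

def run_gate(read_passport, capture_face, authenticate,
             detect_difference, get_guidance, output):
    face_ref = read_passport()                      # step S101
    errors = 0
    while True:
        face_live = capture_face()                  # step S102
        result = authenticate(face_ref, face_live)  # step S103
        if result != "error":                       # step S104
            return result                           # "accept" or "reject"
        errors += 1
        if errors >= MAX_ERRORS:                    # error limit reached
            output("Please contact a staff member")
            return "staff"
        diff = detect_difference(face_ref, face_live)  # step S121
        output(get_guidance(diff))                  # steps S122-S123, then back to S102

# Example run: the first capture fails authentication, the second succeeds.
captures = iter(["masked", "clear"])
result = run_gate(
    read_passport=lambda: "ref",
    capture_face=lambda: next(captures),
    authenticate=lambda ref, live: "accept" if live == "clear" else "error",
    detect_difference=lambda ref, live: "mask",
    get_guidance=lambda diff: "Please remove your mask",
    output=print,
)
print(result)  # accept
```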
The difference detection unit 181 then detects the difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120. The guidance acquisition unit 182 then acquires the guidance associated, in the correspondence information, with the difference detected by the difference detection unit 181. The output control unit 184 then controls the output unit 130 so that the output unit 130 outputs the guidance acquired by the guidance acquisition unit 182.
This increases the possibility that the person to be authenticated can grasp how to respond when the face authentication device 100 cannot perform face authentication appropriately.
The data acquisition unit 110 acquires the face image data from the passport.
This eliminates the need to prepare face image data specifically for the processing of the face authentication device 100, which reduces the burden on the administrator of the face authentication device 100.
There is also no need for the storage unit 170 to store the face image data in advance. In this respect, the storage capacity of the storage unit 170 can be small, and the manufacturing cost of the face authentication device 100 can be kept low.
This allows the face authentication device 100 to output appropriate guidance according to the attributes of the subject.
This allows the face authentication device 100 to output guidance in a language the subject can understand.
When the subject puts on glasses in accordance with the glasses wearing instruction, the difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120 is reduced, and the possibility that the authentication performed by the face authentication unit 183 results in an error can be reduced.
Alternatively, when the face authentication unit 183 performs face authentication and the face authentication results in an error, the output control unit 184 may cause the output unit 130 to output a glasses wearing instruction prompting the subject to wear glasses. The glasses wearing instruction in this case corresponds to an example of guidance.
When the subject, in accordance with the condition matching instruction, matches the conditions under which the imaging unit 120 captures the face image to the conditions indicated by the face image displayed by the display unit 131, the possibility that the face authentication performed by the face authentication unit 183 results in an error can be reduced.
This allows the subject to refer to the difference information and grasp the difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120. By grasping the difference, the subject is more likely to be able to grasp how to respond to it (for example, how to reduce the difference).
This allows the subject to grasp the difference between the passport face image and the face image captured by the imaging unit 120 more reliably. By grasping the difference, the subject is more likely to be able to grasp how to respond to it (for example, how to reduce the difference).
By controlling the output unit 130 so that it outputs the response method corresponding to the difference, the output control unit 184 allows the subject to grasp the response method corresponding to the cause of the face authentication error (for example, a response method that reduces the difference that caused the face authentication error).
By displaying the guidance on the display unit 131, the subject can grasp how to respond to the difference between the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image captured by the imaging unit 120. Also, by displaying the face image indicated by the face image data acquired by the data acquisition unit 110 and the latest face image captured by the imaging unit 120, the display unit 131 allows the subject to grasp the difference between the two face images, increasing the possibility of grasping how to respond to it.
This allows the subject to confirm whether the cause of the error in the face image that was captured by the imaging unit 120 and used by the face authentication unit 183 for face authentication has been resolved in the latest face image captured by the imaging unit 120.
By displaying the face image indicated by the face image data acquired by the data acquisition unit 110 and the face image that was captured by the imaging unit 120 and used by the face authentication unit 183 for face authentication that resulted in an error, the display unit 131 allows the subject to confirm the difference between these two face images. By confirming the difference, the subject is expected to be able to grasp the guidance displayed on the display unit 131 accurately.
This allows the subject to read the textual display of the guidance and grasp the guidance.
In this way, by displaying guidance through icons such as arrows, the display unit 131 allows subjects who use different languages to grasp the guidance.
This allows subjects who use different languages to grasp the direction to move.
This allows the subject to grasp the guidance even when not looking at the display screen of the display unit 131, or when unable to see it due to visual impairment or the like.
This reduces the possibility that the subject fails to notice the guidance.
FIG. 8 is a schematic block diagram showing the minimum configuration of the guidance acquisition device according to the present invention. The guidance acquisition device 10 shown in the figure includes a data acquisition unit 11, an imaging unit 12, a correspondence information storage unit 13, a difference detection unit 14, a guidance acquisition unit 15, and an output control unit 16.
This increases the possibility that, when a device that performs face authentication using the face image indicated by the face image data acquired by the data acquisition unit 11 and the face image captured by the imaging unit 12 cannot perform face authentication appropriately, the person to be authenticated can grasp how to respond.
In that case, a program for realizing these functions may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed.
The "computer system" here includes an OS and hardware such as peripheral devices.
The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system.
Furthermore, the "computer-readable recording medium" may also include something that dynamically holds the program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and something that holds the program for a certain period of time, such as volatile memory inside a computer system serving as a server or client in that case.
The program may be one for realizing part of the functions described above, one that can realize the functions described above in combination with a program already recorded in the computer system, or one realized using a programmable logic device such as an FPGA (Field Programmable Gate Array).
11, 110 Data acquisition unit
12, 120 Imaging unit
13, 171 Correspondence information storage unit
14, 181 Difference detection unit
15, 182 Guidance acquisition unit
16, 184 Output control unit
100 Face authentication device
130 Output unit
131 Display unit
132 Audio output unit
170 Storage unit
180 Control unit
183 Face authentication unit
Claims (10)
- A data acquisition unit that acquires face image data,
an imaging unit that captures a face image,
a difference detection unit that detects, based on at least one of the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit, a difference or a candidate difference between the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit,
a guidance acquisition unit that acquires guidance based on the difference or candidate difference detected by the difference detection unit, and
an output control unit that controls an output unit so that the output unit outputs the guidance acquired by the guidance acquisition unit,
a guidance acquisition device comprising the above. - Comprising a correspondence information storage unit that stores in advance correspondence information indicating the correspondence between differences between the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit, and guidance to the subject regarding those differences,
wherein the difference detection unit detects a difference between the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit, and
the guidance acquisition unit acquires the guidance associated, in the correspondence information, with the difference detected by the difference detection unit,
the guidance acquisition device according to claim 1. - The guidance acquisition device according to claim 1 or 2, wherein the data acquisition unit acquires the face image data from a passport.
- The data acquisition unit acquires attribute data indicating an attribute of the subject from the passport, and
the guidance acquisition unit acquires guidance corresponding to the attribute indicated by the attribute data acquired by the data acquisition unit,
the guidance acquisition device according to claim 3. - The data acquisition unit acquires attribute data indicating the nationality of the subject, and
the guidance acquisition unit acquires guidance in a language corresponding to the nationality,
the guidance acquisition device according to claim 4. - The guidance acquisition device according to any one of claims 1 to 5, wherein, when the face image indicated by the face image data acquired by the data acquisition unit is an image wearing glasses, the output control unit controls the output unit so that the output unit outputs a glasses wearing instruction prompting the subject to wear glasses.
- The guidance acquisition device according to any one of claims 1 to 6, wherein the output control unit controls the output unit so that the output unit displays the face image indicated by the face image data acquired by the data acquisition unit, and further controls the output unit so that the output unit outputs a condition matching instruction prompting the subject to match the conditions under which the imaging unit captures a face image to the conditions indicated by the face image displayed by the output unit.
- Comprising a face authentication unit that performs face authentication using the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit,
wherein, when the face authentication performed by the face authentication unit results in an error, the difference detection unit detects a difference between the face image indicated by the face image data acquired by the data acquisition unit and the face image captured by the imaging unit, namely the difference that caused the face authentication error,
the guidance acquisition device according to any one of claims 1 to 7. - A guidance acquisition method performed by a guidance acquisition device comprising a correspondence information storage unit that stores in advance correspondence information indicating the correspondence between differences between a face image indicated by face image data and a face image obtained by imaging, and guidance to the subject regarding those differences, the method including:
a data acquisition step of acquiring the face image data,
an imaging step of capturing the face image,
a difference detection step of detecting a difference between the face image indicated by the face image data acquired in the data acquisition step and the face image captured in the imaging step,
a guidance acquisition step of acquiring the guidance associated, in the correspondence information, with the difference detected in the difference detection step, and
an output control step of controlling an output unit so that the output unit outputs the guidance acquired in the guidance acquisition step.
- A program for causing a computer comprising an imaging unit, and a correspondence information storage unit that stores in advance correspondence information indicating the correspondence between differences between a face image indicated by face image data and a face image obtained by imaging, and guidance to the subject regarding those differences, to execute:
a data acquisition step of acquiring the face image data,
an imaging step of capturing a face image with the imaging unit,
a difference detection step of detecting a difference between the face image indicated by the face image data acquired in the data acquisition step and the face image captured in the imaging step,
a guidance acquisition step of acquiring the guidance associated, in the correspondence information, with the difference detected in the difference detection step, and
an output control step of controlling an output unit so that the output unit outputs the guidance acquired in the guidance acquisition step.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017539101A JP6451861B2 (ja) | 2015-09-09 | 2016-08-24 | 顔認証装置、顔認証方法およびプログラム |
US15/758,136 US10706266B2 (en) | 2015-09-09 | 2016-08-24 | Guidance acquisition device, guidance acquisition method, and program |
US16/279,220 US10509950B2 (en) | 2015-09-09 | 2019-02-19 | Guidance acquisition device, guidance acquisition method, and program |
US16/884,801 US11501567B2 (en) | 2015-09-09 | 2020-05-27 | Guidance acquisition device, guidance acquisition method, and program |
US17/958,535 US11861939B2 (en) | 2015-09-09 | 2022-10-03 | Guidance acquisition device, guidance acquisition method, and program |
US18/514,316 US20240087363A1 (en) | 2015-09-09 | 2023-11-20 | Guidance acquisition device, guidance acquisition method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-177813 | 2015-09-09 | ||
JP2015177813 | 2015-09-09 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/758,136 A-371-Of-International US10706266B2 (en) | 2015-09-09 | 2016-08-24 | Guidance acquisition device, guidance acquisition method, and program |
US16/279,220 Continuation US10509950B2 (en) | 2015-09-09 | 2019-02-19 | Guidance acquisition device, guidance acquisition method, and program |
US16/884,801 Continuation US11501567B2 (en) | 2015-09-09 | 2020-05-27 | Guidance acquisition device, guidance acquisition method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017043314A1 true WO2017043314A1 (ja) | 2017-03-16 |
Family
ID=58239649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/074639 WO2017043314A1 (ja) | 2015-09-09 | 2016-08-24 | ガイダンス取得装置、ガイダンス取得方法及びプログラム |
Country Status (3)
Country | Link |
---|---|
US (5) | US10706266B2 (ja) |
JP (4) | JP6451861B2 (ja) |
WO (1) | WO2017043314A1 (ja) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018179723A1 (ja) * | 2017-03-30 | 2018-10-04 | パナソニックIpマネジメント株式会社 | 顔認証処理装置、顔認証処理方法及び顔認証処理システム |
WO2019032722A1 (en) * | 2017-08-09 | 2019-02-14 | Jumio Corporation | AUTHENTICATION USING A FACIAL IMAGE COMPARISON |
CN109408676A (zh) * | 2018-01-25 | 2019-03-01 | 维沃移动通信有限公司 | 一种显示用户信息的方法及终端设备 |
JP2019091318A (ja) * | 2017-11-15 | 2019-06-13 | 富士ゼロックス株式会社 | 情報処理装置、及びプログラム |
JP2020520031A (ja) * | 2017-05-16 | 2020-07-02 | アップル インコーポレイテッドApple Inc. | 拡張されたユーザ対話のための画像データに関する米国特許商標局における特許出願 |
JP2020524928A (ja) * | 2017-06-07 | 2020-08-20 | タレス・ディス・フランス・エス・ア | 制限エリア内の不認可のユーザを識別することを可能にする情報要素をデバイスに提供する方法 |
US10878274B2 (en) | 2012-08-15 | 2020-12-29 | Jumio Corporation | Systems and methods of image processing for remote validation |
US10977651B2 (en) | 2014-05-29 | 2021-04-13 | Apple Inc. | User interface for payments |
US11100349B2 (en) | 2018-09-28 | 2021-08-24 | Apple Inc. | Audio assisted enrollment |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
US11170085B2 (en) | 2018-06-03 | 2021-11-09 | Apple Inc. | Implementation of biometric authentication |
US11200309B2 (en) | 2011-09-29 | 2021-12-14 | Apple Inc. | Authentication with secondary approver |
US11206309B2 (en) | 2016-05-19 | 2021-12-21 | Apple Inc. | User interface for remote authorization |
US11287942B2 (en) | 2013-09-09 | 2022-03-29 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces |
JP2022091805A (ja) * | 2019-03-04 | 2022-06-21 | パナソニックIpマネジメント株式会社 | 顔認証登録装置および顔認証登録方法 |
US11380077B2 (en) | 2018-05-07 | 2022-07-05 | Apple Inc. | Avatar creation user interface |
US11386189B2 (en) | 2017-09-09 | 2022-07-12 | Apple Inc. | Implementation of biometric authentication |
US11393258B2 (en) | 2017-09-09 | 2022-07-19 | Apple Inc. | Implementation of biometric authentication |
US11468155B2 (en) | 2007-09-24 | 2022-10-11 | Apple Inc. | Embedded authentication systems in an electronic device |
US11532112B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Emoji recording and sending |
US11619991B2 (en) | 2018-09-28 | 2023-04-04 | Apple Inc. | Device control using gaze information |
US11676373B2 (en) | 2008-01-03 | 2023-06-13 | Apple Inc. | Personal computing device control using face detection and recognition |
WO2023157720A1 (ja) * | 2022-02-17 | 2023-08-24 | 株式会社デンソー | 車両用顔登録制御装置及び車両用顔登録制御方法 |
US12033296B2 (en) | 2023-04-24 | 2024-07-09 | Apple Inc. | Avatar creation user interface |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6728699B2 (ja) * | 2016-01-15 | 2020-07-22 | 富士通株式会社 | 生体認証装置、生体認証方法および生体認証プログラム |
JP6409929B1 (ja) * | 2017-09-19 | 2018-10-24 | 日本電気株式会社 | 照合システム |
CN109583287B (zh) * | 2017-09-29 | 2024-04-12 | 浙江莲荷科技有限公司 | 实物识别方法及验证方法 |
US11328513B1 (en) * | 2017-11-07 | 2022-05-10 | Amazon Technologies, Inc. | Agent re-verification and resolution using imaging |
CN108268619B (zh) | 2018-01-08 | 2020-06-30 | 阿里巴巴集团控股有限公司 | 内容推荐方法及装置 |
CN108446817B (zh) | 2018-02-01 | 2020-10-02 | 阿里巴巴集团控股有限公司 | 确定业务对应的决策策略的方法、装置和电子设备 |
US10574881B2 (en) * | 2018-02-15 | 2020-02-25 | Adobe Inc. | Smart guide to capture digital images that align with a target image model |
US11521460B2 (en) | 2018-07-25 | 2022-12-06 | Konami Gaming, Inc. | Casino management system with a patron facial recognition system and methods of operating same |
AU2019208182B2 (en) | 2018-07-25 | 2021-04-08 | Konami Gaming, Inc. | Casino management system with a patron facial recognition system and methods of operating same |
CN110569856B (zh) | 2018-08-24 | 2020-07-21 | 阿里巴巴集团控股有限公司 | 样本标注方法及装置、损伤类别的识别方法及装置 |
CN110570316A (zh) | 2018-08-31 | 2019-12-13 | 阿里巴巴集团控股有限公司 | 训练损伤识别模型的方法及装置 |
CN110569696A (zh) | 2018-08-31 | 2019-12-13 | 阿里巴巴集团控股有限公司 | 用于车辆部件识别的神经网络***、方法和装置 |
CN110569864A (zh) | 2018-09-04 | 2019-12-13 | 阿里巴巴集团控股有限公司 | 基于gan网络的车损图像生成方法和装置 |
US20220078338A1 (en) * | 2018-12-28 | 2022-03-10 | Sony Group Corporation | Information processing apparatus, information processing method, and information processing program |
JP7211266B2 (ja) * | 2019-05-27 | 2023-01-24 | 富士フイルムビジネスイノベーション株式会社 | 情報処理装置、及び情報処理プログラム |
WO2020261423A1 (ja) * | 2019-06-26 | 2020-12-30 | 日本電気株式会社 | 認証システム、認証方法、制御装置、コンピュータプログラム及び記録媒体 |
US20220319512A1 (en) * | 2019-09-10 | 2022-10-06 | Nec Corporation | Language inference apparatus, language inference method, and program |
CN112115803B (zh) * | 2020-08-26 | 2023-10-13 | 深圳市优必选科技股份有限公司 | 口罩状态提醒方法、装置及移动终端 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003151016A (ja) * | 2001-11-08 | 2003-05-23 | Nippon Signal Co Ltd:The | 本人認証装置及びこれを備えた自動販売装置 |
JP2008033810A (ja) * | 2006-07-31 | 2008-02-14 | Secom Co Ltd | 顔画像照合装置 |
JP2009176208A (ja) * | 2008-01-28 | 2009-08-06 | Nec Corp | 顔認証装置、システム、方法及びプログラム |
JP2013097760A (ja) * | 2011-11-07 | 2013-05-20 | Toshiba Corp | 認証システム、端末装置、認証プログラム、認証方法 |
JP2014078052A (ja) * | 2012-10-09 | 2014-05-01 | Sony Corp | 認証装置および方法、並びにプログラム |
Family Cites Families (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06119433A (ja) * | 1992-10-01 | 1994-04-28 | Toshiba Corp | 人物認証装置 |
JP2000293663A (ja) | 1999-04-01 | 2000-10-20 | Oki Electric Ind Co Ltd | 個人識別装置 |
JP2002008070A (ja) * | 2000-06-26 | 2002-01-11 | Toshiba Corp | 通行審査システム |
JP3930303B2 (ja) | 2001-11-27 | 2007-06-13 | 永大産業株式会社 | 階段室の曲がり部のアウトサイドの踏板支持部材の組み合わせ |
JP2005063172A (ja) | 2003-08-13 | 2005-03-10 | Toshiba Corp | 顔照合装置および通行制御装置 |
JP2005078589A (ja) * | 2003-09-03 | 2005-03-24 | Toshiba Corp | 人物認識装置および通行制御装置 |
US20070086626A1 (en) * | 2003-10-08 | 2007-04-19 | Xid Technologies Pte Ltd | Individual identity authentication systems |
JP4206903B2 (ja) | 2003-10-31 | 2009-01-14 | Oki Electric Ind Co Ltd | Check-in system |
JP2005149370A (ja) * | 2003-11-19 | 2005-06-09 | Matsushita Electric Ind Co Ltd | Image capturing device, personal authentication device, and image capturing method |
JP4704185B2 (ja) * | 2005-10-27 | 2011-06-15 | Fujitsu Ltd | Biometric authentication system and biometric authentication method |
JP2007140846A (ja) | 2005-11-17 | 2007-06-07 | Canon Inc | Data management system and data management method |
JP4883987B2 (ja) | 2005-11-18 | 2012-02-22 | Secom Co Ltd | Face image matching device and method of registering face images for matching |
JP4868441B2 (ja) | 2006-03-23 | 2012-02-01 | Glory Ltd | Game media transaction processing system |
JP2007322521A (ja) * | 2006-05-30 | 2007-12-13 | Konica Minolta Medical & Graphic Inc | ID photo capturing device |
JP4811863B2 (ja) | 2006-06-26 | 2011-11-09 | Glory Ltd | Face photographing device and face matching device |
JP2008250829A (ja) | 2007-03-30 | 2008-10-16 | Toshiba Corp | Pedestrian verification system and pedestrian verification method |
KR20100027700A (ko) | 2008-09-03 | 2010-03-11 | Samsung Digital Imaging Co., Ltd. | Photographing method and apparatus |
JP2010067008A (ja) | 2008-09-10 | 2010-03-25 | Oki Electric Ind Co Ltd | Imaging management system, imaging management method, authentication system, and authentication method |
RS51531B (en) | 2009-05-29 | 2011-06-30 | Vlatacom D.O.O. | MANUAL PORTABLE DEVICE FOR VERIFICATION OF PASSENGERS AND PERSONAL DOCUMENTS, READING BIOMETRIC DATA |
JP2010277504A (ja) | 2009-06-01 | 2010-12-09 | Seiko Epson Corp | Face authentication device, portable terminal, and face image display method |
JP2011039959A (ja) | 2009-08-18 | 2011-02-24 | Hitachi Kokusai Electric Inc | Monitoring system |
JP5431830B2 (ja) | 2009-08-18 | 2014-03-05 | NEC Soft Ltd | Component detection device, component detection method, program, and recording medium |
KR101195539B1 (ko) * | 2010-02-18 | 2012-10-29 | (주)유런아이 | Door opening and closing system using face recognition and detection, and method thereof |
JP2011203992A (ja) * | 2010-03-25 | 2011-10-13 | Sony Corp | Information processing apparatus, information processing method, and program |
JP5794410B2 (ja) | 2010-12-20 | 2015-10-14 | NEC Corp | Authentication card, authentication system, guidance method, and program |
EP2710514A4 (en) * | 2011-05-18 | 2015-04-01 | Nextgenid Inc | REGISTRATION TERMINAL HAVING MULTIPLE BIOMETRIC APPARATUSES INCLUDING BIOMETRIC INSCRIPTION AND VERIFICATION SYSTEMS, FACIAL RECOGNITION AND COMPARISON OF FINGERPRINTS |
US9235750B1 (en) * | 2011-09-16 | 2016-01-12 | Lytx, Inc. | Using passive driver identification and other input for providing real-time alerts or actions |
US9177130B2 (en) * | 2012-03-15 | 2015-11-03 | Google Inc. | Facial feature detection |
EP2704107A3 (en) * | 2012-08-27 | 2017-08-23 | Accenture Global Services Limited | Virtual Access Control |
EP2704077A1 (en) * | 2012-08-31 | 2014-03-05 | Nxp B.V. | Authentication system and authentication method |
US11031790B2 (en) * | 2012-12-03 | 2021-06-08 | ChargeItSpot, LLC | System and method for providing interconnected and secure mobile device charging stations |
US20140270404A1 (en) * | 2013-03-15 | 2014-09-18 | Eyelock, Inc. | Efficient prevention of fraud |
WO2015001791A1 (ja) * | 2013-07-03 | 2015-01-08 | Panasonic Intellectual Property Management Co., Ltd. | Object recognition device and object recognition method |
KR101536816B1 (ko) * | 2013-09-12 | 2015-07-15 | 장재성 | Security access control system and method using a shielded passage |
KR102270674B1 (ko) * | 2013-09-30 | 2021-07-01 | Samsung Electronics Co., Ltd. | Biometric camera |
US10129251B1 (en) * | 2014-02-11 | 2018-11-13 | Morphotrust Usa, Llc | System and method for verifying liveliness |
WO2015137645A1 (ko) | 2014-03-13 | 2015-09-17 | LG Electronics Inc. | Mobile terminal and control method therefor |
US10657749B2 (en) * | 2014-04-25 | 2020-05-19 | Vivint, Inc. | Automatic system access using facial recognition |
US10235822B2 (en) * | 2014-04-25 | 2019-03-19 | Vivint, Inc. | Automatic system access using facial recognition |
US9405967B2 (en) | 2014-09-03 | 2016-08-02 | Samet Privacy Llc | Image processing apparatus for facial recognition |
US20160070952A1 (en) * | 2014-09-05 | 2016-03-10 | Samsung Electronics Co., Ltd. | Method and apparatus for facial recognition |
EP3692896A1 (en) * | 2014-11-04 | 2020-08-12 | Samsung Electronics Co., Ltd. | Electronic device, and method for analyzing face information in electronic device |
US9886639B2 (en) * | 2014-12-31 | 2018-02-06 | Morphotrust Usa, Llc | Detecting facial liveliness |
JP6483485B2 (ja) * | 2015-03-13 | 2019-03-13 | Toshiba Corp | Person authentication method |
US9652919B2 (en) * | 2015-04-22 | 2017-05-16 | Dell Products Lp | Dynamic authentication adaptor systems and methods |
- 2016
  - 2016-08-24 WO PCT/JP2016/074639 patent/WO2017043314A1/ja active Application Filing
  - 2016-08-24 JP JP2017539101A patent/JP6451861B2/ja active Active
  - 2016-08-24 US US15/758,136 patent/US10706266B2/en active Active
- 2018
  - 2018-12-13 JP JP2018233808A patent/JP2019040642A/ja active Pending
- 2019
  - 2019-02-19 US US16/279,220 patent/US10509950B2/en active Active
- 2020
  - 2020-05-27 US US16/884,801 patent/US11501567B2/en active Active
  - 2020-07-21 JP JP2020124731A patent/JP2020170569A/ja active Pending
- 2022
  - 2022-09-07 JP JP2022142295A patent/JP7420183B2/ja active Active
  - 2022-10-03 US US17/958,535 patent/US11861939B2/en active Active
- 2023
  - 2023-11-20 US US18/514,316 patent/US20240087363A1/en active Pending
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11468155B2 (en) | 2007-09-24 | 2022-10-11 | Apple Inc. | Embedded authentication systems in an electronic device |
US11676373B2 (en) | 2008-01-03 | 2023-06-13 | Apple Inc. | Personal computing device control using face detection and recognition |
US11200309B2 (en) | 2011-09-29 | 2021-12-14 | Apple Inc. | Authentication with secondary approver |
US11755712B2 (en) | 2011-09-29 | 2023-09-12 | Apple Inc. | Authentication with secondary approver |
US10878274B2 (en) | 2012-08-15 | 2020-12-29 | Jumio Corporation | Systems and methods of image processing for remote validation |
US11455786B2 (en) | 2012-08-15 | 2022-09-27 | Jumio Corporation | Systems and methods of image processing for remote validation |
US11494046B2 (en) | 2013-09-09 | 2022-11-08 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs |
US11768575B2 (en) | 2013-09-09 | 2023-09-26 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs |
US11287942B2 (en) | 2013-09-09 | 2022-03-29 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces |
US10977651B2 (en) | 2014-05-29 | 2021-04-13 | Apple Inc. | User interface for payments |
US11836725B2 (en) | 2014-05-29 | 2023-12-05 | Apple Inc. | User interface for payments |
US11206309B2 (en) | 2016-05-19 | 2021-12-21 | Apple Inc. | User interface for remote authorization |
WO2018179723A1 (ja) * | 2017-03-30 | 2018-10-04 | Panasonic Intellectual Property Management Co., Ltd. | Face authentication processing device, face authentication processing method, and face authentication processing system |
JP2018169943A (ja) * | 2017-03-30 | 2018-11-01 | Panasonic Intellectual Property Management Co., Ltd. | Face authentication processing device, face authentication processing method, and face authentication processing system |
JP2020520031A (ja) * | 2017-05-16 | 2020-07-02 | Apple Inc. | Image data for enhanced user interactions |
US11532112B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Emoji recording and sending |
JP2020524928A (ja) * | 2017-06-07 | 2020-08-20 | Thales Dis France Sa | Method for provisioning a device with an information element allowing to identify unauthorized users in a restricted area |
US11398121B2 (en) | 2017-06-07 | 2022-07-26 | Thales Dis France Sa | Method for provisioning a device with an information element allowing to identify unauthorized users in a restricted area |
JP2022122858A (ja) * | 2017-06-07 | 2022-08-23 | Thales Dis France Sa | Method for provisioning a device with an information element allowing to identify unauthorized users in a restricted area |
US10977356B2 (en) | 2017-08-09 | 2021-04-13 | Jumio Corporation | Authentication using facial image comparison |
CN110214320A (zh) * | 2017-08-09 | 2019-09-06 | Jumio Corporation | Authentication using facial image comparison |
CN110214320B (zh) * | 2017-08-09 | 2022-07-15 | Jumio Corporation | Authentication using facial image comparison |
WO2019032722A1 (en) * | 2017-08-09 | 2019-02-14 | Jumio Corporation | AUTHENTICATION USING A FACIAL IMAGE COMPARISON |
US11783017B2 (en) | 2017-08-09 | 2023-10-10 | Jumio Corporation | Authentication using facial image comparison |
US10606993B2 (en) | 2017-08-09 | 2020-03-31 | Jumio Corporation | Authentication using facial image comparison |
US11393258B2 (en) | 2017-09-09 | 2022-07-19 | Apple Inc. | Implementation of biometric authentication |
US11386189B2 (en) | 2017-09-09 | 2022-07-12 | Apple Inc. | Implementation of biometric authentication |
US11765163B2 (en) | 2017-09-09 | 2023-09-19 | Apple Inc. | Implementation of biometric authentication |
JP2019091318A (ja) * | 2017-11-15 | 2019-06-13 | Fuji Xerox Co Ltd | Information processing apparatus and program |
CN109408676A (zh) * | 2018-01-25 | 2019-03-01 | Vivo Mobile Communication Co., Ltd. | Method for displaying user information and terminal device |
US11682182B2 (en) | 2018-05-07 | 2023-06-20 | Apple Inc. | Avatar creation user interface |
US11380077B2 (en) | 2018-05-07 | 2022-07-05 | Apple Inc. | Avatar creation user interface |
US11170085B2 (en) | 2018-06-03 | 2021-11-09 | Apple Inc. | Implementation of biometric authentication |
US11928200B2 (en) | 2018-06-03 | 2024-03-12 | Apple Inc. | Implementation of biometric authentication |
US11619991B2 (en) | 2018-09-28 | 2023-04-04 | Apple Inc. | Device control using gaze information |
US11100349B2 (en) | 2018-09-28 | 2021-08-24 | Apple Inc. | Audio assisted enrollment |
US11809784B2 (en) | 2018-09-28 | 2023-11-07 | Apple Inc. | Audio assisted enrollment |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
JP2022091805A (ja) * | 2019-03-04 | 2022-06-21 | Panasonic Intellectual Property Management Co., Ltd. | Face authentication registration device and face authentication registration method |
WO2023157720A1 (ja) * | 2022-02-17 | 2023-08-24 | Denso Corp | Vehicle face registration control device and vehicle face registration control method |
US12033296B2 (en) | 2023-04-24 | 2024-07-09 | Apple Inc. | Avatar creation user interface |
Also Published As
Publication number | Publication date |
---|---|
US20240087363A1 (en) | 2024-03-14 |
JP2020170569A (ja) | 2020-10-15 |
US10706266B2 (en) | 2020-07-07 |
US11501567B2 (en) | 2022-11-15 |
US20180247112A1 (en) | 2018-08-30 |
JP6451861B2 (ja) | 2019-01-16 |
US20190180088A1 (en) | 2019-06-13 |
US20200285843A1 (en) | 2020-09-10 |
JP7420183B2 (ja) | 2024-01-23 |
US11861939B2 (en) | 2024-01-02 |
JP2022172325A (ja) | 2022-11-15 |
JP2019040642A (ja) | 2019-03-14 |
JPWO2017043314A1 (ja) | 2018-01-18 |
US10509950B2 (en) | 2019-12-17 |
US20230023000A1 (en) | 2023-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6451861B2 (ja) | Face authentication device, face authentication method, and program | |
US10205883B2 (en) | Display control method, terminal device, and storage medium | |
US11550890B2 (en) | Biometric authentication device, method and recording medium | |
JP6483485B2 (ja) | Person authentication method | |
US9959454B2 (en) | Face recognition device, face recognition method, and computer-readable recording medium | |
US20190392654A1 (en) | Inspection assistance device, inspection assistance method, and recording medium | |
JP6769475B2 (ja) | Information processing system, method for managing authentication targets, and program | |
CN109034029A (zh) | Face recognition method for living body detection, readable storage medium, and electronic device | |
CN109196517B (zh) | Collation device and collation method | |
US20230116514A1 (en) | Authentication control device, authentication system, authentication control method and non-transitory computer readable medium | |
US20230059889A1 (en) | Gate apparatus, control method of gate apparatus, and storage medium | |
JP2015169977A (ja) | Personal authentication device, personal authentication method, personal authentication program, and automatic transaction system | |
WO2018133584A1 (zh) | Identity verification method and apparatus | |
JP7067593B2 (ja) | Information processing system, method for managing authentication targets, and program | |
JP2005275869A (ja) | Personal authentication system | |
CN113705428A (zh) | Living body detection method and apparatus, electronic device, and computer-readable storage medium | |
KR102567798B1 (ko) | Contactless access control apparatus | |
JP7327571B2 (ja) | Information processing system, terminal device, method for managing authentication targets, and program | |
JP7248348B2 (ja) | Face authentication device, face authentication method, and program | |
WO2023144929A1 (ja) | Authentication system, authentication device, authentication method, and program | |
US10970988B2 (en) | Information processing apparatus, information processing system, method, and program | |
US20220392256A1 (en) | Authentication device, registration device, authentication method, registration method, and storage medium | |
JP2018151782A (ja) | Image processing device, image processing system, and control method | |
WO2022215248A1 (ja) | Identity authentication support, identity authentication support method, and program | |
CN112597466A (zh) | User authentication method and *** |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16844177; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2017539101; Country of ref document: JP; Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 15758136; Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 16844177; Country of ref document: EP; Kind code of ref document: A1 |