WO2019200578A1 - Electronic device and identity recognition method thereof - Google Patents

Electronic device and identity recognition method thereof

Info

Publication number
WO2019200578A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
image
light
infrared
preset
Prior art date
Application number
PCT/CN2018/083621
Other languages
English (en)
French (fr)
Inventor
田浦延
Original Assignee
深圳阜时科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳阜时科技有限公司 filed Critical 深圳阜时科技有限公司
Priority to CN201880000318.6A priority Critical patent/CN108496172A/zh
Priority to PCT/CN2018/083621 priority patent/WO2019200578A1/zh
Publication of WO2019200578A1 publication Critical patent/WO2019200578A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation

Definitions

  • the application relates to an electronic device and an identification method thereof.
  • buttons and fingerprint recognition modules cannot be provided on the front of the mobile phone because the space there is limited. Since a fingerprint recognition module cannot be provided, the industry has proposed, in order to realize identification, using the front camera on the front of the mobile phone to capture facial information and perform facial recognition.
  • the embodiments of the present application aim to at least solve one of the technical problems existing in the prior art. To this end, the embodiments of the present application need to provide an electronic device and an identification method thereof.
  • the application provides an identification method of an electronic device, including the following steps:
  • after the 2D image is matched, the stereoscopic face determination is performed, and only when the object in front is determined to be a stereoscopic face is its identity confirmed. This prevents face recognition from being spoofed with a photograph and improves security, while face recognition does not need to be performed on full 3D image information, which simplifies the data processing of face recognition, speeds up recognition, and improves the user experience.
  • since the stereoscopic face is judged only after the 2D image is successfully matched, the stereoscopic face determination is skipped whenever the 2D image matching is unsuccessful, which further speeds up face recognition.
  • the electronic device includes an infrared image sensor and an infrared floodlight on its front side; step S1 includes: when the electronic device needs to perform identification, controlling the infrared floodlight to turn on so that it emits an infrared beam toward the object in front of the electronic device, and controlling the infrared image sensor to collect the light signal reflected by the object and form a 2D image.
  • the electronic device further includes an RGB image sensor on its front side; step S1 further comprises: acquiring the illumination intensity of the current environment when the electronic device needs to perform identification; when the current illumination intensity is within a preset light intensity range, controlling the RGB image sensor to collect a 2D image of the object in front of the electronic device; otherwise controlling the infrared floodlight to turn on and controlling the infrared image sensor to collect the 2D image of the object in front of the electronic device.
  • the electronic device includes an RGB image sensor and a fill light device on its front side; step S1 includes: acquiring the ambient light intensity around the electronic device when the electronic device needs to perform identification; if the ambient light intensity is within a preset second light intensity range, controlling the RGB image sensor to collect a 2D image of the object in front of the electronic device; otherwise controlling the fill light device to turn on and then controlling the RGB image sensor to collect the 2D image of the object in front of the electronic device.
  • the preset light intensity range includes an upper limit value and a lower limit value; the upper limit value is the critical light intensity for an object in front of the electronic device in a backlight environment, and the lower limit value is the critical light intensity for an object in front of the electronic device in a low-light environment.
  • the electronic device includes an infrared image sensor and an infrared projection device on its front side; obtaining the surface contour information of the object in front of the electronic device in step S3 includes: controlling the infrared projection device of the electronic device to project a structured light beam onto the object in front of the electronic device, controlling the infrared image sensor to collect the light signal reflected by the object, and forming a 3D contour image of the front object according to the light signal collected by the infrared image sensor.
  • the structured light beam projected by the infrared projection device forms a pattern, and the pattern includes one or more of a dot matrix, a stripe pattern, a mesh pattern, and a speckle pattern.
  • the electronic device includes a light emitter and a light receiver located on its front side, and obtaining the surface contour information of the object in front of the electronic device in step S3 includes: controlling the light emitter of the electronic device to emit a preset light pulse signal toward the object in front of the electronic device, controlling the light receiver to receive the light signal reflected by the object, obtaining the distance between the surface of the object and the light receiver from the time difference or phase difference between the emitted and received light pulse signals, and forming a 3D contour image of the front object from the obtained distances.
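  • as a hedged illustration of the time-of-flight relation just described (the application itself gives no formulas or code), the distance can be recovered from the round-trip delay or the phase shift roughly as follows; all names, units, and the example value are assumptions:

```python
import math

# Minimal sketch of the time-of-flight relation described above; the function
# names and units are illustrative, not part of the application.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_time_difference(delta_t_s: float) -> float:
    """Distance (m) to the reflecting surface from the round-trip delay of a light pulse."""
    # The pulse travels to the object and back, so the one-way distance is half the path.
    return SPEED_OF_LIGHT_M_S * delta_t_s / 2.0

def distance_from_phase_difference(phase_rad: float, modulation_hz: float) -> float:
    """Distance (m) from the phase shift of a continuously modulated light signal."""
    # One full 2*pi phase cycle corresponds to one modulation wavelength of round trip.
    return SPEED_OF_LIGHT_M_S * phase_rad / (4.0 * math.pi * modulation_hz)

# Example: a round-trip delay of about 3.34 ns corresponds to roughly 0.5 m.
print(distance_from_time_difference(3.34e-9))
```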
  • before step S1, the method further comprises: determining the current working state of the electronic device, and determining whether identity recognition is required according to the current working state of the electronic device.
  • the current working state of the electronic device includes a screen-off state, a bright-screen locked state, and a bright-screen unlocked state; determining whether identification needs to be performed according to the current working state of the electronic device includes:
  • the screen wake-up operation includes one or more of: picking up the electronic device, touching the display screen of the electronic device, approaching within a preset range of the front of the electronic device, and pressing a function key of the electronic device.
  • step S4 includes: matching the obtained surface contour information with the pre-stored contour template to determine whether the object in front of the electronic device is a stereoscopic face.
  • the pre-stored contour template includes depth reference information of preset facial features of the object; step S4 further includes: extracting the depth information of the preset facial features from the obtained surface contour information and comparing it with the corresponding depth reference information to determine whether the object in front of the electronic device is a stereoscopic face.
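  • a minimal sketch of such a comparison, assuming per-feature depth values and a tolerance that the application does not specify, might look like this:

```python
# Illustrative sketch only: comparing the depth of preset facial features
# extracted from the surface contour information against the depth reference
# information of a pre-stored contour template. Feature names, units and the
# tolerance are assumptions, not values from the application.

def matches_contour_template(extracted_depth_mm: dict, reference_depth_mm: dict,
                             tolerance_mm: float = 5.0) -> bool:
    """True if every preset feature's depth is close to its reference value."""
    for feature, reference in reference_depth_mm.items():
        measured = extracted_depth_mm.get(feature)
        if measured is None:
            return False  # a flat photo yields no usable depth for the feature
        if abs(measured - reference) > tolerance_mm:
            return False
    return True

template = {"nose_tip": 18.0, "eye_socket": -4.0, "chin": 10.0}  # illustrative values
sample = {"nose_tip": 17.2, "eye_socket": -3.5, "chin": 9.1}
print(matches_contour_template(sample, template))  # True
```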
  • the embodiment of the present application provides an electronic device, including a processor as well as an image capturing device and a contour information acquiring device located on the front side of the electronic device.
  • the processor is configured to: when the electronic device needs to perform identity recognition, control the image capturing device to obtain a 2D image of the object in front of the electronic device and match the obtained 2D image with a preset 2D image template; after the obtained 2D image is successfully matched with the preset 2D image template, control the contour information acquiring device to obtain the surface contour information of the object in front of the electronic device and determine, according to the obtained surface contour information, whether the object in front of the electronic device is a stereoscopic face; and when the object in front of the electronic device is a stereoscopic face, determine that the identity of the front object is legal.
  • after the 2D image is matched, the stereoscopic face determination is performed, and only when the object in front is determined to be a stereoscopic face is its identity confirmed. This prevents face recognition from being spoofed with a photograph and improves security, while face recognition does not need to be performed on full 3D image information, which simplifies the data processing of face recognition, speeds up recognition, and improves the user experience.
  • since the stereoscopic face is judged only after the 2D image is successfully matched, the stereoscopic face determination is skipped whenever the 2D image matching is unsuccessful, which further speeds up face recognition.
  • the image capture device includes an infrared image sensor and an infrared floodlight; the processor is further configured to: when the electronic device needs to be identified, control the infrared floodlight to turn on so that it emits an infrared light beam toward the object in front of the electronic device, and control the infrared image sensor to collect the light signal reflected by the object and form a 2D image.
  • the front surface of the electronic device is further provided with an ambient light sensor for collecting ambient light intensity around the electronic device;
  • the image capture device further includes an RGB image sensor;
  • the processor is further configured to: when the electronic device needs to be identified, read the current ambient light intensity from the ambient light sensor; when the current ambient light intensity is outside the preset light intensity range, control the infrared floodlight to turn on and control the infrared image sensor to collect a 2D image of the object in front of the electronic device; otherwise control the RGB image sensor to collect a 2D image of the object in front of the electronic device.
  • the front surface of the electronic device is further provided with an ambient light sensor for collecting ambient light intensity around the electronic device;
  • the image capturing device comprises an RGB image sensor and a fill light device;
  • the processor is further configured to: acquire the ambient light intensity around the electronic device when the electronic device needs to be identified; if the ambient light intensity is within the preset light intensity range, control the RGB image sensor to collect a 2D image of the object in front of the electronic device; otherwise control the fill light device to turn on and control the RGB image sensor to collect a 2D image of the object in front of the electronic device.
  • the fill light device comprises a soft light located on the front side of the electronic device.
  • one or more of the brightness, color temperature, and color of the fill light device are further adjusted when the fill light device on the front of the electronic device is controlled to turn on.
  • the preset light intensity range includes an upper limit value and a lower limit value; the upper limit value is the critical light intensity for an object in front of the electronic device in a backlight environment, and the lower limit value is the critical light intensity for an object in front of the electronic device in a low-light environment.
  • the image capture device includes an infrared image sensor and an infrared projection device; the processor is further configured to: control the infrared projection device to project a structured light beam onto the object in front of the electronic device, and control the infrared image sensor to collect the light signal reflected by the object and form a 3D contour image of the object.
  • the structured light beam projected by the infrared projection device forms a pattern, and the pattern includes one or more of a dot matrix, a stripe pattern, a mesh pattern, and a speckle pattern.
  • the image capture device includes a light emitter, a light receiver, and a signal processing device; the processor is further configured to: control the light emitter to emit a preset light pulse signal toward the object in front of the electronic device, and control the light receiver to receive the light signal reflected by the object; the signal processing device calculates the time difference or phase difference between the emitted light pulse and the received reflected light to obtain the distance between the surface of the object and the light receiver, and forms a 3D contour image of the front object from the obtained distances.
  • the processor is configured to determine a current working state of the electronic device, and determine whether identification is required according to a current working state of the electronic device.
  • the current working state of the electronic device includes a screen-off state, a bright-screen locked state, and a bright-screen unlocked state; the processor is further configured to:
  • the screen wake-up operation includes one or more of: picking up the electronic device, touching the display screen of the electronic device, approaching within a preset range of the front of the electronic device, and pressing a function key of the electronic device.
  • the processor is further configured to: match the obtained surface contour information with the pre-stored contour template to determine whether the object in front of the electronic device is a stereoscopic face.
  • the pre-stored contour template includes depth reference information of preset facial features of the object; the processor is further configured to: extract the depth information of the preset facial features from the obtained surface contour information, compare it with the corresponding depth reference information, and determine whether the object in front of the electronic device is a stereoscopic face.
  • FIG. 1 is a schematic flowchart of an identity recognition method of an electronic device according to a first embodiment of the present application
  • FIG. 2 is a schematic diagram showing a refinement of an embodiment of obtaining a 2D image of an object in front of an electronic device in FIG. 1;
  • FIG. 3 is a schematic diagram showing a refinement of another embodiment of obtaining a 2D image of an object in front of an electronic device in FIG. 1;
  • FIG. 4 is a schematic diagram showing a refinement of another embodiment of obtaining a 2D image of an object in front of an electronic device in FIG. 1;
  • FIG. 5 is a schematic flow chart showing an embodiment of obtaining surface contour information of an object in front of an electronic device in FIG. 1;
  • FIG. 6 is a schematic flow chart showing another embodiment of obtaining surface contour information of an object in front of an electronic device in FIG. 1;
  • FIG. 7 is a schematic flowchart of an identity recognition method of an electronic device according to a second embodiment of the present application.
  • FIG. 8 is a schematic diagram of functional modules of a face recognition module according to an embodiment of the present application.
  • FIG. 9 is a schematic front structural view of an electronic device to which a facial recognition module according to an embodiment of the present application is applied;
  • FIG. 10 is a schematic diagram of functional modules of an embodiment of an image capture device in the face recognition module of FIG. 8;
  • FIG. 11 is a schematic diagram of functional modules of another embodiment of an image capture device in the face recognition module of FIG. 8;
  • FIG. 12 is a schematic diagram of functional blocks of an embodiment of a contour information acquiring apparatus in the face recognition module of FIG. 8;
  • FIG. 13 is a schematic diagram of functional modules of another embodiment of a contour information acquiring apparatus in the face recognition module of FIG. 8;
  • FIG. 14 is a schematic diagram of functional modules when an electronic device according to an embodiment of the present disclosure is a mobile terminal.
  • the terms “first” and “second” are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as “first” or “second” may explicitly or implicitly include one or more of the described features. In the description of the present application, “a plurality” means two or more unless specifically defined otherwise.
  • “contact” or “touch” includes direct or indirect contact.
  • in the description of the present application, it should be noted that, unless otherwise expressly specified and limited, the terms “installation”, “connection”, and “connected” are to be understood broadly: the connection may be fixed, detachable, or integral; it may be a mechanical connection, an electrical connection, or mutual communication; and it may be a direct connection, an indirect connection through an intermediate medium, an internal communication between two elements, or an interaction between two elements.
  • the specific meanings of the above terms in the present application can be understood on a case-by-case basis.
  • the identification method refers to collecting biometric information of an object and determining whether the identity of the object is legal according to the collected biometric information. Taking facial recognition as an example, the facial information of the object is collected by a camera and matched with a registered facial template: if the matching succeeds, the identity of the object is determined to be legal; if the matching fails, the identity of the object is determined to be illegal.
  • the objects herein include, for example, human bodies or other living organisms.
  • the face information includes, for example, 2D image information or 3D image information.
  • when facial recognition is performed based on 2D image information, it can be realized with an existing camera and no additional imaging device is required. However, it is easy for another person to confirm the identity using a photograph of the object's face, so the security is not high.
  • when face recognition is performed based on 3D image information, the matching process of the 3D image information is very complicated and time-consuming; although the security is improved, the recognition speed is slow and the user experience is poor.
  • FIG. 1 is a schematic flowchart of an identity recognition method of an electronic device according to a first embodiment of the present application.
  • the method for identifying an electronic device includes the following steps:
  • the identification method operates on an electronic device.
  • the electronic device is, for example, but not limited to, a suitable type of electronic product such as a consumer electronic product, a home-based electronic product, a vehicle-mounted electronic product, or a financial terminal product.
  • consumer electronic products include, for example, mobile phones, tablets, notebook computers, desktop monitors, and all-in-one computers.
  • Home-based electronic products such as smart door locks, TVs, refrigerators, wearable devices, etc.
  • Vehicle-mounted electronic products such as car navigation systems, car DVDs, etc.
  • Financial terminal products such as ATM machines, terminals for self-service business, etc.
  • a front side of the electronic device is provided with a corresponding image capturing device, such as an RGB image sensor, an infrared image sensor, etc., to obtain a 2D image of the object in front of the electronic device when the electronic device needs to be identified.
  • before use, the electronic device registers the facial information of the user, forms a facial template, and stores it for matching against the 2D image during face recognition.
  • image matching uses, for example, facial feature extraction: one or several facial features, such as the eyes, nose, eyebrows, lips, or jaw, are first extracted from the 2D image, and each extracted feature is then compared with the corresponding facial feature of the 2D image template. If the comparisons are consistent, the matching succeeds; otherwise the matching fails.
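  • a minimal sketch of such feature-by-feature matching, with a placeholder similarity measure and threshold that the application does not specify, might look like this:

```python
# A minimal sketch (not the application's algorithm) of feature-by-feature 2D
# matching: each extracted facial feature descriptor must be sufficiently
# similar to the corresponding descriptor in the registered template.

FEATURES = ("eyes", "nose", "eyebrows", "lips", "jaw")

def similarity(a, b) -> float:
    """Placeholder similarity; a real system would compare feature vectors."""
    return 1.0 if a == b else 0.0

def match_2d_image(extracted: dict, template: dict, threshold: float = 0.8) -> bool:
    for name in FEATURES:
        if similarity(extracted.get(name), template.get(name)) < threshold:
            return False  # one inconsistent feature fails the whole match
    return True
```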
  • the front side of the electronic device is provided with a corresponding contour information collecting device for obtaining surface contour information of the object in front of the electronic device.
  • the contour information collecting device uses, for example, one or more of structured light technology, TOF (Time of Flight) technology, and binocular stereo imaging technology. Since the surface contour information only needs to represent the facial contour of the object, for example the coordinates of each surface point in three-dimensional space, it contains less information than a full 3D image of the object and is also simpler to use when judging whether the face is stereoscopic.
  • a corresponding 3D attribute, such as the depth information of a facial feature, is extracted from the obtained surface contour information of the front object to determine whether the face of the object in front of the electronic device is three-dimensional, so that face recognition cannot be spoofed with a photograph.
  • after the 2D image is matched, the stereoscopic face determination is performed, and only when the object in front is determined to be a stereoscopic face is its identity confirmed. This prevents face recognition from being spoofed with a photograph and improves security, while face recognition does not need to be performed on full 3D image information, which simplifies the data processing of face recognition, speeds up recognition, and improves the user experience.
  • since the stereoscopic face is judged only after the 2D image is successfully matched, the stereoscopic face determination is skipped whenever the 2D image matching is unsuccessful, which further speeds up face recognition.
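  • read together, the above describes a two-stage flow; the following sketch is one possible reading of steps S1 to S4, with placeholder device calls that are not part of the application:

```python
# Hedged sketch of the overall flow read from the description of steps S1 to S4.
# The capture and matching helpers on `device` are placeholders for whatever
# hardware drivers and matching routines an implementation would actually provide.

def identify(device) -> bool:
    image_2d = device.capture_2d_image()              # step S1: obtain the 2D image
    if not device.match_2d_template(image_2d):        # 2D matching fails ...
        return False                                  # ... so the 3D check is skipped entirely
    contour = device.acquire_surface_contour()        # step S3: obtain surface contour info
    if not device.is_stereoscopic_face(contour):      # step S4: reject flat photos
        return False
    return True                                       # identity of the front object is legal
```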
  • FIG. 2 is a detailed flow diagram of an embodiment of obtaining a 2D image of an object in front of an electronic device in FIG. 1.
  • Obtaining the 2D image of the object in front of the electronic device in the above step S1 includes the following steps:
  • the 2D facial image of the object in front of the electronic device is acquired by an infrared sensing device located on the front side of the electronic device. Since infrared sensing does not depend on the intensity of visible light, it can be used not only during the day but also at night. However, if the infrared light in the environment around the electronic device is insufficient, the infrared light reflected by the object will also be insufficient, the image collected by the infrared sensing device will be relatively blurred, and accurate identification cannot be performed.
  • therefore, an infrared floodlight is further disposed on the front surface of the electronic device and is turned on when the electronic device needs to perform identification; the infrared floodlight emits an infrared beam that irradiates the object in front of the electronic device, supplementing the infrared light in the environment around the electronic device.
  • FIG. 3 is a refinement flow diagram of another embodiment of obtaining a 2D image of an object in front of the electronic device in FIG. 1.
  • Obtaining the 2D image of the object in front of the electronic device in the above step S1 includes the following steps:
  • step S13 determining whether the current ambient light intensity is within a preset light intensity range, if yes, proceeding to step S14, otherwise performing step S15;
  • the front side of the electronic device is provided with an RGB image sensor in addition to the infrared image sensor and the infrared floodlight.
  • a mobile terminal typically captures the object in front of the electronic device through an RGB image sensor, for example to achieve a self-photographing effect. Therefore, in the embodiment of the present application, the existing RGB image sensor is combined with the infrared image sensor and the infrared floodlight to collect the 2D image during face recognition, thereby saving cost.
  • the RGB image sensor can acquire a clear facial image under normal lighting conditions, but in a backlight or low-light environment it cannot, which affects the face recognition result: the acquired facial image is blurry, the recognition process takes a long time, and facial recognition may even be impossible, so the user experience is also poor.
  • the backlight environment here refers to the situation in which the object faces away from the light source; the light signal reflected by the object surface is then very weak, and the light collected by the image sensor is almost entirely light emitted by the light source. The low-light environment refers to dark surroundings, in which the light reflected from the surface of the object is also very weak and the image sensor collects very little light. It should be noted that a clear facial image means a facial image from which face recognition can be performed accurately.
  • therefore, an ambient light sensor is disposed on the front surface of the electronic device to collect the light intensity of the surrounding environment when the electronic device needs to perform face recognition, so as to determine whether the image would be captured in a backlight environment or a low-light environment.
  • in a backlight environment, the ambient light sensor on the front side of the electronic device faces the light source, so if the light source is strong the ambient light sensor measures a very high light intensity; in a low-light environment, the ambient light sensor measures a weak light intensity.
  • the illumination intensity of the surrounding environment of the electronic device can also be collected by other sensing devices, such as an RGB image sensor disposed on the front of the electronic device.
  • a threshold range, that is, the preset light intensity range, is therefore set, including an upper limit value and a lower limit value, where the upper limit value is the critical light intensity for an object in a backlight environment and the lower limit value is the critical light intensity for an object in a low-light environment. If the ambient light intensity is within the preset light intensity range, that is, the current ambient light intensity is greater than or equal to the lower limit value and less than or equal to the upper limit value, the RGB image sensor can acquire a clear facial image in the current environment.
  • if the ambient light intensity is outside the preset light intensity range, that is, the current ambient light intensity is greater than the upper limit value or less than the lower limit value, the RGB image sensor cannot acquire a clear facial image in the current environment and face recognition would fail.
  • therefore, when the current ambient light intensity is within the preset light intensity range, the RGB image sensor is controlled to collect a 2D image of the object in front of the electronic device; when the current ambient light intensity is outside the preset range, the infrared floodlight is turned on and the infrared image sensor is controlled to capture the 2D image of the object in front of the electronic device. In this way, face recognition is realized in different environments, and frequent use of the infrared floodlight, which would shorten its service life, is avoided.
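  • purely as an illustration of this selection logic (the limits and device calls below are assumptions, not part of the application):

```python
# Illustrative only: selecting the capture path from the measured ambient light
# intensity, as described above. The limits and device calls are assumptions.

def capture_2d_image(device, lower_limit: float, upper_limit: float):
    """Use the RGB sensor within the preset range, otherwise fall back to infrared."""
    intensity = device.read_ambient_light()
    if lower_limit <= intensity <= upper_limit:
        return device.rgb_sensor.capture()        # normal lighting
    device.infrared_floodlight.turn_on()          # backlight or low-light environment
    try:
        return device.infrared_sensor.capture()
    finally:
        device.infrared_floodlight.turn_off()     # avoid keeping the floodlight on
```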
  • FIG. 4 is a refinement flow diagram of still another embodiment of obtaining a 2D image of an object in front of the electronic device in FIG. 1.
  • Obtaining the 2D image of the object in front of the electronic device in the above step S1 includes the following steps:
  • step S17 determining whether the current ambient light intensity is within a preset light intensity range; if yes, proceeding to step S19, otherwise performing step S18;
  • step S18: the fill light device is controlled to turn on, and step S19 is performed;
  • an RGB image sensor and a fill light device are provided on the front surface of the electronic device.
  • as described above, the RGB image sensor cannot acquire a clear 2D image in a backlight or low-light environment. Therefore, in the embodiment of the present application, the electronic device is supplemented with light by a fill light device in a backlight or low-light environment.
  • the electronic device is controlled to perform front fill light so that the light intensity around the electronic device falls within the preset light intensity range, and the RGB image sensor then collects a clear 2D image of the object in front of the electronic device.
  • the fill light device on the front surface of the electronic device may be the display screen of the electronic device or a fill light source located at the top of the electronic device; the fill light source is an additional component dedicated to fill light, for example a soft light.
  • when the display screen is used for fill light, an existing structure is reused, which saves the cost of an additional dedicated light source.
  • when the dedicated fill light source is used for fill light, although the cost is increased, it does not affect the normal operation of other components (such as the display screen) and can achieve a better fill light effect.
  • one or more parameters of the brightness, color temperature, or color of the fill light device are also adjusted while the fill light device on the front side of the electronic device is controlled to turn on.
  • the fill light device comprises, for example, a plurality of LEDs or other light-emitting elements.
  • the brightness, color and color temperature of the light-emitting element can be controlled as needed.
  • the specific values of the foregoing parameters are not limited, and can be flexibly selected according to actual use conditions.
  • in step S19, when the current ambient light intensity is less than or equal to the lower limit value of the preset light intensity range, the display screen of the electronic device is adjusted to its maximum brightness to perform front fill light.
  • alternatively, the display screen is increased gradually to the maximum brightness to perform front fill light; adjusting the brightness step by step avoids the discomfort that a sudden increase in brightness would cause the user.
  • further, in step S19, if the sum of the current ambient light intensity and the maximum brightness of the display screen is still less than the lower limit value of the preset light intensity range, the fill light source at the top of the front of the electronic device is controlled to turn on, so that the fill light source and the display screen perform fill light together.
  • specifically, the display screen is first adjusted to the maximum brightness and the current ambient light intensity is collected again. If the current ambient light intensity is still less than the lower limit value of the preset light intensity range, the fill light of the display alone cannot meet the requirement, and the fill light source at the top of the front of the electronic device is turned on. If the current ambient light intensity is greater than or equal to the lower limit value of the preset light intensity range, the brightness of the display meets the fill light requirement, so fill light is performed by the display screen alone.
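  • a short sketch of this staged logic, with placeholder device calls that are not defined by the application:

```python
# Sketch of the staged front fill light described above (device calls are
# illustrative): raise the display to maximum brightness first, re-measure,
# and only turn on the dedicated fill light source if still below the limit.

def front_fill_light(device, lower_limit: float) -> None:
    if device.read_ambient_light() >= lower_limit:
        return                                            # no fill light needed
    device.display.set_brightness(device.display.max_brightness)
    if device.read_ambient_light() >= lower_limit:
        return                                            # display brightness suffices
    device.fill_light_source.turn_on()                    # display and soft light together
```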
  • FIG. 5 is a refinement flow diagram of an embodiment of obtaining surface contour information of an object in front of an electronic device in FIG. 1.
  • Obtaining the surface contour information of the object in front of the electronic device in the above step S3 includes:
  • the infrared projection device of the electronic device is controlled to project a structured light beam onto the object in front of the electronic device;
  • S31: the light signal reflected by the object is acquired by the infrared image sensor of the electronic device, and a 3D contour image of the front object is formed.
  • the infrared projection device and the infrared image sensor are disposed on the front side of the electronic device, and the infrared projection device and the infrared image sensor are separately disposed.
  • the infrared projection device and the infrared image sensor can also be integrated to facilitate installation and save installation space.
  • the infrared projection device is configured to project an infrared beam onto the object in front of the electronic device; the beam is reflected by the object, the reflected light signal is sensed by the infrared image sensor of the electronic device, and a 3D contour image of the front object, that is, the surface contour information, is formed from the sensing signal of the infrared image sensor.
  • the infrared projection device includes, for example, a light source, a collimating lens, and a diffractive optical element (DOE): the light source generates an infrared laser beam; the collimating lens collimates the infrared laser beam into approximately parallel light; and the diffractive optical element modulates the collimated infrared laser beam to form the corresponding speckle pattern.
  • the pattern includes, for example, one or more of a dot matrix, a stripe pattern, a mesh pattern, and a speckle pattern.
  • other coded patterns can also be used. It should be noted that the beam projected by the infrared projection device is formed by a large number of light spots, and the more light spots there are, the higher the resolution of the infrared image obtained by the infrared image sensor.
  • FIG. 6 is a refinement flow diagram of another embodiment of obtaining surface contour information of an object in front of an electronic device in FIG. 1.
  • Obtaining the surface contour information of the object in front of the electronic device in the above step S3 includes:
  • the light emitter of the electronic device is controlled to emit a preset light pulse signal toward the object in front of the electronic device; the light receiver is controlled to receive the light signal reflected by the object, the distance between the surface of the object and the light receiver is obtained from the time difference and/or phase difference between the emitted and received light pulse signals, and a 3D contour image of the front object is formed from the obtained distances between the surface of the object and the light receiver.
  • the light emitter includes, for example, a light source and a light modulator that modulates the light signal emitted by the light source, so that the light emitter continuously emits the preset light pulse signal toward the object in front of the electronic device. The light receiver includes, for example, a light sensor and a signal processing device that receives the light signal reflected by the object. Since the light pulse emitted by the light emitter is synchronized with the light receiver, for example in emission time and waveform, the signal processing device obtains the distance between the surface of the object and the light receiver from the time difference or phase difference between the emitted and received light pulse signals, and forms a 3D contour image of the front object from those distances.
  • the light pulse signals may be emitted point by point, or light signals for a plurality of points may be emitted simultaneously; in the latter case, the contour image of the entire face of the object can be acquired at one time.
  • step S4 includes: matching the obtained surface contour information with the pre-stored contour template to determine whether the object in front of the electronic device is a stereoscopic face.
  • a contour template of the facial contour information may also be formed and stored in advance.
  • the contour template includes, for example, depth reference information of preset facial features of the object.
  • the depth information of the preset facial features is extracted from the surface contour information and compared with the corresponding depth reference information in the pre-stored contour template, thereby determining whether the object in front of the electronic device is a stereoscopic face.
  • step S4 includes: extracting depth information of the face preset feature from the obtained surface contour information, and determining whether the object in front of the electronic device is a stereo face according to the obtained depth information of the object face preset feature.
  • Object facial preset features such as eyes, nose, lips, jaw, cheekbones, and the like.
  • the depth information of the preset facial features of the object includes, for example, the depth of the eye sockets and the height of the nose, lips, jaw, and cheekbones.
  • whether the object in front of the electronic device is a stereoscopic face can be determined by setting a depth threshold corresponding to each preset facial feature of the object. That is, after the depth information of the preset facial features of the object in front of the electronic device is obtained, it is compared with the preset depth thresholds to determine whether the object in front of the electronic device is a stereoscopic face.
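  • a minimal sketch of such a threshold check, with illustrative feature names and threshold values that the application does not provide:

```python
# Minimal sketch of the threshold variant described above: the depth of each
# preset facial feature must reach a preset depth threshold for the object to
# count as a stereoscopic face. Feature names and thresholds are illustrative.

DEPTH_THRESHOLDS_MM = {"nose": 8.0, "lips": 2.0, "jaw": 4.0, "cheekbone": 3.0}

def passes_depth_thresholds(feature_depths_mm: dict) -> bool:
    for feature, threshold in DEPTH_THRESHOLDS_MM.items():
        if feature_depths_mm.get(feature, 0.0) < threshold:  # a photo yields ~0 depth
            return False
    return True
```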
  • if the object in front of the electronic device is a photo or a picture, the depth information of the preset facial features cannot be extracted from the obtained surface contour information. That is to say, if another person holds up a photo or picture for identification, it is determined from the obtained surface contour information that the object in front of the electronic device is not a stereoscopic face, so the identity cannot be recognized, which improves the security of the electronic device.
  • in the embodiment of the present application, the depth information of the preset facial features is used only to determine whether the object in front of the electronic device has 3D attributes, that is, whether what is in front of the electronic device is a real face rather than a photo or picture. This is simpler than processing full 3D image information as in the prior art, which not only speeds up recognition but also improves the user experience.
  • FIG. 7 is a schematic flowchart diagram of an identity recognition method of an electronic device according to a second embodiment of the present application.
  • in the identity recognition method of the electronic device, before step S1, the method further includes:
  • Step S6 determining a current working state of the electronic device
  • the current working state is divided according to the display state of the display screen and includes, for example, a screen-off state, a bright-screen locked state, and a bright-screen unlocked state.
  • the screen-off state means that the display screen is not lit and is in an extinguished state.
  • the bright-screen locked state means that the display is lit but shows the lock interface; in this state the electronic device has not been unlocked and the main page of the electronic device cannot be entered.
  • the bright-screen unlocked state means that the display is lit and shows the interface corresponding to the specific operation; in this state the electronic device has been unlocked and the user can use it normally.
  • step S7 it is determined whether identification is required according to the current working state of the electronic device.
  • when the electronic device is in the screen-off state, the user may pick it up and use it at any time, so in order to respond quickly to the user's operation, some sensors on the electronic device remain in operation to monitor whether a screen wake-up operation occurs, for example touch sensors, gravity sensors, distance sensors, and the like.
  • the screen wake-up operation includes one or more of: picking up the electronic device, touching the display of the electronic device, an object approaching within a preset range of the front of the device, and pressing a function button of the electronic device. It can be understood that the screen wake-up operation of the electronic device can be flexibly set according to the needs of the user.
  • when the electronic device is in the screen-off state, the touch sensing module of the electronic device is controlled to perform touch detection so as to detect and respond to touch actions on the display screen (for example, an ordinary touch operation, a pressing operation, or a sliding operation).
  • when the electronic device is in the screen-off state, the touch sensing module performs touch detection at a first scanning frequency (a lower power consumption state); when the electronic device is in the bright-screen locked state or the bright-screen unlocked state, the touch sensing module performs touch detection at a second scanning frequency (a higher or normal power consumption state), thereby reducing the power consumption of the electronic device in the screen-off state. The first scanning frequency is less than the second scanning frequency. Of course, the touch sensing module may also always perform touch detection in the normal power consumption state.
  • the user may use the electronic device at any time in this case, so it is determined that identification is required, and the process proceeds to step S1.
  • when the electronic device is in the bright-screen unlocked state, it is further determined whether an application has currently initiated an authentication request to the operating system, for example a payment application initiating an authentication request for mobile payment. If there is an authentication request, it is determined that identification is required and the process proceeds to step S1; otherwise the electronic device continues normal operation.
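  • a hedged sketch of this trigger logic (the state names and checks below are placeholders, not part of the application):

```python
# Hedged sketch of the identification trigger described above. The state names
# and the wake-up / authentication checks are placeholders for whatever the
# operating system actually exposes.

SCREEN_OFF, SCREEN_ON_LOCKED, SCREEN_ON_UNLOCKED = "off", "locked", "unlocked"

def needs_identification(state: str, wake_up_detected: bool,
                         auth_request_pending: bool) -> bool:
    if state == SCREEN_OFF:
        return wake_up_detected          # identify after a screen wake-up operation
    if state == SCREEN_ON_LOCKED:
        return True                      # the user may start using the device at any time
    if state == SCREEN_ON_UNLOCKED:
        return auth_request_pending      # e.g. a payment application requests authentication
    return False
```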
  • the method further comprises: after the identity is successfully determined, controlling the electronic device to perform a corresponding operation. For example, controlling electronic device unlocking, performing payment transaction operations, and the like.
  • FIG. 8 is a schematic diagram of functional modules of a facial recognition module according to an embodiment of the present application
  • FIG. 9 is a schematic front structural view of an electronic device to which the facial recognition module according to an embodiment of the present application is applied.
  • the face recognition module 100 is disposed in a non-display area of the electronic device, such as a top end of the front side of the electronic device.
  • the facial recognition module 100 can also be disposed at other locations of the electronic device.
  • components in the facial recognition module 100 are integrated into the display device of the electronic device, so that the facial recognition module is located in the electronic device.
  • the face recognition module 100 when the facial recognition module 100 is disposed in the non-display area of the electronic device, it may also be located at the bottom end or the side end of the front surface of the electronic device.
  • the face recognition module 100 includes a substrate 40, an image acquisition device 10, a contour information acquisition device 20, and a face recognition processor 30.
  • the image capture device 10 is disposed, for example, on the substrate 40.
  • the outline information acquiring device 20 is provided, for example, on the substrate 40.
  • the face recognition processor 30 is, for example, a processing chip.
  • the processing chip is electrically connected to the image capturing device 10 and the contour information acquiring device 20 on the substrate 40, for example through an FPC flexible circuit board; alternatively, the processing chip is disposed on the substrate 40 and electrically connected to the image capturing device 10 and the contour information acquiring device 20 on the substrate 40.
  • the substrate 40 is, for example, a printed circuit board, a silicon substrate, a metal substrate, or the like.
  • the image capture device 10, the contour information acquisition device 20, and the face recognition processor 30 may be separately provided, for example, and the substrate 40 may be omitted.
  • the image acquisition device 10 is for obtaining a 2D image of an object in front of the electronic device
  • the contour information acquisition device 20 is configured to obtain surface contour information of the object in front of the electronic device.
  • the facial recognition processor 30 is configured to: when the electronic device needs to perform identity recognition, control the image capturing device 10 to obtain a 2D image of the object in front of the electronic device and match the obtained 2D image with a preset 2D image template; after the 2D image is successfully matched with the preset 2D image template, control the contour information acquiring device 20 to obtain the surface contour information of the object in front of the electronic device and determine, according to the obtained surface contour information, whether the object in front of the electronic device is a three-dimensional face; and when the object in front of the electronic device is a three-dimensional face, determine that the identity of the front object is legal.
  • after the 2D image is matched, the stereoscopic face determination is performed, and only when the object in front is determined to be a stereoscopic face is its identity confirmed. This prevents face recognition from being spoofed with a photograph and improves security, while face recognition does not need to be performed on full 3D image information, which simplifies the data processing of face recognition, speeds up recognition, and improves the user experience.
  • since the stereoscopic face is judged only after the 2D image is successfully matched, the stereoscopic face determination is skipped whenever the 2D image matching is unsuccessful, which further speeds up face recognition.
  • FIG. 10 is a functional block diagram of an embodiment of an image capture device in the face recognition module of FIG. 8.
  • the image pickup device 10 includes, for example, an infrared image sensor 11 and an infrared floodlight 12.
  • the 2D image of the object in front of the electronic device is acquired by the infrared image sensor 11 located on the front side of the electronic device. Since infrared sensing does not depend on the intensity of visible light, it can be used not only during the day but also at night.
  • the infrared floodlight 12 is also disposed on the front surface of the electronic device and is turned on when the electronic device needs to perform identification; the infrared floodlight 12 emits an infrared light beam that irradiates the object in front of the electronic device, supplementing the infrared light in the environment around the electronic device.
  • the facial recognition processor 30 is further configured to: when the electronic device needs to perform identification, control the infrared floodlight 12 to turn on so that the infrared floodlight 12 emits an infrared light beam toward the object in front of the electronic device, and control the infrared image sensor 11 to collect the light signal reflected by the object and form a 2D image according to the sensing signal of the infrared image sensor 11.
  • an ambient light sensor 50 is further disposed on the front surface of the electronic device for collecting ambient light intensity around the electronic device.
  • the image pickup device 10 includes an RGB image sensor 13 in addition to the infrared image sensor 11 and the infrared floodlight 12 located on the front side of the electronic device.
  • the mobile terminal typically captures the object in front of the electronic device through the RGB image sensor 13, for example to achieve a self-photographing effect. Therefore, in the embodiment of the present application, the existing RGB image sensor 13 is used in combination with the infrared image sensor 11 and the infrared floodlight 12 to collect 2D images during face recognition, thereby saving cost.
  • the ambient light sensor 50 can also be disposed on the substrate 40.
  • the RGB image sensor 13 can acquire a clear facial image under normal lighting conditions, but cannot do so in a backlight or low-light environment, which affects the face recognition result: the collected facial image is blurry, the recognition process takes a long time, and facial recognition may even be impossible, so the user experience is also poor.
  • backlighting here refers to the situation in which the object faces away from the light source; the light signal reflected from the object surface is then very weak and the light collected by the image sensor is almost entirely light emitted by the light source. Low light refers to dark surroundings, in which the light reflected from the surface of the object is also very weak and the image sensor collects very little light.
  • a clear face image refers to a face image that can accurately realize face recognition.
  • therefore, the embodiment of the present application uses the ambient light sensor 50 on the front side of the electronic device to collect the illumination intensity of the environment surrounding the electronic device.
  • a critical range, that is, the preset light intensity range, is set, including an upper limit value and a lower limit value, where the upper limit value is the critical value in a backlight environment and the lower limit value is the critical value in a low-light environment. If the ambient light intensity is within the preset light intensity range, that is, the current ambient light intensity is greater than or equal to the lower limit value and less than or equal to the upper limit value, the RGB image sensor 13 can acquire a clear facial image in the current environment and realize facial recognition; if the ambient light intensity is outside the preset light intensity range, that is, the current ambient light intensity is greater than the upper limit value or less than the lower limit value, the RGB image sensor 13 cannot acquire a clear facial image in the current environment and facial recognition fails.
  • the facial recognition processor 30 is further configured to: when the electronic device needs to perform identity recognition, read the current ambient light intensity from the ambient light sensor 50; when the current ambient light intensity is outside the preset light intensity range, control the infrared floodlight 12 to turn on and control the infrared image sensor 11 to collect a 2D image of the object in front of the electronic device; otherwise control the RGB image sensor 13 to collect a 2D image of the object in front of the electronic device.
  • an environmental light sensor 50 is further disposed on the front side of the electronic device for collecting the light intensity of the environment surrounding the electronic device.
  • FIG. 11 is a schematic diagram of functional modules of another embodiment of an image capture device in the face recognition module of FIG. 8.
  • in this embodiment, the image pickup device 10 includes, for example, an RGB image sensor 14 and a fill light device 15. As described above, the RGB image sensor 14 cannot acquire a clear 2D image in a backlight or low-light environment; therefore, in the embodiment of the present application, the electronic device is supplemented with light by the fill light device 15 in a backlight or low-light environment.
  • the facial recognition processor 30 is further configured to: when the electronic device needs to perform identity recognition, acquire the ambient light intensity around the electronic device; if the ambient light intensity is within the preset light intensity range, control the RGB image sensor 14 to collect a 2D image of the object in front of the electronic device; otherwise control the fill light device 15 to turn on and control the RGB image sensor 14 to capture a 2D image of the object in front of the electronic device.
  • the fill light device 15 on the front side of the electronic device may be either the display screen 151 of the electronic device or the fill light source 152 located at the top of the electronic device; the fill light source 152 is an additional component dedicated to fill light, such as a soft light.
  • when the display screen 151 is used, the light can be supplemented with an existing structure, which saves cost.
  • when the fill light is provided by the fill light source 152, although the cost is increased, it does not affect the normal operation of other components (such as the display screen), and a better fill light effect can be achieved.
  • one or more parameters of the brightness, color temperature, or color of the fill light device 15 are also adjusted while the fill light device 15 on the front side of the electronic device is controlled to turn on.
  • the light-filling device 15 includes, for example, a plurality of LEDs or other light-emitting elements.
  • the brightness, color and color temperature of the light-emitting element can be controlled as needed.
  • the optical signal emitted by the light-filling device 15 is softened, so that when the facial image is collected, the user does not feel too glaring and improves. The user experience. It can be understood that the specific values of the foregoing parameters are not limited, and can be flexibly selected according to actual use conditions.
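A hedged configuration sketch of such an adjustment in Python; the `FillLight` driver interface and every parameter value below are assumptions for illustration, not values given in the application.

```python
from dataclasses import dataclass


@dataclass
class FillLightSettings:
    """Illustrative fill-light parameters (assumed values, not from the application)."""
    brightness: float = 0.6             # normalized 0.0 .. 1.0
    color_temperature_k: int = 4000     # warm-neutral white, softer on the eyes
    color_rgb: tuple = (255, 244, 229)  # slightly warm tint


def soften_fill_light(fill_light, settings: FillLightSettings = FillLightSettings()):
    # Lower the brightness and warm the color temperature so the emitted light
    # is less glaring while the facial image is being collected.
    fill_light.set_brightness(settings.brightness)
    fill_light.set_color_temperature(settings.color_temperature_k)
    fill_light.set_color(settings.color_rgb)
```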
  • the facial recognition processor 30 is further configured to: when the current ambient light intensity is less than or equal to the lower limit value of the preset light intensity range, adjust the display screen 151 of the electronic device to maximum brightness to perform front fill light.
  • if the electronic device performs front fill light through the display screen 151, the display screen 151 is gradually increased to maximum brightness when the current ambient light intensity is less than or equal to the lower limit value.
  • the brightness of the display screen 151 can be adjusted step by step to prevent a sudden increase in brightness from causing discomfort to the user.
  • the facial recognition processor 30 is further configured to: if the sum of the current ambient light intensity and the maximum brightness of the display screen 151 is still less than the lower limit value of the preset light intensity range, control the fill light source 152 at the top of the front of the electronic device to turn on, so that the fill light source 152 and the display screen 151 together perform front fill light.
  • since the light-emitting capability of the display screen itself is limited, if the brightness of the display screen 151 cannot meet the fill light requirement, the fill light source 152 at the top of the front of the electronic device is controlled to turn on so that it provides front fill light together with the display screen 151. Therefore, in this embodiment, the display screen 151 is first adjusted to maximum brightness while the current ambient light intensity is collected; if the current ambient light intensity is still less than the lower limit value of the preset light intensity range, the fill light from the display screen 151 cannot meet the requirement, and the fill light source 152 is turned on. If the current ambient light intensity is greater than or equal to the lower limit value, the brightness of the display screen 151 meets the fill light requirement, so fill light can be provided by the display screen 151 alone.
  • alternatively, to keep the control process simple, when the current ambient light intensity is determined to be less than the lower limit value of the preset light intensity range, the display screen 151 may be adjusted to maximum brightness and the fill light source 152 turned on at the same time, so that the display screen 151 and the fill light source 152 together perform front fill light. This is simple and convenient (the staged strategy is sketched below).
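A compact Python sketch of the staged fill-light strategy above, assuming hypothetical `display`, `fill_light_source`, and `ambient_light_sensor` driver objects and an illustrative lower limit value.

```python
def front_fill_light(display, fill_light_source, ambient_light_sensor,
                     lower_limit_lux=50.0, steps=5):
    """Raise the display brightness gradually; enable the dedicated soft light
    only if the display alone still cannot reach the lower limit value."""
    for level in range(1, steps + 1):
        display.set_brightness(level / steps)   # step-by-step increase avoids glare

    if ambient_light_sensor.read_lux() < lower_limit_lux:
        # Display at maximum brightness is still not enough: add the fill light source.
        fill_light_source.turn_on()
```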
  • the contour information acquisition device 20 includes, for example, a depth imaging sensor for acquiring contour information of an object surface.
  • Depth imaging sensors can be divided into active and passive.
  • active sensors mainly emit an energy beam (such as a laser, electromagnetic wave, or ultrasonic wave) toward the target and detect the signal reflected back; examples include structured light technology and TOF technology.
  • passive sensors image using the ambient conditions, for example binocular stereo imaging.
  • an active depth imaging sensor is used to acquire contour information of an object surface in front of the electronic device.
  • FIG. 12 is a functional block diagram of an embodiment of the contour information acquiring device 20 in the face recognition module of FIG. 8.
  • the contour information acquiring device 20 includes an infrared projection device 21 for projecting an infrared light beam onto the surface of an object in front of the electronic device, and an infrared image sensor 22 for acquiring the light signal reflected by the surface of the object from the infrared light beam and forming an infrared image.
  • the infrared projection device 21 and the infrared image sensor 22 are provided separately.
  • the infrared projection device 21 and the infrared image sensor 22 can also be integrated to facilitate installation while saving installation space.
  • the infrared projection device 21 described above includes, for example, a light source, a collimating lens, and an optical diffraction element (DOE).
  • the light source is used to generate an infrared laser beam; the collimating lens collimates the infrared laser beam to form approximately parallel light; the optical diffraction element modulates the collimated infrared laser beam to form a corresponding speckle pattern.
  • the pattern includes, for example, a dot matrix, a stripe pattern, or a combination of both. Of course, other coding patterns can also be included.
  • the infrared light beam projected by the infrared projection device is formed by a very large number of light spots, and the more light spots there are, the higher the resolution of the infrared image obtained by the infrared image sensor 22 (a simplified depth-recovery sketch follows below).
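The application does not give the decoding algorithm; as a hedged illustration only, the following Python sketch uses the common reference-plane triangulation model for speckle-based structured light, where the pixel disparity of a speckle relative to a reference capture is converted to depth. The formula, sign convention, and all numbers below are assumptions.

```python
import numpy as np


def depth_from_speckle_disparity(disparity_px, focal_length_px, baseline_m, ref_depth_m):
    """Reference-plane triangulation: 1/Z = 1/Z_ref + d / (f * B).
    disparity_px is the speckle shift relative to a pattern recorded at ref_depth_m."""
    inv_depth = 1.0 / ref_depth_m + disparity_px / (focal_length_px * baseline_m)
    return 1.0 / inv_depth


# Illustrative values only: 580 px focal length, 5 cm projector-sensor baseline,
# reference plane at 0.8 m, and a small grid of measured disparities (pixels).
disparities = np.array([[0.0, 1.5],
                        [3.0, 4.5]])
depth_map_m = depth_from_speckle_disparity(disparities, focal_length_px=580.0,
                                            baseline_m=0.05, ref_depth_m=0.8)
```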
  • the infrared image sensor 22 is used both to collect the light signal reflected by the surface of the object from the infrared beam, forming the 3D contour information of the object surface, and to acquire a 2D image of the object in front of the electronic device.
  • the infrared image sensor included in the image capture device 10 and the infrared image sensor included in the contour information acquiring device 20 are the same component; of course, alternatively, they may be different components.
  • likewise, the infrared image sensor mentioned in the embodiments of the image capture device 10 described above is the same component, though it may alternatively be a different component; the RGB image sensor mentioned is the same component, though it may also, alternatively, be a different component.
  • FIG. 13 is a functional block diagram of another embodiment of the contour information acquiring device 20 in the face recognition module of FIG. 8.
  • the contour information acquiring device 20 includes a light emitter 23 and a light receiver 24.
  • the light emitter 23 includes, for example, a light source and a light modulation unit that modulates the light signal emitted by the light source, so that the light emitter continuously emits a preset optical pulse signal toward the object in front of the electronic device.
  • the light receiver 24 includes, for example, a light sensing unit and a signal processing device, the light sensing unit being used to receive the optical signal of the preset light pulse reflected back by the object.
  • since the optical pulse signal emitted by the light emitter 23 is synchronized with the light receiver (for example, the emission time and waveform of the optical pulse signal are known to it), the signal processing device obtains, from the time difference or phase difference between the transmitted and received optical pulse signals, the distance between the surface of the object and the light receiver 24, and forms a 3D contour image of the front object according to the obtained distances (a numerical sketch follows below).
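A short Python sketch of the time-of-flight distance computation: distance from the round-trip time difference, or from the phase difference of a continuously modulated signal. The modulation frequency and the example numbers are assumptions.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s


def distance_from_time_difference(delta_t_s):
    """Round-trip time difference between emitted and received pulse -> one-way distance."""
    return SPEED_OF_LIGHT * delta_t_s / 2.0


def distance_from_phase_difference(delta_phi_rad, modulation_freq_hz=20e6):
    """Phase difference of a modulated signal -> distance, unambiguous up to c / (2 * f_mod)."""
    return SPEED_OF_LIGHT * delta_phi_rad / (4.0 * math.pi * modulation_freq_hz)


# Examples: a 4 ns round trip is about 0.6 m; a pi/2 phase shift at 20 MHz is about 1.87 m.
print(distance_from_time_difference(4e-9))
print(distance_from_phase_difference(math.pi / 2))
```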
  • the optical pulse signals may be transmitted point by point, or optical signals for a plurality of points may be transmitted simultaneously.
  • by transmitting optical signals for a plurality of points simultaneously, the contour image of the entire face of the object can be acquired at one time.
  • the facial recognition processor 30 is configured to: match the obtained surface contour information with the pre-stored contour template to determine whether the object in front of the electronic device is a stereoscopic face.
  • when the facial template is registered, a contour template of the facial contour information may also be formed and stored.
  • the contour template includes, for example, depth reference information of a preset feature of the object face.
  • when the surface contour information of the object in front of the electronic device is obtained, the depth information of the facial preset features is extracted from the surface contour information and compared with the depth reference information corresponding to those facial features in the pre-stored contour template, thereby determining whether the object in front of the electronic device is a stereoscopic face (see the sketch below).
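A hedged Python sketch of this template comparison, assuming the depth values of the preset features have already been extracted; the feature names, reference values, and tolerance are illustrative only.

```python
# Hypothetical depth reference information of a registered contour template (millimetres).
CONTOUR_TEMPLATE_MM = {"nose_height": 7.5, "eye_depth": 3.5, "lip_height": 2.5, "chin_height": 5.0}


def matches_contour_template(feature_depths_mm, template=CONTOUR_TEMPLATE_MM, tolerance_mm=2.0):
    """Compare each extracted feature depth with the template's reference depth."""
    for name, reference in template.items():
        depth = feature_depths_mm.get(name)
        if depth is None or abs(depth - reference) > tolerance_mm:
            return False
    return True
```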
  • the facial recognition processor 30 is configured to: extract depth information of the facial preset features from the obtained surface contour information, and determine, according to the obtained depth information of the object's facial preset features, whether the object in front of the electronic device is a three-dimensional face.
  • the above-mentioned facial preset features of the object include, for example, the eyes, nose, lips, chin, cheekbones, and the like.
  • the depth information of the facial preset features of the object includes, for example, the depth of the eyes and the heights of the nose, lips, chin, and cheekbones.
  • it is determined whether the object in front of the electronic device is a stereoscopic face by setting a depth threshold corresponding to the preset feature of the object. That is, after the depth information of the object face preset feature in front of the electronic device is obtained, it is compared with a preset depth threshold to determine whether the object in front of the electronic device is a stereo face.
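The threshold comparison can be sketched in a few lines of Python; the feature names and threshold values are assumptions, and a flat photo, which yields essentially zero relief for every feature, is rejected.

```python
# Hypothetical per-feature depth thresholds (millimetres).
DEPTH_THRESHOLDS_MM = {"nose_height": 6.0, "eye_depth": 3.0, "lip_height": 2.0, "chin_height": 4.0}


def is_stereoscopic_face(feature_depths_mm, thresholds=DEPTH_THRESHOLDS_MM):
    """True only if every preset facial feature shows enough depth relief."""
    for name, threshold in thresholds.items():
        depth = feature_depths_mm.get(name)
        if depth is None or depth < threshold:
            return False
    return True


# A printed photo or a picture on a screen produces near-zero depths and is rejected.
print(is_stereoscopic_face({"nose_height": 0.0, "eye_depth": 0.0,
                            "lip_height": 0.0, "chin_height": 0.0}))  # False
```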
  • since a photo or a picture is a flat image, the depth information of the facial preset features of the object cannot be extracted from the obtained infrared image. That is to say, if another person holds up a photo or a picture for identification, it is determined from the obtained surface contour information that the object in front of the electronic device is not a stereoscopic face, so identity recognition cannot pass, thereby improving the security of the electronic device.
  • the depth information of the facial preset features of the object is used to determine whether the object in front of the electronic device has 3D attributes, for example, whether what is in front of the electronic device is a real face rather than a photo or a picture. Therefore, the embodiment of the present application is simpler than the prior-art judgment based on 3D image information, which not only speeds up recognition but also improves the user experience.
  • the facial recognition processor 30 is further configured to: determine a current working state of the electronic device, and determine whether identification is required according to the current working state of the electronic device.
  • the current working state is divided according to the display state of the display screen and includes, for example, a screen-off state, a bright-screen locked state, and a bright-screen unlocked state.
  • the screen-off state means that the display screen is not lit and is in an off state.
  • the bright-screen locked state means that the display is lit but a lock interface is displayed on the display. In this state, the electronic device is not unlocked and the main page of the electronic device cannot be entered.
  • the bright-screen unlocked state means that the display is lit and the corresponding interface is displayed according to the specific operation. In this state, the electronic device has been unlocked and the user can use it normally.
  • when the electronic device is in the screen-off state, the user may use the electronic device at any time, so in order to respond quickly to the user's operation, some sensors on the electronic device (for example, touch sensors, gravity sensors, distance sensors, and the like) remain in operation to monitor the usage of the electronic device and determine whether there is a screen wake-up operation.
  • the screen wake-up operation includes one or more of picking up the electronic device, touching the display of the electronic device, an object approaching within a preset range of the front of the device, and pressing a function button of the electronic device. It can be understood that the screen wake-up operation of the electronic device can be flexibly set according to the needs of the user.
  • taking the touch sensor as an example, when the electronic device is in the screen-off state, the touch sensing module of the electronic device is controlled to perform touch detection, detecting and responding to touch actions on the display screen (for example, ordinary touch operations, pressing operations, and sliding operations).
  • when the electronic device is in the screen-off state, the touch sensing module performs touch detection at a first scanning frequency (a lower power consumption state); when an object is detected touching the display screen, it is determined that identity recognition is needed.
  • when the electronic device is in the bright-screen locked state or the bright-screen unlocked state, the touch sensing module performs touch detection at a second scanning frequency (a higher or normal power consumption state), thereby reducing the power consumption of the electronic device in the screen-off state, where the first scanning frequency is less than the second scanning frequency. Of course, the touch sensing module can also always perform touch detection in the normal power consumption state (a sketch of this switching follows below).
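A small Python sketch of the scanning-frequency switch, assuming a hypothetical `touch_module` driver; the frequency values are illustrative, and only their ordering (first less than second) comes from the description.

```python
FIRST_SCAN_HZ = 30    # assumed lower-power scanning frequency (screen off)
SECOND_SCAN_HZ = 120  # assumed normal/higher-power scanning frequency


def apply_touch_scan_frequency(touch_module, screen_off):
    """Use the lower scanning frequency only while the screen is off."""
    touch_module.set_scan_frequency(FIRST_SCAN_HZ if screen_off else SECOND_SCAN_HZ)
```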
  • when the electronic device is in the bright-screen locked state, the user may use the electronic device at any time, so it is determined that identification is required.
  • when the electronic device is in the bright-screen unlocked state, it is further determined whether there is currently an authentication request initiated by an application to the operating system, for example, a payment application initiating an authentication request for mobile payment. If there is an authentication request, it is determined that identification is required; otherwise the electronic device performs its normal operations (the state-dependent trigger is sketched below).
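A Python sketch of the state-dependent trigger; the state names follow the description, while the wake-event and authentication-request flags are simplified assumptions.

```python
from enum import Enum, auto


class ScreenState(Enum):
    SCREEN_OFF = auto()   # screen-off state
    ON_LOCKED = auto()    # bright-screen locked state
    ON_UNLOCKED = auto()  # bright-screen unlocked state


def identification_required(state, wake_event_detected=False, auth_request_pending=False):
    """Decide whether facial identity recognition should be started."""
    if state is ScreenState.SCREEN_OFF:
        # Only a screen wake-up operation (pick-up, touch, proximity, button press) triggers it.
        return wake_event_detected
    if state is ScreenState.ON_LOCKED:
        return True
    # Bright-screen unlocked: only when an application has asked the OS to authenticate.
    return auth_request_pending
```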
  • the above identification method is applied, for example, to a full-screen electronic device, and can of course also be applied to other, non-full-screen electronic devices.
  • when identity recognition is required, the facial recognition function of the electronic device is activated, so that facial recognition is performed on the user of the electronic device to determine whether the user's identity is legitimate; once the identity is determined to be legitimate, the electronic device is controlled to perform corresponding operations, such as unlocking or making a mobile payment.
  • the electronic device is a mobile terminal.
  • FIG. 14 is a schematic diagram of functional modules when the electronic device is a mobile terminal according to an embodiment of the present application.
  • the mobile terminal 1 includes a processor 101, a memory 102, a display device 103, and a face recognition device 104.
  • the processor 101 is configured to activate the facial recognition device 104 when the electronic device requires identification.
  • the facial recognition device 104 includes the facial recognition module 100 mentioned in the above embodiments. After the facial recognition device 104 is activated, it obtains a 2D image of the object in front of the electronic device and matches the obtained 2D image with a preset 2D image template. After the obtained 2D image is successfully matched with the preset 2D image template, it obtains the surface contour information of the object in front of the electronic device and determines, according to the obtained surface contour information of the front object, whether the object in front of the electronic device is a stereoscopic face, thereby determining whether the identity of the object in front of the electronic device is legitimate (the two-stage flow is sketched below).
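Putting the two stages together, a hedged end-to-end sketch of the flow run by the facial recognition device; `capture_2d`, `match_2d`, `acquire_contour`, and `is_stereoscopic_face` are assumed helper callables, not functions named in the application.

```python
def recognize_identity(capture_2d, match_2d, acquire_contour, is_stereoscopic_face):
    """Two-stage flow: 2D template match first, 3D contour check only on success."""
    image_2d = capture_2d()
    if not match_2d(image_2d):
        return False                   # 2D mismatch: the contour stage is skipped entirely

    contour = acquire_contour()        # surface contour of the object in front
    if not is_stereoscopic_face(contour):
        return False                   # flat photo or picture: reject

    return True                        # legitimate, stereoscopic face
```

The early return after a failed 2D match is what saves the contour acquisition and, as the description notes, speeds up recognition.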
  • the facial recognition device 104 transmits the identification result back to the processor 101.
  • the processor 101 performs corresponding processing according to the returned identification result. For example, controlling the electronic device to perform an unlocking operation and the like.
  • the face recognition device 104 may directly control other components of the electronic device according to the identification result and perform corresponding processing.
  • one or more component structures may be combined or omitted, for example, the processor 101 and the memory 102 are integrated into a control chip or the like.
  • the mobile terminal 1 may include other components (e.g., communication circuits, power supplies, buses, microphones, cameras, etc.) that are not combined with or included among the components shown in FIG. 14. Moreover, for the sake of brevity, only some of the components of the mobile terminal are shown in FIG. 14.
  • the processor 101 includes any processing circuitry that is provided to control the operation and performance of the mobile terminal 1.
  • the processor 101 is used to run an operating system, an app, a media playback application, or any other application software, and is used to handle interactions with a user, and the like.
  • the processor 101 is, for example, a control IC in which the processing circuits are integrated, or a processor cluster including processing circuits arranged in a distributed manner, for example, a central processing unit (CPU) for centrally controlling the various components of the electronic device and a graphics processor (GPU) for graphics processing in the electronic device.
  • other dedicated processors can also be provided, such as a coprocessor for monitoring the detection results of the various sensors of the electronic device, a baseband processor for electronic device communication, an identity recognition processor for identity recognition on the electronic device, and the like.
  • Memory 102 includes, for example, one or more computer storage media including hard disks, floppy disks, Flash, ROM, RAM, and any other suitable types of storage components or any combination thereof.
  • the memory 102 is used to store any program code files that are available for the processor 101 to call, such as an operating system, application software, and functional modules.
  • the memory 102 is also used to store processing data processed by the processor 101 and processing results such as application data, user operation information, user setting information, multimedia data, and the like. It can be understood that the memory 102 can be set separately or integrated with the processor 101.
  • the computer storage medium in the memory 102 stores a plurality of program code files for the processor 101 to call in order to perform the unlock control method described in the above embodiments, thereby implementing unlock control in the screen-off state of the electronic device.
  • the display device 103 includes, for example, an LCD display screen, an OLED display screen, and corresponding display driving circuits, and the processor 101 controls the display driving circuit to drive the display screen for corresponding display. It can be understood that if the electronic device further includes a graphics processor, the graphics processor can perform graphics processing, and then the display driver circuit is used to drive the display screen to perform corresponding display.
  • in the embodiment of the present application, only when the 2D image is successfully matched and the object in front is determined to be a stereoscopic face is the identity of the object in front determined to be legitimate, thereby preventing people from using a photo for face recognition and improving security, without performing face recognition based on 3D image information, which simplifies the data processing of face recognition, speeds up face recognition, and improves the user experience.
  • the processor 101 is further configured to: determine a current working state of the electronic device, and determine, according to a current working state of the electronic device, whether identity recognition is required.
  • the current working state is divided according to the display state of the display screen and includes, for example, a screen-off state, a bright-screen locked state, and a bright-screen unlocked state.
  • the screen-off state means that the display screen is not lit and is in an off state.
  • the bright-screen locked state means that the display is lit but a lock interface is displayed on the display. In this state, the electronic device is not unlocked and the main page of the electronic device cannot be entered.
  • the bright-screen unlocked state means that the display is lit and the corresponding interface is displayed according to the specific operation. In this state, the electronic device has been unlocked and the user can use it normally.
  • when the electronic device is in the screen-off state, the user may use the electronic device at any time, so in order to respond quickly to the user's operation, some sensors on the electronic device (for example, touch sensors, gravity sensors, distance sensors, and the like) remain in operation to monitor the usage of the electronic device and determine whether there is a screen wake-up operation.
  • the screen wake-up operation includes one or more of picking up the electronic device, touching the display of the electronic device, an object approaching within a preset range of the front of the device, and pressing a function button of the electronic device. It can be understood that the screen wake-up operation of the electronic device can be flexibly set according to the needs of the user.
  • taking the touch sensor 105 provided on the electronic device as an example, when the electronic device is in the screen-off state, the touch sensing module of the electronic device is controlled to perform touch detection, detecting and responding to touch actions on the display screen (for example, ordinary touch operations, pressing operations, and sliding operations).
  • when the electronic device is in the screen-off state, the touch sensing module performs touch detection at the first scanning frequency (a lower power consumption state); when an object is detected touching the display screen, it is determined that identity recognition is needed.
  • when the electronic device is in the bright-screen locked state or the bright-screen unlocked state, the touch sensing module performs touch detection at the second scanning frequency (a higher or normal power consumption state), thereby reducing the power consumption of the electronic device in the screen-off state, where the first scanning frequency is less than the second scanning frequency. Of course, the touch sensing module can also always perform touch detection in the normal power consumption state.
  • when the electronic device is in the bright-screen locked state, the user may use the electronic device at any time, so it is determined that identification is required.
  • when the electronic device is in the bright-screen unlocked state, it is further determined whether there is currently an authentication request initiated by an application to the operating system, for example, a payment application initiating an authentication request for mobile payment. If there is an authentication request, it is determined that identification is required; otherwise the electronic device performs its normal operations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Collating Specific Patterns (AREA)
  • Image Input (AREA)

Abstract

The present application discloses an electronic device and an identity recognition method thereof. The identity recognition method includes: S1, when the electronic device needs to perform identity recognition, obtaining a 2D image of an object in front of the electronic device; S2, matching the obtained 2D image with a preset 2D image template; S3, after the obtained 2D image is successfully matched with the preset 2D image template, obtaining surface contour information of the object in front of the electronic device; S4, determining, according to the obtained surface contour information of the front object, whether the object in front of the electronic device is a stereoscopic face; S5, when the object in front of the electronic device is a stereoscopic face, determining that the identity of the front object is legitimate. The electronic device runs the identity recognition method.

Description

电子设备及其身份识别方法 技术领域
本申请涉及一种电子设备及其身份识别方法。
背景技术
随着智能手机的全面屏发展趋势,手机正面的屏占比越来越高,因此手机正面因为空间位置受限无法再设置其他组件,例如按键,指纹识别模组等。由于无法设置指纹识别模组,为了实现身份识别,业界提出了采用手机正面的前置摄像头获取面部信息,并进行面部识别。
但是现有的面部识别技术的安全性还需要提高。
申请内容
本申请实施方式旨在至少解决现有技术中存在的技术问题之一。为此,本申请实施方式需要提供一种电子设备及其身份识别方法。
本申请提供一种电子设备的身份识别方法,包括以下步骤:
S1,当电子设备需要进行身份识别时,获得电子设备前方物体的2D图像;
S2,将获得的2D图像与预设的2D图像模板进行匹配;
S3,当获得的2D图像与预设的2D图像模板匹配成功后,获得电子设备前方物体的表面轮廓信息;
S4,根据获得的前方物体的表面轮廓信息,判断电子设备前方的物体是否为立体面部;
S5,当电子设备前方的物体为立体面部时,确定前方物体的身份合法。
本申请实施方式中,在2D图像匹配成功后,再进行立体面部的判断,当判断前方的物体为立体面部时,才确定前方物体的身份合法,如此既防止了人们利用照片进行面部识别,提高了安全性,而且又不用基于3D图像信息进行面部识别,从而简化了面部识别的数据处理,还加快了面部识别速度,提升了用户体验。另外,由于2D图像匹配成功后再进行立体面部的判断,从而若2D图像匹配不成功则省去了立体面部的判断,进而加快了面部识别速度。
在某些实施方式中,所述电子设备包括位于电子设备正面的红外图像传感器和红外泛光灯;所述步骤S1包括:当电子设备需要进行身份识别时,控制红外泛光灯开启,以使红外泛光灯发出红外光束到电子设备前方的物体,并控制所述红外图像传感器获取物体对红外光束反射回来的光信号,并形成2D图像。
在某些实施方式中,所述电子设备还包括位于电子设备正面的RGB图像传感器;所述步骤S1进一步包括:当电子设备需要进行身份识别时,获取当前环境的光照强度;当当前环境的光照强度位于预设的光强度范围内时,控制所述RGB图像传感器采集电子设备前方物体的2D图像;否则控制所述红外泛光灯开启,并控制所述红外图像传感器采集电子设备前方物体的2D图像。
在某些实施方式中,所述电子设备包括位于电子设备正面的RGB图像传感器和补光装置;所述步骤S1包括:当电子设备需要进行身份识别时,获取电子设备周围的环境光强度;若所述环境光强度位于预设的第二光强度范围内,控制所述RGB图像传感器采集电子设备前方物体的2D图像;否则控制所述补光装置开启,并控制所述RGB图像传感器采集电子设备前方物体的2D图像。
在某些实施方式中,所述预设的光强度范围包括上限值和下限值;所述上限值为电子设备前方物体处于逆光环境下的临界值;所述下限值为电子设备前方物体处于弱光环境下的临界值。
在某些实施方式中,所述电子设备包括位于电子设备正面的红外图像传感器和红外投射装置,所述步骤S3中获得电子设备前方物体的表面轮廓信息包括:控制电子设备的红外投射装置投射结构光束到电子设备前方的物体,并控制所述红外图像传感器获取物体对结构光束反射回来的光信号,并根据所述红外图像传感器获取到的光信号形成前方物体的3D轮廓图像。
在某些实施方式中,所述红外投射装置投射的结构光束形成一图案,且所述图案包括点阵式、条纹式、网格式、散斑式的一种或几种。
在某些实施方式中,所述电子设备包括位于电子设备正面的光发射器、光接收器,所述步骤S3中获得电子设备前方物体的表面轮廓信息包括:控制电子设备的光发射器发射预设的光脉冲信号到电子设备前方的物体,控制所述光接收器接收物体对预设的光脉冲信号反射回来的光信号,并根据发射的光脉冲信号和接收的光脉冲信号的时间差或相位差,获得物体表面与光接收器之间的距离,并根据获得的物体表面与光接收器之间的距离形成前方物体的3D轮廓图像。
在某些实施方式中,所述步骤S1之前进一步包括:判断电子设备的当前工作状态;根据电子设备的当前工作状态,确定是否需要进行身份识别。
在某些实施方式中,所述电子设备的当前工作状态包括息屏状态、亮屏未解锁状态、亮屏已解锁状态;所述根据电子设备的当前工作状态,确定是否需要进行身份识别包括:
当电子设备处于息屏状态时,判断是否有屏幕唤醒操作;当确定有屏幕唤醒操作时,确定需要进行身份识别;
当电子设备处于亮屏未解锁状态时,确定需要进行身份识别;
当电子设备处于亮屏已解锁状态时,判断当前是否存在应用程序发起的身份验证请求,若存在身份验证请求,确定需要进行身份识别。
在某些实施方式中,所述屏幕唤醒操作包括:拿起电子设备、触摸电子设备的显示屏、物体靠近电子设备正面预设范围、按压电子设备功能按键的一种或多种。
在某些实施方式中,所述步骤S4包括:将获得的表面轮廓信息与预存的轮廓模板进行匹配,判断电子设备前方的物体是否为立体面部。
在某些实施方式中,所述预存的轮廓模板包括物体面部预设特征的深度参考信息;所述步骤S4进一步包括:从获得的表面轮廓信息中提取面部预设特征的深度信息,并将其与该面部预设特征对应的深度参考信息进行比对,判断电子设备前方的物体是否为立体面部。
本申请实施方式提供了一种电子设备,包括处理器以及位于电子设备正面的图像采集装置、轮廓信息获取装置;所述处理器用于:在电子设备需要进行身份识别时,控制所述图像采集装置获得电子设备前方物体的2D图像,将获得的2D图像与预设的2D图像模板进行匹配;当获得的2D图像与预设的2D图像模板匹配成功后,控制所述轮廓信息获取装置获得电子设备前方物体的表面轮廓信息,并根据获得的前方物体的表面轮廓信息,判断电子设备前方的物体是否为立体面部;当电子设备前方的物体为立体面部时,确定前方物体的身份合法。
本申请实施方式中,在2D图像匹配成功后,再进行立体面部的判断,当判断前方的物体为立体面部时,才确定前方物体的身份合法,如此既防止了人们利用照片进行面部识别,提高了安全性,而且又不用基于3D图像信息进行面部识别,从而简化了面部识别的数据处理,还加快了面部识别速度,提升了用户体验。另外,由于2D图像匹配成功后再进行立体面部的判断,从而若2D图像匹配不成功则省去了立体面部的判断,进而加快了面部识别速度。
在某些实施方式中,所述图像采集装置包括红外图像传感器和红外泛光灯;所述处理器进一步用于:当电子设备需要进行身份识别时,控制红外泛光灯开启,以使红外泛光灯发出红外光束到电子设备前方的物体,并控制所述红外图像传感器获取物体对红外光束反射回来的光信号,并形成2D图像。
在某些实施方式中,所述电子设备正面还设置环境光传感器,用于采集电子设备周围的环境光强度;所述图像采集装置还包括RGB图像传感器;所述处理器进一步用于:当电子设备需要进行身份识别时,从所述环境光传感器读取当前的环境光强度;当当前的环境光强度位于预设的光强度范围内时,控制所述红外泛光灯开启,并控制所述红外图像传感器采集电子设备前方物体的2D图像;否则控制所述RGB图像传感器采集电子设备前方物体的2D图像。
在某些实施方式中,所述电子设备正面还设置环境光传感器,用于采集电子设备周围的环境光强度;所述图像采集装置包括RGB图像传感器和补光装置;所述处理器进一步用于:当电子设备需要进行身份识别时,获取电子设备周围的环境光强度;若所述环境光强度位于预设的光强度范围内,控制所述RGB图像传感器采集电子设备前方物体的2D图像;否则控制所述补光装置开启,并控制所述RGB图像传感器采集电子设备前方物体的2D图像。
在某些实施方式中,所述补光装置包括位于电子设备正面的柔光灯。
在某些实施方式中,当控制电子设备正面的补光装置开启时,进一步调整补光装置的亮度、色温和颜色中的一个或几个参数。
在某些实施方式中,所述预设的光强度范围包括上限值和下限值;所述上限值为电子设备前方物体处于逆光环境下的临界值;所述下限值为电子设备前方物体处于弱光环境下的临界值。
在某些实施方式中,所述图像采集装置包括红外图像传感器和红外投射装置,所述处理器进一步用于:控制所述红外投射装置投射结构光束到电子设备前方的物体,并控制所述红外图像传感器获取物体对结构光束反射回来的光信号,并形成物体的3D轮廓图像。
在某些实施方式中,所述红外投射装置投射的结构光束形成一图案,且所述图案包括点阵式、条纹式、网格式、散斑式的一种或几种。
在某些实施方式中,所述图像采集装置包括光发射器、光接收器以及信号处理装置,所述处理器进一步用于:控制所述光发射器发射预设的光脉冲信号到电子设备前方的物 体,控制所述光接收器接收物体对预设的光脉冲信号反射回来的光信号,所述信号处理装置计算预设的光脉冲的发射时间和反射时间的时间差或相位差,获得物体表面与光接收器之间的距离,并根据所获得的物体表面与光接收器之间的距离形成前方物体的3D轮廓图像。
在某些实施方式中,所述处理器用于判断电子设备的当前工作状态,根据电子设备的当前工作状态,确定是否需要进行身份识别。
在某些实施方式中,所述电子设备的当前工作状态包括息屏状态、亮屏未解锁状态、亮屏已解锁状态;所述处理器进一步用于:
当电子设备处于息屏状态时,判断是否有屏幕唤醒操作;当确定有屏幕唤醒操作时,确定需要进行身份识别;
当电子设备处于亮屏未解锁状态时,确定需要进行身份识别;
当电子设备处于亮屏已解锁状态时,判断当前是否存在应用程序发起的身份验证请求,若存在身份验证请求,确定需要进行身份识别。
在某些实施方式中,所述屏幕唤醒操作包括:拿起电子设备、触摸电子设备的显示屏、物体靠近电子设备正面预设范围、按压电子设备功能按键的一种或多种。
在某些实施方式中,所述处理器进一步用于:将获得的表面轮廓信息与预存的轮廓模板进行匹配,判断电子设备前方的物体是否为立体面部。
在某些实施方式中,所述预存的轮廓模板包括物体面部预设特征的深度参考信息;所述处理器进一步用于:从获得的表面轮廓信息中提取面部预设特征的深度信息,并将其与该面部预设特征对应的深度参考信息进行比对,判断电子设备前方的物体是否为立体面部。
本申请实施方式的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本申请实施方式的实践了解到。
附图说明
本申请实施方式的上述和/或附加的方面和优点从结合下面附图对实施方式的描述中将变得明显和容易理解,其中:
图1是本申请第一实施方式的电子设备的身份识别方法的流程示意图;
图2是图1中获得电子设备前方物体的2D图像的一实施方式的细化流程示意图;
图3是图1中获得电子设备前方物体的2D图像的另一实施方式的细化流程示意图;
图4是图1中获得电子设备前方物体的2D图像的又一实施方式的细化流程示意图;
图5是图1中获得电子设备前方物体的表面轮廓信息的一实施方式的细化流程示意图;
图6是图1中获得电子设备前方物体的表面轮廓信息的另一实施方式的细化流程示意图;
图7是本申请第二实施方式的电子设备的身份识别方法的流程示意图;
图8是本申请一实施方式的面部识别模组的功能模块示意图;
图9是应用本申请一实施方式的面部识别模组的电子设备的正面结构示意图;
图10是图8的面部识别模组中图像采集装置一实施方式的功能模块示意图;
图11是图8的面部识别模组中图像采集装置另一实施方式的功能模块示意图;
图12是图8的面部识别模组中轮廓信息获取装置一实施方式的功能模块示意图;
图13是图8的面部识别模组中轮廓信息获取装置另一实施方式的功能模块示意图;
图14是本申请一实施方式的电子设备为移动终端时的功能模块示意图。
具体实施方式
下面详细描述本申请的实施方式,所述实施方式的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本申请,而不能理解为对本申请的限制。
在本申请的描述中,需要理解的是,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个所述特征。在本申请的描述中,“多个”的含义是两个或两个以上,除非另有明确具体的限定。“接触”或“触摸”包括直接接触或间接接触。
在本申请的描述中,需要说明的是,除非另有明确的规定和限定,术语“安装”、“相连”、“连接”应做广义理解,例如,可以是固定连接,也可以是可拆卸连接,或一体地连接;可以是机械连接,也可以是电连接或可以相互通信;可以是直接相连,也可以通过中间媒介间接相连,可以是两个元件内部的连通或两个元件的相互作用关系。对于本领域的普通技术人员而言,可以根据具体情况理解上述术语在本申请中的具体含义。
下文的公开提供了许多不同的实施方式或例子用来实现本申请的不同结构。为了简化本申请的公开,下文中对特定例子的部件和设定进行描述。当然,它们仅仅为示例,并且目的不在于限制本申请。此外,本申请可以在不同例子中重复参考数字和/或参考字 母,这种重复是为了简化和清楚的目的,其本身不指示所讨论各种实施方式和/或设定之间的关系。此外,本申请提供了的各种特定的工艺和材料的例子,但是本领域普通技术人员可以意识到其他工艺的应用和/或其他材料的使用。
进一步地,所描述的特征、结构可以以任何合适的方式结合在一个或更多实施方式中。在下面的描述中,提供许多具体细节从而给出对本申请的实施方式的充分理解。然而,本领域技术人员应意识到,没有所述特定细节中的一个或更多,或者采用其它的结构、组元等,也可以实践本申请的技术方案。在其它情况下,不详细示出或描述公知结构或者操作以避免模糊本申请。
身份识别方法是指通过采集物体的生物特征信息,并根据采集到的生物特征信息确定物体的身份是否合法。以面部识别为例,通过摄像头采集物体的面部信息,并将采集到的面部信息与注册的面部模板进行匹配,匹配成功则确定物体的身份合法,匹配失败则确定物体的身份非法。
这里的物体例如包括人体或其他的生物体。面部信息例如包括2D图像信息或3D图像信息。基于2D图像信息进行面部识别时,利用现有的摄像头即可实现,不需要额外设置摄像装置。但是他人利用一张物体面部照片,很容易进行身份确认,由此安全性不高。基于3D图像信息进行面部识别时,3D图像信息的匹配过程非常复杂且费时,虽然安全性提高了,但是识别速度较慢,用户体验性差。
对此,本申请实施方式提出一种安全性更高且识别速度更快的身份识别方法。参照图1,图1是本申请第一实施方式的电子设备的身份识别方法的流程示意图。该电子设备的身份识别方法包括以下步骤:
S1,当电子设备需要进行身份识别时,获得电子设备前方物体的2D图像;
该身份识别方法运行于一电子设备。该电子设备例如但不局限为消费性电子产品、家居式电子产品、车载式电子产品、金融终端产品等合适类型的电子产品。其中,消费性电子产品如为手机、平板电脑、笔记本电脑、桌面显示器、电脑一体机等。家居式电子产品如为智能门锁、电视、冰箱、穿戴式设备等。车载式电子产品如为车载导航仪、车载DVD等。金融终端产品如为ATM机、自助办理业务的终端等。
电子设备正面设有相应的图像采集设备,例如RGB图像传感器、红外图像传感器等等,以在电子设备需要进行身份识别时,获得电子设备前方物体的2D图像。
S2,将获得的2D图像与预设的2D图像模板进行匹配;
以面部图像为例,电子设备在使用之前,对使用者的面部信息进行注册,并形成面 部模板,并存储,以用于面部识别时2D图像的匹配。图像匹配例如采用面部特征提取的方式,先从2D图像中提取一个或几个面部特征,例如眼睛、鼻子、眉毛、嘴唇、下颚等等,然后将该提取到的面部特征与2D图像模板中的相应面部特征分别进行比对,若比对均一致则匹配成功,否则匹配失败。
S3,当获得的2D图像与预设的2D图像模板匹配成功后,获得电子设备前方物体的表面轮廓信息;
电子设备正面设有相应的轮廓信息采集设备,用于获得电子设备前方物体的表面轮廓信息。该轮廓信息采集设备例如采用结构光技术、TOF(Time of flight)技术、双目立体成像技术中的一项或几项。由于该表面轮廓信息仅用于表示物体的面部轮廓,例如包括物体表面的各像素点在三维空间中的坐标信息,因此相比于物体的3D图像来说,不但包含的信息量更少,而且在进行立体面部的判断时也更简洁。
S4,根据获得的前方物体的表面轮廓信息,判断电子设备前方的物体面部是否为立体面部;
从获得的前方物体的表面轮廓信息中提取相应的3D属性,例如面部特征的深度信息,从而判断该电子设备前方的物体面部是否为立体的面部,如此能避免人们利用照片进行面部识别。
S5,当电子设备前方的物体为立体面部时,确定前方物体的身份合法。
本申请实施方式中,在2D图像匹配成功后,再进行立体面部的判断,当判断前方的物体为立体面部时,才确定前方物体的身份合法,如此既防止了人们利用照片进行面部识别,提高了安全性,而且又不用基于3D图像信息进行面部识别,从而简化了面部识别的数据处理,还加快了面部识别速度,提升了用户体验。另外,由于2D图像匹配成功后再进行立体面部的判断,从而若2D图像匹配不成功则省去了立体面部的判断,进而加快了面部识别速度。
在某些实施方式中,参照图2,图2是图1中获得电子设备前方物体的2D图像的一实施方式的细化流程示意图。上述步骤S1中获得电子设备前方物体的2D图像包括以下步骤:
S10,当电子设备需要进行身份识别时,控制红外泛光灯开启,以使红外泛光灯发出红外光束到电子设备前方的物体;
S11,控制红外图像传感器获取物体对红外光束反射回来的光信号,并形成2D图像。
本实施方式中,利用位于电子设备正面的红外传感设备来采集电子设备前方物体的 2D面部图像。由于红外感测技术采用红外线感测,因此不局限于可见光的强弱,不但能在白天使用,而且还能在夜晚使用。另外,考虑到电子设备周边环境中红外光信号不足的情况,物体反射回来的红外光信号也会不足,因此红外传感设备采集到的图像会比较模糊,如此将无法进行准确的身份识别。对此,本申请实施方式中还在电子设备正面设置红外泛光灯,以在电子设备需要进行身份识别时开启,通过红外泛光灯发出红外光束并照射至电子设备前方的物体上,来补充电子设备周边环境中的红外光。
进一步地,参照图3,图3是图1中获得电子设备前方物体的2D图像的另一实施方式的细化流程示意图。上述步骤S1中获得电子设备前方物体的2D图像包括以下步骤:
S12,当电子设备需要进行身份识别时,获取当前环境的光照强度;
S13,判断当前环境的光照强度是否位于预设的光强度范围内,是则执行步骤S14,否则执行步骤S15;
S14,控制RGB图像传感器采集电子设备前方物体的2D图像;
S15,控制红外泛光灯开启,并控制所述红外图像传感器采集电子设备前方物体的2D图像。
基于上述实施例,电子设备正面除了设置红外图像传感器和红外泛光灯,还设有RGB图像传感器。例如移动终端,通过RGB图像传感器对电子设备前方物体进行拍摄,实现自拍的效果。因此本申请实施方式中,利用现有的RGB图像传感器结合红外图像传感器、红外泛光灯来进行面部识别中2D图像的采集,从而节省了成本。
具体地,由于RGB图像传感器在正常光照环境下能采集到清晰的面部图像,而在逆光或弱光环境下时,则无法采集到清晰的面部图像,从而影响面部识别效果,例如因采集到的面部图像较模糊而需要较长的识别过程,甚至无法实现面部识别。如此,还造成用户体验较差。这里的逆光环境是指物体背对光源时的情况,此时物体面部反射回来的光信号非常弱,图像传感器采集到的光信号几乎都是光源发出的光信号;弱光环境是指周围环境较暗的情况,此时物体面部反射回来的光信号也非常弱,图像传感器采集到的光信号非常少。需要说明的是,清晰的面部图像是指能准确实现面部识别时的面部图像。
对此,在电子设备正面设置环境光传感器,用于在电子设备需要进行面部识别时,采集电子设备的周围环境的光照强度,以判断采集图像时处于逆光环境还是弱光环境。具体地,当电子设备前方物体背对着光源时,则电子设备正面的环境光传感器面对着光源,因此若该光源的光照较强时,则环境光传感器到的光照强度非常强。当电子设备的 周围环境较暗时,则环境光传感器采集到的光照强度较弱。当然,可变更地,该电子设备的周围环境的光照强度还可以采用其他的传感设备进行采集,例如电子设备正面设置的RGB图像传感器。
本申请实施例中,通过预设一个临界范围,即预设的光强度范围,包括上限值和下限值,其中上限值为物体处于逆光环境下的光强临界值,下限值为物体处于弱光环境下的光强临界值。若环境光强度位于该预设的光强度范围内,即当前环境光强度大于或等于下限值,且小于或等于上限值,则表示RGB图像传感器在当前环境下能采集到清晰的面部图像,以实现面部识别;若环境光强度位于该预设的光强度范围之外,即当前环境光强度大于上限值,或小于下限值,则表示RGB图像传感器在当前环境下无法采集到清晰的面部图像,造成面部识别失败。
因此,本申请实施方式中,当当前环境光强度位于预设的光强度范围内时,控制RGB图像传感器采集电子设备前方物体的2D图像,当当前环境光强度位于预设的光强度之外时,控制红外泛光灯开启,并控制红外图像传感器采集电子设备前方物体的2D图像。如此,既实现了不同环境下的面部识别,又避免了红外泛光灯频繁使用而影响红外泛光灯的使用寿命。
在某些实施方式中,参照图4,图4是图1中获得电子设备前方物体的2D图像的又一实施方式的细化流程示意图。上述步骤S1中获得电子设备前方物体的2D图像包括以下步骤:
S16,当电子设备需要进行身份识别时,获取当前环境的光照强度;
S17,判断当前环境的光照强度是否位于预设的光强度范围内;是则执行步骤S19,否则执行步骤S18;
S18,控制补光装置开启,并执行步骤S19;
S19,控制RGB图像传感器采集电子设备前方物体的2D图像。
本实施方式中,电子设备正面设有RGB图像传感器和补光装置。如前面描述,RGB图像传感器在逆光或弱光环境下,无法采集到清晰的2D图像。因此,本申请实施方式中,在逆光环境或弱光环境下,利用补光装置对电子设备进行正面补光。当采集到当前环境光强度,且该当前环境光强度大于预设的光强度范围的上限值,或者小于预设的光强度范围的下限值时,控制电子设备进行正面补光,从而使得电子设备周围环境下的光照强度位于预设的光强度范围内,以采集清晰的2D图像。当采集到当前环境光强度,且当前环境光强度大于或等于预设的光强度范围的下限值,小于或等于预设的光强度范 围的上限值时,直接控制电子设备正面的RGB图像传感器采集电子设备前方物体的2D图像。
在某些实施方式中,上述电子设备正面的补光装置既可以为电子设备的显示屏,也可以为位于电子设备顶部的补光光源,该补光光源为额外设置且专用于补光的部件,例如柔光灯。通过电子设备的显示屏进行补光时,则利用现有结构即可实现补光,节省了额外设置补光光源的成本。而通过补光光源进行补光时,虽然增加了成本,但是既不影响其他组件(例如显示屏)的正常工作,而且还可实现更好的补光效果。
进一步地,在控制电子设备正面的补光装置开启的同时,还调节补光装置的亮度、色温或者颜色中的一个或几个参数。补光装置例如包括多个LED或其他发光元件。而且该发光元件的亮度、颜色以及色温可以根据需要进行控制。本实施方式中,通过控制LED的亮度、颜色以及色温中的一个或几个参数,使得补光装置发出的光信号更加柔和,从而在进行面部图像采集时,用户不至于感觉太刺眼,提升了用户体验。可以理解的是,前述参数的具体取值不做限定,可根据实际使用情况而灵活选取。
在某些实施方式中,在步骤S19中:当当前环境光强度小于或等于预设的光强度范围的下限值时,调节电子设备的显示屏至最大亮度,以进行正面补光。
若电子设备通过显示屏进行电子设备的正面补光,则在当前环境光强度小于或等于预设的光强度范围的下限值时,控制显示屏逐渐增大到最大亮度,以进行正面补光。显示屏亮度的调节,可以逐次进行调节,以免亮度骤增对用户造成不适。
进一步地,在步骤S19中,若当前环境光强度与显示屏的最大亮度之和仍然小于预设的光强度范围的下限值,则控制电子设备正面顶端的补光光源开启,以使所述补光光源与所述显示屏一起进行正面补光。
由于显示屏本身的发光能力有限,若显示屏的亮度无法达到补光要求,则控制电子设备正面顶端的补光光源开启,以使补光光源与显示屏一起进行正面补光。因此,本实施方式中,先调节显示屏至最大亮度,同时采集当前环境光强度,若当前环境光强度仍然小于预设的光强度范围的下限值,则表示显示屏的补光无法达到补光要求,此时控制电子设备正面顶端的补光光源开启。若当前环境光强度大于或等于预设的光强度范围的下限值,则表示显示屏的亮度能够达到补光要求,因此仅通过显示屏的亮度进行补光即可。
当然,可变更地,为了使得控制过程简洁,也可以在判断当前环境光强度小于预设的光强度范围的下限值时,同时调节显示屏至最大亮度和控制补光光源开启,使得显示 屏与补光光源一起进行正面补光。如此简单、方便。
在某些实施方式中,参照图5,图5是图1中获得电子设备前方物体的表面轮廓信息的一实施方式的细化流程示意图。上述步骤S3中获得电子设备前方物体的表面轮廓信息包括:
S30,控制电子设备的红外投射装置投射结构光束到电子设备前方的物体;
S31,通过电子设备的红外图像传感器获取物体对结构光束反射回来的光信号,并形成前方物体的3D轮廓图像。
该红外投射装置与红外图像传感器设置于电子设备正面,而且该红外投射装置和红外图像传感器分开设置。当然,该红外投射装置与红外图像传感器也可以集成在一起,以方便安装,同时节省安装空间。该红外投射装置用于投射红外光束于电子设备前方的物体上,该红外光束于电子设备前方的物体上时将发生反射,而且反射回来的光信号被电子设备的红外图像传感器进行感测,根据该红外图像传感器的感测信号将形成前方物体的3D轮廓图像,亦即表面轮廓信息。
在某些实施方式中,红外投射装置例如包括光源、准直镜头以及光学衍射元件(DOE),其中光源用于产生一红外激光束;准直镜头将红外激光束进行校准,形成近似平行光;光学衍射元件对校准后的红外激光束进行调制,形成相应的散斑图案。而且该图案例如包括点阵式、条纹式、网格式、散斑式的一种或几种。当然,还可包括其他的编码图案。需要说明的是,红外投射装置投射的红外光束由无数个光点形成,光点越多,则红外图像传感器获得的红外图像的分辨率越高。
在某些实施方式中,参照图6,图6是图1中获得电子设备前方物体的表面轮廓信息的另一实施方式的细化流程示意图。上述步骤S3中获得电子设备前方物体的表面轮廓信息包括:
S32,控制电子设备的光发射器发射预设的光脉冲信号到电子设备前方的物体;
S33,控制光接收器接收物体对预设的光脉冲信号反射回来的光信号,并根据发射的光脉冲信号和接收的光脉冲信号的时间差或/和相位差,获得物体表面与光接收器之间的距离,并根据获得的物体表面与光接收器之间的距离形成前方物体的3D轮廓图像。
光发射器例如包括光源、对光源发出的光信号进行调制的光调制器,通过光调制器对光源发出的光信号进行调制,从而使得光发射器朝电子设备前方的物体连续发射预设的光脉冲信号。光接收器例如包括光传感器以及信号处理装置,该光传感器用于接收预设的光脉冲经物体反射回来的光信号。由于光发射器发射的光脉冲信号同步于光接收装 置中,例如光脉冲信号的发射时间以及波形,因此信号处理装置根据发射的光信号和接收的光脉冲信号的时间差或相位差,获得物体表面与光接收器之间的距离,并根据物体表面与光接收器之间的距离形成前方物体的3D轮廓图像。
在某些实施方式中,上述光脉冲信号可以逐个点发送或者同时发送多个点的光信号。通过同时发送多个点的光信号,可以一次实现物体整个面部的轮廓图像的采集。
在某些实施方式中,上述步骤S4包括:将获得的表面轮廓信息与预存的轮廓模板进行匹配,判断电子设备前方的物体是否为立体面部。
具体地,在进行面部模板的注册时,也可以形成面部轮廓信息的轮廓模板,并存储。该轮廓模板例如包括物体面部预设特征的深度参考信息。当获得电子设备前方物体的表面轮廓信息时,从该表面轮廓信息中提取面部预设特征的深度信息,并将其与预存的轮廓模板中与该面部特征对应的深度参考信息进行比对,从而判断电子设备前方的物体是否为立体面部。
进一步地,步骤S4包括:从获得的表面轮廓信息中提取面部预设特征的深度信息,根据所获得的物体面部预设特征的深度信息,判断电子设备前方的物体是否为立体面部。
物体面部预设特征例如眼睛、鼻子、嘴唇、下颚、颧骨等。物体面部预设特征的深度信息例如包括眼睛的深度,以及鼻子、嘴唇、下颚、颧骨的高度等。本实施方式中,通过设置物体面部预设特征对应的深度阈值来判断电子设备前方的物体是否为立体面部。即,当获得电子设备前方的物体面部预设特征的深度信息后,将其与预设的深度阈值进行比较,从而判断电子设备前方的物体是否为立体面部。
由于照片或图片均为平面图像,因此根据该获得的表面轮廓信息中,无法提取到物体面部预设特征的深度信息。也就是说,若他人拿着照片或图片进行身份识别,则根据所获得的表面轮廓信息,判断出电子设备前方的物体不是立体面部,从而无法通过身份识别,进而提高了电子设备的安全性。
另外,通过物体面部预设特征的深度信息来判断电子设备前方的物体是否具有3D属性,例如判断电子设备前方是否为一张真实的人脸,而不是照片或图片。因此,本申请实施方式相比现有技术3D图像信息的判断,更加简单,不但加快了识别速度,而且还提升了用户体验。
进一步地,参照图7,图7是本申请第二实施方式的电子设备的身份识别方法的流程示意图。该电子设备的身份识别方法中,在上述步骤S1之前还包括:
步骤S6,判断电子设备的当前工作状态;
当前工作状态按照显示屏的显示状态来分,例如包括息屏状态、亮屏未解锁状态、亮屏已解锁状态三种。其中息屏状态是指显示屏未被点亮而处于熄灭状态。亮屏未解锁状态是指显示屏被点亮,但显示屏上显示未解锁界面。该状态下,电子设备未被解锁,无法进入电子设备的主页面。亮屏已解锁状态是指显示屏被点亮,并根据具体的操作显示相应的界面。该状态下电子设备已经解锁,使用者可以正常使用该电子设备。
步骤S7,根据电子设备的当前工作状态,确定是否需要进行身份识别。
具体地,当电子设备处于息屏状态时,使用者随时可能会使用该电子设备,因此为了能快速响应使用者的操作,该电子设备上的某些传感器仍然处于工作状态,以监测电子设备的使用情况,判断是否存在屏幕唤醒操作。例如,触摸传感器、重力传感器、距离传感器等等。屏幕唤醒操作包括拿起电子设备、触摸电子设备的显示屏、物体靠近设备正面预设范围、按压电子设备的功能按键的一种或几种。可以理解的是,该电子设备的屏幕唤醒操作可根据使用者的需要而灵活设置。
以触摸传感器为例,本申请实施方式中,在电子设备处于息屏状态时,控制电子设备的触摸传感模块执行触摸检测,对显示屏上的触摸动作(例如普通触摸操作、按压操作、滑动操作)进行检测及响应。当电子设备处于息屏状态时,该触摸传感模块将以第一扫描频率(较低功耗状态)进行触摸检测。当检测到物体触摸显示屏,则确定需要进行身份识别。当电子设备处于亮屏未解锁状态或亮屏已解锁状态时,触摸传感模块再以第二扫描频率(较高功耗状态或正常功耗状态)进行触摸检测,从而降低电子设备息屏状态时的功耗。其中第一扫描频率小于第二扫描频率。当然,可变更地,该触摸传感模块也可以一直以正常功耗状态进行触摸检测。
当电子设备处于亮屏未解锁状态时,这种情况下使用者随时可能使用该电子设备,因此确定需要进行身份识别,并进入步骤S1。
当电子设备处于亮屏已解锁状态时,则要进一步判断当前是否存在应用程序向操作***发起的身份验证请求,例如支付应用程序发起用于移动支付的身份验证请求。若存在身份验证请求,则确定需要进行身份识别,并进入步骤S1,否则电子设备进行正常的操作。
在某些实施方式中,上述步骤S5之后进一步包括:当身份识别成功后,控制所述电子设备进行相应的操作。例如控制电子设备解锁、进行支付交易操作等等。
对应地,本申请一实施方式提出一种用以执行上述实施方式的方法的面部识别模组。参照图8及图9,图8是本申请一实施方式的面部识别模组的功能模块示意图,图9是 应用本申请一实施方式的面部识别模组的电子设备的正面结构示意图。该面部识别模组100设置于电子设备的非显示区,例如电子设备正面的顶端。当然,可变更地,面部识别模组100也可以设置于电子设备的其他位置,例如面部识别模组100中的各组件集成于电子设备的显示装置中,从而使得面部识别模组位于电子设备的显示区;再例如,面部识别模组100设置于电子设备的非显示区时,也可位于电子设备正面的底端或侧端。该面部识别模组100包括基板40、基板图像采集装置10、轮廓信息获取装置20、以及面部识别处理器30。该图像采集装置10例如设置在所述基板10上。该轮廓信息获取装置20例如设置在所述基板40上。该面部识别处理器30例如为一处理芯片。该处理芯片例如通过FPC柔性线路板与基板40上的图像采集装置10及轮廓信息获取装置20电性连接,或者,该处理芯片设置于基板40上并与基板40上的图像采集装置10及轮廓信息获取装置20电性连接。该基板40例如为印刷电路板、硅基板、金属基板等等。可变更地,在某些实施方式中,所述图像采集装置10、轮廓信息获取装置20、以及面部识别处理器30例如也可分别单独设置,该基板40可被省略。
图像采集装置10用于获得电子设备前方物体的2D图像,轮廓信息获取装置20用于获得电子设备前方物体的表面轮廓信息。面部识别处理器30用于在电子设备需要进行身份识别时,控制所述图像采集装置10获得电子设备前方物体的2D图像,将获得的2D图像与预设的2D图像模板进行匹配;当获得的2D图像与预设的2D图像模板匹配成功后,控制所述轮廓信息获取装置20获得电子设备前方物体的表面轮廓信息,并根据获得的前方物体的表面轮廓信息,判断电子设备前方的物体是否为立体面部;当电子设备前方的物体为立体面部时,确定前方物体的身份合法。
本申请实施方式中,在2D图像匹配成功后,再进行立体面部的判断,当判断前方的物体为立体面部时,才确定前方物体的身份合法,如此既防止了人们利用照片进行面部识别,提高了安全性,而且又不用基于3D图像信息进行面部识别,从而简化了面部识别的数据处理,还加快了面部识别速度,提升了用户体验。另外,由于2D图像匹配成功后再进行立体面部的判断,从而若2D图像匹配不成功则省去了立体面部的判断,进而加快了面部识别速度。
在某些实施方式中,参照图10,图10是图8的面部识别模组中图像采集装置一实施方式的功能模块示意图。图像采集装置10例如包括红外图像传感器11和红外泛光灯12。本实施方式中,利用位于电子设备正面的红外图像传感器11来采集电子设备前方物体的2D图像。由于红外感测技术采用红外线感测,因此不局限于可见光的强弱,不 但能在白天使用,而且还能在夜晚使用。另外,考虑到电子设备周边环境中红外光信号不足的情况,物体反射回来的红外光信号也会不足,因此红外图像传感器采集到的图像会比较模糊,如此将无法进行准确的身份识别。对此,本申请实施方式中还在电子设备正面设置红外泛光灯12,以在电子设备需要进行身份识别时开启,红外泛光灯12发出红外光束并照射至电子设备前方的物体上,补充电子设备周边环境中的红外光。因此,面部识别处理器30进一步用于:当电子设备需要进行身份识别时,控制红外泛光灯12开启,以使红外泛光灯12发出红外光束到电子设备前方的物体,并控制所述红外图像传感器11获取物体对红外光束反射回来的光信号,并根据红外图像传感器11的感测信号形成2D图像。
进一步地,继续参照图9,电子设备正面还设置环境光传感器50,用于采集电子设备周围的环境光强度。图像采集装置10除了包括位于电子设备正面的红外图像传感器11和红外泛光灯12,还包括RGB图像传感器13。例如移动终端,通过RGB图像传感器13对电子设备前方物体进行拍摄,实现自拍的效果。因此本申请实施方式中,利用现有的RGB图像传感器13结合红外图像传感器11、红外泛光灯12来进行面部识别中2D图像的采集,从而节省了成本。当然,可变更地,该环境光传感器50也可以设置在基板40上。
具体地,由于RGB图像传感器13在正常光照环境下能采集到清晰的面部图像,而在逆光或弱光时,则无法采集到清晰的面部图像,从而影响面部识别效果,例如因采集到的面部图像较模糊而需要较长的识别过程,甚至无法实现面部识别。如此,还造成用户体验较差。这里的逆光是指物体背对光源时的情况,此时物体面部反射回来的光信号非常弱,图像传感器采集到的光信号几乎都是光源发出的光信号;弱光是指周围环境较暗的情况,此时物体面部反射回来的光信号也非常弱,图像传感器采集到的光信号非常少。需要说明的是,清晰的面部图像是指能准确实现面部识别时的面部图像。
对此,本申请实施方式通过在电子设备正面设置环境光传感器50,以采集电子设备周围环境的光照强度。而且,本申请实施例中,还预设一个临界范围,即预设的光强度范围,包括上限值和下限值,其中上限值为逆光环境下的临界值,下限值为弱光环境下的临界值。若环境光强度位于该预设的光强度范围内,即当前环境光强度大于或等于下限值,且小于或等于上限值,则表示RGB图像传感器13在当前环境下能采集到清晰的面部图像,以实现面部识别;若环境光强度位于该预设的光强度范围之外,即当前环境光强度大于上限值,或小于下限值,则表示RGB图像传感器13在当前环境下无法采集 到清晰的面部图像,造成面部识别失败。
本申请实施方式中,面部识别处理器30进一步用于:当电子设备需要进行身份识别时,从所述环境光传感器50读取当前的环境光强度;当当前的环境光强度位于预设的光强度范围内时,控制所述红外泛光灯12开启,并控制所述红外图像传感器11采集电子设备前方物体的2D图像;否则控制所述RGB图像传感器13采集电子设备前方物体的2D图像。如此,既实现了不同环境下的面部识别,又避免了红外泛光灯12频繁使用而影响红外泛光灯的使用寿命。
在某些实施方式中,继续参照图9,电子设备正面还设置环境光传感器50,用于采集电子设备周围环境的光照强度。参照图11,图11是图8的面部识别模组中图像采集装置另一实施方式的功能模块示意图。该图像采集装置10例如包括RGB图像传感器14和补光装置15。如前面描述,RGB图像传感器14在逆光或弱光环境下,无法采集到清晰的2D图像。因此,本申请实施方式中,在逆光环境或弱光环境下,利用补光装置15对电子设备进行正面补光。面部识别处理器30进一步用于:当电子设备需要进行身份识别时,获取电子设备周围的环境光强度;若所述环境光强度位于预设的光强度范围内,控制所述RGB图像传感器14采集电子设备前方物体的2D图像;否则控制所述补光装置15开启,并控制所述RGB图像传感器14采集电子设备前方物体的2D图像。
在某些实施方式中,上述电子设备正面的补光装置15既可以为电子设备的显示屏151,也可以为位于电子设备顶部的补光光源152,该补光光源152为额外设置且专用于补光的部件,例如柔光灯。通过电子设备的显示屏151进行补光时,则利用现有结构即可实现补光,节省了成本。通过补光光源152进行补光时,虽然增加了成本,但是既不影响其他组件(例如显示屏)的正常工作,而且还可实现更好的补光效果。
进一步地,在控制电子设备正面的补光装置15开启的同时,还调节补光装置15的亮度、色温或者颜色中的一个或几个参数。补光装置15例如包括多个LED或其他发光元件。而且该发光元件的亮度、颜色以及色温可以根据需要进行控制。本实施方式中,通过控制LED的亮度、颜色以及色温中的一个或几个参数,使得补光装置15发出的光信号更加柔和,从而在进行面部图像采集时,用户不至于感觉太刺眼,提升了用户体验。可以理解的是,前述参数的具体取值不做限定,可根据实际使用情况而灵活选取。
在某些实施方式中,面部识别处理器30进一步用于:当当前环境光强度小于或等于预设的光强度范围的下限值时,调节电子设备的显示屏151至最大亮度,以进行正面补光。
若电子设备通过显示屏151进行电子设备的正面补光,则在当前环境光强度小于或等于预设的光强度范围的下限值时,控制显示屏151逐渐增大到最大亮度,以进行正面补光。显示屏151亮度的调节,可以逐次进行调节,以免亮度骤增对用户造成不适。
进一步地,面部识别处理器30进一步用于,若当前环境光强度与显示屏151的最大亮度之和仍然小于预设的光强度范围的下限值,则控制电子设备正面顶端的补光光源152开启,以使所述补光光源152与所述显示屏151一起进行正面补光。
由于显示屏本身的发光能力有限,若显示屏151的亮度无法达到补光要求,则控制电子设备正面顶端的补光光源152开启,以使补光光源152与显示屏151一起进行正面补光。因此,本实施方式中,先调节显示屏151至最大亮度,同时采集当前环境光强度,若当前环境光强度仍然小于预设的光强度范围的下限值,则表示显示屏151的补光无法达到补光要求,此时控制电子设备正面顶端的补光光源152开启。若当前环境光强度大于或等于预设的光强度范围的下限值,则表示显示屏151的亮度能够达到补光要求,因此仅通过显示屏151的亮度进行补光即可。
当然,可变更地,为了使得控制过程简洁,也可以在判断当前环境光强度小于预设的光强度范围的下限值时,同时调节显示屏151至最大亮度和控制补光光源152开启,使得显示屏151与补光光源152一起进行正面补光。如此简单、方便。
在某些实施方式中,轮廓信息获取装置20例如包括深度成像传感器,用于获取物体表面的轮廓信息。
深度成像传感器可分为主动式和被动式。其中主动式传感器主要是向目标发射能量束(例如激光、电磁波、超声波),并检测反射回来的光信号,例如结构光技术、TOF技术。被动式传感器则利用周围环境条件成像,例如双目立体成像技术。本申请实施方式中采用主动式的深度成像传感器,以获取电子设备前方物体表面的轮廓信息。
在某些实施方式中,参照图12,图12是图8的面部识别模组中轮廓信息获取装置20一实施方式的功能模块示意图。该轮廓信息获取装置20包括红外投射装置21以及红外图像传感器22,红外投射装置21用于投射红外光束到电子设备前方的物体表面,红外图像传感器22用于获取物体表面对红外光束反射回来的光信号,并形成红外图像。
本实施方式中,红外投射装置21和红外图像传感器22分开设置。当然,红外投射装置21与红外图像传感器22也可以集成在一起,以方便安装,同时节省安装空间。
上述红外投射装置21例如包括光源、准直镜头以及光学衍射元件(DOE)。其中光源用于产生一红外激光束;准直镜头将红外激光束进行校准,形成近似平行光;光学 衍射元件对校准后的红外激光束进行调制,形成相应的散斑图案。而且该图案例如包括点阵式、条纹式或者两者结合的方式。当然,还可包括其他的编码图案。需要说明的是,红外投射装置投射的红外光束由无数个光点形成,光点越多,则红外图像传感器22获得的红外图像的分辨率越高。
在某些实施方式中,上述红外图像传感器22既用于采集物体表面对红外光束反射回来的光信号,形成物体表面的3D轮廓信息,而且还能用于采集电子设备前方物体的2D图像。
在某些实施方式中,上述图像采集装置10包括的红外图像传感器与轮廓信息获取装置20包括的红外图像传感器为同一部件,当然,可变更地,也可以为不同的部件。另外,上述图像采集装置10各实施方式中提及的红外图像传感器为同一部件,当然,可变更地,也可以为不同的部件;提及的RGB图像传感器为同一部件,当然,可变更地,也可以为不同的部件。
在某些实施方式中,参照图13,图13是图8的面部识别模组中轮廓信息获取装置20另一实施方式的功能模块示意图。该轮廓信息获取装置20包括光发射器23和光接收器24。光发射器23例如包括光源、对光源发出的光信号进行调制的光调制单元,通过光调制单元对光源发出的光信号进行调制,从而使得光发送装置朝电子设备前方的物体连续发射预设的光脉冲信号。光接收器24例如包括光传感单元以及信号处理装置,该光传感单元用于接收预设的光脉冲经物体反射回来的光信号。由于光发射器23发射的光脉冲信号同步于光接收装置中,例如光脉冲信号的发射时间以及波形,因此信号处理装置根据发射的光脉冲信号和接收的光脉冲信号的时间差或相位差,获得物体表面与光接收器24之间的距离,并根据获得的物体表面与光接收器24之间的距离形成前方物体的3D轮廓图像。
在某些实施方式中,上述光脉冲信号可以逐个点发送或者同时发送多个点的光信号。通过同时发送多个点的光信号,可以一次实现物体整个面部的轮廓图像的采集。
进一步地,面部识别处理器30用于:将获得的表面轮廓信息与预存的轮廓模板进行匹配,判断电子设备前方的物体是否为立体面部。
具体地,在进行面部模板的注册时,也可以形成面部轮廓信息的轮廓模板,并存储。该轮廓模板例如包括物体面部预设特征的深度参考信息。当获得电子设备前方物体的表面轮廓信息时,从该表面轮廓信息中提取面部预设特征的深度信息,并将其与预存的轮廓模板中与该面部特征对应的深度参考信息进行比对,从而判断电子设备前方的物体是 否为立体面部。
进一步地,面部识别处理器30用于:从获得的表面轮廓信息中提取面部预设特征的深度信息,根据所获得的物体面部预设特征的深度信息,判断电子设备前方的物体是否为立体面部。
上述物体面部预设特征例如眼睛、鼻子、嘴唇、下颚、颧骨等。物体面部预设特征的深度信息例如包括眼睛的深度,以及鼻子、嘴唇、下颚、颧骨的高度等。本实施方式中,通过设置物体面部预设特征对应的深度阈值来判断电子设备前方的物体是否为立体面部。即,当获得电子设备前方的物体面部预设特征的深度信息后,将其与预设的深度阈值进行比较,从而判断电子设备前方的物体是否为立体面部。
由于照片或图片均为平面图像,因此根据该获得的红外图像,无法提取到物体面部预设特征的深度信息。也就是说,若他人拿着照片或图片进行身份识别时,则根据所获得的表面轮廓信息,判断出电子设备前方的物体不是立体面部,从而无法通过身份识别,进而提高了电子设备的安全性。
另外,通过物体面部预设特征的深度信息来判断电子设备前方的物体是否具有3D属性,例如判断电子设备前方是否为一张真实的人脸,而不是照片或图片。因此,本申请实施方式相比现有技术3D图像信息的判断,更加简单,不但加快了识别速度,而且还提升了用户体验。
进一步地,上述面部识别处理器30还用于:判断电子设备的当前工作状态,根据电子设备的当前工作状态,确定是否需要进行身份识别。
当前工作状态按照显示屏的显示状态来分,例如包括息屏状态、亮屏未解锁状态、亮屏已解锁状态三种。其中息屏状态是指显示屏未被点亮而处于熄灭状态。亮屏未解锁状态是指显示屏被点亮,但显示屏上显示未解锁界面。该状态下,电子设备未被解锁,无法进入电子设备的主页面。亮屏已解锁状态是指显示屏被点亮,并根据具体的操作显示相应的界面。该状态下电子设备已经解锁,使用者可以正常使用该电子设备。
具体地,当电子设备处于息屏状态时,使用者随时可能会使用该电子设备,因此为了能快速响应使用者的操作,该电子设备上的某些传感器仍然处于工作状态,以监测电子设备的使用情况,判断是否存在屏幕唤醒操作。例如,触摸传感器、重力传感器、距离传感器等等。屏幕唤醒操作包括拿起电子设备、触摸电子设备的显示屏、物体靠近设备正面预设范围、按压电子设备的功能按键的一种或几种。可以理解的是,该电子设备的屏幕唤醒操作可根据使用者的需要而灵活设置。
以触摸传感器为例,本申请实施方式中,在电子设备处于息屏状态时,控制电子设备的触摸传感模块执行触摸检测,对显示屏上的触摸动作(例如普通触摸操作、按压操作、滑动操作)进行检测及响应。当电子设备处于息屏状态时,该触摸传感模块将以第一扫描频率(较低功耗状态)进行触摸检测。当检测到物体触摸显示屏,则确定需要进行身份识别。当电子设备处于亮屏未解锁状态或亮屏已解锁状态时,触摸传感模块再以第二扫描频率(较高功耗状态或正常功耗状态)进行触摸检测,从而降低电子设备息屏状态时的功耗。其中第一扫描频率小于第二扫描频率。当然,可变更地,该触摸传感模块也可以一直以正常功耗状态进行触摸检测。
当电子设备处于亮屏未解锁状态时,这种情况下使用者随时可能使用该电子设备,因此确定需要进行身份识别。
当电子设备处于亮屏已解锁状态时,则要进一步判断当前是否存在应用程序向操作***发起的身份验证请求,例如支付应用程序发起用于移动支付的身份验证请求。若存在身份验证请求,则确定需要进行身份识别,否则电子设备进行正常的操作。
上述身份识别方法例如应用于全面屏的电子设备中,当然,也可以应用于其他非全面屏的电子设备中。需要进行身份识别时,启动电子设备的面部识别功能,从而实现对电子设备的使用者进行面部识别,以确定使用者的合法身份,确定为合法身份后控制电子设备进行相应的操作,例如解锁、进行移动支付等。以电子设备为移动终端举例,参照图14,图14是本申请一实施方式的电子设备为移动终端时的功能模块示意图。该移动终端1包括处理器101、存储器102、显示装置103、面部识别装置104。处理器101用于在电子设备需要进行身份识别时,启动面部识别装置104。面部识别装置104包括上述实施方式中提及的面部识别模组100,该面部识别装置104启动后,获得电子设备前方物体的2D图像,将获得的2D图像与预设的2D图像模板进行匹配;当获得的2D图像与预设的2D图像模板匹配成功后,获得电子设备前方物体的表面轮廓信息,并根据获得的前方物体的表面轮廓信息,判断电子设备前方的物体是否为立体面部,从而判断电子设备前方的物体身份是否合法。面部识别装置104将身份识别结果传回给处理器101。处理器101根据返回的身份识别结果,进行相应的处理。例如,控制电子设备进行解锁操作等等。当然,可变更地,也可以由面部识别装置104根据身份识别结果,直接控制电子设备的其他组件,进行相应的处理。
需要说明的是,在一些实施方式中,可以组合或省略一个或多个部件结构,例如处理器101与存储器102集成为一控制芯片等等。另外,该移动终端1可以包括未组合或 未包括在图14中所示部件中的其他部件(例如,通信电路、电源、总线、麦克风、摄像头等等)。而且,为了简洁,图14中仅示出了移动终端的部分部件。
具体地,处理器101包括为控制移动终端1的操作和性能而设置的任何处理电路。该处理器101被用于运行操作***、app应用、媒体播放应用,或者任何其他应用软件,以及被用于处理与用户之间的交互操作等等。该处理器101例如为各处理电路集成在一起的控制IC,或者为包括分布设置的各处理电路的处理器集群,例如用于集中控制电子设备各元部件的中央处理器CPU、用于电子设备中图形处理的图像处理器GPU。当然,还可以设置其他专用处理器,例如用于监测电子设备的各传感器的检测结果的协处理器、用于电子设备通讯的基带处理器、用于电子设备进行身份识别的身份识别处理器等等。
存储器102例如包括一个或多个计算机存储介质,该存储介质包括硬盘、软盘、Flash、ROM、RAM,以及任何其他适合类型的存储部件或者它们的任意组合。存储器102用于存储可供处理器101调用的任何程序代码文件,例如操作***、应用软件以及功能模块等等。该存储器102还用于存储处理器101处理的处理数据以及处理结果,例如应用数据、用户操作信息、用户设置信息,多媒体数据等等。可以理解的是,该存储器102可以单独设置,也可以与处理器101集成在一起。
进一步地,该存储器102中的计算机存储介质存储有多个程序代码文件,以供处理器101调用,用以执行上述实施方式描述的解锁控制方法,从而实现电子设备息屏状态下的解锁控制。
显示装置103例如包括LCD显示屏、OLED显示屏,以及相应的显示驱动电路,处理器101控制所述显示驱动电路,来驱动显示屏进行相应的显示。可以理解的是,若电子设备还包括图形处理器,则可由图形处理器进行图形处理后,再通过显示驱动电路来驱动显示屏进行相应的显示。
本申请实施方式中,在2D图像匹配成功,而且判断前方的物体为立体面部时,才确定前方物体的身份合法,如此既防止了人们利用照片进行面部识别,提高了安全性,而且又不用基于3D图像信息进行面部识别,从而简化了面部识别的数据处理,还加快了面部识别速度,提升了用户体验。
进一步地,上述处理器101还用于:判断电子设备的当前工作状态,根据电子设备的当前工作状态,确定是否需要进行身份识别。
当前工作状态按照显示屏的显示状态来分,例如包括息屏状态、亮屏未解锁状态、亮屏已解锁状态三种。其中息屏状态是指显示屏未被点亮而处于熄灭状态。亮屏未解锁 状态是指显示屏被点亮,但显示屏上显示未解锁界面。该状态下,电子设备未被解锁,无法进入电子设备的主页面。亮屏已解锁状态是指显示屏被点亮,并根据具体的操作显示相应的界面。该状态下电子设备已经解锁,使用者可以正常使用该电子设备。
具体地,当电子设备处于息屏状态时,使用者随时可能会使用该电子设备,因此为了能快速响应使用者的操作,该电子设备上的某些传感器仍然处于工作状态,以监测电子设备的使用情况,判断是否存在屏幕唤醒操作。例如,触摸传感器、重力传感器、距离传感器等等。屏幕唤醒操作包括拿起电子设备、触摸电子设备的显示屏、物体靠近设备正面预设范围、按压电子设备的功能按键的一种或几种。可以理解的是,该电子设备的屏幕唤醒操作可根据使用者的需要而灵活设置。
以电子设备上设置的触摸传感器105为例,本申请实施方式中,在电子设备处于息屏状态时,控制电子设备的触摸传感模块执行触摸检测,对显示屏上的触摸动作(例如普通触摸操作、按压操作、滑动操作)进行检测及响应。当电子设备处于息屏状态时,该触摸传感模块将以第一扫描频率(较低功耗状态)进行触摸检测。当检测到物体触摸显示屏,则确定需要进行身份识别。当电子设备处于亮屏未解锁状态或亮屏已解锁状态时,触摸传感模块再以第二扫描频率(较高功耗状态或正常功耗状态)进行触摸检测,从而降低电子设备息屏状态时的功耗。其中第一扫描频率小于第二扫描频率。当然,可变更地,该触摸传感模块也可以一直以正常功耗状态进行触摸检测。
当电子设备处于亮屏未解锁状态时,这种情况下使用者随时可能使用该电子设备,因此确定需要进行身份识别。
当电子设备处于亮屏已解锁状态时,则要进一步判断当前是否存在应用程序向操作***发起的身份验证请求,例如支付应用程序发起用于移动支付的身份验证请求。若存在身份验证请求,则确定需要进行身份识别,否则电子设备进行正常的操作。
在本说明书的描述中,参考术语“一个实施方式”、“某些实施方式”、“示意性实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
尽管上面已经示出和描述了本申请的实施方式,可以理解的是,上述实施方式是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对 上述实施方式进行变化、修改、替换和变型。

Claims (28)

  1. 一种电子设备的身份识别方法,包括以下步骤:
    S1,当电子设备需要进行身份识别时,获得电子设备前方物体的2D图像;
    S2,将获得的2D图像与预设的2D图像模板进行匹配;
    S3,当获得的2D图像与预设的2D图像模板匹配成功后,获得电子设备前方物体的表面轮廓信息;
    S4,根据获得的前方物体的表面轮廓信息,判断电子设备前方的物体是否为立体面部;
    S5,当电子设备前方的物体为立体面部时,确定前方物体的身份合法。
  2. 如权利要求1所述的电子设备的身份识别方法,其特征在于:所述电子设备包括位于电子设备正面的红外图像传感器和红外泛光灯;所述步骤S1包括:当电子设备需要进行身份识别时,控制红外泛光灯开启,以使红外泛光灯发出红外光束到电子设备前方的物体,并控制所述红外图像传感器获取物体对红外光束反射回来的光信号,并形成2D图像。
  3. 如权利要求2所述的电子设备的身份识别方法,其特征在于:所述电子设备还包括位于电子设备正面的RGB图像传感器;所述步骤S1进一步包括:当电子设备需要进行身份识别时,获取当前环境的光照强度;当当前环境的光照强度位于预设的光强度范围内时,控制所述RGB图像传感器采集电子设备前方物体的2D图像;否则控制所述红外泛光灯开启,并控制所述红外图像传感器采集电子设备前方物体的2D图像。
  4. 如权利要求1所述的电子设备的身份识别方法,其特征在于:所述电子设备包括位于电子设备正面的RGB图像传感器和补光装置;所述步骤S1包括:当电子设备需要进行身份识别时,获取电子设备周围的环境光强度;若所述环境光强度位于预设的第二光强度范围内,控制所述RGB图像传感器采集电子设备前方物体的2D图像;否则控制所述补光装置开启,并控制所述RGB图像传感器采集电子设备前方物体的2D图像。
  5. 如权利要求3或4所述的电子设备的身份识别方法,其特征在于:所述预设的光强度范围包括上限值和下限值;所述上限值为电子设备前方物体处于逆光环境下的临界值;所述下限值为电子设备前方物体处于弱光环境下的临界值。
  6. 如权利要求1所述的电子设备的身份识别方法,其特征在于:所述电子设备包括位于电子设备正面的红外图像传感器和红外投射装置,所述步骤S3中获得电子设备 前方物体的表面轮廓信息包括:控制电子设备的红外投射装置投射结构光束到电子设备前方的物体,并控制所述红外图像传感器获取物体对结构光束反射回来的光信号,并根据所述红外图像传感器获取到的光信号形成前方物体的3D轮廓图像。
  7. 如权利要求6所述的电子设备的身份识别方法,其特征在于:所述红外投射装置投射的结构光束形成一图案,且所述图案包括点阵式、条纹式、网格式、散斑式的一种或几种。
  8. 如权利要求1所述的电子设备的身份识别方法,其特征在于:所述电子设备包括位于电子设备正面的光发射器、光接收器,所述步骤S3中获得电子设备前方物体的表面轮廓信息包括:控制电子设备的光发射器发射预设的光脉冲信号到电子设备前方的物体,控制所述光接收器接收物体对预设的光脉冲信号反射回来的光信号,并根据发射的光脉冲信号和接收的光脉冲信号的时间差或相位差,获得物体表面与光接收器之间的距离,并根据获得的物体表面与光接收器之间的距离形成前方物体的3D轮廓图像。
  9. 如权利要求1所述的电子设备的身份识别方法,其特征在于:所述步骤S1之前进一步包括:判断电子设备的当前工作状态;根据电子设备的当前工作状态,确定是否需要进行身份识别。
  10. 如权利要求9所述的电子设备的身份识别方法,其特征在于:所述电子设备的当前工作状态包括息屏状态、亮屏未解锁状态、亮屏已解锁状态;所述根据电子设备的当前工作状态,确定是否需要进行身份识别包括:
    当电子设备处于息屏状态时,判断是否有屏幕唤醒操作;当确定有屏幕唤醒操作时,确定需要进行身份识别;
    当电子设备处于亮屏未解锁状态时,确定需要进行身份识别;
    当电子设备处于亮屏已解锁状态时,判断当前是否存在应用程序发起的身份验证请求,若存在身份验证请求,确定需要进行身份识别。
  11. 如权利要求10所述的电子设备的身份识别方法,其特征在于:所述屏幕唤醒操作包括:拿起电子设备、触摸电子设备的显示屏、物体靠近电子设备正面预设范围、按压电子设备功能按键的一种或多种。
  12. 如权利要求1所述的电子设备的身份识别方法,其特征在于:所述步骤S4包括:将获得的表面轮廓信息与预存的轮廓模板进行匹配,判断电子设备前方的物体是否为立体面部。
  13. 如权利要求12所述的电子设备的身份识别方法,其特征在于:所述预存的轮廓 模板包括物体面部预设特征的深度参考信息;所述步骤S4进一步包括:从获得的表面轮廓信息中提取面部预设特征的深度信息,并将其与该面部预设特征对应的深度参考信息进行比对,判断电子设备前方的物体是否为立体面部。
  14. 一种电子设备,包括处理器以及位于电子设备正面的图像采集装置、轮廓信息获取装置;所述处理器用于:在电子设备需要进行身份识别时,控制所述图像采集装置获得电子设备前方物体的2D图像,将获得的2D图像与预设的2D图像模板进行匹配;当获得的2D图像与预设的2D图像模板匹配成功后,控制所述轮廓信息获取装置获得电子设备前方物体的表面轮廓信息,并根据获得的前方物体的表面轮廓信息,判断电子设备前方的物体是否为立体面部;当电子设备前方的物体为立体面部时,确定前方物体的身份合法。
  15. 如权利要求14所述的电子设备,其特征在于:所述图像采集装置包括红外图像传感器和红外泛光灯;所述处理器进一步用于:当电子设备需要进行身份识别时,控制红外泛光灯开启,以使红外泛光灯发出红外光束到电子设备前方的物体,并控制所述红外图像传感器获取物体对红外光束反射回来的光信号,并形成2D图像。
  16. 如权利要求15所述的电子设备,其特征在于:所述电子设备正面还设置环境光传感器,用于采集电子设备周围的环境光强度;所述图像采集装置还包括RGB图像传感器;所述处理器进一步用于:当电子设备需要进行身份识别时,从所述环境光传感器读取当前的环境光强度;当当前的环境光强度位于预设的光强度范围内时,控制所述红外泛光灯开启,并控制所述红外图像传感器采集电子设备前方物体的2D图像;否则控制所述RGB图像传感器采集电子设备前方物体的2D图像。
  17. 如权利要求14所述的电子设备,其特征在于:所述电子设备正面还设置环境光传感器,用于采集电子设备周围的环境光强度;所述图像采集装置包括RGB图像传感器和补光装置;所述处理器进一步用于:当电子设备需要进行身份识别时,获取电子设备周围的环境光强度;若所述环境光强度位于预设的光强度范围内,控制所述RGB图像传感器采集电子设备前方物体的2D图像;否则控制所述补光装置开启,并控制所述RGB图像传感器采集电子设备前方物体的2D图像。
  18. 如权利要求17所述的电子设备,其特征在于:所述补光装置包括位于电子设备正面的柔光灯。
  19. 如权利要求17所述的电子设备,其特征在于:当控制电子设备正面的补光装置开启时,进一步调整补光装置的亮度、色温和颜色中的一个或几个参数。
  20. 如权利要求16或17所述的电子设备,其特征在于:所述预设的光强度范围包括上限值和下限值;所述上限值为电子设备前方物体处于逆光环境下的临界值;所述下限值为电子设备前方物体处于弱光环境下的临界值。
  21. 如权利要求14所述的电子设备,其特征在于:所述图像采集装置包括红外图像传感器和红外投射装置,所述处理器进一步用于:控制所述红外投射装置投射结构光束到电子设备前方的物体,并控制所述红外图像传感器获取物体对结构光束反射回来的光信号,并形成物体的3D轮廓图像。
  22. 如权利要求21所述的电子设备,其特征在于:所述红外投射装置投射的结构光束形成一图案,且所述图案包括点阵式、条纹式、网格式、散斑式的一种或几种。
  23. 如权利要求14所述的电子设备,其特征在于:所述图像采集装置包括光发射器、光接收器以及信号处理装置,所述处理器进一步用于:控制所述光发射器发射预设的光脉冲信号到电子设备前方的物体,控制所述光接收器接收物体对预设的光脉冲信号反射回来的光信号,所述信号处理装置计算预设的光脉冲的发射时间和反射时间的时间差或相位差,获得物体表面与光接收器之间的距离,并根据所获得的物体表面与光接收器之间的距离形成前方物体的3D轮廓图像。
  24. 如权利要求14所述的电子设备,其特征在于:所述处理器用于判断电子设备的当前工作状态,根据电子设备的当前工作状态,确定是否需要进行身份识别。
  25. 如权利要求24所述的电子设备,其特征在于:所述电子设备的当前工作状态包括息屏状态、亮屏未解锁状态、亮屏已解锁状态;所述处理器进一步用于:
    当电子设备处于息屏状态时,判断是否有屏幕唤醒操作;当确定有屏幕唤醒操作时,确定需要进行身份识别;
    当电子设备处于亮屏未解锁状态时,确定需要进行身份识别;
    当电子设备处于亮屏已解锁状态时,判断当前是否存在应用程序发起的身份验证请求,若存在身份验证请求,确定需要进行身份识别。
  26. 如权利要求25所述的电子设备,其特征在于:所述屏幕唤醒操作包括:拿起电子设备、触摸电子设备的显示屏、物体靠近电子设备正面预设范围、按压电子设备功能按键的一种或多种。
  27. 如权利要求14所述的电子设备,其特征在于:所述处理器进一步用于:将获得的表面轮廓信息与预存的轮廓模板进行匹配,判断电子设备前方的物体是否为立体面部。
  28. 如权利要求27所述的电子设备,其特征在于:所述预存的轮廓模板包括物体面 部预设特征的深度参考信息;所述处理器进一步用于:从获得的表面轮廓信息中提取面部预设特征的深度信息,并将其与该面部预设特征对应的深度参考信息进行比对,判断电子设备前方的物体是否为立体面部。
PCT/CN2018/083621 2018-04-18 2018-04-18 电子设备及其身份识别方法 WO2019200578A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880000318.6A CN108496172A (zh) 2018-04-18 2018-04-18 电子设备及其身份识别方法
PCT/CN2018/083621 WO2019200578A1 (zh) 2018-04-18 2018-04-18 电子设备及其身份识别方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/083621 WO2019200578A1 (zh) 2018-04-18 2018-04-18 电子设备及其身份识别方法

Publications (1)

Publication Number Publication Date
WO2019200578A1 true WO2019200578A1 (zh) 2019-10-24

Family

ID=63343458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/083621 WO2019200578A1 (zh) 2018-04-18 2018-04-18 电子设备及其身份识别方法

Country Status (2)

Country Link
CN (1) CN108496172A (zh)
WO (1) WO2019200578A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419718A (zh) * 2022-03-10 2022-04-29 荣耀终端有限公司 电子设备及人脸识别方法

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109391709B (zh) * 2018-09-10 2020-11-06 Oppo广东移动通信有限公司 电子装置及其控制方法、控制装置和计算机可读存储介质
CN109376515A (zh) * 2018-09-10 2019-02-22 Oppo广东移动通信有限公司 电子装置及其控制方法、控制装置和计算机可读存储介质
CN110956577A (zh) * 2018-09-27 2020-04-03 Oppo广东移动通信有限公司 电子装置的控制方法、电子装置和计算机可读存储介质
CN109343659A (zh) * 2018-09-28 2019-02-15 深圳阜时科技有限公司 一种设备
CN109508069A (zh) * 2018-09-28 2019-03-22 深圳阜时科技有限公司 一种设备
CN109417579A (zh) * 2018-09-28 2019-03-01 深圳阜时科技有限公司 一种设备
WO2020062108A1 (zh) * 2018-09-28 2020-04-02 深圳阜时科技有限公司 设备
CN109407758A (zh) * 2018-09-28 2019-03-01 深圳阜时科技有限公司 一种设备
CN109328326A (zh) * 2018-09-28 2019-02-12 深圳阜时科技有限公司 一种设备
CN109101084A (zh) * 2018-09-28 2018-12-28 深圳阜时科技有限公司 一种设备
CN109409310B (zh) * 2018-11-05 2021-08-17 Oppo(重庆)智能科技有限公司 显示屏组件、电子设备及指纹识别方法
CN109774718A (zh) * 2018-12-24 2019-05-21 惠州市德赛西威汽车电子股份有限公司 一种一体式车载身份识别***
CN110378207B (zh) * 2019-06-10 2022-03-29 北京迈格威科技有限公司 人脸认证方法、装置、电子设备及可读存储介质
CN110398988A (zh) * 2019-06-28 2019-11-01 联想(北京)有限公司 一种控制方法和电子设备
CN111343333B (zh) * 2020-02-04 2021-05-04 Oppo广东移动通信有限公司 接近检测控制方法及相关装置
CN111538970B (zh) * 2020-07-08 2020-12-22 德能森智能科技(成都)有限公司 一种基于智能化物联网的云平台***
CN112766086A (zh) * 2021-01-04 2021-05-07 深圳阜时科技有限公司 一种识别模板注册方法及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6668071B1 (en) * 1997-04-04 2003-12-23 Viktor Albertovich Minkin Method and apparatus for user identification using pulsating light source
CN107220623A (zh) * 2017-05-27 2017-09-29 湖南德康慧眼控制技术股份有限公司 一种人脸识别方法及***
CN107368730A (zh) * 2017-07-31 2017-11-21 广东欧珀移动通信有限公司 解锁验证方法和装置
CN107506752A (zh) * 2017-09-18 2017-12-22 艾普柯微电子(上海)有限公司 人脸识别装置及方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6668071B1 (en) * 1997-04-04 2003-12-23 Viktor Albertovich Minkin Method and apparatus for user identification using pulsating light source
CN107220623A (zh) * 2017-05-27 2017-09-29 湖南德康慧眼控制技术股份有限公司 一种人脸识别方法及***
CN107368730A (zh) * 2017-07-31 2017-11-21 广东欧珀移动通信有限公司 解锁验证方法和装置
CN107506752A (zh) * 2017-09-18 2017-12-22 艾普柯微电子(上海)有限公司 人脸识别装置及方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419718A (zh) * 2022-03-10 2022-04-29 荣耀终端有限公司 电子设备及人脸识别方法
CN114419718B (zh) * 2022-03-10 2022-08-02 荣耀终端有限公司 电子设备及人脸识别方法

Also Published As

Publication number Publication date
CN108496172A (zh) 2018-09-04

Similar Documents

Publication Publication Date Title
WO2019200578A1 (zh) 电子设备及其身份识别方法
US11892710B2 (en) Eyewear device with fingerprint sensor for user input
CN109076662B (zh) 用于镜子部件的自适应照明***和控制自适应照明***的方法
JP6623171B2 (ja) ディスプレイに統合されるユーザ分類、セキュリティおよび指紋システム
US9750420B1 (en) Facial feature selection for heart rate detection
WO2018121428A1 (zh) 一种活体检测方法、装置及存储介质
US8831295B2 (en) Electronic device configured to apply facial recognition based upon reflected infrared illumination and related methods
US7646896B2 (en) Apparatus and method for performing enrollment of user biometric information
WO2020056939A1 (zh) 屏下光学检测***、电子设备及物体接近检测方法
CN107340875B (zh) 内建传感器及光源模块的键盘装置
CN108510962A (zh) 一种显示控制方法及电子设备
WO2019196075A1 (zh) 电子设备及其面部识别方法
US20130127705A1 (en) Apparatus for touching projection of 3d images on infrared screen using single-infrared camera
CN208172794U (zh) 电子设备
US20210365535A1 (en) Eye scanner for user identification and security in an eyewear device
US20190304124A1 (en) Low feature object detection and pose estimation for image data streams
CN112106046A (zh) 用于执行生物特征认证的电子设备及其操作方法
WO2019134093A1 (zh) 智能终端
WO2019196074A1 (zh) 电子设备及其面部识别方法
JP6525345B2 (ja) 車両の表示システム及び車両の表示システムの制御方法
US20130127704A1 (en) Spatial touch apparatus using single infrared camera
US20190347500A1 (en) Electronic device for performing biometric authentication and method of operating the same
KR101961266B1 (ko) 시선 추적 장치 및 이의 시선 추적 방법
CN110287861B (zh) 指纹识别方法、装置、存储介质及电子设备
CN109684945A (zh) 基于瞳孔的活体检测方法、装置、服务器及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18915461

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18915461

Country of ref document: EP

Kind code of ref document: A1