WO2024045446A1 - Iris image collection method based on a head-mounted display and related products - Google Patents

Iris image collection method based on a head-mounted display and related products

Info

Publication number
WO2024045446A1
WO2024045446A1 (application PCT/CN2022/142700)
Authority
WO
WIPO (PCT)
Prior art keywords
area
iris
imaging sensor
iris image
head
Prior art date
Application number
PCT/CN2022/142700
Other languages
English (en)
French (fr)
Inventor
Wei Yanhua (韦燕华)
Original Assignee
Shanghai Wingtech Electronic Technology Co., Ltd. (上海闻泰电子科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wingtech Electronic Technology Co., Ltd. (上海闻泰电子科技有限公司)
Publication of WO2024045446A1 publication Critical patent/WO2024045446A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B27/0172 Head mounted characterised by optical features

Definitions

  • the present disclosure relates to an iris image collection method based on a head-mounted display and related products.
  • the iris is internal tissue of the human eye: the annular region of the eyeball between the sclera and the pupil.
  • the iris is bounded by two concentric circles, inner and outer, and has distinctive texture characteristics. Because these textures are genetically determined, iris-based identification achieves high accuracy.
  • VR devices such as head-mounted displays can use a built-in camera to collect the user's iris images, and use the collected iris images to verify whether the identity of the user of the VR device is legitimate.
  • the accuracy of user identification is closely related to the quality of the iris image; a low-quality iris image greatly reduces identification accuracy.
  • a head-mounted display-based iris image collection method and related products are provided.
  • An iris image collection method based on a head-mounted display, wherein the head-mounted display includes a light source and a camera device, and the camera device includes: a camera hole, an imaging sensor, and a position adjustment module for the imaging sensor; the camera hole is arranged facing the wearer of the head-mounted display; the light emitted by the light source is incident through the camera hole; and the method includes: acquiring a first iris image generated by the imaging sensor based on the incident light;
  • the position adjustment module is controlled to perform a movement operation and/or a rotation operation according to the first iris image;
  • the movement operation is an operation that adjusts the distance from the imaging sensor to the camera hole so that the imaging of the iris in the image is maximized;
  • the rotation operation is configured to rotate the imaging sensor to change the angle at which light enters the imaging sensor;
  • a second iris image generated by the adjusted imaging sensor based on the incident light is acquired.
  • controlling the position adjustment module to perform a movement operation based on the first iris image includes:
  • the reference area is the image area occupied by the complete iris when its imaging in the image is maximized;
  • the position adjustment module is controlled to perform a moving operation according to the moving direction.
  • determining the moving direction in which the position adjustment module performs a moving operation based on the area difference between the iris area and the reference area includes:
  • when the area of the iris area is larger than the area of the reference area, the moving direction in which the position adjustment module performs the moving operation is a direction that shortens the distance; when the area of the iris area is smaller than the area of the reference area, the moving direction is a direction that increases the distance.
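The direction selection described above can be sketched as a small decision function. The name `move_direction` and the returned command strings are illustrative assumptions, not part of the disclosure; only the comparison logic comes from the source.

```python
def move_direction(iris_area: float, reference_area: float) -> str:
    """Decide how the position adjustment module should move the sensor.

    A smaller-than-reference iris area means the optical zoom factor must
    grow, i.e. the sensor-to-hole distance should increase; a
    larger-than-reference iris area means the distance should shrink.
    """
    if iris_area < reference_area:
        return "increase_distance"   # enlarge the iris imaging
    if iris_area > reference_area:
        return "shorten_distance"    # shrink the iris imaging
    return "hold"                    # imaging already maximized
```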
  • controlling the position adjustment module to perform a rotation operation according to the first iris image includes:
  • the incident angle is the angle in the target direction between the light and the plane where the imaging sensor is located;
  • the target direction is the direction from the inside of the head-mounted display to the outside of the head-mounted display;
  • after determining that the incident angle is not a right angle, the rotation direction of the rotation operation is determined based on the incident angle; and
  • the position adjustment module is controlled to perform a rotation operation according to the rotation direction.
  • determining the rotation direction of the rotation operation according to the incident angle includes:
  • when the incident angle is an obtuse angle, the rotation direction of the rotation operation is determined to be counterclockwise rotation; or
  • when the incident angle is an acute angle, the rotation direction of the rotation operation is determined to be clockwise rotation.
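The angle-to-rotation mapping above can be sketched as follows; `rotation_direction` and its return values are hypothetical names for illustration.

```python
def rotation_direction(incident_angle_deg: float) -> str:
    """Map the incident angle (light vs. sensor plane, measured in the
    inside-to-outside direction) to a rotation command.

    At 90 degrees the light is perpendicular to the sensor plane and no
    rotation is needed; an obtuse angle is corrected by rotating
    counterclockwise, an acute angle by rotating clockwise.
    """
    if incident_angle_deg == 90:
        return "none"
    return "counterclockwise" if incident_angle_deg > 90 else "clockwise"
```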
  • the position adjustment module includes: a telescopic link; one end of the telescopic link is connected to the imaging sensor, and the other end is connected to the housing of the head-mounted display; the telescopic link is configured to perform the moving operation; and/or,
  • the position adjustment module may include: a rotatable platform; the imaging sensor is placed on the rotatable platform; and the rotatable platform is configured to perform the rotation operation.
  • the camera device further includes: a correction chip;
  • the head-mounted display further includes: an eye tracking sensor;
  • the correction chip stores a distortion correction model.
  • the distortion correction model is obtained after training using sample data.
  • the sample data includes multiple sample iris images.
  • the multiple sample iris images are images collected while the user gazes at different positions on the display screen of the head-mounted display; the sample data also includes undistorted supervisory iris images corresponding to each of the sample iris images; the method also includes:
  • the distortion correction model stored in the correction chip performs distortion correction on the second iris image according to the gaze position to obtain a third iris image output by the distortion correction model.
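The correction step can be sketched as a thin wrapper around the on-chip model. The callable interface `(image, gaze) -> corrected image` is an illustrative assumption; the disclosure only states that the model, conditioned on the gaze position, maps the second iris image to an undistorted third iris image.

```python
def correct_distortion(model, second_iris_image, gaze_position):
    """Apply the stored distortion-correction model conditioned on gaze.

    `model` is assumed to be a callable (e.g. the trained network held in
    the correction chip) that maps (image, gaze) to an undistorted image;
    this interface is a hypothetical stand-in, not specified by the source.
    """
    return model(second_iris_image, gaze_position)
```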
  • An iris image collection device based on a head-mounted display, wherein the head-mounted display includes a light source and a camera device, and the camera device includes: a camera hole, an imaging sensor, and a position adjustment module for the imaging sensor; the camera hole is arranged facing the wearer of the head-mounted display; the light emitted by the light source is incident through the camera hole; and the device includes:
  • an imaging module configured to acquire a first iris image generated by the imaging sensor based on incident light
  • a motion control module configured to control the position adjustment module to perform a movement operation and/or a rotation operation according to the first iris image;
  • the movement operation is an operation that adjusts the distance from the imaging sensor to the camera hole so that the imaging of the iris in the image is maximized;
  • the rotation operation is configured to rotate the imaging sensor to change the angle at which light enters the imaging sensor;
  • the imaging module is further configured to acquire a second iris image generated by the adjusted imaging sensor based on the incident light.
  • the motion control module includes: an iris recognition unit, a movement determination unit and a movement execution unit;
  • the iris recognition unit is configured to determine the iris area in the first iris image
  • the movement determination unit is configured to compare the area of the iris area with the area of a reference area, the reference area being the image area occupied by the complete iris when its imaging in the image is maximized; and the movement determination unit is configured to determine the movement direction in which the position adjustment module performs the movement operation based on the area difference between the iris area and the reference area;
  • the movement execution unit is configured to control the position adjustment module to perform a movement operation according to the movement direction.
  • the movement determination unit is further configured to determine, when the area of the iris area is smaller than the area of the reference area, that the moving direction in which the position adjustment module performs the moving operation is a direction that increases the distance; or,
  • the movement determination unit is further configured to determine the movement direction of the position adjustment module to perform the movement operation as a direction of shortening the distance when the area of the iris area is larger than the area of the reference area.
  • the motion control module includes: an incident angle calculation unit, a rotation determination unit and a rotation execution unit;
  • the incident angle calculation unit is configured to calculate the incident angle of light entering the imaging sensor according to the first iris image; the incident angle is the plane where the light and the imaging sensor are located The included angle in the target direction; the target direction is the direction from the inside of the head-mounted display to the outside of the head-mounted display;
  • the rotation determination unit is configured to determine the rotation direction of the rotation operation based on the incident angle after determining that the incident angle is not a right angle;
  • the rotation execution unit is configured to control the position adjustment module to perform a rotation operation according to the rotation direction.
  • the rotation determination unit is also configured to determine the rotation direction of the rotation operation to be counterclockwise rotation when the incident angle is an obtuse angle; or,
  • the rotation determining unit is further configured to determine the rotation direction of the rotation operation to be clockwise rotation when the incident angle is an acute angle.
  • the position adjustment module includes: a telescopic link; one end of the telescopic link is connected to the imaging sensor, and the other end is connected to the housing of the head-mounted display; the telescopic link is configured to perform the moving operation; and/or,
  • the position adjustment module may include: a rotatable platform; the imaging sensor is placed on the rotatable platform; and the rotatable platform is configured to perform the rotation operation.
  • the head-mounted display-based iris image collection device further includes: a line of sight determination module, a data transmission module, and a correction module;
  • the gaze determination module is configured to determine the gaze position, detected by the eye tracking sensor, at which the user's gaze falls on the display screen;
  • the data transmission module is configured to transmit the second iris image generated by the imaging sensor and the determined gaze position to the correction chip;
  • the correction module is configured to perform distortion correction on the second iris image according to the gaze position through the trained distortion correction model stored in the correction chip to obtain the third iris image output by the distortion correction model.
  • An electronic device includes a memory and one or more processors, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the one or more processors, they cause the one or more processors to execute the steps of any one of the above head-mounted display-based iris image collection methods.
  • One or more non-volatile storage media storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, they cause the one or more processors to execute the steps of any one of the above head-mounted display-based iris image collection methods.
  • Figure 1 is a schematic diagram of an application scenario of an iris image collection method based on a head-mounted display provided by one or more embodiments of the present disclosure
  • Figure 2A is a schematic structural diagram of a head-mounted display provided by one or more embodiments of the present disclosure
  • Figure 2B is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure.
  • Figure 2C is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure.
  • Figure 3 is a schematic flowchart of an iris image collection method based on a head-mounted display provided by one or more embodiments of the present disclosure
  • Figure 4 is a schematic flowchart of an iris image collection method based on a head-mounted display provided by one or more embodiments of the present disclosure
  • Figure 5 is a schematic flow chart of an iris image collection method based on a head-mounted display provided by one or more embodiments of the present disclosure
  • Figure 6 is a schematic flowchart of an iris image collection method based on a head-mounted display provided by one or more embodiments of the present disclosure
  • Figure 7 is a schematic structural diagram of an iris image collection device based on a head-mounted display in one or more embodiments of the present disclosure
  • Figure 8 is a schematic structural diagram of an electronic device in one or more embodiments of the present disclosure.
  • first, second, etc. in the description and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects.
  • first camera and the second camera are used to distinguish different cameras, rather than to describe a specific order of the cameras.
  • the words "exemplary" or "such as" are used to mean an example, instance, or illustration. Any embodiment or design described in this disclosure as "exemplary" or "such as" is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present relevant concepts in a concrete manner. Furthermore, in the description of the present disclosure, unless otherwise stated, "plurality" means two or more.
  • the present disclosure provides an iris image collection method based on a head-mounted display, which can be configured in an application environment as shown in Figure 1 .
  • a first operating environment is given, which may include a head-mounted display 101 , a terminal device 102 and a server 103 .
  • the user may wear the head mounted display 101 so that the head mounted display 101 acquires data.
  • the head-mounted display 101 does not itself have data processing capabilities; after acquiring data, it can exchange the data with the terminal device 102 through short-range communication technology.
  • the terminal device 102 may include electronic devices such as smart TVs, three-dimensional visual display devices, large-scale projection systems, multimedia playback devices, mobile phones, tablet computers, game consoles, and PCs (Personal Computers).
  • the terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data.
  • the server 103 is configured to provide background services for the terminal device 102 so that the terminal device 102 can process the received data transmitted by the head-mounted display 101, thereby completing the server-side portion of the iris image collection method provided by the present disclosure.
  • the server 103 can also generate corresponding control instructions according to the data processing results.
  • the control instructions can be sent to the terminal device 102 and/or the head-mounted display 101 respectively to control the terminal device 102 and/or the head-mounted display 101.
  • server 103 may be a backend server.
  • the server 103 may be one server, a server cluster composed of multiple servers, or a cloud computing service center.
  • the server 103 provides background services for multiple terminals 102 at the same time.
  • a second operating environment is given, which may include a head-mounted display 101 and a terminal device 102 .
  • the head-mounted display 101 may include various types of devices as stated above.
  • the head-mounted display 101 does not itself have data processing capabilities; after acquiring data, it can exchange the data with the terminal device 102 through short-range communication technology.
  • the terminal device 102 may include various types of electronic devices stated above.
  • the terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data to complete the iris image collection method provided by the present disclosure.
  • the terminal device 102 can also generate corresponding control instructions according to the data processing results, and the control instructions can be sent to the head-mounted display 101 to control the head-mounted display 101.
  • a third operating environment is given, which only includes the head-mounted display 101 .
  • the head-mounted display 101 not only has data acquisition capabilities, but also has data processing capabilities, that is, it can call the program code through the processor in the head-mounted display 101 to realize the functions of the iris image acquisition method provided by the present disclosure.
  • the program code can be stored in a computer storage medium. It can be seen that the head-mounted display at least includes a processor and a storage medium.
  • FIG. 2A is a schematic structural diagram of a head-mounted display provided by one or more embodiments of the present disclosure.
  • the head-mounted display 20 may include a display screen 22 , a light source 23 , and a camera 24 .
  • the display screen 22 may be a light-emitting diode (LED) screen or a liquid crystal display (LCD) screen and is configured to output image data.
  • the light source 23 can be configured to emit light, and can be a visible light source or an infrared light source, etc., and is not specifically limited.
  • FIG. 2B is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure.
  • the head mounted display shown in FIG. 2B may be a side view of the head mounted display shown in FIG. 2A.
  • the head-mounted display 20 may include a lens module 21 , a display screen 22 , a light source 23 , and a camera 24 .
  • the lens module 21 can be arranged over the display screen 22 and can be configured to refract light so that the image on the display screen 22 is brought closer to the retina, allowing the human eye to easily see a display screen 22 positioned almost directly in front of the eyes;
  • the lens module 21 also has a light-gathering function, concentrating the light inside the head-mounted display so that more light can enter the camera device 24.
  • the lens module 21 can be a Pancake optical module, which is composed of two or more lenses; or the lens module 21 can be a Fresnel optical module, which is composed of a single lens.
  • The number of lens modules 21, display screens 22, and light sources 23 in the head-mounted display 20 disclosed in this disclosure is not limited.
  • the head-mounted display 20 may include a display screen 22 and a lens module 21, around which a plurality of light sources 23 may be disposed.
  • the head-mounted display 20 may also include two display screens 22 corresponding to the left eye and the right eye respectively, and a lens module 21 corresponding to each display screen 22.
  • Multiple light sources 23 may be provided around each lens module 21 .
  • the camera device 24 may be configured as a device for collecting image data, and may at least include: a camera hole 241, an imaging sensor 242, a position adjustment module 243 of the imaging sensor, and a base 244.
  • the camera hole 241 may be disposed inside the head-mounted display 20 , that is, facing the wearer of the head-mounted display 20 .
  • the light generated by the light source 23 can be incident through the camera hole 241 .
  • the display screen 22 may be a punch-hole screen, and the punch-hole position of the display screen 22 may serve as the setting position of the camera hole 241.
  • the camera hole 241 may also be disposed above or below the middle position of the two lens modules 21 .
  • the imaging sensor 242 can be any photosensitive device, such as a charge-coupled device (CCD) image sensor or a complementary metal-oxide-semiconductor (CMOS) image sensor.
  • the position adjustment module 243 may include: a telescopic link 243a and/or a rotatable platform 243b.
  • One end of the retractable link 243a is connected to the imaging sensor 242, and the other end is connected to the housing of the head-mounted display 20.
  • When the telescopic link 243a lengthens, the distance between the imaging sensor 242 and the camera hole 241 shortens and the optical zoom factor decreases; when the telescopic link 243a shortens, the distance between the imaging sensor 242 and the camera hole 241 increases and the optical zoom factor increases.
  • the rotatable platform 243b can be configured as a platform on which the imaging sensor 242 is placed.
  • the rotatable platform 243b can rotate clockwise or counterclockwise, thereby driving the imaging sensor 242 placed on the rotatable platform 243b to rotate.
  • the rotation of the imaging sensor 242 can change the angle at which light enters the imaging sensor 242, thereby changing the area of the imaging sensor 242 that receives light.
  • When the imaging sensor 242 is perpendicular to the incident light, the area of the imaging sensor 242 that receives the light is the largest.
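The perpendicularity condition can be made concrete with a projected-area sketch: treating the light-receiving area as the sensor area projected onto the plane normal to the ray gives a sin() dependence on the angle between the ray and the sensor plane. This sin() model is an illustrative simplification, not a formula given in the source.

```python
import math

def effective_receiving_area(sensor_area: float, incident_angle_deg: float) -> float:
    """Projected area of the sensor as seen by the incoming light.

    The incident angle is measured between the ray and the sensor plane,
    so the projection scales with sin(angle) and peaks when the light is
    perpendicular to the plane (angle = 90 degrees).
    """
    return sensor_area * math.sin(math.radians(incident_angle_deg))
```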
  • the position adjustment module 243 includes a telescopic link 243a and a rotatable platform 243b, then as shown in FIG. 2B, one end of the telescopic link 243a is connected to the rotatable platform 243b.
  • the telescopic link 243a causes the position of the rotatable platform 243b to change, thereby changing the distance between the imaging sensor 242 placed on the rotatable platform 243b and the camera hole 241.
  • the retractable link 243a is an optional implementation for adjusting the distance between the imaging sensor 242 and the camera hole 241.
  • In other possible embodiments, the distance between the imaging sensor 242 and the camera hole 241 can also be adjusted through other components.
  • the rotatable platform 243b is an optional implementation for rotating the imaging sensor 242. In other possible embodiments, the imaging sensor 242 may also be rotated through other components.
  • the base 244 is configured to carry the position adjustment module 243 and the imaging sensor 242 .
  • FIG. 2C is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure.
  • the camera device 24 may further include a reflector 245 , and the reflector 245 may be disposed between the camera hole 241 and the imaging sensor 242 .
  • the reflector 245 can be configured to change the optical path of the light incident from the camera hole 241, and reflect the light to the imaging sensor 242.
  • the arrangement of the reflector 245 allows the camera hole 241 and the imaging sensor 242 to be arranged not on the same straight line, which is beneficial to the internal space design of the head-mounted display 20 .
  • Figure 3 is a schematic flow chart of an iris image collection method based on a head-mounted display provided by one or more embodiments of the present disclosure. This method can be applied to any electronic device such as terminal equipment, service equipment, or head-mounted display body shown in the aforementioned operating environment. Please refer to Figure 3.
  • the method shown in Figure 3 may include the following steps:
  • the light emitted by the light source can be reflected by the iris of the user's eyeball, and the light reflected by the iris enters the interior of the casing of the head-mounted display through the camera hole and reaches the imaging sensor.
  • the imaging sensor can convert the light reflected by the iris into a digital signal to obtain the first iris image.
  • the head mounted display may further include an eye tracking sensor, which may be configured as a sensor that detects the user's gaze.
  • an eye tracking sensor which may be configured as a sensor that detects the user's gaze.
  • Specific implementations of step 310 may include: when it is detected that the user's line of sight falls into the display screen of the head-mounted display, acquiring a first iris image generated by the imaging sensor based on the incident light.
  • Control the position adjustment module to perform a movement operation and/or a rotation operation according to the first iris image.
  • the iris area occupied by the iris in the first iris image can be compared with a standard reference area, and the position adjustment module is controlled to perform a movement operation according to the comparison result, adjusting the distance from the imaging sensor to the camera hole so that the imaging size of the iris in the image approaches consistency with the reference area.
  • the reference area may be the image area corresponding to when the imaging of the complete iris in the image is maximized, and the iris area may be compared with the reference area by regional parameters such as area or location.
  • the angle at which light enters the imaging sensor can be calculated based on the first iris image, and the position adjustment module is controlled to perform a rotation operation according to the calculated incident angle to change the angle at which light enters the imaging sensor.
  • By rotating the imaging sensor, the area of the imaging sensor that receives light is adjusted so that the imaging sensor becomes perpendicular to the incident light. The larger the light-receiving area of the imaging sensor, the brighter the image obtained after imaging.
  • the position adjustment module can also be controlled to perform a movement operation and a rotation operation simultaneously according to the comparison result between the iris area in the first iris image and the reference area, so as to simultaneously adjust the imaging size and image brightness of the iris in the image.
  • a first iris image can be captured first, and the first iris image can be used to adjust the distance between the imaging sensor and the camera hole and/or the angle at which light enters the imaging sensor.
  • After adjustment, the distance between the imaging sensor and the camera hole changes, so the imaging size of the iris in the second iris image is closer to the imaging size at which imaging is maximized, and the second iris image may include more iris information.
  • After adjustment, the light-receiving area of the imaging sensor is larger, and the brightness of the second iris image is higher than that of the first iris image. Therefore, the image quality of the second iris image is higher than that of the first iris image, and using the second iris image for identification is beneficial to obtaining a more accurate identification result.
  • the position adjustment module can perform movement operations.
  • Figure 4 is a schematic flowchart of an iris image collection method based on a head-mounted display provided by one or more embodiments of the present disclosure. This method can be applied to terminal devices and services shown in the aforementioned operating environment. Any electronic device such as a device or a head-mounted display body. Please refer to Figure 4. The method shown in Figure 4 may include the following steps:
  • the iris area may be located in the first iris image through a target detection method based on iris features. After comparing the area of the iris area with the area of the reference area, if the area of the iris area is not equal to the area of the reference area, step 430 may be continued.
  • the area difference between the iris area and the reference area may be configured to at least determine the difference in the moving direction when the position adjustment module performs the moving operation.
  • the moving direction of the position adjustment module to perform the moving operation is a direction that increases the distance between the imaging sensor and the camera hole.
  • the moving direction of the position adjustment module to perform the moving operation is a direction that shortens the distance between the imaging sensor and the camera hole.
  • the position adjustment module may include a telescopic link.
  • When the moving direction is the direction of shortening the distance, the telescopic link is controlled to lengthen; when the moving direction is the direction of increasing the distance, the telescopic link is controlled to shorten.
  • the moving distance of each moving operation may be a preset distance value.
  • each movement operation can be set to correspond to a distance change of 1 mm or 2 mm of the telescopic link.
  • the second iris image generated by the imaging sensor can be used as a new first iris image, and the aforementioned step 420 can then be repeated until the area of the iris area in the first iris image is determined to be equal to the area of the reference area. That is, the position adjustment module can be controlled to move a preset distance value each time, adjusting step by step until a second iris image that meets the requirements is collected.
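The stepwise adjustment loop can be sketched as below. The callables `capture` and `move`, the step size, and the iteration cap are hypothetical stand-ins for the device interfaces; only the repeat-until-areas-match control flow comes from the source.

```python
def adjust_until_matched(capture, move, iris_area, reference_area,
                         step_mm=1.0, max_steps=50):
    """Nudge the sensor by a fixed step until the iris area in a freshly
    captured image equals the reference area.

    `capture()` returns the iris area measured in a new image (the new
    "first iris image"); `move(direction, step_mm)` drives the position
    adjustment module by one preset step.
    """
    area = iris_area
    for _ in range(max_steps):
        if area == reference_area:
            break  # imaging maximized; current image is the second iris image
        direction = "increase" if area < reference_area else "shorten"
        move(direction, step_mm)
        area = capture()
    return area
```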
  • the area difference between the iris area and the reference area may also be configured to determine the difference in moving distance when the position adjustment module performs a moving operation. Among them, the area difference between the iris area and the reference area can be calculated, and the moving distance can be calculated using the area difference.
  • the position adjustment module can be controlled to move the calculated movement distance according to the above-mentioned movement direction. That is, the position adjustment module can be controlled to move the calculated movement distance in one go, and the second iris image that meets the requirements can be directly collected through one adjustment.
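The one-shot variant can be sketched with a simple proportional mapping from area difference to moving distance. The disclosure only says the distance is "calculated using the area difference" without giving a formula, so the linear law and the gain `mm_per_unit` are illustrative assumptions.

```python
def move_distance_from_area(iris_area: float, reference_area: float,
                            mm_per_unit: float = 0.001) -> float:
    """Estimate a single moving distance (mm) from the area difference.

    A larger mismatch between iris area and reference area yields a
    larger one-shot movement; equal areas yield no movement.
    """
    return abs(reference_area - iris_area) * mm_per_unit
```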
  • When the area of the iris area is smaller than that of the reference area, the imaging of the iris in the first iris image is too small, and it is necessary to increase the optical zoom factor of the camera device by increasing the distance between the imaging sensor and the camera hole. Therefore, after increasing the distance between the imaging sensor and the camera hole, the image of the iris in the second iris image generated by the imaging sensor becomes larger.
  • When the area of the iris area is larger than that of the reference area, the imaging of the iris in the first iris image is too large and may be incomplete; it is necessary to reduce the optical zoom factor of the camera device by shortening the distance between the imaging sensor and the camera hole. Therefore, after shortening the distance between the imaging sensor and the camera hole, the image of the iris in the second iris image generated by the imaging sensor becomes smaller.
  • the difference in the area of the iris area and the reference area in the first iris image can be used to control the position adjustment module to perform a moving operation to adjust the optical zoom factor of the camera device and adjust the imaging size of the iris to collect images. to a second iris image that maximizes iris imaging.
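The area-comparison logic described above can be sketched as a small decision helper. This is a minimal sketch, not the patent's implementation: the regions are assumed to be given as pixel areas, and the `mm_per_area_ratio` calibration constant mapping relative area difference to link travel is hypothetical.

```python
def plan_move(iris_area, reference_area, mm_per_area_ratio=2.0):
    """Decide the moving direction and distance of the position adjustment
    module from the area difference (Figure 4, steps 430-440).

    Returns ("increase" | "shorten" | "none", distance_mm). The
    mm_per_area_ratio constant is a hypothetical calibration factor.
    """
    if iris_area == reference_area:
        return ("none", 0.0)       # imaging already maximized
    if iris_area < reference_area:
        direction = "increase"     # iris too small: increase sensor-to-hole distance
    else:
        direction = "shorten"      # iris too large/incomplete: shorten the distance
    # one-shot variant: derive the travel from the relative area difference
    distance = abs(reference_area - iris_area) / reference_area * mm_per_area_ratio
    return (direction, distance)
```

The single-shot distance computation corresponds to the "move the calculated distance in one go" variant; dropping it and returning a fixed step would give the successive-adjustment variant.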
  • the position adjustment module can perform a rotation operation.
  • Figure 5 is a schematic flowchart of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure. The method can be applied to any electronic device shown in the aforementioned operating environments, such as a terminal device, a server, or the head-mounted display itself. The method shown in Figure 5 may include the following steps:
  • the first iris image can be converted into a grayscale image, and the grayscale mean value of each pixel in the grayscale image can be calculated.
  • the grayscale mean of the grayscale image may be used to indicate the image brightness of the first iris image: the higher the grayscale mean, the higher the image brightness.
  • the incident angle of the light may be calculated using the image brightness of the first iris image.
  • the incident angle may range over [0°, 180°] and may refer to the angle, in a target direction, between the light and the plane where the imaging sensor is located.
  • the target direction may be a direction from the inside of the head-mounted display to the outside of the head-mounted display.
  • Exemplarily, for the head-mounted display shown in Figure 2B, the target direction may be perpendicular to the horizontal plane and pointing upward; for the head-mounted display shown in Figure 2C, the target direction may be parallel to the horizontal plane and pointing to the right.
  • if the incident angle of the light is not a right angle, the rotation direction of the rotation operation is determined according to the incident angle.
  • if the incident angle of the light is an obtuse angle, the rotation direction of the rotation operation can be determined to be counterclockwise; or,
  • if the incident angle of the light is an acute angle, the rotation direction of the rotation operation is determined to be clockwise.
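The brightness measurement and the rotation decision above can be sketched as two small helpers. This is a hedged sketch: the grayscale mean uses standard luma weights as the brightness proxy, while the device-specific mapping from brightness to incident angle is not specified in the text and is therefore not modeled here.

```python
import numpy as np

def grayscale_mean(rgb):
    """Mean gray level of an H x W x 3 RGB image; the brightness proxy
    from which the incident angle is estimated (step 520). Uses the
    common Rec. 601 luma weights."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return float(gray.mean())

def rotation_direction(incident_angle_deg):
    """Rotation direction of the rotatable platform (step 530): the
    incident angle is measured from the sensor plane in [0, 180] degrees,
    with 90 degrees meaning perpendicular incidence."""
    if incident_angle_deg == 90.0:
        return "none"                  # light already perpendicular to the sensor
    if incident_angle_deg > 90.0:      # obtuse angle
        return "counterclockwise"
    return "clockwise"                 # acute angle
```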
  • the position adjustment module may include a rotatable platform.
  • the rotation direction is clockwise, the rotatable platform is controlled to rotate in the clockwise direction; when the rotation direction is counterclockwise, the rotatable platform is controlled to rotate in the counterclockwise direction.
  • the imaging sensor can be rotated through the position adjustment module to change the angle at which light enters the imaging sensor, which is beneficial to improving the image brightness of the second iris image generated after adjustment.
  • For example, if the lens module of the head-mounted display is a Pancake optical module, then, owing to the characteristics of Pancake optics, only about a quarter of the light's brightness remains after the light passes through the module. The imaging sensor can therefore be rotated by the method shown in Figure 5 to adjust the area over which the imaging sensor receives light, thereby improving the brightness of the generated image.
  • the rotation angle for each rotation operation may be a preset angle value.
  • each rotation operation can be set to correspond to an angle change of 3°, 5°, or 10° of the rotatable platform.
  • the second iris image generated by the imaging sensor can be used as a new first iris image, and the aforementioned step 520 can then be performed again until the incident angle is determined to be perpendicular. That is, the position adjustment module can be controlled to rotate by a preset angle value each time, adjusting successively until a second iris image that meets the requirements is collected.
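The successive-adjustment strategy (rotate by a preset step, re-capture, repeat until perpendicular) can be sketched as a loop. The `capture`, `rotate`, and `estimate_angle` callables are hypothetical stand-ins for the hardware and brightness-based angle estimation; the toy simulation at the end exists only to make the loop runnable.

```python
def adjust_until_perpendicular(capture, rotate, estimate_angle,
                               step_deg=5.0, max_iters=72):
    """Rotate the platform by a preset step, re-capture, and repeat until
    the estimated incident angle is (close to) a right angle; return the
    final iris image (steps 540-550 looping back to step 520)."""
    image = capture()
    for _ in range(max_iters):
        angle = estimate_angle(image)
        if abs(angle - 90.0) < step_deg / 2:   # treat as perpendicular
            return image
        # acute angle -> rotate one way, obtuse -> the other
        rotate(step_deg if angle < 90.0 else -step_deg)
        image = capture()                      # new "first" image for the next pass
    return image

# Toy simulation standing in for the hypothetical hardware interfaces:
_state = {"angle": 60.0}
def _capture(): return _state["angle"]         # the "image" is just its own angle
def _rotate(delta): _state["angle"] += delta
second_image = adjust_until_perpendicular(_capture, _rotate, lambda img: img)
```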
  • the camera hole of the head-mounted display camera device is often set above, below, or on one side of the display screen, rather than directly in the center of the display screen. Therefore, when the user looks at the display screen, the camera hole may not be located directly in front of the human eye, which may easily lead to distortion of the captured iris image, that is, the inner and outer circles of the iris in the iris image are not concentric.
  • the camera device 24 of the head-mounted display as shown in FIG. 2A or 2B may further include a correction chip, and the correction chip may be another integrated circuit module independent of the imaging sensor.
  • the correction chip may also be a part of the imaging sensor, and the details are not limited.
  • the distortion correction model can be stored in the correction chip, and the distortion correction model is obtained after training using sample data.
  • the sample data includes multiple sample iris images.
  • the multiple sample iris images are respectively collected when the user looks at different positions of the display screen of the head-mounted display.
  • the sample data also includes distortion-free images corresponding to each sample iris image.
  • That is to say, the sample data may include multiple iris image pairs; each iris image pair corresponds to one position on the display screen that the user gazes at, and each pair may include one sample iris image together with the distortion-free supervised iris image corresponding to it.
  • the training process of the distortion correction model may include: inputting, into the distortion correction model, the sample iris image corresponding to each position the user gazes at on the display screen together with the position information of the user's gaze on the display screen, and obtaining the output result of the distortion correction model.
  • a loss is then determined between the output result and the corresponding supervised iris image based on a preset loss function, and the distortion correction model is updated by feedback based on the loss until training ends, thereby obtaining the trained distortion correction model.
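The training procedure above is a standard supervised loop. The sketch below loudly substitutes a toy per-pixel affine map (`y = w*x + b`) and a mean-squared-error loss for the real distortion correction network, purely to make the loss-and-update cycle concrete; the pairing of sample images and gaze positions with distortion-free supervised targets mirrors the text, and the gaze input is carried along but unused by the toy model.

```python
import numpy as np

def train_correction_model(samples, gazes, targets, lr=0.1, epochs=200):
    """Toy stand-in for the distortion correction model: fit a global
    affine map y = w * x + b by gradient descent on a mean-squared loss
    between the model output and the distortion-free supervised image.
    A real model would also condition on the gaze position; here the
    gaze list only mirrors the training inputs described in the text."""
    w, b = 1.0, 0.0
    for _ in range(epochs):
        # full-batch gradients of 0.5 * MSE over all (sample, target) pairs
        grad_w = np.mean([((w * x + b - y) * x).mean()
                          for x, y in zip(samples, targets)])
        grad_b = np.mean([(w * x + b - y).mean()
                          for x, y in zip(samples, targets)])
        w -= lr * grad_w               # feedback update based on the loss
        b -= lr * grad_b
    return w, b
```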
  • the head-mounted display may further include an eye-tracking sensor configured to detect the user's line of sight.
  • Figure 6 is a schematic flow chart of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure. The method can be applied to any electronic device such as terminal equipment, service equipment, or head-mounted display body shown in the aforementioned operating environment. Please refer to Figure 6.
  • the method shown in Figure 6 may include the following steps:
  • Control the position adjustment module to perform a movement operation and/or a rotation operation according to the first iris image.
  • the user's line of sight can be detected through an eye tracking sensor, and the gaze position of the user's line of sight on the display screen can be further calculated.
  • what is stored in the correction chip may be a trained distortion correction model.
  • the distortion correction model may use the distortion correction ability learned in the training stage to perform distortion correction on the second iris image, thereby outputting a third iris image without distortion.
  • the position of the imaging sensor can be adjusted first according to the first iris image to adjust the imaging size of the iris in the second iris image and the image brightness of the second iris image.
  • the trained distortion correction model is then used to perform distortion correction on the second iris image according to the gaze position of the user's line of sight on the display screen, further obtaining a third iris image without distortion. A third iris image with moderate imaging size, high image brightness, and no distortion is conducive to extracting more accurate iris information from it.
  • the sample iris images used when training the distortion correction model may be collected by a head-mounted display using a Pancake optical module as a lens module.
  • the image brightness of the supervised iris image in the sample data and the corresponding sample iris image may be different, and the image brightness of the supervised iris image may be higher than the image brightness of the sample iris image. Therefore, the trained distortion correction model can not only adjust image distortion, but also improve image brightness.
  • the distortion correction model can also perform a brightness enhancement operation on the second iris image, and the image brightness of the third iris image output by the distortion correction model may be higher than the image brightness of the second iris image.
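Putting the pieces together, the capture-adjust-capture-correct flow described above can be outlined as follows. Every argument here is a hypothetical placeholder interface (the text does not define an API), and the stub sensor exists only to make the sketch runnable.

```python
class _StubSensor:
    """Hypothetical sensor: returns a fresh frame on each capture."""
    def __init__(self):
        self.frames = iter(["first_image", "second_image"])
    def capture(self):
        return next(self.frames)

def acquire_corrected_iris_image(sensor, adjust, gaze_position, correct):
    """Sketch of the Figure 6 flow: capture a first iris image, adjust the
    sensor from it, capture the second image, then apply gaze-conditioned
    distortion correction to obtain the third image."""
    first = sensor.capture()          # step 620: first iris image
    adjust(first)                     # step 630: move and/or rotate the sensor
    second = sensor.capture()         # step 640: second iris image
    gaze = gaze_position()            # step 650: gaze position on the screen
    return correct(second, gaze)      # steps 660-670: corrected third image

third = acquire_corrected_iris_image(
    _StubSensor(),
    adjust=lambda img: None,                         # placeholder adjustment
    gaze_position=lambda: (120, 80),                 # placeholder gaze (pixels)
    correct=lambda img, gaze: f"corrected:{img}@{gaze}",
)
```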
  • Although the steps in the flowcharts of Figures 3 to 6 are shown in the sequence indicated by the arrows, these steps are not necessarily executed in that order. Unless otherwise specified herein, there is no strict restriction on the order in which these steps are executed, and they may be executed in other orders. Moreover, at least some of the steps in Figures 3 to 6 may include multiple sub-steps or stages. These sub-steps or stages are not necessarily executed at the same time and may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
  • FIG. 7 is a schematic structural diagram of an iris image collection device based on a head-mounted display in one or more embodiments of the present disclosure.
  • the iris image acquisition device can be applied to any electronic equipment such as terminal equipment, service equipment, or a head-mounted display body shown in the aforementioned operating environment.
  • the iris image collection device 700 may include: an imaging module 710 and a motion control module 720 .
  • The imaging module 710 is configured to acquire the first iris image generated by the imaging sensor based on the incident light.
  • the motion control module 720 is configured to control the position adjustment module according to the first iris image to perform a movement operation and/or a rotation operation; the movement operation is configured to adjust the distance from the imaging sensor to the camera hole so as to maximize the imaging of the iris in the image; the rotation operation is configured to rotate the imaging sensor to change the angle at which light enters the imaging sensor;
  • the imaging module 710 is also configured as a module to acquire a second iris image generated by the adjusted imaging sensor based on the incident light.
  • the motion control module 720 may include an iris recognition unit, a movement determination unit, and a movement execution unit.
  • the iris recognition unit may be configured to determine the iris region in the first iris image.
  • the movement determination unit may be configured to compare the area of the iris region with the area of a reference region, the reference region being the image region corresponding to the complete iris when its imaging in the image is maximized, and to determine, according to the area difference between the iris region and the reference region, the moving direction in which the position adjustment module performs the movement operation;
  • the movement execution unit may be configured to control the position adjustment module to perform the movement operation in the moving direction.
  • the position adjustment module may include a telescopic link.
  • the movement execution unit may be configured to control the telescopic link to perform the movement operation in the moving direction.
  • the movement determination unit can also be configured to determine, when the area of the iris region is smaller than that of the reference region, that the moving direction of the movement operation is the direction that increases the distance; or,
  • the movement determination unit can be configured to determine, when the area of the iris region is larger than that of the reference region, that the moving direction of the movement operation is the direction that shortens the distance.
  • the motion control module 720 may include an incident angle calculation unit, a rotation determination unit, and a rotation execution unit.
  • the incident angle calculation unit can be configured to calculate, based on the first iris image, the incident angle at which light enters the imaging sensor; the incident angle is the angle, in a target direction, between the light and the plane where the imaging sensor is located; the target direction is the direction from the inside of the head-mounted display to the outside of the head-mounted display;
  • the rotation determination unit can be configured to determine, after judging that the incident angle is not a right angle, the rotation direction of the rotation operation based on the incident angle;
  • the rotation execution unit can be configured as a unit that controls the position adjustment module to perform a rotation operation according to the rotation direction.
  • the position adjustment module may include: a rotatable platform.
  • the rotation execution unit can be configured as a unit that controls the rotatable platform to perform a rotation operation according to the rotation direction.
  • the rotation determination unit can also be configured to determine the rotation direction of the rotation operation as counterclockwise when the incident angle is an obtuse angle, or as clockwise when the incident angle is an acute angle.
  • the head-mounted display-based iris image collection device 700 may further include: a line of sight determination module, a data transmission module, and a correction module.
  • the gaze determination module can be configured as a module for determining the gaze position of the user's gaze on the display screen detected by the eye tracking sensor;
  • the data transmission module can be configured to transmit the second iris image generated by the imaging sensor and the determined gaze position to the correction chip;
  • the correction module can be configured to perform distortion correction on the second iris image according to the gaze position through the trained distortion correction model stored in the correction chip, and obtain the third iris image output by the distortion correction model.
  • the first iris image can be captured first, and the first iris image can be used to adjust the distance between the imaging sensor and the camera hole and/or the angle at which light enters the imaging sensor.
  • the imaging size of the iris in the second iris image is closer to the maximized imaging, and the image brightness is higher. Therefore, the image quality of the second iris image is higher than that of the first iris image, and using the second iris image for identity identification is beneficial to obtaining a more accurate identity identification result.
  • Each module in the above-mentioned head-mounted display-based iris image collection device can be implemented in whole or in part by software, hardware, and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • Figure 8 is a schematic structural diagram of an electronic device in one or more embodiments of the present disclosure.
  • the electronic device can be any of the terminal device, the server, or the head-mounted display itself shown in the aforementioned operating environments. As shown in Figure 8, the electronic device 800 may include:
  • a memory 810 storing executable program code;
  • processor 820 coupled to memory 810;
  • the processor 820 calls the executable program code stored in the memory 810 to execute any head-mounted display-based iris image acquisition method disclosed in this disclosure.
  • Figure 8 is only a block diagram of a partial structure related to the disclosed solution and does not constitute a limitation on the computer device to which the disclosed solution is applied. A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • the iris image collection device based on a head-mounted display can be implemented in the form of a computer program, and the computer program can run on the computer device as shown in FIG. 8 .
  • Each program module that constitutes the head-mounted display-based iris image collection device can be stored in the memory of the computer device, such as the imaging module 710 and the motion control module 720 shown in FIG. 7 .
  • the computer program composed of each program module causes the processor to execute the steps in the head-mounted display-based iris image acquisition method of various embodiments of the present disclosure described in this specification.
  • the computer device shown in FIG. 8 may perform the step of acquiring the first iris image generated by the imaging sensor based on the incident light through the imaging module 710 in the head-mounted display-based iris image acquisition device shown in FIG. 7 .
  • the computer device may perform the step of controlling the position adjustment module to perform a movement operation and/or a rotation operation according to the first iris image through the motion control module 720 .
  • the computer device may perform, through the imaging module 710, the step of acquiring a second iris image generated by the adjusted imaging sensor based on the incident light.
  • In one embodiment, a computer device includes a memory and one or more processors, the memory being configured to store computer-readable instructions; when the computer-readable instructions are executed by the processors, they cause the one or more processors to perform the steps of the head-mounted display-based iris image acquisition method described in the above method embodiments.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
  • the iris image acquisition method based on a head-mounted display can effectively use the first iris image to control the position adjustment module to perform a movement operation and/or a rotation operation, adjusting the distance between the imaging sensor and the camera hole and/or the angle at which light enters the imaging sensor. The adjusted imaging sensor is used to generate a second iris image whose image quality is higher than that of the first iris image; using the second iris image for identity authentication helps obtain a more accurate identification result, so the method has strong industrial applicability.


Abstract

Embodiments of the present disclosure provide an iris image acquisition method based on a head-mounted display and related products. The head-mounted display includes a light source and a camera device; the camera device includes a camera hole, an imaging sensor, and a position adjustment module for the imaging sensor. The camera hole is arranged on the side of the head-mounted display that contacts the wearer's head, and light emitted by the light source enters through the camera hole. The method includes: acquiring a first iris image generated by the imaging sensor based on the incident light; controlling the position adjustment module according to the first iris image to perform a movement operation and/or a rotation operation, where the movement operation adjusts the distance from the imaging sensor to the camera hole so as to maximize the imaging of the iris in the image, and the rotation operation rotates the imaging sensor to change the angle at which light enters the imaging sensor; and acquiring a second iris image generated by the adjusted imaging sensor based on the incident light. Implementing the embodiments of the present disclosure can improve the image quality of the collected iris images.

Description

Iris image acquisition method based on a head-mounted display, and related products
Cross-reference to related applications
The present disclosure claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on August 30, 2022, with application number 202211056340.7 and invention title "Iris image acquisition method based on a head-mounted display and related products", the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to an iris image acquisition method based on a head-mounted display and related products.
Background
The iris is internal tissue of the human body, located in the region of the eyeball between the sclera and the pupil. The iris has inner and outer concentric circles and fairly distinct texture features. Because the texture features of the iris are genetically determined, identity authentication using the iris has high accuracy.
In virtual reality (VR) scenarios, VR devices such as head-mounted displays may have a built-in camera configured to collect a user's iris image and use the collected image to verify whether the identity of the user of the VR device is legitimate. In practice, however, it has been found that the accuracy of user identification is closely tied to the quality of the iris image: a low-quality iris image greatly reduces the accuracy of identity authentication.
Summary
(1) Technical problem to be solved
In the prior art, in the process of collecting a user's iris image and using the collected image to verify the user's identity, a low-quality iris image greatly reduces the accuracy of identity authentication.
(2) Technical solution
According to various embodiments of the present disclosure, an iris image acquisition method based on a head-mounted display and related products are provided.
An iris image acquisition method based on a head-mounted display, the head-mounted display including a light source and a camera device, the camera device including a camera hole, an imaging sensor, and a position adjustment module for the imaging sensor; the camera hole faces the wearer of the head-mounted display; light emitted by the light source enters through the camera hole; and the method includes:
acquiring a first iris image generated by the imaging sensor based on the incident light;
controlling the position adjustment module according to the first iris image to perform a movement operation and/or a rotation operation; the movement operation is configured to adjust the distance from the imaging sensor to the camera hole so as to maximize the imaging of the iris in the image; the rotation operation is configured to rotate the imaging sensor so as to change the angle at which light enters the imaging sensor; and
acquiring a second iris image generated by the adjusted imaging sensor based on the incident light.
As an optional implementation of the embodiments of the present disclosure, controlling the position adjustment module according to the first iris image to perform a movement operation includes:
determining the iris region in the first iris image, and comparing the area of the iris region with the area of a reference region; the reference region is the image region corresponding to the complete iris when its imaging in the image is maximized;
determining, according to the area difference between the iris region and the reference region, the moving direction in which the position adjustment module performs the movement operation; and
controlling the position adjustment module to perform the movement operation in the moving direction.
As an optional implementation of the embodiments of the present disclosure, determining, according to the area difference between the iris region and the reference region, the moving direction in which the position adjustment module performs the movement operation includes:
when the area of the iris region is smaller than the area of the reference region, determining that the moving direction in which the position adjustment module performs the movement operation is the direction that increases the distance; or,
when the area of the iris region is larger than the area of the reference region, determining that the moving direction in which the position adjustment module performs the movement operation is the direction that shortens the distance.
As an optional implementation of the embodiments of the present disclosure, controlling the position adjustment module according to the first iris image to perform a rotation operation includes:
calculating, according to the first iris image, the incident angle at which light enters the imaging sensor; the incident angle is the angle, in a target direction, between the light and the plane where the imaging sensor is located; the target direction is the direction from the inside of the head-mounted display to the outside of the head-mounted display;
if the incident angle is not a right angle, determining the rotation direction of the rotation operation according to the incident angle; and
controlling the position adjustment module to perform the rotation operation in the rotation direction.
As an optional implementation of the embodiments of the present disclosure, determining the rotation direction of the rotation operation according to the incident angle includes:
if the incident angle is an obtuse angle, determining that the rotation direction of the rotation operation is counterclockwise; or,
if the incident angle is an acute angle, determining that the rotation direction of the rotation operation is clockwise.
As an optional implementation of the embodiments of the present disclosure, the position adjustment module includes a telescopic link; one end of the telescopic link is connected to the imaging sensor, and the other end is connected to the housing of the head-mounted display; the telescopic link is configured to perform the movement operation; and/or,
the position adjustment module may include a rotatable platform; the imaging sensor is placed on the rotatable platform; the rotatable platform is configured to perform the rotation operation.
As an optional implementation of the embodiments of the present disclosure, the camera device further includes a correction chip, and the head-mounted display further includes an eye-tracking sensor;
the correction chip stores a distortion correction model obtained by training with sample data; the sample data includes multiple sample iris images respectively collected when the user's gaze is directed at different positions of the display screen of the head-mounted display, and further includes a distortion-free supervised iris image corresponding to each sample iris image; the method further includes:
determining the gaze position, on the display screen, of the user's line of sight detected by the eye-tracking sensor;
transmitting the second iris image generated by the imaging sensor and the determined gaze position to the correction chip; and
performing distortion correction on the second iris image according to the gaze position through the distortion correction model stored in the correction chip, to obtain a third iris image output by the distortion correction model.
An iris image acquisition apparatus based on a head-mounted display, the head-mounted display including a light source and a camera device, the camera device including a camera hole, an imaging sensor, and a position adjustment module for the imaging sensor; the camera hole faces the wearer of the head-mounted display; light emitted by the light source enters through the camera hole; and the apparatus includes:
an imaging module configured to acquire a first iris image generated by the imaging sensor based on the incident light;
a motion control module configured to control the position adjustment module according to the first iris image to perform a movement operation and/or a rotation operation; the movement operation is configured to adjust the distance from the imaging sensor to the camera hole so as to maximize the imaging of the iris in the image; the rotation operation is configured to rotate the imaging sensor so as to change the angle at which light enters the imaging sensor;
the imaging module is further configured to acquire a second iris image generated by the adjusted imaging sensor based on the incident light.
As an optional implementation of the embodiments of the present disclosure, the motion control module includes an iris recognition unit, a movement determination unit, and a movement execution unit;
the iris recognition unit is configured to determine the iris region in the first iris image;
the movement determination unit is configured to compare the area of the iris region with the area of a reference region, the reference region being the image region corresponding to the complete iris when its imaging in the image is maximized, and to determine, according to the area difference between the iris region and the reference region, the moving direction in which the position adjustment module performs the movement operation;
the movement execution unit is configured to control the position adjustment module to perform the movement operation in the moving direction.
As an optional implementation of the embodiments of the present disclosure, the movement determination unit is further configured to determine, when the area of the iris region is smaller than the area of the reference region, that the moving direction in which the position adjustment module performs the movement operation is the direction that increases the distance; or,
the movement determination unit is further configured to determine, when the area of the iris region is larger than the area of the reference region, that the moving direction in which the position adjustment module performs the movement operation is the direction that shortens the distance.
As an optional implementation of the embodiments of the present disclosure, the motion control module includes an incident angle calculation unit, a rotation determination unit, and a rotation execution unit;
the incident angle calculation unit is configured to calculate, according to the first iris image, the incident angle at which light enters the imaging sensor; the incident angle is the angle, in a target direction, between the light and the plane where the imaging sensor is located; the target direction is the direction from the inside of the head-mounted display to the outside of the head-mounted display;
the rotation determination unit is configured to determine, after judging that the incident angle is not a right angle, the rotation direction of the rotation operation according to the incident angle;
the rotation execution unit is configured to control the position adjustment module to perform the rotation operation in the rotation direction.
As an optional implementation of the embodiments of the present disclosure, the rotation determination unit is further configured to determine that the rotation direction of the rotation operation is counterclockwise when the incident angle is an obtuse angle; or,
the rotation determination unit is further configured to determine that the rotation direction of the rotation operation is clockwise when the incident angle is an acute angle.
As an optional implementation of the embodiments of the present disclosure, the position adjustment module includes a telescopic link; one end of the telescopic link is connected to the imaging sensor, and the other end is connected to the housing of the head-mounted display; the telescopic link is configured to perform the movement operation; and/or,
the position adjustment module may include a rotatable platform; the imaging sensor is placed on the rotatable platform; the rotatable platform is configured to perform the rotation operation.
As an optional implementation of the embodiments of the present disclosure, the iris image acquisition apparatus based on a head-mounted display further includes a line-of-sight determination module, a data transmission module, and a correction module;
the line-of-sight determination module is configured to determine the gaze position, on the display screen, of the user's line of sight detected by the eye-tracking sensor;
the data transmission module is configured to transmit the second iris image generated by the imaging sensor and the determined gaze position to the correction chip;
the correction module is configured to perform distortion correction on the second iris image according to the gaze position through the trained distortion correction model stored in the correction chip, to obtain a third iris image output by the distortion correction model.
An electronic device includes a memory and one or more processors, the memory being configured to store computer-readable instructions; when executed by the processors, the computer-readable instructions cause the one or more processors to perform the steps of the iris image acquisition method based on a head-mounted display described in any of the above.
One or more non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the iris image acquisition method based on a head-mounted display described in any of the above.
Other features and advantages of the present disclosure will be set forth in the following description and will in part become apparent from the description or be understood by practicing the present disclosure. The objects and other advantages of the present disclosure are realized and attained by the structures particularly pointed out in the description, the claims, and the accompanying drawings; details of one or more embodiments of the present disclosure are presented in the drawings and description below.
To make the above objects, features, and advantages of the present disclosure more comprehensible, optional embodiments are described in detail below in conjunction with the accompanying drawings.
Brief description of the drawings
The accompanying drawings are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
To explain the technical solutions of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Figure 1 is a schematic diagram of an application scenario of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure;
Figure 2A is a schematic structural diagram of a head-mounted display provided by one or more embodiments of the present disclosure;
Figure 2B is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure;
Figure 2C is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure;
Figure 3 is a schematic flowchart of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure;
Figure 4 is a schematic flowchart of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure;
Figure 5 is a schematic flowchart of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure;
Figure 6 is a schematic flowchart of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure;
Figure 7 is a schematic structural diagram of an iris image acquisition apparatus based on a head-mounted display in one or more embodiments of the present disclosure;
Figure 8 is a schematic structural diagram of an electronic device in one or more embodiments of the present disclosure.
Detailed description
To understand the above objects, features, and advantages of the present disclosure more clearly, the solutions of the present disclosure are further described below. It should be noted that, unless they conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure can also be implemented in ways different from those described here; obviously, the embodiments in the specification are only a part, not all, of the embodiments of the present disclosure.
The terms "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects, not to describe a specific order of objects. For example, a first camera and a second camera distinguish different cameras rather than describing a specific order of cameras.
In the present disclosure, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable to or more advantageous than other embodiments or designs. Rather, such words are intended to present the relevant concepts in a concrete manner. In addition, in the description of the present disclosure, unless otherwise specified, "multiple" means two or more.
The iris image acquisition method based on a head-mounted display provided by the present disclosure can be applied in the application environment shown in Figure 1. As shown in Figure 1, a first operating environment may include a head-mounted display 101, a terminal device 102, and a server 103.
A user can wear the head-mounted display 101 so that the head-mounted display 101 acquires data. Here, the head-mounted display 101 has no data processing capability; after acquiring data, it can transmit the data to the terminal 102 via short-range communication technology.
The terminal device 102 may include electronic devices such as a smart TV, a three-dimensional visual display device, a large projection system, a multimedia playback device, a mobile phone, a tablet computer, a game console, or a PC (Personal Computer). The terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data.
The server 103 is configured to provide background services for the terminal 102, so that the terminal 102 can process the data received from the head-mounted display 101 and thereby complete the iris image acquisition method provided by the present disclosure. Optionally, the server 103 may also generate corresponding control instructions according to the data processing result; the control instructions may be sent to the terminal 102 and/or the head-mounted display 101 to control the terminal 102 and/or the head-mounted display 101. For example, the server 103 may be a background server. The server 103 may be a single server, a server cluster composed of multiple servers, or a cloud computing service center. Optionally, the server 103 provides background services for multiple terminals 102 at the same time.
A second operating environment may include the head-mounted display 101 and the terminal device 102.
Here, the head-mounted display 101 may include the various types of devices stated above; it has no data processing capability and, after acquiring data, can transmit the data to the terminal 102 via short-range communication technology.
The terminal device 102 may include the various types of electronic devices stated above. The terminal device 102 can receive and process the data transmitted by the head-mounted display 101 to complete the iris image acquisition method provided by the present disclosure. Optionally, the terminal 102 may also generate corresponding control instructions according to the data processing result and send them to the head-mounted display 101 to control the head-mounted display 101.
A third operating environment includes only the head-mounted display 101. Here, the head-mounted display 101 has both data acquisition and data processing capabilities; that is, a processor in the head-mounted display 101 can call program code to implement the functions of the iris image acquisition method provided by the present disclosure. The program code may of course be stored in a computer storage medium; it follows that this head-mounted display includes at least a processor and a storage medium.
Please refer to Figure 2A, which is a schematic structural diagram of a head-mounted display provided by one or more embodiments of the present disclosure. As shown in Figure 2A, the head-mounted display 20 may include a display screen 22, a light source 23, and a camera device 24.
The display screen 22 may be a light-emitting diode (LED) screen or a liquid crystal display (LCD) screen, configured to output image data.
The light source 23 is configured to emit light and may be a visible light source or an infrared light source, which is not specifically limited.
Please refer to Figure 2B, which is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure. The head-mounted display shown in Figure 2B may be a side view of the head-mounted display shown in Figure 2A. As shown in Figure 2B, the head-mounted display 20 may include a lens module 21, a display screen 22, a light source 23, and a camera device 24.
The lens module 21 may be arranged above the display screen 22 and configured to refract light, bringing the image on the display screen 22 closer to the retina so that the human eye can easily see the display screen 22 that is almost right in front of the eye; the lens module 21 also has a light-gathering effect, concentrating the light inside the head-mounted display so that more light can enter the camera device 24.
In the present disclosure, the lens module 21 may be a Pancake optical module composed of two or more lenses; alternatively, the lens module 21 may be a Fresnel optical module formed from a single lens.
It should be noted that the numbers of lens modules 21, display screens 22, and light sources 23 in the head-mounted display 20 disclosed in the present disclosure are not limited.
Exemplarily, the head-mounted display 20 may include one display screen 22 and one lens module 21, with multiple light sources 23 arranged around the lens module 21.
Exemplarily, the head-mounted display 20 may also include two display screens 22 corresponding to the left eye and the right eye respectively, and a lens module 21 corresponding to each display screen 22. Multiple light sources 23 may be arranged around each lens module 21.
The camera device 24 is configured to collect image data and may include at least: a camera hole 241, an imaging sensor 242, a position adjustment module 243 for the imaging sensor, and a base 244.
The camera hole 241 may be arranged on the inner side of the head-mounted display 20, that is, facing the wearer of the head-mounted display 20. Light produced by the light source 23 can enter through the camera hole 241.
Exemplarily, as shown in Figure 2B, the display screen 22 may be a hole-punch screen, and the hole position of the display screen 22 may be where the camera hole 241 is arranged. Exemplarily, if the head-mounted display 20 includes two lens modules 21, the camera hole 241 may also be arranged above or below the midpoint between the two lens modules 21.
The imaging sensor 242 may be any photosensitive device, such as a charge-coupled device (CCD) image sensor or a complementary metal-oxide semiconductor (CMOS) sensor.
The position adjustment module 243 may include a telescopic link 243a and/or a rotatable platform 243b.
One end of the telescopic link 243a is connected to the imaging sensor 242, and the other end is connected to the housing of the head-mounted display 20. When the telescopic link 243a lengthens, the distance between the imaging sensor 242 and the camera hole 241 shortens and the optical zoom factor decreases; when the telescopic link 243a shortens, the distance between the imaging sensor 242 and the camera hole 241 increases and the optical zoom factor increases.
The rotatable platform 243b may be configured as a platform on which the imaging sensor 242 is placed; it can rotate clockwise or counterclockwise, thereby rotating the imaging sensor 242 placed on it. Rotating the imaging sensor 242 changes the angle at which light enters the imaging sensor 242 and thus the area over which the imaging sensor 242 receives light. When light enters the imaging sensor 242 perpendicularly, the area over which it receives light is largest.
It should be noted that if the position adjustment module 243 includes both the telescopic link 243a and the rotatable platform 243b, then, as shown in Figure 2B, one end of the telescopic link 243a may be connected to the rotatable platform 243b. Extension and retraction of the telescopic link 243a changes the position of the rotatable platform 243b, thereby changing the distance between the imaging sensor 242 placed on the rotatable platform 243b and the camera hole 241.
In addition, the telescopic link 243a is one optional way of adjusting the distance between the imaging sensor 242 and the camera hole 241; in other possible embodiments, this distance may be adjusted by other elements. The rotatable platform 243b is one optional way of rotating the imaging sensor 242; in other possible embodiments, the imaging sensor 242 may also be rotated by other elements.
The base 244 is configured to carry the position adjustment module 243 and the imaging sensor 242.
Exemplarily, please refer to Figure 2C, which is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure. As shown in Figure 2C, the camera device 24 may further include a reflector 245, which may be arranged between the camera hole 241 and the imaging sensor 242.
The reflector 245 is configured to change the optical path of the light entering through the camera hole 241, reflecting the light to the imaging sensor 242. The reflector 245 allows the camera hole 241 and the imaging sensor 242 to be positioned off a single straight line, which benefits the internal space design of the head-mounted display 20.
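The hardware description notes that the imaging sensor receives light over the largest area when light enters perpendicularly. This corresponds to a simple projected-area model; the sketch below is an illustrative assumption (not from the text), with the incident angle measured from the sensor plane over [0°, 180°] so that 90° is normal incidence.

```python
import math

def effective_area(sensor_area, incident_angle_deg):
    """Projected light-collecting area of the imaging sensor for light
    arriving at the given angle to the sensor plane: 90 degrees means
    normal incidence (maximum area), while 0 or 180 degrees means
    grazing incidence (no collection)."""
    return sensor_area * math.sin(math.radians(incident_angle_deg))
```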
Based on any head-mounted display disclosed in the foregoing embodiments, please refer to Figure 3, which is a schematic flowchart of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure. The method can be applied to any electronic device shown in the aforementioned operating environments, such as a terminal device, a server, or the head-mounted display itself. The method shown in Figure 3 may include the following steps:
310: Acquire a first iris image generated by the imaging sensor based on the incident light.
In the present disclosure, after a user puts on the head-mounted display, light emitted by the light source can be reflected by the iris of the user's eyeball; the light reflected by the iris enters the interior of the head-mounted display housing through the camera hole and reaches the imaging sensor. The imaging sensor can convert the light reflected by the iris into a digital signal to obtain the first iris image.
In some embodiments, the head-mounted display may further include an eye-tracking sensor configured to detect the user's line of sight. A specific implementation of step 310 may include: when it is detected that the user's line of sight falls on the display screen of the head-mounted display, acquiring the first iris image generated by the imaging sensor based on the incident light.
320: Control the position adjustment module according to the first iris image to perform a movement operation and/or a rotation operation.
In some embodiments, the iris region occupied by the iris in the first iris image can be compared with a standard reference region, and the position adjustment module can be controlled to perform a movement operation according to the comparison result, adjusting the distance from the imaging sensor to the camera hole and thus the imaging size of the iris in the image, in the direction that makes the imaging of the iris consistent with the reference region. The reference region may be the image region corresponding to the complete iris when its imaging in the image is maximized; region parameters such as the area or position of the iris region may be compared with those of the reference region.
In other embodiments, the angle at which light enters the imaging sensor can be calculated from the first iris image, and the position adjustment module can be controlled to perform a rotation operation according to the calculated incident angle, changing the angle at which light enters the imaging sensor and adjusting the area over which the imaging sensor receives light, in the direction that makes the imaging sensor perpendicular to the incident light. The larger the area over which the imaging sensor receives light, the higher the brightness of the resulting image.
In still other embodiments, the position adjustment module can be controlled to perform both the movement operation and the rotation operation, according to the comparison between the iris region in the first iris image and the reference region and according to the first iris image, so as to adjust both the imaging size of the iris in the image and the image brightness.
330: Acquire a second iris image generated by the adjusted imaging sensor based on the incident light.
In the present disclosure, the first iris image can be captured first and used to adjust the distance between the imaging sensor and the camera hole and/or the angle at which light enters the imaging sensor. After adjustment, the distance between the imaging sensor and the camera hole changes; compared with the first iris image, the imaging size of the iris in the second iris image is closer to the maximized imaging size, so the second iris image can contain more iris information. And/or, the adjusted imaging sensor receives light over a larger area, so the second iris image is brighter than the first. Therefore, the image quality of the second iris image is higher than that of the first iris image, and using the second iris image for identity authentication helps obtain a more accurate authentication result.
In some embodiments, the position adjustment module may perform a movement operation. Please refer to Figure 4, which is a schematic flowchart of an iris image acquisition method based on a head-mounted display provided by one or more embodiments of the present disclosure. The method can be applied to any electronic device shown in the aforementioned operating environments, such as a terminal device, a server, or the head-mounted display itself. The method shown in Figure 4 may include the following steps:
410: Acquire a first iris image generated by the imaging sensor based on the incident light.
420: Determine the iris region in the first iris image, and compare the area of the iris region with the area of the reference region.
In the present disclosure, the iris region can be located in the first iris image by an object detection method based on iris features. After comparing the area of the iris region with that of the reference region, if the area of the iris region is not equal to the area of the reference region, step 430 can be performed.
430: Determine, according to the area difference between the iris region and the reference region, the moving direction in which the position adjustment module performs the movement operation.
In the present disclosure, the area difference between the iris region and the reference region can at least be used to determine the moving direction of the movement operation performed by the position adjustment module.
When the area of the iris region is smaller than the area of the reference region, the moving direction in which the position adjustment module performs the movement operation is determined to be the direction that increases the distance between the imaging sensor and the camera hole. Or,
when the area of the iris region is larger than the area of the reference region, the moving direction in which the position adjustment module performs the movement operation is determined to be the direction that shortens the distance between the imaging sensor and the camera hole.
440: Control the position adjustment module to perform the movement operation in the above moving direction.
In the present disclosure, the position adjustment module may include a telescopic link. When the moving direction is the direction that shortens the distance, the telescopic link is controlled to lengthen; when the moving direction is the direction that increases the distance, the telescopic link is controlled to shorten.
In some embodiments, when step 440 is performed to control the position adjustment module to move in the above moving direction, the moving distance of each movement operation may be a preset distance value. For example, each movement operation may correspond to a 1 mm or 2 mm change in the length of the telescopic link. Correspondingly, after step 450 is performed, the second iris image generated by the imaging sensor can be used as a new first iris image, and step 420 can be performed again until the area of the iris region in the first iris image is determined to equal the area of the reference region. That is, the position adjustment module can be controlled to move by the preset distance value each time, adjusting successively until a second iris image that meets the requirements is collected.
In other embodiments, when step 430 is performed, the area difference between the iris region and the reference region can also be used to determine the moving distance of the movement operation: the area difference can be calculated and used to derive the moving distance. Correspondingly, when step 440 is performed, the position adjustment module can be controlled to move the calculated distance in the above moving direction. That is, the position adjustment module can be moved the calculated distance in one go, so that a second iris image that meets the requirements is collected directly after a single adjustment.
450: Acquire a second iris image generated by the adjusted imaging sensor based on the incident light.
In the present disclosure, if the area of the iris region in the first iris image is smaller than that of the reference region, the imaging of the iris in the first iris image is too small; the optical zoom factor of the camera device needs to be increased by increasing the distance between the imaging sensor and the camera hole. After this distance is increased, the imaging of the iris in the second iris image generated by the imaging sensor becomes larger.
If the area of the iris region in the first iris image is larger than that of the reference region, the imaging of the iris in the first iris image is too large and may be incomplete; the optical zoom factor of the camera device needs to be decreased by shortening the distance between the imaging sensor and the camera hole. After this distance is shortened, the imaging of the iris in the second iris image generated by the imaging sensor becomes smaller.
It can be seen that, in the foregoing embodiments, the area difference between the iris region and the reference region in the first iris image can be used to control the position adjustment module to perform a movement operation, adjusting the optical zoom factor of the camera device and the imaging size of the iris so as to collect a second iris image in which the imaging of the iris is maximized.
在一些实施例中,位置调整模组可执行旋转操作。请参阅图5,图5是本公开一个或多个实施例提供的一种基于头戴式显示器的虹膜图像采集方法的方法流程示意图,该方法可应用于前述运行环境所示的终端设备、服务设备或者头戴式显示器本体等任意一种电子设备。请参阅图5,图5所示的方法可包括以下步骤:
510、获取成像传感器基于射入的光线生成的第一虹膜图像。
520、根据第一虹膜图像计算光线射入成像传感器的射入角度。
在本公开中,可将第一虹膜图像转换成灰度图像,并计算灰度图像中每个像素点的灰度均值。灰度图的灰度均值可配置成指示第一虹膜图像的图像亮度,灰度均值越高,图像亮度越高。可利用第一虹膜图像的图像亮度计算光线的射入角度。
在本公开中,射入角度的角度范围可以是[0°-180°],射入角度可指光线与成像传感器所在平面在目标方向上的夹角。其中,目标方向可以是从头戴式显示器的内部至头戴式显示器外部的方向。示例性的,如图2B所示的头戴式显示器,则目标方向可以是与水平面垂直且向上的方向;如图2C所示的头戴式显示器,则目标方向可以是与水平面平行且向右的方向。
530、若光线的射入角度不为直角,则根据射入角度与确定旋转操作的旋转方向。
在本公开中,若光线的射入角度为钝角,则可确定所述旋转操作的旋转方向为逆时针旋转;或者;
若光线射入角度为锐角,则确定旋转操作的旋转方向为顺时针旋转。
540: Control the position-adjustment module to perform the rotation operation in the determined rotation direction.
In the present disclosure, the position-adjustment module may include a rotatable platform. When the rotation direction is clockwise, the rotatable platform is controlled to rotate clockwise; when the rotation direction is counterclockwise, the rotatable platform is controlled to rotate counterclockwise.
550: Acquire a second iris image generated by the adjusted imaging sensor based on incident light.
If the incidence angle at which light enters the imaging sensor is not a right angle, the imaging sensor cannot receive the light over its maximum area, which lowers the image brightness of the first iris image and hinders accurate extraction of iris information from it. Therefore, in the present disclosure, the imaging sensor may be rotated by the position-adjustment module to change the angle at which light enters the imaging sensor, which helps improve the image brightness of the second iris image generated after the adjustment.
For example, if the lens module of the head-mounted display is a Pancake optical module, then, owing to the characteristics of Pancake optics, only a quarter of the light intensity remains after the light passes through the module. The imaging sensor may therefore be rotated by the method shown in FIG. 5 to adjust the area over which the sensor receives light, thereby increasing the brightness of the generated image.
In the present disclosure, when step 540 is performed to control the position-adjustment module to rotate in the above rotation direction, the rotation angle of each rotation operation may be a preset angle value. For example, each rotation operation may correspond to an angle change of the rotatable platform of 3°, 5°, or 10°. Correspondingly, after step 550 is performed, the second iris image generated by the imaging sensor may be taken as a new first iris image, and step 520 may be performed again, until the incidence angle is determined to be a right angle. That is, the position-adjustment module may be controlled to rotate by the preset angle value each time, adjusting step by step until a second iris image meeting the requirement is collected.
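Like the movement case, the stepwise rotation is a closed loop: rotate by a preset angle, re-measure, stop at a right angle. A minimal sketch with hypothetical `get_angle` and `rotate` callbacks (neither name appears in the disclosure):

```python
def rotate_stepwise(get_angle, rotate, step_deg=5.0, max_steps=72):
    """Closed-loop sketch of steps 520-550: rotate the platform by a
    preset step until the incidence angle is a right angle.

    `get_angle()` returns the current incidence angle in degrees;
    `rotate(direction, step_deg)` drives the rotatable platform one
    preset step.
    """
    for _ in range(max_steps):
        angle = get_angle()
        if angle == 90:
            return angle  # perpendicular incidence reached
        direction = "counterclockwise" if angle > 90 else "clockwise"
        rotate(direction, step_deg)
    return get_angle()  # best effort after max_steps
```

The `max_steps` guard is an added safety bound, not part of the disclosed method.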
In general, the camera aperture of a head-mounted display's camera device is often arranged above, below, or to one side of the display screen rather than at its exact center. Consequently, when the user's gaze is fixed on the display screen, the camera aperture may not lie directly in front of the eye, which tends to distort the captured iris image, i.e., the inner and outer circles of the iris in the image are not concentric.
In some embodiments, the camera device 24 of the head-mounted display shown in FIG. 2A or FIG. 2B may further include a correction chip. The correction chip may be an integrated-circuit module separate from the imaging sensor, or it may be part of the imaging sensor; this is not limited here. The correction chip may store a distortion-correction model obtained by training on sample data.
The sample data include a plurality of sample iris images, each collected while the user's gaze was fixed on a different position of the display screen of the head-mounted display, and, for each sample iris image, a corresponding distortion-free supervised iris image. In other words, the sample data may include a plurality of iris image pairs, each pair corresponding to one gaze position on the display screen and including one sample iris image together with its corresponding distortion-free supervised iris image.
The training process of the distortion-correction model may include: inputting, for each gaze position on the display screen, the corresponding sample iris image together with the gaze-position information into the distortion-correction model to obtain the model's output; determining a loss parameter between the output and the corresponding supervised iris image based on a preset loss function; and feeding the loss parameter back to update the distortion-correction model until training ends, thereby obtaining a trained distortion-correction model.
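The training loop described above, model output followed by a loss against the supervised image followed by a feedback update, can be illustrated with a deliberately tiny stand-in model, since the disclosure fixes neither the network architecture nor the loss function. In the sketch below the "model" is a single gain parameter trained by squared-error gradient descent; everything about it other than the loop structure is an assumption:

```python
def train_correction_model(pairs, lr=0.01, epochs=200):
    """Toy sketch of the disclosed training loop.

    `pairs` holds (sample_value, gaze_position, supervised_value)
    triples, mirroring the sample iris image, gaze position, and
    distortion-free supervised image of the disclosure.  The "model"
    is a single gain `w` applied to the sample value, a stand-in for
    the real (unspecified) distortion-correction network.  Each step
    computes a squared-error loss gradient against the supervised
    value and feeds it back to update the model.
    """
    w = 1.0
    for _ in range(epochs):
        for sample, _gaze, supervised in pairs:
            pred = w * sample                        # model output
            grad = 2 * (pred - supervised) * sample  # d(loss)/dw, squared error
            w -= lr * grad                           # feedback update
    return w
```

With pairs whose supervised values are exactly twice their samples, the gain converges to 2.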
In addition, the head-mounted display may further include an eye-tracking sensor configured to detect the user's gaze.
Based on a head-mounted display including the correction chip and the eye-tracking sensor, refer to FIG. 6. FIG. 6 is a schematic flowchart of a head-mounted-display-based iris image collection method provided by one or more embodiments of the present disclosure. The method may be applied to any electronic device shown in the foregoing operating environment, such as the terminal device, the service device, or the head-mounted display itself. As shown in FIG. 6, the method may include the following steps:
610: Detect the user's gaze through the eye-tracking sensor.
620: When the user's gaze is detected to fall on the display screen of the head-mounted display, acquire a first iris image generated by the imaging sensor based on incident light.
630: Control, according to the first iris image, the position-adjustment module to perform a movement operation and/or a rotation operation.
640: Acquire a second iris image generated by the adjusted imaging sensor based on incident light.
For implementations of steps 610 to 640, reference may be made to the foregoing embodiments; details are not repeated here.
650: Determine the gaze position, on the display screen, of the user's gaze detected by the eye-tracking sensor.
In the present disclosure, the user's gaze may be detected by the eye-tracking sensor, and the gaze position on the display screen may then be calculated from it.
660: Transmit the second iris image generated by the imaging sensor and the determined gaze position to the correction chip.
670: Perform distortion correction on the second iris image according to the gaze position, through the distortion-correction model stored in the correction chip, to obtain a third iris image output by the distortion-correction model.
In the present disclosure, the correction chip may store a trained distortion-correction model, which can apply the distortion-correction capability learned during training to the second iris image, thereby outputting a distortion-free third iris image.
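At inference time (step 670), the correction chip simply runs the stored model on the second iris image, conditioned on the gaze position. A hypothetical sketch using a one-parameter gain as a stand-in for the trained network (a real correction chip would run the learned distortion-correction model instead):

```python
def correct_distortion(model_weight, second_iris_image, gaze_position):
    """Step-670 sketch: apply the stored correction model to the second
    iris image, conditioned on the gaze position, to produce the third
    iris image.

    `second_iris_image` is a 2-D list of pixel values.  The gaze
    position is accepted to match the method's interface; this toy
    one-parameter model does not actually use it.
    """
    _ = gaze_position
    return [[model_weight * px for px in row] for row in second_iris_image]
```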
It can be seen that, in the foregoing embodiments, the position of the imaging sensor may first be adjusted according to the first iris image, so as to adjust the imaging size of the iris in the second iris image and the image brightness of the second iris image. Further, the trained distortion-correction model performs distortion correction on the second iris image according to the position on the display screen at which the user is gazing, yielding a distortion-free third iris image. A third iris image with moderate imaging size, high image brightness, and no distortion facilitates extraction of more accurate iris information.
In addition, in some embodiments, the sample iris images used to train the distortion-correction model may be collected with a head-mounted display whose lens module is a Pancake optical module. Here, a supervised iris image in the sample data may differ in brightness from its corresponding sample iris image, with the supervised image being the brighter of the two. A model trained this way can therefore not only correct image distortion but also increase image brightness.
Correspondingly, when the foregoing head-mounted-display-based iris image collection method is carried out on a head-mounted display that also uses a Pancake optical module, the distortion-correction model may additionally perform brightness enhancement on the second iris image in step 670, so that the third iris image output by the model is brighter than the second iris image.
It should be understood that although the steps in the flowcharts of FIG. 3 to FIG. 6 are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 3 to FIG. 6 may include multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of a head-mounted-display-based iris image collection apparatus in one or more embodiments of the present disclosure. The iris image collection apparatus may be applied to any electronic device shown in the foregoing operating environment, such as the terminal device, the service device, or the head-mounted display itself. As shown in FIG. 7, the iris image collection apparatus 700 may include an imaging module 710 and a motion control module 720.
The imaging module 710 is configured to acquire a first iris image generated by the imaging sensor based on incident light.
The motion control module 720 is configured to control, according to the first iris image, the position-adjustment module to perform a movement operation and/or a rotation operation; the movement operation is configured to adjust the distance between the imaging sensor and the camera aperture so as to maximize the imaging of the iris in the image; the rotation operation is configured to rotate the imaging sensor so as to change the angle at which light enters the imaging sensor.
The imaging module 710 is further configured to acquire a second iris image generated by the adjusted imaging sensor based on incident light.
In one embodiment, the motion control module 720 may include an iris recognition unit, a movement determination unit, and a movement execution unit.
The iris recognition unit is configured to determine the iris region in the first iris image.
The movement determination unit is configured to compare the area of the iris region with the area of a reference region, the reference region being the image region corresponding to a complete iris whose imaging in the image is maximized, and to determine, according to the area difference between the iris region and the reference region, the movement direction in which the position-adjustment module performs the movement operation.
The movement execution unit is configured to control the position-adjustment module to perform the movement operation in the movement direction.
Optionally, the position-adjustment module may include a telescopic link, and the movement execution unit is configured to control the telescopic link to perform the movement operation in the movement direction.
In one embodiment, the movement determination unit is further configured to determine, when the area of the iris region is smaller than the area of the reference region, that the movement direction of the movement operation is the direction that increases the distance; or to determine, when the area of the iris region is larger than the area of the reference region, that the movement direction of the movement operation is the direction that shortens the distance.
In one embodiment, the motion control module 720 may include an incidence-angle calculation unit, a rotation determination unit, and a rotation execution unit.
The incidence-angle calculation unit is configured to calculate, according to the first iris image, the incidence angle at which light enters the imaging sensor; the incidence angle is the angle between the light and the plane of the imaging sensor, measured in the target direction, where the target direction is the direction from the inside of the head-mounted display to the outside of the head-mounted display.
The rotation determination unit is configured to determine, after judging that the incidence angle is not a right angle, the rotation direction of the rotation operation according to the incidence angle.
The rotation execution unit is configured to control the position-adjustment module to perform the rotation operation in the rotation direction.
Optionally, the position-adjustment module may include a rotatable platform, and the rotation execution unit is configured to control the rotatable platform to perform the rotation operation in the rotation direction.
In one embodiment, the rotation determination unit is further configured to determine the rotation direction of the rotation operation to be counterclockwise when the incidence angle is an obtuse angle, or to determine it to be clockwise when the incidence angle is an acute angle.
In one embodiment, the head-mounted-display-based iris image collection apparatus 700 may further include a gaze determination module, a data transmission module, and a correction module.
The gaze determination module is configured to determine the gaze position, on the display screen, of the user's gaze detected by the eye-tracking sensor.
The data transmission module is configured to transmit the second iris image generated by the imaging sensor and the determined gaze position to the correction chip.
The correction module is configured to perform distortion correction on the second iris image according to the gaze position, through the trained distortion-correction model stored in the correction chip, to obtain the third iris image output by the distortion-correction model.
It can be seen that the head-mounted-display-based iris image collection apparatus disclosed in the foregoing embodiments can first capture a first iris image and use it to adjust the distance between the imaging sensor and the camera aperture and/or the angle at which light enters the imaging sensor. In the second iris image generated by the adjusted imaging sensor, the imaging size of the iris is closer to the maximized imaging and the image brightness is higher. The image quality of the second iris image is therefore higher than that of the first iris image, and performing identity authentication with the second iris image helps obtain a more accurate authentication result.
For specific limitations on the head-mounted-display-based iris image collection apparatus, reference may be made to the limitations on the head-mounted-display-based iris image collection method above, which are not repeated here. Each module in the above apparatus may be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of an electronic device in one or more embodiments of the present disclosure. The electronic device may be any of the terminal device, the service device, or the head-mounted display shown in the foregoing operating environment. As shown in FIG. 8, the electronic device 800 may include:
a memory 810 storing executable program code; and
a processor 820 coupled to the memory 810;
wherein the processor 820 invokes the executable program code stored in the memory 810 to execute any of the head-mounted-display-based iris image collection methods disclosed in the present disclosure.
Those skilled in the art can understand that the structure shown in FIG. 8 is merely a block diagram of part of the structure related to the solution of the present disclosure and does not limit the computer device on which the solution is deployed; a specific computer device may include more or fewer components than shown, combine certain components, or have a different component arrangement.
In one embodiment, the head-mounted-display-based iris image collection apparatus provided by the present disclosure may be implemented in the form of a computer program that can run on a computer device as shown in FIG. 8. The memory of the computer device may store the program modules constituting the apparatus, such as the imaging module 710 and the motion control module 720 shown in FIG. 7. The computer program composed of these program modules causes the processor to execute the steps of the head-mounted-display-based iris image collection methods of the embodiments of the present disclosure described in this specification.
For example, the computer device shown in FIG. 8 may perform, through the imaging module 710 of the apparatus shown in FIG. 7, the step of acquiring a first iris image generated by the imaging sensor based on incident light; may perform, through the motion control module 720, the step of controlling the position-adjustment module to perform a movement operation and/or a rotation operation according to the first iris image; and may perform, through the imaging module 710, the step of acquiring a second iris image generated by the adjusted imaging sensor based on incident light.
In one embodiment, a computer device is provided, including a memory and one or more processors, the memory being configured to store computer-readable instructions; when executed by the processors, the computer-readable instructions cause the one or more processors to execute the steps of the head-mounted-display-based iris image collection method described in the above method embodiments.
Those of ordinary skill in the art can understand that all or part of the procedures of the above method embodiments may be implemented by computer-readable instructions instructing related hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium and, when executed, may include the procedures of the above method embodiments. Any reference to a memory, database, or other medium used in the embodiments provided in the present disclosure may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical memory, etc. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present disclosure, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present disclosure, all of which fall within the protection scope of the present disclosure. The protection scope of the present disclosure shall therefore be subject to the appended claims.
Industrial Applicability
The head-mounted-display-based iris image collection method provided by the present disclosure can effectively use the first iris image to control the position-adjustment module to perform a movement operation and/or a rotation operation, so as to adjust the distance between the imaging sensor and the camera aperture and/or the angle at which light enters the imaging sensor, and can use the adjusted imaging sensor to generate a second iris image whose image quality is higher than that of the first iris image. Performing identity authentication with the second iris image helps obtain a more accurate authentication result, so the method has strong industrial applicability.

Claims (16)

  1. A head-mounted-display-based iris image collection method, wherein the head-mounted display comprises a light source and a camera device, the camera device comprising: a camera aperture, an imaging sensor, and a position-adjustment module for the imaging sensor; the camera aperture is arranged facing a wearer of the head-mounted display; light emitted by the light source enters through the camera aperture; and the method comprises:
    acquiring a first iris image generated by the imaging sensor based on incident light;
    controlling, according to the first iris image, the position-adjustment module to perform a movement operation and/or a rotation operation; the movement operation being configured to adjust a distance between the imaging sensor and the camera aperture so as to maximize imaging of an iris in an image; the rotation operation being configured to rotate the imaging sensor so as to change an angle at which light enters the imaging sensor; and
    acquiring a second iris image generated by the adjusted imaging sensor based on incident light.
  2. The method according to claim 1, wherein controlling, according to the first iris image, the position-adjustment module to perform the movement operation comprises:
    determining an iris region in the first iris image, and comparing an area of the iris region with an area of a reference region, the reference region being the image region corresponding to a complete iris whose imaging in the image is maximized;
    determining, according to an area difference between the iris region and the reference region, a movement direction in which the position-adjustment module performs the movement operation; and
    controlling the position-adjustment module to perform the movement operation in the movement direction.
  3. The method according to claim 2, wherein the determining, according to the area difference between the iris region and the reference region, the movement direction in which the position-adjustment module performs the movement operation comprises:
    when the area of the iris region is smaller than the area of the reference region, determining the movement direction of the movement operation performed by the position-adjustment module to be a direction that increases the distance; or,
    when the area of the iris region is larger than the area of the reference region, determining the movement direction of the movement operation performed by the position-adjustment module to be a direction that shortens the distance.
  4. The method according to claim 1, wherein controlling, according to the first iris image, the position-adjustment module to perform the rotation operation comprises:
    calculating, according to the first iris image, an incidence angle at which light enters the imaging sensor, the incidence angle being the angle between the light and the plane of the imaging sensor measured in a target direction, the target direction being the direction from the inside of the head-mounted display to the outside of the head-mounted display;
    if the incidence angle is not a right angle, determining a rotation direction of the rotation operation according to the incidence angle; and
    controlling the position-adjustment module to perform the rotation operation in the rotation direction.
  5. The method according to claim 4, wherein the determining the rotation direction of the rotation operation according to the incidence angle comprises:
    if the incidence angle is an obtuse angle, determining the rotation direction of the rotation operation to be counterclockwise; or,
    if the incidence angle is an acute angle, determining the rotation direction of the rotation operation to be clockwise.
  6. The method according to any one of claims 1 to 5, wherein the position-adjustment module comprises a telescopic link, one end of the telescopic link being connected to the imaging sensor and the other end being connected to a housing of the head-mounted display, the telescopic link being configured to perform the movement operation; and/or,
    the position-adjustment module comprises a rotatable platform, the imaging sensor being placed on the rotatable platform, the rotatable platform being configured to perform the rotation operation.
  7. The method according to any one of claims 1 to 5, wherein the camera device further comprises a correction chip, the head-mounted display further comprises an eye-tracking sensor, and the correction chip stores a trained distortion-correction model; and the method further comprises:
    determining a gaze position, on the display screen, of the user's gaze detected by the eye-tracking sensor;
    transmitting the second iris image generated by the imaging sensor and the determined gaze position to the correction chip; and
    performing distortion correction on the second iris image according to the gaze position, through the distortion-correction model stored in the correction chip, to obtain a third iris image output by the distortion-correction model.
  8. A head-mounted-display-based iris image collection apparatus, wherein the head-mounted display comprises a light source and a camera device, the camera device comprising: a camera aperture, an imaging sensor, and a position-adjustment module for the imaging sensor; the camera aperture is arranged facing a wearer of the head-mounted display; light emitted by the light source enters through the camera aperture; and the apparatus comprises:
    an imaging module configured to acquire a first iris image generated by the imaging sensor based on incident light; and
    a motion control module configured to control, according to the first iris image, the position-adjustment module to perform a movement operation and/or a rotation operation, the movement operation being configured to adjust a distance between the imaging sensor and the camera aperture so as to maximize imaging of an iris in an image, the rotation operation being configured to rotate the imaging sensor so as to change an angle at which light enters the imaging sensor;
    the imaging module being further configured to acquire a second iris image generated by the adjusted imaging sensor based on incident light.
  9. The apparatus according to claim 8, wherein the motion control module comprises an iris recognition unit, a movement determination unit, and a movement execution unit;
    the iris recognition unit is configured to determine an iris region in the first iris image;
    the movement determination unit is configured to compare an area of the iris region with an area of a reference region, the reference region being the image region corresponding to a complete iris whose imaging in the image is maximized, and to determine, according to an area difference between the iris region and the reference region, a movement direction in which the position-adjustment module performs the movement operation; and
    the movement execution unit is configured to control the position-adjustment module to perform the movement operation in the movement direction.
  10. The apparatus according to claim 9, wherein the movement determination unit is further configured to determine, when the area of the iris region is smaller than the area of the reference region, that the movement direction of the movement operation performed by the position-adjustment module is a direction that increases the distance; or,
    to determine, when the area of the iris region is larger than the area of the reference region, that the movement direction of the movement operation performed by the position-adjustment module is a direction that shortens the distance.
  11. The apparatus according to claim 8, wherein the motion control module comprises an incidence-angle calculation unit, a rotation determination unit, and a rotation execution unit;
    the incidence-angle calculation unit is configured to calculate, according to the first iris image, an incidence angle at which light enters the imaging sensor, the incidence angle being the angle between the light and the plane of the imaging sensor measured in a target direction, the target direction being the direction from the inside of the head-mounted display to the outside of the head-mounted display;
    the rotation determination unit is configured to determine, after judging that the incidence angle is not a right angle, a rotation direction of the rotation operation according to the incidence angle; and
    the rotation execution unit is configured to control the position-adjustment module to perform the rotation operation in the rotation direction.
  12. The apparatus according to claim 11, wherein the rotation determination unit is further configured to determine the rotation direction of the rotation operation to be counterclockwise when the incidence angle is an obtuse angle; or,
    to determine the rotation direction of the rotation operation to be clockwise when the incidence angle is an acute angle.
  13. The apparatus according to any one of claims 8 to 12, wherein the position-adjustment module comprises a telescopic link, one end of the telescopic link being connected to the imaging sensor and the other end being connected to a housing of the head-mounted display, the telescopic link being configured to perform the movement operation; and/or,
    the position-adjustment module comprises a rotatable platform, the imaging sensor being placed on the rotatable platform, the rotatable platform being configured to perform the rotation operation.
  14. The apparatus according to any one of claims 8 to 12, wherein the head-mounted-display-based iris image collection apparatus further comprises a gaze determination module, a data transmission module, and a correction module;
    the gaze determination module is configured to determine a gaze position, on the display screen, of the user's gaze detected by an eye-tracking sensor;
    the data transmission module is configured to transmit the second iris image generated by the imaging sensor and the determined gaze position to a correction chip; and
    the correction module is configured to perform distortion correction on the second iris image according to the gaze position, through a trained distortion-correction model stored in the correction chip, to obtain a third iris image output by the distortion-correction model.
  15. An electronic device, comprising a memory and one or more processors, the memory storing computer-readable instructions; when executed by the one or more processors, the computer-readable instructions cause the one or more processors to perform the steps of the method according to any one of claims 1 to 7.
  16. One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1 to 7.
PCT/CN2022/142700 2022-08-30 2022-12-28 Head-mounted-display-based iris image collection method and related products WO2024045446A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211056340.7 2022-08-30
CN202211056340.7A CN115393947A (zh) 2022-08-30 2022-08-30 Head-mounted-display-based iris image collection method and related products

Publications (1)

Publication Number Publication Date
WO2024045446A1 true WO2024045446A1 (zh) 2024-03-07

Family

ID=84124683

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/142700 WO2024045446A1 (zh) 2022-08-30 2022-12-28 Head-mounted-display-based iris image collection method and related products

Country Status (2)

Country Link
CN (1) CN115393947A (zh)
WO (1) WO2024045446A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393947A (zh) * 2022-08-30 2022-11-25 上海闻泰电子科技有限公司 基于头戴式显示器的虹膜图像采集方法及相关产品

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106603922A (zh) * 2016-12-23 2017-04-26 信利光电股份有限公司 一种距离可调的虹膜识别模组及***
US20170263007A1 (en) * 2016-03-11 2017-09-14 Oculus Vr, Llc Eye tracking system with single point calibration
CN107341467A (zh) * 2017-06-30 2017-11-10 广东欧珀移动通信有限公司 虹膜采集方法及设备、电子装置和计算机可读存储介质
CN107533362A (zh) * 2015-05-08 2018-01-02 Smi创新传感技术有限公司 眼睛跟踪设备和用于操作眼睛跟踪设备的方法
CN113709353A (zh) * 2020-05-20 2021-11-26 杭州海康威视数字技术股份有限公司 图像采集方法和设备
CN115393947A (zh) * 2022-08-30 2022-11-25 上海闻泰电子科技有限公司 基于头戴式显示器的虹膜图像采集方法及相关产品


Also Published As

Publication number Publication date
CN115393947A (zh) 2022-11-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22957256

Country of ref document: EP

Kind code of ref document: A1