WO2023082980A1 - Display method and electronic device

Display method and electronic device

Info

Publication number
WO2023082980A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
point
display device
display screen
distance
Prior art date
Application number
PCT/CN2022/127013
Other languages
English (en)
French (fr)
Inventor
何小宇
陈启超
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to EP22891777.9A (EP4400941A1)
Publication of WO2023082980A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423: Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display

Definitions

  • This description relates to the field of electronic technology, in particular to a display method and electronic equipment.
  • a VR device can simulate a three-dimensional (3D) virtual world scene, and can also provide a visual, auditory, tactile or other sensory simulation experience to make users feel as if they are in the scene. Moreover, the user can also interact with the simulated virtual world scene. AR devices can superimpose and display virtual images for users while viewing real-world scenes, and users can also interact with virtual images to achieve the effect of augmented reality.
  • MR combines AR and VR and can present users with a view in which the real world and the virtual world are merged.
  • a head-mounted display device is a display device worn on a user's head, which can provide a new visual environment for the user. Head-mounted display devices can present different effects such as VR, AR or MR to users by emitting optical signals.
  • two display devices are provided on the head-mounted display device, one display device corresponds to the left eye, and the other display device corresponds to the right eye.
  • the left-eye display device and the right-eye display device display images, respectively. In this way, people's left eye and right eye respectively collect images and fuse them through the brain to experience the virtual world.
  • users wearing head-mounted display devices are prone to blurred images, dizziness or visual fatigue, which seriously affect the comfort and experience of head-mounted display devices.
  • the purpose of this specification is to provide a display method and an electronic device for improving the comfort of a head-mounted display device.
  • a display method is provided, which is applied to an electronic device.
  • the electronic device includes a first display screen and a second display screen, the first display screen displays a first image, the first display screen corresponds to the user's first eye, and the second display screen displays a second image, the second display screen corresponds to the user's second eye.
  • the first image and the second image have an overlapping area, and at least one same object is included in the overlapping area.
  • the center point of the overlapping area is located at the first position.
  • the center point of the overlapping area is located at the second location.
  • the distance from the first position to the center point of the first image is the first distance
  • the distance from the second position to the center point of the second image is the second distance
  • the direction from the first position to the center point of the first image is the first direction
  • the direction from the second position to the center point of the second image is the second direction.
  • the first distance is not equal to the second distance, and/or the first direction is different from the second direction.
  • the overlapping area is left-right symmetrical with respect to the center line of the human face (or the center plane of the electronic device); that is, the first distance from the first position (the position of the center point of the overlapping area on the first display screen) to the center point of the first display screen is equal to the second distance from the second position (the position of the center point of the overlapping area on the second display screen) to the center point of the second display screen, and the first direction from the first position to the center point of the first display screen is opposite to the second direction from the second position to the center point of the second display screen.
  • the positions of the overlapping areas on the images displayed on the two display screens are asymmetrical, so as to compensate for the assembly deviation.
  • the first distance from the first position (the position of the center point of the overlapping area on the first image) to the center point of the first image is not equal to the second distance from the second position (the position of the center point of the overlapping area on the second image) to the center point of the second image, and/or the first direction from the first position to the center point of the first image is different from the second direction from the second position to the center point of the second image.
  • the electronic device further includes a first optical device and a second optical device, the first optical device corresponds to the first display screen, and the second optical device corresponds to the second display screen, the first optical device and the second optical device are symmetrical with respect to a mid-plane; the first position and the second position are symmetrical with respect to the mid-plane.
  • the overlapping areas can be better blended to achieve a better visual effect.
  • the electronic device is a head-mounted display device, and when the electronic device is worn by the user, the first position and the second position are symmetrical with respect to the center line of the user's face, which can make the overlapping areas blend better and achieve a better visual effect.
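  • The relationship among the first/second positions, distances, and directions can be illustrated with a small numeric sketch. The following Python snippet is only an illustration under assumed pixel coordinates (names such as first_image_center and the example values are hypothetical, not taken from the patent); it shows a case where the two distances and directions differ, i.e. the asymmetric layout used to compensate assembly deviation, and checks it against the symmetric layout described earlier.

```python
import numpy as np

# Hypothetical pixel coordinates for illustration; not values from the patent.
first_image_center = np.array([960.0, 540.0])    # centre of the first (e.g. right-eye) image
second_image_center = np.array([960.0, 540.0])   # centre of the second (e.g. left-eye) image

first_position = np.array([900.0, 540.0])        # centre of the overlapping area on the first image
second_position = np.array([1030.0, 540.0])      # centre of the overlapping area on the second image

# The vector from each overlap centre to its own image centre gives the distance and direction.
v1 = first_image_center - first_position
v2 = second_image_center - second_position
first_distance = np.linalg.norm(v1)
second_distance = np.linalg.norm(v2)

print(first_distance, second_distance)   # 60.0 70.0 -> the first distance != the second distance
print(v1, v2)                            # [60. 0.] vs [-70. 0.] -> the directions also differ

# A symmetric layout would require equal distances and exactly opposite directions;
# the compensation scheme deliberately breaks this symmetry to absorb the assembly deviation.
symmetric = bool(np.isclose(first_distance, second_distance)) and np.allclose(v1, -v2)
print("symmetric layout:", symmetric)    # False
```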
  • the first position changes as the position of the first display screen changes. For example, when the first display screen moves in a third direction, the overlapping area on the first image moves in a direction opposite to the third direction.
  • the second position changes as the position of the second display screen changes. For example, when the second display screen moves in a fourth direction, the overlapping area on the second image moves in a direction opposite to the fourth direction.
  • the position of the first display screen and/or the second display screen can be changed dynamically.
  • the position of the overlapping area changes dynamically, so as to ensure that the overlapping area can be merged.
  • before displaying the first image through the first display screen and displaying the second image through the second display screen, the method further includes performing interpupillary distance adjustment on the first display screen and the second display screen. The interpupillary distance adjustment includes: the first display screen moves a certain distance along a fifth direction, and the second display screen moves the same distance along a sixth direction opposite to the fifth direction; wherein the fifth direction is a direction in which the first display screen moves away from the second display screen, or the fifth direction is a direction in which the first display screen moves closer to the second display screen.
  • even after interpupillary distance (IPD) adjustment, VR glasses with an assembly deviation still have the assembly deviation.
  • the VR glasses are adjusted for IPD (that is, the first display device and the second display device move the same distance in opposite directions; for example, the first display device and the second display device move closer to each other, or the first display device and the second display device move away from each other).
  • when the first image and the second image are displayed after IPD adjustment, the distance difference between the first distance and the second distance remains unchanged compared with that before the IPD adjustment, and the relative relationship between the first direction and the second direction remains unchanged compared with that before the IPD adjustment, so as to ensure that the overlapping areas can be fused both before and after the IPD adjustment.
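  • The property described in the preceding paragraph can be sketched with a hypothetical one-dimensional pixel model (names and values are illustrative, not from the patent): both screens move the same distance in opposite directions during IPD adjustment, the overlap positions are counter-shifted accordingly, and the difference between the first distance and the second distance is unchanged.

```python
# Illustrative 1-D model of IPD adjustment; all values are hypothetical pixel coordinates.
image_center = 960.0   # centre pixel column of each image

def overlap_after_screen_move(overlap_px: float, screen_shift_px: float) -> float:
    # The overlapping area on the image moves opposite to the screen movement (see above).
    return overlap_px - screen_shift_px

def distance_difference(first_pos: float, second_pos: float) -> float:
    return abs(image_center - first_pos) - abs(image_center - second_pos)

first_position, second_position = 900.0, 1030.0      # overlap centres before IPD adjustment
before = distance_difference(first_position, second_position)

# IPD adjustment: the screens move the same amount in opposite directions,
# e.g. the first screen moves +5 px and the second screen moves -5 px.
first_after = overlap_after_screen_move(first_position, +5.0)
second_after = overlap_after_screen_move(second_position, -5.0)
after = distance_difference(first_after, second_after)

print(before, after)   # -10.0 -10.0 -> the distance difference is preserved by IPD adjustment
```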
  • the at least one identical object includes a first object and a second object; on the first image, the first feature point of the first object is at a first coordinate and the second feature point of the second object is at a second coordinate; on the second image, the first feature point of the first object is at a third coordinate and the second feature point of the second object is at a fourth coordinate; the coordinate difference between the first coordinate and the third coordinate is different from the coordinate difference between the second coordinate and the fourth coordinate.
  • the offsets of the two objects may be different.
  • the conditions under which the offsets of the two objects are different include:
  • Condition 1: the first object is in the area where the user's gaze point is located, and the second object is not in the area where the user's gaze point is located;
  • Condition 2: both the first object and the second object are located in the area where the user's gaze point is located, and the second object is closer to the edge of the area where the user's gaze point is located than the first object;
  • Condition 3: the number of user interactions corresponding to the first object is greater than the number of user interactions corresponding to the second object;
  • Condition 4: the first object is a user-specified object and the second object is not a user-specified object.
  • the second object is an object that receives little of the user's attention or that the user is not interested in (its number of interactions is low), so the offset applied to the second object is smaller, or the second object is not offset at all; this does not affect the user's perception, and it reduces the computation of the electronic device and improves efficiency.
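  • The priority rules listed above can be sketched as a simple selection function. The following Python snippet is an illustration only; the object fields, the thresholds, and the offset scales are assumptions for the sketch, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    in_gaze_area: bool            # is the object inside the area of the user's gaze point?
    distance_to_gaze_edge: float  # 0..1, larger = farther from the edge of the gaze area
    interaction_count: int        # how often the user has interacted with the object
    user_specified: bool          # explicitly specified by the user

def offset_scale(obj: SceneObject) -> float:
    """Return a relative offset scale in [0, 1]: higher-priority objects receive the
    full compensation offset, low-attention objects receive a smaller one or none."""
    if obj.user_specified or (obj.in_gaze_area and obj.distance_to_gaze_edge > 0.5):
        return 1.0   # full offset
    if obj.in_gaze_area or obj.interaction_count > 10:
        return 0.5   # partial offset
    return 0.0       # skip the offset to save computation

first_object = SceneObject("button", True, 0.8, 25, False)
second_object = SceneObject("background tree", False, 0.0, 0, False)
print(offset_scale(first_object), offset_scale(second_object))   # 1.0 0.0
```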
  • the electronic device includes a first display module and a second display module; the first display module includes the first display screen and a first optical device, and the second display module includes the second display screen and a second optical device; there is a first offset between the position of the first display screen and the position of the first optical device, and a second offset between the position of the second display screen and the position of the second optical device. The method further includes: acquiring three-dimensional image data; acquiring a first coordinate transformation matrix and a second coordinate transformation matrix, where the first coordinate transformation matrix corresponds to the first optical device and the second coordinate transformation matrix corresponds to the second optical device; obtaining the first offset and the second offset; processing the three-dimensional image data into the first image based on the first coordinate transformation matrix and the first offset; and processing the three-dimensional image data into the second image based on the second coordinate transformation matrix and the second offset.
  • the first image and the second image are obtained from the same three-dimensional image data and are transformed according to the positions of the first optical device (corresponding to the position of the first human eye) and the second optical device (corresponding to the position of the second human eye), respectively; the resulting transformation helps the overlapping regions on the first image and the second image fuse, and ensures that the user can clearly see the virtual environment.
  • when the position of the first display module changes, the first coordinate transformation matrix changes; or, when the position of the second display module changes, the second coordinate transformation matrix changes.
  • in general, the display module moves with the position of the human eye, so that the viewing angle of the display screen can be adjusted as the position of the human eye changes.
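  • The per-eye rendering path described above (the same three-dimensional image data processed with a per-eye coordinate transformation matrix plus a per-display-module offset) can be sketched roughly as follows. This is a simplified illustration under assumed conventions (homogeneous 4x4 view-projection matrices, an assumed 1920x1080 screen, and a pixel-space offset); the function and variable names are hypothetical and do not come from the patent.

```python
import numpy as np

def render_eye_image(points_3d: np.ndarray,
                     coord_transform: np.ndarray,
                     offset_px: np.ndarray) -> np.ndarray:
    """Project 3-D points with a per-eye 4x4 transform, then shift the result by the
    display module's offset to compensate for the assembly deviation."""
    n = points_3d.shape[0]
    homo = np.hstack([points_3d, np.ones((n, 1))])      # to homogeneous coordinates
    clip = homo @ coord_transform.T                     # per-eye view-projection transform
    ndc = clip[:, :2] / clip[:, 3:4]                    # perspective divide
    pixels = (ndc + 1.0) * 0.5 * np.array([1920.0, 1080.0])   # assumed screen resolution
    return pixels + offset_px                           # apply this eye's compensation offset

# Hypothetical inputs: one 3-D point, placeholder transforms, small opposite offsets.
points = np.array([[0.1, 0.0, -2.0]])
first_transform = np.eye(4)     # stands in for the first coordinate transformation matrix
second_transform = np.eye(4)    # stands in for the second coordinate transformation matrix
first_offset = np.array([+3.0, 0.0])
second_offset = np.array([-3.0, 0.0])

first_image_px = render_eye_image(points, first_transform, first_offset)
second_image_px = render_eye_image(points, second_transform, second_offset)
print(first_image_px, second_image_px)   # the same 3-D point lands at different pixels per eye
```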
  • a calibration method is also provided, which is applied to a calibration device. The calibration device includes an image capture module, and the method includes: displaying a first image on the first display screen of the electronic device to be calibrated, and displaying a second image on the second display screen;
  • the image capture module captures the first display screen to obtain a third image, and captures the second display screen to obtain a fourth image; wherein the first image and the second image have an overlapping area, the overlapping area includes at least one identical object, and the at least one identical object includes a calibration object; the center point of the overlapping area on the first image is located at a first position, and the center point of the overlapping area on the second image is located at a second position; the distance from the first position to the center of the first image is equal to the distance from the second position to the center of the second image, and the direction from the first position to the center of the first image is the same as the direction from the second position to the center of the second image; the third image and the fourth image are fused to determine the assembly deviation of the electronic device to be calibrated.
  • in this way, the assembly deviation of the electronic device, that is, the offset between the two display devices, can be calibrated, so that the electronic device can compensate for the assembly deviation.
  • the first offset includes a first displacement and a first direction, and the second offset includes a second displacement and a second direction; wherein the sum of the first displacement and the second displacement is equal to the distance difference, and the first direction is opposite to the second direction.
  • the first displacement is half of the distance difference
  • the second displacement is half of the distance difference
  • the method further includes: writing the first offset and the second offset into the electronic device to be calibrated, so that the electronic device to be calibrated processes the image displayed on the first display screen based on the first offset and processes the image displayed on the second display screen based on the second offset.
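  • A rough sketch of the offset split described above: the calibration device measures the distance difference of the calibration object between the two captured images, splits it into two displacements of half the difference with opposite directions, and writes them back into the device. The function name and the way the distance difference is obtained here are assumptions for illustration, not the patent's exact procedure.

```python
def split_offsets(distance_difference_px: float):
    """Split the measured distance difference into two offsets whose displacements
    sum to the difference and whose directions are opposite."""
    half = distance_difference_px / 2.0
    first_offset = +half    # e.g. shift the first image one way
    second_offset = -half   # shift the second image the opposite way
    return first_offset, second_offset

# Hypothetical measurement: the calibration object is 6 px farther from the image
# centre on the third (captured) image than on the fourth (captured) image.
measured_difference = 6.0
first_offset, second_offset = split_offsets(measured_difference)
print(first_offset, second_offset)   # 3.0 -3.0

# These two values would then be written into the electronic device to be calibrated,
# which applies them to the first and second images at display time.
```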
  • a display method is also provided. The electronic device includes a first display module and a second display module; the first display module includes the first display screen and a first optical device, and the second display module includes the second display screen and a second optical device; there is a first offset between the position of the first display screen and the position of the first optical device, and a second offset between the position of the second display screen and the position of the second optical device. The method further includes: acquiring three-dimensional image data; acquiring a first coordinate transformation matrix and a second coordinate transformation matrix, where the first coordinate transformation matrix corresponds to the first optical device and the second coordinate transformation matrix corresponds to the second optical device; obtaining the first offset and the second offset; processing the three-dimensional image data into a first image based on the first coordinate transformation matrix and the first offset, and displaying the first image through the first display module; and processing the three-dimensional image data into a second image based on the second coordinate transformation matrix and the second offset, and displaying the second image through the second display module.
  • the first image and the second image are obtained from the same three-dimensional image data and are transformed according to the positions of the first optical device (corresponding to the position of the first human eye) and the second optical device (corresponding to the position of the second human eye), respectively; the resulting transformation helps the overlapping regions on the first image and the second image fuse, and ensures that the user can clearly see the virtual environment.
  • when the position of the first display module changes, the first coordinate transformation matrix changes; or, when the position of the second display module changes, the second coordinate transformation matrix changes.
  • in general, the display module moves with the position of the human eye, so that the viewing angle of the display screen can be adjusted as the position of the human eye changes.
  • an electronic device including:
  • a processor, a memory, and one or more programs;
  • the one or more programs are stored in the memory and include instructions, and when the instructions are executed by the processor, the electronic device performs the method steps provided in the first aspect above.
  • a calibration device including:
  • a processor, a memory, and one or more programs;
  • the one or more programs are stored in the memory and include instructions, and when the instructions are executed by the processor, the calibration device performs the method steps provided in the second aspect above.
  • a system including:
  • a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and when the computer program runs on a computer, the computer executes the method described in the first aspect or the second aspect above.
  • a computer program product including a computer program, which, when the computer program is run on a computer, causes the computer to execute the method as described in the first aspect or the second aspect above.
  • FIG. 1 is a schematic diagram of a VR system provided by an embodiment of this specification
  • FIG. 2A and FIG. 2B are schematic structural diagrams of a VR head-mounted display device provided by an embodiment of this specification;
  • FIG. 3A is another schematic structural diagram of a VR head-mounted display device provided by an embodiment of this specification.
  • FIG. 3B is a schematic diagram of a software structure of a VR head-mounted display device provided by an embodiment of this specification.
  • FIGS. 4A to 4B are schematic diagrams of the human eye observation mechanism provided by an embodiment of this specification.
  • Fig. 5 is a schematic diagram of the display principle of VR glasses provided by an embodiment of this specification.
  • FIGS. 6A to 6B are schematic diagrams of binocular non-fusion provided by an embodiment of this specification.
  • FIGS. 7A to 7B are other schematic diagrams of binocular non-fusion provided by an embodiment of this specification.
  • FIGS. 8A to 8B are schematic diagrams of a display method provided by an embodiment of this specification.
  • FIGS. 9 to 10 are schematic diagrams of another display method provided by an embodiment of this specification.
  • Fig. 11 is a schematic flowchart of a display method provided by an embodiment of this specification.
  • FIGS. 12A to 12B are schematic diagrams of obtaining a two-dimensional image from a three-dimensional image provided by an embodiment of the present specification;
  • FIGS. 12C to 13 are schematic diagrams of a calibration method provided by an embodiment of this specification.
  • Fig. 14 is another schematic flowchart of a display method provided by an embodiment of this specification.
  • Fig. 15 is a schematic diagram of an assembly deviation provided by an embodiment of this specification.
  • Fig. 16 is a schematic diagram of an electronic device provided by an embodiment of this specification.
  • "at least one" as used in the embodiments of the present application means one or more, and "a plurality" means two or more.
  • words such as "first" and "second" are used only to distinguish the description, and cannot be understood as expressing or implying relative importance, nor as expressing or implying an order.
  • the first object and the second object do not represent the importance or the order of the two, and are only used to distinguish the description.
  • "and/or" describes only an association relationship and indicates that three relationships may exist; for example, A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character "/" in this article generally indicates that the contextual objects are in an "or" relationship.
  • a "connection" can be a detachable connection or a non-detachable connection, and can be a direct connection or an indirect connection through an intermediary.
  • references to "one embodiment” or “some embodiments” or the like in this specification means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the specification.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • VR technology is a means of human-computer interaction created with the help of computer and sensor technology.
  • VR technology integrates computer graphics technology, computer simulation technology, sensor technology, display technology and other science and technology to create a virtual environment.
  • the virtual environment includes two-dimensional or three-dimensional virtual objects generated by computers and dynamically played in real time, providing users with simulations of vision and other senses, making users feel as if they are in the scene.
  • in addition to vision, the virtual environment can also provide perceptions such as hearing, touch, force, movement, and even smell and taste, which are collectively called multi-sensory perception.
  • the computer can also detect the user's head rotation, eye movement, gestures, or other human body actions; the computer processes the data corresponding to the user's actions, responds to those actions in real time, and feeds the responses back to the user's senses, thereby forming the virtual environment.
  • the user can see the VR game interface by wearing a VR head-mounted display device (eg, VR glasses, VR helmet, etc.), and can interact with the VR game interface through gestures, handles, etc., as if in a game.
  • Augmented reality refers to the superimposition of computer-generated virtual objects on the real world scene, so as to realize the enhancement of the real world.
  • AR technology needs to collect real-world scenes and then add a virtual environment on top of the real world. Therefore, the difference between VR technology and AR technology is that VR technology creates a completely virtual environment and everything the user sees is a virtual object, while AR technology superimposes virtual objects on the real world, that is, the user can see both real objects and virtual objects.
  • for example, the user wears transparent glasses, through whose lenses the surrounding real environment can be seen, while virtual objects can also be displayed on the lenses, so that the user sees both real objects and virtual objects.
  • Mixed reality technology builds a bridge of interactive feedback between the virtual environment, the real world and the user by introducing real-scene information into the virtual environment, thereby enhancing the realism of the user experience.
  • the real object is virtualized (for example, a camera is used to scan the real object for 3D reconstruction to generate a virtual object), and the virtualized real object is introduced into the virtual environment, so that the user can see the real object in the virtual environment.
  • Binocular fusion is a visual phenomenon: when the two eyes observe the same object at the same time, two images of the object are formed on the respective retinas and are transmitted through the optic nerves on both sides to the same area of the cortical visual center, where they are fused into a single, complete image perception.
  • the virtual image or virtual environment may include various objects, and the objects may also be called targets.
  • Objects can include objects or things that can appear in a real-world environment, such as people, animals, or furniture.
  • Objects can also include virtual elements such as virtual icons, navigation bars, software buttons, or windows, which can be used to interact with users.
  • the following mainly introduces the VR head-mounted display device as an example.
  • the VR head-mounted display device 100 may be applied in a VR system as shown in FIG. 1 .
  • the VR system includes a VR head-mounted display device 100 and a processing device 200, and the VR system may be called a VR split device.
  • the VR head-mounted display device 100 can be connected with the processing device 200 .
  • the connection between the VR head-mounted display device 100 and the processing device 200 includes a wired or wireless connection.
  • the wireless connection may be a Bluetooth (BT) connection (classic Bluetooth or Bluetooth Low Energy (BLE)), a wireless local area network (WLAN) connection (such as a wireless fidelity (Wi-Fi) network), Zigbee, frequency modulation (FM), near field communication (NFC), infrared (IR), a general 2.4G/5G band wireless communication connection, or the like.
  • the processing device 200 can perform processing calculations, for example, the processing device 200 can generate images and process the images (the processing method will be described later), and then send the processed images to the VR head-mounted display device to display.
  • the processing device 200 may include a host (such as a VR host) or a server (such as a VR server).
  • the VR host or VR server may be a device with relatively large computing capabilities.
  • the VR host can be a device such as a mobile phone, a tablet computer, or a notebook computer, and the VR server can be a cloud server, etc.
  • the VR head-mounted display device 100 may be glasses, a helmet, and the like.
  • the VR head-mounted display device 100 is generally provided with two display devices, that is, a first display device 110 and a second display device 120 .
  • the display device of the VR head-mounted display device 100 may display images to human eyes.
  • the first display device 110 and the second display device 120 are wrapped inside the VR glasses, so the arrows used to indicate the first display device 110 and the second display device 120 in FIG. 1 are drawn with dotted lines.
  • in other embodiments, the VR head-mounted display device 100 itself has functions such as image generation and processing, that is, it does not need the processing device 200 in FIG. 1; such a VR head-mounted display device 100 can be called a VR all-in-one machine.
  • FIG. 2A is a schematic diagram of a VR head-mounted display device 100 .
  • the VR head-mounted display device 100 includes a display module 1 and a display module 2 .
  • the display module 1 includes a first display device 110 and an optical device 130 .
  • the display module 2 includes a second display device 120 and an optical device 140 .
  • the display module 1 and the display module 2 may also be referred to as lens barrels.
  • the display module 1 is used to display images to the user's right eye.
  • the display module 2 is used to display images to the user's left eye. It can be understood that the VR head-mounted display device 100 shown in FIG. 2A may also include other components, such as a support part 30 and a bracket 20, where the support part 30 is used to support the VR head-mounted display device 100 on the bridge of the nose, and the bracket 20 is used to support the VR head-mounted display device 100 on both ears, so as to ensure that the VR head-mounted display device 100 is worn stably.
  • optics 130 and optics 140 are symmetrical about a median plane C, which in FIG. 2A is a plane perpendicular to the paper.
  • the VR head-mounted display device 100 can have a left-right symmetrical structure, and the support part 30 and/or the bracket 20 can each be left-right symmetrical with respect to the middle plane C. The support part 30 can fix the position of the device on the face, which helps align the optical device 130 and the optical device 140 with the user's right eye and left eye, respectively.
  • a human face is basically left-right symmetrical, and a person's left eye and right eye are left-right symmetrical with respect to the center line of the human face.
  • when the VR head-mounted display device 100 is worn by the user, the left eye and the right eye are symmetrical with respect to the middle plane C, the optical device 130 and the optical device 140 are symmetrical with respect to the center line of the human face, and the center line of the human face lies in the middle plane C, that is, the center line of the face coincides with the middle plane C.
  • symmetry may be strict symmetry, or there may be slight deviations.
  • the optical device 130 and the optical device 140 may be strictly symmetrical with respect to the middle plane C, or they may be substantially symmetrical with respect to the middle plane C, where the basic symmetry allows a certain deviation that is within a small range.
  • FIG. 2B can be understood as a simplification of the VR head-mounted display device 100 in FIG. 2A.
  • the second display device 120 is located on the side of the optical device 140 away from the left eye
  • the first display device 110 is located on the side of the optical device 130 away from the right eye.
  • the optical device 130 and the optical device 140 are symmetrical about the center line D of the human face.
  • when the first display device 110 displays an image, the light emitted by the first display device 110 passes through the optical device 130 and converges to the person's right eye; when the second display device 120 displays an image, the light emitted by the second display device 120 passes through the optical device 140 and converges to the person's left eye.
  • the composition of the VR head-mounted display device 100 shown in FIG. 2A or FIG. 2B is only a logical illustration.
  • the number of optical devices and/or display devices can be flexibly set according to different requirements.
  • the first display device 110 and the second display device 120 may be two independent display devices, or may be two display areas on the same display device.
  • the first display device 110 and the second display device 120 may be display screens, such as liquid crystal display screens, light emitting diode (LED) display screens, or other types of display devices, which are not limited in this embodiment.
  • optical device 130 and optical device 140 may be two separate optical devices, or different parts of the same optical device.
  • the optical device 130 or 140 can be one or several optical devices such as reflective mirrors, transmissive mirrors, or optical waveguides, and can also improve the viewing angle.
  • for example, the optical device 130 or 140 can each be an eyepiece composed of a plurality of transmissive mirrors.
  • the optical device may be a Fresnel lens and/or an aspheric lens, etc., which are not limited in this embodiment of the present application.
  • the optical device 130 and the optical device 140 are aimed at the user's two eyes respectively, and when IPD adjustment is performed, the two optical devices move the same distance in opposite directions.
  • the VR head-mounted display device 100 may further include more components, see FIG. 3A for details.
  • FIG. 3A is a schematic structural diagram of a head-mounted display device 100 provided by an embodiment of the present application.
  • the head-mounted display device 100 may be a VR head-mounted display device, an AR head-mounted display device, an MR head-mounted display device, and the like. Take a VR head-mounted display device as an example.
  • the processor 101 is generally used to control the overall operation of the VR head-mounted display device 100, and may include one or more processing units, for example: the processor 101 may include an application processor (application processor, AP), a modem processor, Graphics processing unit (GPU), image signal processor (image signal processor, ISP), video processing unit (video processing unit, VPU) controller, memory, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • a memory may also be provided in the processor 101 for storing instructions and data.
  • the memory in processor 101 is a cache memory.
  • the memory may hold instructions or data that the processor 101 has just used or recycled. If the processor 101 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 101 is reduced, thereby improving the efficiency of the system.
  • the processor 101 may be used to control the optical power of the VR head-mounted display device 100 .
  • the processor 101 may be used to control the optical power of the optical display module 106 to realize the function of adjusting the optical power of the head-mounted display device 100 .
  • for example, the processor 101 can adjust the relative positions between the optical devices (such as lenses) so that, when the human eye forms an image, the position of the corresponding virtual image plane can be adjusted. In this way, the effect of controlling the optical power of the head-mounted display device 100 is achieved.
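  • The patent only states that moving optical elements shifts the virtual image plane; as a standard thin-lens illustration (not taken from the source), the relation below shows why changing the display-to-lens distance moves the virtual image seen by the eye.

```latex
% Standard thin-lens relation (illustrative; the symbols d_o, d_i, f are not from the patent):
%   d_o = display-to-lens distance, d_i = image distance, f = focal length.
\[
  \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}
  \quad\Longrightarrow\quad
  d_i = \frac{f\,d_o}{d_o - f}.
\]
% For d_o < f, d_i is negative, i.e. a virtual image at distance f d_o / (f - d_o) on the
% display side of the lens; changing d_o therefore moves the virtual image plane.
```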
  • processor 101 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, a serial peripheral interface (SPI), and the like.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 101 may include multiple sets of I2C buses.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 101 and the communication module 170 .
  • the processor 101 communicates with the Bluetooth module in the communication module 170 through the UART interface to realize the Bluetooth function.
  • the MIPI interface can be used to connect the processor 101 with the display device in the optical display module 106 , the camera 180 and other peripheral devices.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 101 with the camera 180 , the display device in the optical display module 106 , the communication module 170 , the sensor module 103 , the microphone 104 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the camera 180 can capture images including real objects
  • the processor 101 can fuse the images captured by the camera with the virtual objects, and display the fused images through the optical display module 106 .
  • the camera 180 can also capture images including human eyes.
  • the processor 101 performs eye tracking through the image.
  • the USB interface is an interface that conforms to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface can be used to connect a charger to charge the VR head-mounted display device 100, and can also be used to transmit data between the VR head-mounted display device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as mobile phones.
  • the USB interface may be USB 3.0, which is compatible with high-speed DisplayPort (DP) signal transmission and can transmit high-speed video and audio data.
  • the interface connection relationship between modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the head-mounted display device 100 .
  • the head-mounted display device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the VR head-mounted display device 100 may include a wireless communication function.
  • the VR head-mounted display device 100 may receive images from other electronic devices (such as a VR host) for display, or the VR head-mounted display device 100 may obtain data directly from stations such as base stations.
  • the communication module 170 may include a wireless communication module and a mobile communication module.
  • the wireless communication function can be realized by an antenna (not shown), a mobile communication module (not shown), a modem processor (not shown), and a baseband processor (not shown).
  • Antennas are used to transmit and receive electromagnetic wave signals. Multiple antennas may be included in the VR head-mounted display device 100, and each antenna may be used to cover single or multiple communication frequency bands.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module can provide wireless communication solutions such as the second generation (2G) network, third generation (3G) network, fourth generation (4G) network, fifth generation (5G) network, and sixth generation (6G) network.
  • the mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module can receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave and radiate it through the antenna.
  • at least part of the functional modules of the mobile communication module may be set in the processor 101 .
  • at least part of the functional modules of the mobile communication module and at least part of the modules of the processor 101 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speakers, etc.), or displays images or videos through the display device in the optical display module 106 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 101, and be set in the same device as the mobile communication module or other functional modules.
  • the wireless communication module can provide wireless local area networks (wireless local area networks, WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (bluetooth, BT) applied on the VR head-mounted display device 100, Global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the wireless communication module receives electromagnetic waves via the antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 101 .
  • the wireless communication module can also receive the signal to be sent from the processor 101, frequency-modulate it, amplify it, and convert it into electromagnetic wave to radiate out through the antenna.
  • the antenna of the VR head-mounted display device 100 is coupled to the mobile communication module, so that the VR head-mounted display device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), 5G, 6G, BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the VR head-mounted display device 100 realizes the display function through the GPU, the optical display module 106 , and the application processor.
  • the GPU is a microprocessor for image processing, connected to the optical display module 106 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 101 may include one or more GPUs that execute program instructions to generate or change display information.
  • Memory 102 may be used to store computer-executable program code, including instructions.
  • the processor 101 executes various functional applications and data processing of the VR head-mounted display device 100 by executing instructions stored in the memory 102 .
  • the memory 102 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data (such as audio data, phonebook, etc.) created during the use of the head-mounted display device 100 .
  • the memory 102 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the VR head-mounted display device 100 can implement audio functions through an audio module, a speaker, a microphone 104 , an earphone interface, and an application processor. Such as music playback, recording, etc.
  • the audio module is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be set in the processor 101, or some functional modules of the audio module may be set in the processor 101. The loudspeaker, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the head-mounted display device 100 can listen to music through the speakers, or listen to hands-free calls.
  • the microphone 104, also called a "mic", is used to convert sound signals into electrical signals.
  • the VR head-mounted display device 100 may be provided with at least one microphone 104 .
  • the VR head-mounted display device 100 may be provided with two microphones 104, which may also implement a noise reduction function in addition to collecting sound signals.
  • the VR head-mounted display device 100 can also be provided with three, four or more microphones 104 to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the headphone jack is used to connect wired headphones.
  • the headphone interface can be a USB interface, a 3.5 millimeter (mm) open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the VR head-mounted display device 100 may include one or more buttons 150 , and these buttons may control the VR head-mounted display device and provide users with functions to interact with the VR head-mounted display device 100 .
  • Keys 150 may be in the form of buttons, switches, dials, and touch or near-touch sensing devices such as touch sensors. Specifically, for example, the user can turn on the optical display module 106 of the VR head-mounted display device 100 by pressing a button.
  • the keys 150 include a power key, a volume key and the like.
  • the key 150 may be a mechanical key. It can also be a touch button.
  • the head-mounted display device 100 can receive key input and generate key signal input related to user settings and function control of the head-mounted display device 100 .
  • the VR head-mounted display device 100 may include an input-output interface 160, and the input-output interface 160 may connect other devices to the VR head-mounted display device 100 through suitable components.
  • Components may include, for example, audio/video jacks, data connectors, and the like.
  • the optical display module 106 is used for presenting images to the user under the control of the processor 101 .
  • the optical display module 106 can convert a real pixel image display into a near-eye projected virtual image display through one or several optical devices such as mirrors, transmissive mirrors, or optical waveguides, so as to realize a virtual interactive experience, or an interactive experience combining the virtual and the real.
  • the optical display module 106 receives the image data information sent by the processor 101 and presents corresponding images to the user.
  • the optical display module 106 can refer to the structure shown in FIG. 2A above.
  • the optical display module 106 includes two display devices, that is, a first display device 110 and a second display device 120.
  • the optical display module 106 can also refer to the structure shown in FIG. 2B above.
  • the optical display module 106 includes a display module 1 and a display module 2, and the display module 1 includes a first display device 110 and an optical device 130 , the display module 2 includes a second display device 120 and an optical device 140 .
  • the VR head-mounted display device 100 may further include an eye-tracking module 1200, which is used to track the movement of human eyes, and then determine the point of gaze of the human eyes.
  • the position of the pupil can be located by image processing technology, the coordinates of the center of the pupil can be obtained, and then the gaze point of the person can be calculated.
  • the eye tracking system can determine the position of the user's fixation point (or determine the direction of the user's line of sight) through methods such as the video eye diagram method, the photodiode response method, or the pupil-corneal reflection method, so as to track the user's eye movement.
  • the eye tracking system may include one or more near-infrared light-emitting diodes (Light-Emitting Diode, LED) and one or more near-infrared cameras.
  • the NIR LED and NIR camera are not shown in Figure 3A.
  • the near-infrared LEDs can be placed around the optics so as to fully illuminate the human eye.
  • the near-infrared LED may have a center wavelength of 850 nm or 940 nm.
  • the eye tracking system can obtain the user's line of sight direction through the following methods: the human eye is illuminated by near-infrared LEDs, and the near-infrared camera captures the image of the eyeball, and then according to the position of the reflection point of the near-infrared LED on the cornea and the pupil in the eyeball image to determine the direction of the optical axis of the eyeball, thereby obtaining the direction of the user's line of sight.
  • eye-tracking systems corresponding to both eyes of the user may be set respectively, so as to perform eye-tracking on both eyes synchronously or asynchronously.
  • an eye-tracking system can also be set only near one of the user's eyes, and the line-of-sight direction of that eye is obtained through the eye-tracking system; then, according to the relationship between the gaze points of the two eyes (for example, when the user observes an object with both eyes, the gaze-point positions of the two eyes are generally similar or the same), combined with the distance between the user's eyes, the line-of-sight direction or gaze-point position of the user's other eye can be determined.
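  • A highly simplified sketch of the pupil-corneal-reflection idea described above, assuming 2-D image coordinates of a detected pupil centre and corneal glint; the gain constant and the step that mirrors one eye's gaze to estimate the other eye's gaze are illustrative assumptions, not the patent's algorithm.

```python
import numpy as np

def gaze_direction_2d(pupil_center_px, glint_px, gain=0.05):
    """Estimate a 2-D gaze direction from the glint-to-pupil vector, as in a
    simplified pupil-corneal-reflection model (gain is an assumed calibration constant)."""
    v = np.asarray(pupil_center_px, dtype=float) - np.asarray(glint_px, dtype=float)
    return gain * v

def estimate_other_eye_gaze(known_gaze):
    """If only one eye is tracked, assume both eyes fixate roughly the same point,
    so the other eye's gaze direction is approximately the same."""
    return np.asarray(known_gaze, dtype=float)

right_gaze = gaze_direction_2d(pupil_center_px=(412, 300), glint_px=(400, 296))
left_gaze = estimate_other_eye_gaze(right_gaze)
print(right_gaze, left_gaze)   # [0.6 0.2] for both eyes in this toy example
```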
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the VR head-mounted display device 100 .
  • the VR head-mounted display device 100 may include more or fewer components than those shown in FIG. 3A , or combine certain components, or split certain components, or arrange different components. This application Examples are not limited.
  • FIG. 3B is a block diagram of the software structure of the VR head-mounted display device 100 according to the embodiment of the present application.
  • the software structure of the VR head-mounted display device 100 may be a layered architecture; for example, the software may be divided into several layers, each layer has a clear role and division of labor, and the layers communicate through software interfaces. In some embodiments, the software is divided into five layers, which from top to bottom are the application program layer 210, the application program framework layer (framework, FWK) 220, the Android runtime 230 and system library 240, the kernel layer 250, and the hardware layer 260.
  • the application layer 210 may include a series of application packages.
  • the application program layer includes a gallery 211 application, a game 212 application, and so on.
  • the application framework layer 220 provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer can include some predefined functions.
  • the application framework layer may include a resource manager 221 , a view system 222 and so on.
  • the view system 222 includes visual controls, such as controls for displaying text, controls for displaying pictures, and the like. View system 222 may be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager 221 provides various resources for applications, such as localized strings, icons, pictures, layout files, video files, and so on.
  • Android runtime 230 includes a core library and a virtual machine.
  • Android runtime 230 is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library 240 may include a number of functional modules. For example: a surface manager (surface manager) 241, a media library (media libraries) 242, a three-dimensional graphics processing library (for example: OpenGL ES) 243, a 2D graphics engine 244 (for example: SGL), etc.
  • the surface manager 241 is used to manage the display subsystem, and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library 242 can support multiple audio and video encoding formats, for example: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library 243 is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • the 2D graphics engine 244 is a drawing engine for 2D drawing.
  • the system library 240 may also include a VR algorithm integration module 245 .
  • the VR algorithm integration module 245 includes the first offset of the first display device, the second offset of the second display device, a coordinate transformation matrix, related algorithms for coordinate transformation based on the coordinate transformation matrix, and so on. Details about the first offset, the second offset, and the coordinate transformation matrix will be described later. It should be noted that FIG. 3B takes the case where the VR algorithm integration module 245 is located in the system library as an example; it can be understood that the VR algorithm integration module 245 may also be located in another layer, such as the application framework layer 220, which is not limited in the embodiments of the present application.
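• To make the role of the stored offsets and the coordinate transformation concrete, here is a minimal Python sketch, assuming the coordinate transformation can be modeled as a homogeneous 2D translation and that the offsets are pixel values; the names, the example offset values, and the sign convention (content shifted opposite to the display offset) are illustrative assumptions, not the patent's definition.

```python
import numpy as np

def translation_matrix(dx, dy):
    """Homogeneous 2D translation, one plausible form for a coordinate
    transformation matrix that shifts image content by a display offset."""
    return np.array([[1.0, 0.0, dx],
                     [0.0, 1.0, dy],
                     [0.0, 0.0, 1.0]])

def apply_offset(points_xy, offset_xy):
    """Shift image-plane points opposite to a display's assembly offset so the
    displayed content is compensated (sign convention is an assumption)."""
    dx, dy = offset_xy
    m = translation_matrix(-dx, -dy)                    # move content the other way
    pts = np.c_[np.asarray(points_xy, dtype=float), np.ones(len(points_xy))]
    return (m @ pts.T).T[:, :2]

# Hypothetical per-display offsets that such a module could store.
first_offset = (12.0, 0.0)    # e.g. first display shifted 12 px to the right
second_offset = (-5.0, 0.0)   # e.g. second display shifted 5 px to the left
```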
  • the kernel layer 250 is a layer between hardware and software.
  • the kernel layer 250 includes at least a display driver 251 , a camera driver 252 , an audio driver 253 , a sensor driver 254 and so on.
  • the hardware layer may include a first display device 110 , a second display device 120 , and various sensor modules, such as an acceleration sensor 201 , a gravity sensor 202 , a touch sensor 203 and the like.
  • the software structure shown in FIG. 3B does not constitute a specific limitation on the software structure of the VR head-mounted display device 100 .
  • the software structure of the VR head-mounted display device 100 may include more or fewer layers than those shown in FIG. 3B .
  • the system library 240 can also be used to realize the adaptation between the upper layer (that is, the application framework layer) and the lower layer, for example, to realize interface matching between the upper layer and the lower layer, so as to ensure that the upper layer and the lower layer can perform data communication.
  • an exemplary flow of the display method provided by the present application includes:
  • the system library 240 converts the three-dimensional image generated by the game 212 application into a first plane image and a second plane image, wherein the first plane image corresponds to the first display device 110 and the second plane image corresponds to the second display device 120.
  • the system library obtains the first offset and the second offset from the VR algorithm integration module 245, uses the first offset to process the first plane image (for example, coordinate conversion processing) to obtain a third plane image, and uses the second offset to process the second plane image (for example, coordinate conversion processing) to obtain a fourth plane image.
  • the system library 240 drives, through the display driver 251 in the kernel layer, the first display device 110 to display the third plane image and the second display device 120 to display the fourth plane image, so as to present the virtual environment to the user through the third plane image and the fourth plane image (as sketched below).
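• The flow above can be summarized with the following minimal Python sketch, assuming the plane images are numpy arrays, the offsets are integer pixel values, and the coordinate conversion reduces to a horizontal shift; all names and the sign convention are assumptions for illustration only.

```python
import numpy as np

def shift_image(img, dx):
    """Shift an image horizontally by dx pixels (positive = right), padding
    with zeros; this stands in for the coordinate conversion processing."""
    dx = int(round(dx))
    out = np.zeros_like(img)
    if dx >= 0:
        out[:, dx:] = img[:, :img.shape[1] - dx]
    else:
        out[:, :dx] = img[:, -dx:]
    return out

def prepare_frames(first_plane, second_plane, first_offset, second_offset):
    """Minimal sketch of the flow: each plane image is compensated by the
    (hypothetical) stored offset of its display device before being handed
    to the display driver."""
    third_plane = shift_image(first_plane, -first_offset)    # compensate shift of display 1
    fourth_plane = shift_image(second_plane, -second_offset)  # compensate shift of display 2
    return third_plane, fourth_plane

# Example with dummy 1080x1200 grayscale frames and pixel offsets.
a = np.zeros((1080, 1200), dtype=np.uint8)
b = np.zeros((1080, 1200), dtype=np.uint8)
f3, f4 = prepare_frames(a, b, first_offset=12, second_offset=-5)
```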
  • In the following, the VR head-mounted display device 100 is taken to be VR glasses as an example for introduction.
  • Figure 4A is a schematic diagram of the composition of the human eye.
  • the human eye may include a lens, a ciliary muscle, and a retina located in the fundus.
  • the lens can function as a zoom lens to converge the light entering the human eye, so that the incident light converges on the retina at the fundus and objects in the actual scene form a clear image on the retina.
  • the ciliary muscle can be used to adjust the shape of the lens.
  • the ciliary muscle can adjust the diopter of the lens by contracting or relaxing, so as to achieve the effect of adjusting the focal length of the lens. Therefore, objects at different distances in the actual scene can be clearly imaged on the retina through the lens.
  • the perspectives of the left eye and the right eye are different, so the images collected by the left eye and the right eye are different.
  • the fields of view of the left eye and the right eye overlap, so there is an overlapping area on the images collected by the left eye and the right eye, and the overlapping area includes images of objects located in the overlapping range of the user's binocular field of view.
  • the real environment 400 includes multiple observed objects, such as a tree 410 , a football 420 , a dog 430 and so on.
  • the football 420 is in the field of view of the left eye but not in the field of view of the right eye
  • the tree 410 is in the overlapping range 440 of the field of view of the left eye and the right eye
  • the dog 430 is within the field of view of the right eye but not within the field of view of the left eye.
  • what the left eye captures is the image 4100, that is, the image 4100 is the image formed on the retina of the left eye.
  • the overlapping area includes images of objects within the field of view overlapping range 440 .
  • the overlapping area on image 4100 is area 4110 .
  • the overlapping area on image 4200 is region 4210 .
  • the image 4101 of the tree 410 is included in the area 4110
  • the image 4201 of the tree 410 is included in the area 4210 (because the tree 410 is within the overlapping field of view of the left eye and the right eye).
  • Non-overlapping region 4120 of image 4100 includes image 4102 of soccer ball 420 (since soccer ball 420 is within the field of view of the left eye).
  • the image of the dog 430 is not included on the image 4100 . Included within non-overlapping region 4220 of image 4200 is image 4202 of dog 430 (since dog 430 is within the field of view of the right eye). Since the football 420 is not in the field of view of the right eye, the image 4200 does not include the image of the football 420 .
  • the center point Q1 of the image 4100 is aligned with the eyeball center W1 of the left eye, that is, the center point Q1 and the eyeball center W1 of the left eye are on the same straight line K1, and the straight line K1 is a line passing through the eyeball center W1 of the left eye and perpendicular to the left eyeball.
  • the eyeball center W1 of the left eye can be understood as the center of the pupil of the left eye.
  • the center point Q2 of the image 4200 is aligned with the center W2 of the right eyeball, that is, Q2 and W2 are on the same straight line K2, and the straight line K2 is a line passing through the center W2 of the right eyeball and perpendicular to the right eyeball.
  • the eyeball center W2 of the right eye can be understood as the center of the pupil of the right eye.
  • the distance between the central point P of the visual field overlapping region 440 of both eyes and the straight line K1 is L1
  • the distance to the straight line K2 is L2
  • the distance L1 is equal to the distance L2
  • the direction from the central point P to the straight line K1 is opposite to the direction from the central point P to the straight line K2.
  • the overlapping area 4110 includes a central point P1, and the point P1 is an image point corresponding to the point P in the image 4100 .
  • the overlapping area 4210 includes a central point P2, and the point P2 is an image point corresponding to the point P in the image 4200 .
  • the distance between point P1 and the central point Q1 of the image 4100 is L1'
  • the distance between the point P2 and the central point Q2 of the image 4200 is L2'
  • the distance L1' is equal to the distance L2'
  • the direction from point P1 to point Q1 is opposite to the direction from point P2 to point Q2
  • the center point P1 and center point P2 are symmetrical with respect to the center line D of the face, that is, the center point P1 and center point P2 are also symmetrical with respect to the middle plane.
  • the center point of the image can be understood as the exact center of the image in the up-down direction, and is the exact center of the image in the left-right direction.
  • the center point of the overlapping area can be understood as the exact center of the overlapping area in the up-down direction, and the exact center in the left-right direction.
  • the brain will fuse the image 4100 and the image 4200 to obtain an image, which is the image actually seen by the user.
  • the fusion of the image 4100 and the image 4200 includes fusion of the image in the overlapping area 4110 and the image in the overlapping area 4210 .
  • the image of the same observed object in the overlapping region 4110 and in the overlapping region 4210 will be merged into one; for example, the image 4101 and the image 4201 are merged into one, so that the image seen by the user includes one tree, which is consistent with the real environment.
  • the overlapping areas on the images seen by the left eye and the right eye can be fused (that is, binocular fusion can be realized), so the scene seen by the user is clear, there is no ghosting (for example, no ghosting of the image of the tree), and the human eyes are in a comfortable state.
  • a head-mounted display device such as VR glasses uses the above-mentioned human vision generation mechanism to display a virtual environment to the user.
  • For the convenience of comparison, take the virtual environment presented by the VR glasses to the user as the environment 400 shown in FIG. 4B; the VR glasses present to the user's two eyes images of the various objects in the environment (such as the tree 410, the football 420, and the dog 430), so that the user can perceive the environment 400 through these two images.
  • the VR glasses generate two images, namely image 5100 and image 5200 .
  • Image 5100 and image 5200 include images of various objects in environment 400 (such as tree 410 , football 420 , dog 430 ).
  • the image 5100 and the image 5200 include overlapping regions.
  • the overlapping area on image 5100 is area 5110
  • the overlapping area on image 5200 is area 5210
  • all objects in area 5110 are included in area 5210
  • all objects in area 5210 are also included in area 5110 .
  • the image 5100 is displayed on the second display device 120 .
  • An image 5200 is displayed on the first display device 110 .
  • the center point R1 of the image 5100 is aligned with the center point S1 of the second display device 120, that is, R1 and S1 are on the same straight line K3, and the straight line K3 is a line passing through the center point S1 of the second display device 120 and perpendicular to the second display device 120 .
  • the central point R2 of the image 5200 is aligned with the central point S2 of the first display device 110, that is, R2 and S2 are on the same straight line K4.
  • the straight line K4 is a line passing through the center point S2 of the first display device 110 and perpendicular to the first display device 110 .
  • the distance from the center point P3 of the overlapping area 5110 on the image 5100 to the straight line K3 is L3.
  • the distance from the center point P4 of the overlapping area 5210 on the image 5200 to the straight line K4 is L4.
  • the distance L3 is equal to the distance L4, and the direction from the central point P3 to the straight line K3 is opposite to the direction from the central point P4 to the straight line K4.
  • the eyeball center W1 of the user's left eye is aligned with the center point S1 of the second display device 120 , and an image 5300 is collected. That is, the center point T1 of the image 5300 is aligned with the center point S1 of the display device. That is to say, the center point R1 of the image 5100, the center point S1 of the second display device 120, the center W1 of the user's left eye, and the center point T1 of the image 5300 are on the same straight line K3.
  • point T1 on the image 5300 is an image point corresponding to R1 on the image 5100.
  • the eyeball center W2 of the user's right eye is aligned with the center point S2 of the first display device 110 , and the captured image 5400 is obtained. That is, the center point T2 of the image 5400 is aligned with the center point S2 of the first display device 110 . That is to say, the center point R2 of the image 5200, the center point S2 of the first display device 110, the center W2 of the user's right eye, and the center point T2 of the image 5400 are on the same straight line K4. Wherein, point T2 on the image 5400 is an image point corresponding to R2 on the image 5200 .
  • the image 5300 captured by the left eye and the image 5400 captured by the right eye include overlapping regions.
  • the overlapping area on image 5300 is region 5310 .
  • the overlapping area on image 5400 is region 5410 .
  • the overlapping area 5310 includes a center point P3', and the point P3' is an image point corresponding to the point P3 on the image 5100 on the image 5300.
  • the overlapping area 5410 includes a center point P4', and the point P4' is an image point corresponding to the point P4 on the image 5200 on the image 5400.
  • the distance from point P3' to straight line K3 is L3'.
  • the distance from point P4' to straight line K4 is L4'.
  • the distance L3' is equal to the distance L4', and the direction from the point P3' to the straight line K3 is opposite to the direction from the point P4' to the straight line K4.
  • the center point P3' and the center point P4' are symmetrical with respect to the center line D of the human face, that is, the center point P3' and the center point P4' are also symmetrical with respect to the middle plane.
  • the brain can fuse the image 5300 and the image 5400 to obtain the environment 400 shown in FIG. 4B , thereby simulating the real environment.
  • the image 5300 and the image 5400 can be fused, the user's eyes are comfortable. That is to say, when the user wears VR glasses, the images collected by both eyes can be fused, and the eyes are comfortable.
  • the eyeball center W1 of the user's left eye is aligned with the center point S1 of the second display device 120
  • the eyeball center W2 of the user's right eye is aligned with the center point S2 of the first display device 110; in FIG. 5, this alignment of both eyes is taken as an example.
  • In actual use, however, three situations of misalignment may occur.
  • Situation 1: the eyeball center W1 of the left eye can be aligned with the center point S1, but the eyeball center W2 of the right eye cannot be aligned with the center point S2.
  • One possible scenario is that, when the manufacturer of the VR glasses produces them, assembly errors prevent the center of at least one display device (that is, the display screen) from being aligned with the eye; or, parts become loose during the user's use of the VR glasses so that the center of at least one display device cannot be aligned with the eye. For example, the display screen is not aligned with the corresponding optical device during assembly; when the VR glasses are worn, the optical device is generally aligned with the eye, but the center of the display screen is not.
  • the distance between the two display devices of the VR glasses can be adjusted, referred to as interpupillary distance (Inter Pupillary Distance, IPD) adjustment, to adapt to differences in interpupillary distance among different users.
  • buttons or handles on the VR glasses can be used to adjust the distance between the two display modules of the VR glasses, in which case the positions of the display screens and optical devices change with the positions of the display modules.
  • For example, member A may adjust the distance between the two display devices through the buttons or handles, and member B can later re-adjust the distance between the two display devices (which can be understood as the distance between the two display modules).
  • during the IPD adjustment, the adjustment distances of the two display devices are the same and the directions are opposite; for example, if the first display device 110 moves to the left by 1 cm, the second display device 120 moves to the right by 1 cm, and if the first display device 110 moves to the right by 2 cm, the second display device 120 moves to the left by 2 cm (see the sketch below).
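• A minimal sketch of this symmetric adjustment rule, assuming one-dimensional display positions in centimeters and a hypothetical sign convention for the adjustment step:

```python
def adjust_ipd(display_1_x, display_2_x, delta):
    """Symmetric IPD adjustment sketch: the two display modules move by the
    same distance in opposite directions (positive delta is assumed to move
    display 1 left and display 2 right)."""
    return display_1_x - delta, display_2_x + delta

# Example: moving display 1 left by 1 cm moves display 2 right by 1 cm,
# so their separation changes by 2 cm in total.
d1, d2 = adjust_ipd(display_1_x=-3.2, display_2_x=3.2, delta=1.0)
assert (d2 - d1) - (3.2 - (-3.2)) == 2.0
```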
  • the technical solution provided by the embodiment of the present application is applicable to any scene where the W1 point and the S1 point cannot be aligned and/or the W2 point and the S2 point cannot be aligned.
  • the eyeball center W1 of the left eye is aligned with the center point S1 of the second display device 120 , that is, the center point S1 and the eyeball center W1 of the left eye are on the same straight line K5 .
  • the straight line K5 is a line passing through the eyeball center W1 of the left eye and perpendicular to the eyeball of the left eye.
  • the eyeball center W2 of the right eye cannot be aligned with the center point S2 of the first display device 110 .
  • the eyeball center W2 of the right eye is aligned with the S2' point on the first display device 110 (the S2' point is at a distance N to the right of the S2 point), that is, the W2 point and the S2' point are on the same straight line K6'.
  • the straight line K6' is a straight line passing through the center W2 of the right eyeball and perpendicular to the right eyeball.
  • Assume that the virtual environment presented to the user through the VR glasses shown in FIG. 6A is the environment 400 shown in FIG. 4B; images of each object in the environment 400, for example, the tree 410, the football 420, and the dog 430, are presented to the user's two eyes, so that the user can perceive the environment 400 through two images.
  • the VR glasses generate two images, namely image 6100 and image 6200 .
  • the image 6100 and the image 6200 include overlapping regions.
  • the overlapping area on image 6100 is area 6110
  • the overlapping area on image 6200 is area 6210
  • all objects in area 6110 are included in area 6210
  • all objects in area 6210 are also included in area 6110 .
  • the image 6100 is displayed on the second display device 120 .
  • the image 6200 is displayed on the first display device 110 .
  • the center point R3 of the image 6100 is aligned with the center point S1 of the second display device 120, that is, the center point R3 and the center point S1 are on the same straight line K5.
  • the central point R4 of the image 6200 is aligned with the central point S2 of the first display device 110, that is, the central point R4 and the central point S2 are on the same straight line K6.
  • the straight line K6 is a line passing through the center point S2 of the first display device 110 and perpendicular to the first display device 110 .
  • the straight line K6 and the straight line K6' are different straight lines, and the distance between them is N.
  • the distance from the center point P5 of the overlapping area 6110 on the image 6100 to the straight line K5 is L5.
  • the distance from the center point P6 of the overlapping area 6210 on the image 6200 to the straight line K6 is L6.
  • the distance L5 is equal to the distance L6, and the direction from the central point P5 to the straight line K5 is opposite to the direction from the central point P6 to the straight line K6.
  • Image 6300 is captured by the left eye.
  • the image 6300 includes a center point T3 point.
  • Point T3 is an image point corresponding to center point R3 on image 6100 . That is, the R3 point, the S1 point, the W1 point, and the T3 point are on the same straight line K5.
  • the eyeball center W2 of the right eye is aligned with the point S2' on the first display device 110.
  • Point S2' is aligned with point R4' on the image 6200.
  • Point R4' is at a distance N to the right of center point R4.
  • Image 6400 is captured by the right eye.
  • the image 6400 includes a center point T4.
  • Point T4 is an image point corresponding to point R4' on image 6200. That is to say, R4' point, S2' point, W2 point, T4 point are on the same straight line K6'.
  • That is to say, the right eye cannot be aligned with the center point R4 of the image 6200, but is aligned with the point R4' at a distance N to the right of point R4.
  • the image 6200 is shifted to the left by a distance N along with the first display device 110, so that the right eye cannot align with the center point R4 of the image 6200, but aligns with the point R4'. In this way, some areas on the image 6200 will move out of the sight range of the right eye. For example, the left area 6201 (shaded area) on the image 6200 moves out of the sight range of the right eye.
  • the image 6400 collected by the right eye does not include this part. Since the field of view of the right eye remains unchanged (for example, 110 degrees), even if the area 6201 moves out of the sight range of the right eye, the size of the image collected by the right eye remains unchanged.
  • the image 6400 includes a right region 6430 (shaded region). This region 6430 is not an image of the image 6200, but belongs to a part of the image captured by the right eye. Exemplarily, there is no image of any object in the area 6430, such as a black area.
  • Please compare FIG. 6B with FIG. 5 for understanding.
  • In FIG. 5, when the eyeball center W1 of the left eye is aligned with the center point S1 of the second display device 120 and the eyeball center W2 of the right eye is aligned with the center point S2 of the first display device 110, the user's left eye sees the image 5300, the right eye sees the image 5400, and the brain can fuse the overlapping area 5310 with the overlapping area 5410, so the environment 400 can be seen clearly and the human eyes are comfortable.
  • In FIG. 6B, by contrast, the eyeball center W1 of the left eye is aligned with the center point S1 of the second display device 120, but the eyeball center W2 of the right eye is not aligned with the center point S2 of the first display device 110.
  • the brain cannot fuse the overlapping region 6310 with the overlapping region 6410, because the overlapping area 6410 in the image 6400 is missing part of the content compared with the overlapping area 5410 in the image 5400 in FIG. 5. Since the overlapping areas cannot be fully fused, the user cannot see the environment 400 clearly.
  • the brain instinctively controls the right-eye muscles to turn the right eyeball to the left in an attempt to align with the center point R4 of the image 6200; this causes the left eye to look straight ahead while the right eye looks to the left. When the directions of sight of the two eyes are inconsistent, dizziness results and the experience is poor.
  • Figures 6A and 6B take case 1 (that is, point W1 and point S1 can be aligned, but point W2 and point S2 cannot be aligned) as an example; case 2 (that is, point W1 and point S1 cannot be aligned, but the eyeball center W2 of the right eye can be aligned with the center point S2) follows the same principle and will not be repeated.
  • the following takes case 3 (the eyeball center W1 of the left eye cannot be aligned with the center point S1, and the eyeball center W2 of the right eye cannot be aligned with the center point S2) as an example to introduce.
  • Assume that the distance B1 between the center point S1 of the second display device 120 and the center point S2 of the first display device 110 is smaller than the distance B2 between the eyeball center W1 of the left eye and the eyeball center W2 of the right eye; the distance B2 can also be understood as the distance between the pupils, also known as the interpupillary distance (IPD). That is, as shown in FIG. 7A , the eyeball center W1 of the left eye cannot be aligned with the center point S1 , and the eyeball center W2 of the right eye cannot be aligned with the center point S2 either.
  • the eyeball center W1 of the left eye is aligned with point S1' on the display device (point S1' is at a distance N1 to the left of point S1); that is, the eyeball center W1 of the left eye and point S1' are on the same straight line K7, and the straight line K7 is a line passing through the eyeball center W1 of the left eye and perpendicular to the left eyeball.
  • the eyeball center W2 of the right eye is aligned with point S2' on the display device (point S2' is at a distance N2 to the right of point S2); that is, the eyeball center W2 of the right eye and point S2' are on the same straight line K8, and the straight line K8 is a line passing through the eyeball center W2 of the right eye and perpendicular to the right eyeball.
  • the distance N1 from point S1' to point S1 plus the distance N2 from point S2' to point S2 is equal to the difference between the distance B2 and the distance B1, that is, N1 + N2 = B2 - B1.
  • Distance N1 may or may not be equal to distance N2. In some embodiments, the distance N1 may not be equal to the distance N2 due to, for example, assembly deviations (eg, display device assembly deviations).
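• A worked numeric example of this relation (all values are illustrative assumptions, in centimeters):

```python
# Worked example of N1 + N2 = B2 - B1 (illustrative numbers only).
B2 = 6.4   # interpupillary distance of the user, cm
B1 = 6.0   # distance between the two display centers S1 and S2, cm
# With an asymmetric assembly deviation the split can be unequal:
N1 = 0.1
N2 = (B2 - B1) - N1    # = 0.3, so N1 != N2 in this example
assert abs((N1 + N2) - (B2 - B1)) < 1e-9
```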
  • the VR glasses generate two images, namely image 7100 and image 7200 .
  • image 7100 and image 7200 include overlapping regions.
  • the overlapping region on image 7100 is region 7110
  • the overlapping region on image 7200 is region 7210 .
  • the distance L7 from the center point P7 of the overlapping area 7110 to the center point R5 of the image 7100 is equal to the distance L8 from the center point P8 of the overlapping area 7210 to the center point R6 of the image 7200, and the direction from point P7 to point R5 is opposite to the direction from point P8 to point R6.
  • the image 7100 is displayed on the second display device 120 .
  • An image 7200 is displayed on the first display device 110 .
  • the center point S1 of the second display device 120 is aligned with the center point R5 of the image 7100, that is, the center point R5 and the center point S1 are on the same straight line K7'.
  • the straight line K7' and the straight line K7 are different straight lines, and the distance between them is N1.
  • the center point S2 of the first display device 110 is aligned with the center point R6 of the image 7200, that is, the center point R6 and the center point S2 are on the same straight line K8'.
  • the straight line K8' and the straight line K8 are different straight lines, and the distance between them is N2.
  • N2 is greater than N1.
  • the eyeball center W1 of the left eye is aligned with the point S1' on the display device.
  • the S1' point is aligned with the R5' point on the image 7100.
  • Point R5' is at a distance N1 to the left of center point R5.
  • Image 7300 is captured by the left eye.
  • the image 7300 includes a center point T5.
  • Point T5 is an image point corresponding to point R5' on the image 7100. That is to say, R5' point, S1' point, W1 point, T5 point are on the same straight line K7.
  • the eyeball center W2 of the right eye is aligned with the point S2' on the first display device 110.
  • the S2' point is aligned with the R6' point on the image 7200.
  • the point R6' is at a distance N2 to the right of the center point R6 of the image 7200.
  • Image 7400 is captured by the right eye.
  • the image 7400 includes a center point T6.
  • Point T6 is the image point corresponding to R6' on the image 7200. That is to say, R6' point, S2' point, W2 point, T6 point are on the same straight line K8.
  • the left eye cannot align with the center point R5 of the image 7100, but aligns with the point R5' at a distance N1 to the left of the center point R5.
  • the right area 7101 (shaded area) on the image 7100 will move out of the sight range of the left eye.
  • the area 7101 (on the right side of the image 7100) includes an image 7102 of a small tree (which can be understood as an object 7102); the image 7102 moves out of the sight range of the left eye, so the image 7300 collected by the left eye does not include the image 7102 of the small tree.
  • the image 7300 includes a left area 7330 (shaded area).
  • This area 7330 is not an image of the image 7100, but belongs to a part of the image collected by the left eye.
  • the right eye cannot align with the center point R6 of the image 7200, but aligns with the point R6' at a distance N2 to the right of the center point R6. Therefore, the left area 7201 (shaded area) on the image 7200 will move out of the sight range of the right eye.
  • the area 7201 includes a part of the image 7202 of the tree (for example, the left part of the tree); this part moves out of the sight range of the right eye, so the image 7400 collected by the right eye only includes the part of the image 7202 of the tree that is not in the area 7201.
  • the image 7400 includes a right region 7430 (shaded region), which is not an image of the image 7200, but a part of the image captured by the right eye.
  • For example, there is no image of any object in the area 7430, such as a black area. It can be understood that when the area 7201 (shaded area) moves out of the sight range of the right eye, the user cannot obtain a comfortable field of view; the field of view observed by the user becomes smaller, affecting the VR experience.
  • the width of the region 7201 on the image 7200 that moves out of the sight range of the right eye is larger than the width of the region 7101 on the image 7100 that moves out of the sight range of the left eye.
  • the width of the area 7430 on the image 7400 captured by the right eye is larger than the width of the area 7330 on the image 7300 captured by the left eye.
  • the brain is unable to fuse the overlapping regions on the image 7300 and the image 7400, because the objects contained in the overlapping area 7310 on the image 7300 and in the overlapping area 7410 on the image 7400 are not exactly the same; for example, the overlapping region 7410 includes only half of the image of the tree. Since the overlapping area 7310 and the overlapping area 7410 cannot be completely fused, the user cannot see the environment 400 clearly. At this time, the brain instinctively controls the left-eye muscles to turn the left eyeball to the right in an attempt to align with the center point R5 of the image 7100.
  • Fig. 8A and Fig. 8B show a possible implementation manner.
  • the first display device 110 of the VR glasses includes two areas, area 1 and area 2 .
  • Area 1 can be the central area
  • area 2 can be the edge area (shaded area)
  • area 2 surrounds area 1. That is, the center point of the area 1 may overlap with the center point of the first display device 110, which is S2.
  • the area of the area 1 may be preset, and the distance N4 between the inner edge of the area 2 and the outer edge of the area 1 may be preset.
  • the second display device 120 includes two regions, region 3 and region 4 .
  • Area 3 may be a central area and area 4 may be an edge area (shaded area).
  • Region 4 surrounds Region 3.
  • the center point of the area 3 may overlap with the center point of the second display device 120, which is S1.
  • the area of the area 3 may be preset, and the distance N5 between the inner edge of the area 4 and the outer edge of the area 3 may be preset.
  • the eyeball center W1 of the left eye is aligned with the center point S1 of the second display device 120 .
  • An image 8100 is displayed in area 3 on the second display device 120 .
  • Nothing is displayed in area 4 (for example, area 4 is in a black screen state). That is, the eyeball center W1 of the left eye may be aligned with the center point S1 of the image 8100 .
  • Image 8300 is captured by the left eye.
  • the eyeball center W2 of the right eye is aligned with the center point S2 of the first display device 110 .
  • An image 8200 is displayed in area 1 on the first display device 110 .
  • Nothing is displayed in area 2 (for example, area 2 is in a black screen state).
  • point W2 may be aligned with center point S2 of image 8200 .
  • Image 8400 is captured by the right eye. Since point W1 is aligned with point S1 and point W2 is aligned with point S2, the image 8300 and the image 8400 can be fused without dizziness.
  • Fig. 8A is a situation where the eyeball center W1 of the left eye is aligned with the center point S1 of the second display device 120, and the eyeball center W2 of the right eye is aligned with the center point S2 of the first display device 110.
  • One of the above-mentioned three situations may occur in the VR glasses, causing the eyeball center W1 of the left eye to be out of alignment with the center point S1 of the second display device 120, and/or the eyeball center W2 of the right eye to be out of alignment with the center point S2 of the first display device 110; when one of the three situations occurs, the solution shown in FIG. 8B can be used.
  • case 1 (the eyeball center W1 of the left eye is aligned with the center point S1 of the second display device 120 , but the eyeball center W2 of the right eye is not aligned with the center point S2 of the first display device 110 ) is introduced as an example.
  • the eyeball center W2 of the user's right eye cannot be aligned with the center point S2 of the first display device 110 .
  • the eyeball center W2 of the right eye is aligned with the point S2' on the first display device 110.
  • Point S2' is at the distance N6 to the left of point S2.
  • the image 8200 is displayed in the area 5 on the first display device 110 .
  • the center point of area 5 is point S2', so when image 8200 is displayed in area 5, the center point of image 8200 is point S2', so point W2 can be aligned with the center point of image 8200. Therefore, the image 8400 captured by the right eye can be fused with the image 8300 captured by the left eye.
  • In this way, the problem that point W1 cannot be aligned with point S1 and/or point W2 cannot be aligned with point S2 can be solved (a sketch of choosing such a display region is given below).
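• A minimal sketch of choosing such a region, assuming one-dimensional pixel coordinates on the display panel and a hypothetical measured alignment point; it simply centers the displayed image on the point the eye is actually aligned with:

```python
def placement_region(panel_width, image_width, eye_aligned_x):
    """Reserved-area sketch: pick the horizontal span of the display region so
    that the displayed image is centered on the point the eye is actually
    aligned with (all coordinates in panel pixels, assumed values)."""
    left = int(round(eye_aligned_x - image_width / 2))
    left = max(0, min(left, panel_width - image_width))  # keep the region on the panel
    return left, left + image_width

# Example: a 4000-px-wide panel whose center is at x = 2000, but the right eye
# is aligned with a point 60 px to the left of that center (hypothetical values).
region = placement_region(panel_width=4000, image_width=3600, eye_aligned_x=2000 - 60)
```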
  • Fig. 9 shows another possible implementation manner for solving binocular misfusion. This method does not need to reserve a display area on the display device, and can solve the problem of binocular misfusion through image processing, which is more conducive to reducing the size of the display device and realizing the miniaturization and portability of the device.
  • In FIG. 9, the eyeball center W1 of the left eye can be aligned with the center point S1 of the second display device 120, but the eyeball center W2 of the right eye cannot be aligned with the center point S2 of the first display device 110 and is instead aligned with the point S2' on the display device (the point S2' is at a distance N to the right of the point S2).
  • the first display device 110 and the second display device 120 are asymmetrical with respect to the middle plane (or the center line of the human face).
  • the VR glasses generate two images, namely image 9100 and image 9200 .
  • the image 9100 is displayed in full screen on the second display device 120 .
  • the image 9200 is displayed in full screen on the first display device 110 . That is, there is no need to reserve a display area on the display device, so that the size of the display device can be relatively small, which can save costs, and the small size of the display device is conducive to the design trend of light and small equipment.
  • the second display device 120 displays the image 9100
  • the center point S1 of the second display device 120 is aligned with the center point R9 of the image 9100, that is, point S1 and point R9 are on the same straight line K9.
  • the center point S2 of the first display device 110 is aligned with the center point R10 of the image 9200, that is, the point S2 and the point R10 are on the same straight line K10.
  • image 9100 and image 9200 have overlapping areas.
  • the overlapping area on image 9100 is region 9110 .
  • the overlapping area on image 9200 is region 9210 .
  • the distance from the center point P9 of the overlapping area 9110 to the center point R9 of the image 9100 is L9, and the first direction from the center point P9 to the center point R9 is leftward; the distance from the center point P10 of the overlapping area 9210 to the center point R10 of the image 9200 is L10, and the second direction from the center point P10 to the center point R10 is rightward.
  • Distance L9 is not equal to distance L10.
  • the distance difference between the distance L9 and the distance L10 is N.
  • Alternatively, a vertex of the image (such as the left vertex or the right vertex) may be taken as the reference point; that is, the distance from the center point P9 of the overlapping area 9110 to the left vertex of the image 9100 is not equal to the distance from the center point P10 of the overlapping area 9210 to the right vertex of the image 9200.
  • the center point P9 and the center point P10 are symmetrical with respect to the middle plane (or the center line of the human face).
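• One way to satisfy the stated relation (the difference between the two overlap-center distances equals the display offset N) is sketched below in Python; the rule of reducing each image's overlap-center distance by that display's own offset, and the pixel values, are assumptions for illustration only.

```python
def overlap_center_distances(base_distance, offset_left_px, offset_right_px):
    """One possible realization of the compensation: each image's
    overlap-region center is pulled toward that image's center by its own
    display offset, so the difference of the two distances equals the
    difference of the offsets (all values are hypothetical pixel counts)."""
    l_left = base_distance - offset_left_px    # distance on the left-eye image
    l_right = base_distance - offset_right_px  # distance on the right-eye image
    return l_left, l_right

# FIG. 9 style case: left-eye display aligned (offset 0), right-eye display
# offset by N pixels.
N = 40
L9, L10 = overlap_center_distances(base_distance=500, offset_left_px=0, offset_right_px=N)
assert L9 - L10 == N
```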
  • Image 9300 is captured by the left eye.
  • the image 9300 includes a center point T9.
  • Point T9 is an image point corresponding to center point R9 on image 9100 . That is, the R9 point, the S1 point, the W1 point, and the T9 point are on the same straight line K9.
  • the eyeball center W2 of the right eye is aligned with point S2' on the display device (point S2' is at a distance N to the right of point S2).
  • the point S2' is aligned with the point R10' on the image 9200, and the point R10' is located at a distance N to the right of the central point R10.
  • Image 9400 is collected by the right eye.
  • the image 9400 includes a center point T10.
  • Point T10 is the image point corresponding to point R10' on the image 9200. That is, point T10, point W2, point S2' and point R10' are on the same straight line K10'. Wherein, the straight line K10' is different from the straight line K10, and the distance between them is N.
  • the overlapping area 9310 on the image 9300 includes the central point P9'.
  • Point P9' is an image point corresponding to point P9 on the image 9100.
  • the distance from point P9' to straight line K9 is L9'.
  • the overlapping area 9410 on the image 9400 includes a central point P10', and the point P10' is an image point corresponding to the point P10 on the image 9200.
  • the distance from point P10' to straight line K10' is L10'.
  • the distance L9' is equal to L10'.
  • the direction from point P9' to straight line K9 is opposite to the direction from point P10' to straight line K10', and the center point P9' and center point P10' are symmetrical with respect to the median plane (or the center line of the face).
  • Although the first display device 110 in FIG. 9 is shifted to the left by a distance N relative to the corresponding optical device 130, the overlapping region 9210 is moved to the right by a distance N, which compensates for (or offsets) the shift of the first display device 110. Therefore, the overlapping region 9410 on the image collected by the right eye and the overlapping region 9310 on the image collected by the left eye can be fused.
  • an area 9230 is left at the far left of the first display device 110 , and the width of the area 9230 is N.
  • the area 9230 displays a part of the image on the left side of the overlapping area 9110.
  • For example, if an area with a width of N near the left side of the overlapping area 9110 displays background objects (for example, a background of blue sky and white clouds), the area 9230 also displays those background objects (blue sky and white clouds). In that case, the objects in the area 9230 and in the partial area on the left side of the overlapping area 9110 are the same, and these two areas can also be understood as overlapping areas.
  • region 9230 includes new objects that were not present on image 9100. For example, if the first display device 110 is displaced upward by a distance N compared to the corresponding optical device 130 , the overlapping region 9210 moves downward by a distance N, and the region 9230 appears on the upper part of the first display device 110 . Since the image 9200 is an image block on a panoramic image, the objects in the area 9230 may be objects in the area above the overlapping area 9210 in the panoramic image block (objects not in the image 9100). In some embodiments, the area 9230 may display a first color, and the type of the first color is not limited, for example, black, white and so on.
  • the area 9230 may move out of the sight range of the right eye, so the area 9230 may not display any content; for example, the area 9230 may not be powered on, that is, the screen of the area 9230 is black, which can save power. It should be noted that when the area 9230 moves out of the right eye's line of sight, since the field angle of the right eye does not change, the image 9400 captured by the right eye includes the right area 9430 (shaded area); the area 9430 is not an image of the image 9200, and is, for example, a black area, indicating that no object is displayed in this part.
  • FIG. 9 takes case 1 as an example for introduction, and case 3 (ie, FIG. 7A ) is taken as an example for introduction below.
  • N1 is smaller than N2. Therefore, when the user wears VR glasses, the eyeball center W1 of the left eye cannot be aligned with the center S1 of the second display device 120, but is aligned with the point S1' on the second display device 120, and the point S1' is at a distance N1 to the left of point S1 place.
  • the eyeball center W2 of the right eye cannot be aligned with the center point S2 of the first display device 110, but is aligned with the point S2' on the display device, and the point S2' is at a distance N2 to the right of point S2.
  • the first display device 110 and the second display device 120 are asymmetrical with respect to the middle plane (or the center line of the human face).
  • the VR glasses generate two images, namely image 1000 and image 1100 .
  • the image 1000 is displayed in full screen on the second display device 120 .
  • the image 1100 is displayed in full screen on the first display device 110 . That is, there is no need to reserve a display area on the display device, so that the size of the display device can be relatively small, which can save costs and is conducive to the design trend of light and small equipment.
  • the second display device 120 displays the image 1000
  • the center point S1 of the second display device 120 is aligned with the center point R11 of the image 1000 , that is, S1 and R11 are on the same straight line K11 .
  • the center point S2 of the first display device 110 is aligned with the center point R12 of the image 1100 , that is, S2 and R12 are on the same straight line K12 .
  • image 1000 and image 1100 have overlapping areas.
  • the overlapping area on the image 1000 is the area 1010 , and in some embodiments, the area 1030 may also be the overlapping area, and the area 1030 overlaps with the part of the image on the right side of the area 1110 .
  • in order to save power, the area 1030 may not display any content; in FIG. 10, the case where the area 1030 does not display any content is taken as an example for illustration.
  • the overlapping area on the image 1100 is the area 1110 , and in some embodiments, the area 1130 may also be the overlapping area, and the area 1130 overlaps with the part of the image on the left side of the area 1010 .
  • the area 1130 may not display any content; in FIG. 10, the case where the area 1130 does not display any content is taken as an example for illustration.
  • the distance from the center point P11 of the overlapping area 1010 to the center point R11 of the image 1000 is L11, and the first direction from the center point P11 of the overlapping area 1010 to the center point R11 of the image 1000 is left.
  • the distance from the center point P12 of the overlapping area 1110 to the center point R12 of the image 1100 is L12, and the second direction from the center point P12 of the overlapping area 1110 to the center point R12 of the image 1100 is rightward. Since N1 is not equal to N2, the distance L11 is not equal to the distance L12; taking N1 smaller than N2 as an example, the distance L11 is greater than the distance L12.
  • the distance difference between the distance L11 and the distance L12 is equal to the distance difference between N1 and N2.
  • the direction from point P11 to point R11 is different from the direction from point P12 to point R12, for example, the direction is opposite.
  • the center point P11 and the center point P12 are symmetrical with respect to the middle plane (or the center line of the human face).
  • the eyeball center W1 of the left eye is aligned with the S1' point of the second display device 120.
  • the point S1' is aligned with the point R11' on the image 1000, and the point R11' is located at a distance N1 to the left of the central point R11.
  • Image 1200 is captured by the left eye.
  • the image 1200 includes a center point T11.
  • Point T11 is an image point corresponding to point R11' on the image 1000. That is, point T11, point W1, point S1', point R11' are on the same straight line K11'. Wherein, the straight line K11' is different from the straight line K11, and the distance between them is N1.
  • the eyeball center W2 of the right eye is aligned with the point S2' on the first display device 110 .
  • the point S2' is aligned with the point R12' on the image 1100, and the point R12' is located at a distance N2 to the right of the central point R12.
  • Image 1300 is captured by the right eye.
  • the image 1300 includes a center point T12.
  • Point T12 is an image point corresponding to point R12' on the image 1100. That is, point T12, point W2, point S2' and point R12' are on the same straight line K12'. Wherein, the straight line K12' is different from the straight line K12, and the distance between them is N2.
  • the overlapping area 1210 on the image 1200 collected by the left eye includes the central point P11'.
  • Point P11' is an image point corresponding to point P11 on the image 1000.
  • the distance from point P11' to straight line K11' is L11'.
  • the overlapping area 1310 on the image 1300 captured by the right eye includes a center point P12', and the point P12' is the image point corresponding to the point P12 on the image 1100.
  • the distance from point P12' to straight line K12' is L12'.
  • the distance L11' is equal to L12'.
  • the direction from point P11' to straight line K11' is opposite to the direction from point P12' to straight line K12', and the center point P11' and center point P12' are symmetrical with respect to the median plane (or the center line of the face).
  • Although the second display device 120 in FIG. 10 is shifted to the right by a distance N1 relative to the corresponding optical device 140, the overlapping region 1010 is moved to the left by a distance N1, which compensates for (or offsets) the shift of the second display device 120.
  • Similarly, the first display device 110 is shifted to the left by a distance N2 relative to the corresponding optical device 130, but the overlapping region 1110 is moved to the right by a distance N2, which compensates for (or offsets) the shift of the first display device 110, so the overlapping area 1210 and the overlapping area 1310 can be fused.
  • the non-overlapping areas on the image 1000 include two areas, namely, area 1030 and area 1040 .
  • Overlap region 1010 is located between region 1030 and region 1040 .
  • the second color may be displayed in the area 1030 , and the type of the second color is not limited, for example, black, white, etc., or may also be the background color of the image 1000 .
  • the non-overlapping regions on image 1100 include two regions, region 1130 and region 1140 .
  • Overlap region 1110 is located between region 1130 and region 1140 .
  • the first color can be displayed in the area 1130, and the type of the first color is not limited, such as black, white, etc., or it can also be the background color of the image 1100.
  • the first color and the second color may be the same or different.
  • N1 is not equal to N2
  • the area (or width) of the non-overlapping region 1030 on the image 1000 is different from the area (or width) of the non-overlapping region 1130 on the image 1100 .
  • the width of the region 1030 is smaller than the width of the region 1130 .
  • the area (or width) of the non-overlapping region 1040 on the image 1000 is different from the area (or width) of the non-overlapping region 1140 on the image 1100 .
  • the width of the region 1140 is smaller than the width of the region 1040 .
  • In some embodiments, the image 1000 and the image 1100 are image blocks in different areas of the same panoramic image; for example, the image 1000 is an image block located in a first display area of the panoramic image and the image 1100 is an image block located in a second display area of the panoramic image, wherein the overlapping area is the overlap of the first display area and the second display area.
  • the area 1030 may be an area on the right side of the area 1010 on the panoramic image.
  • the area 1130 may be an area on the left side of the area 1110 on the panoramic image.
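• A minimal sketch of cropping two such image blocks from one panoramic image so that their display areas share an overlapping strip; the block and overlap widths are hypothetical pixel values.

```python
import numpy as np

def crop_display_blocks(panorama, block_width, overlap_width):
    """Take the two displayed images as blocks of one panoramic image whose
    display areas overlap: the second block starts before the first block
    ends, so the shared columns form the overlapping area."""
    h, w = panorama.shape[:2]
    start = block_width - overlap_width
    assert start + block_width <= w, "panorama too narrow for these widths"
    first = panorama[:, :block_width]                   # first display area
    second = panorama[:, start:start + block_width]     # second display area
    return first, second

pano = np.zeros((1080, 3000, 3), dtype=np.uint8)
img_a, img_b = crop_display_blocks(pano, block_width=1800, overlap_width=600)
```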
  • the region 1130 will move out of the sight range of the right eye, so the region 1130 may also not display any content, for example, not display any color.
  • Since the field angle of the right eye does not change, the image 1300 collected by the right eye includes the right area 1330 (shaded area); the area 1330 is not an image of the image 1100, and is, for example, a black area, indicating that no object is displayed in this part.
  • the region 1030 will move out of the sight range of the left eye, so the region 1030 may also not display any content, for example, not display any color .
  • Since the field angle of the left eye does not change, the image 1200 collected by the left eye includes the left area 1230 (shaded area); the area 1230 is not an image of the image 1000, and is, for example, a black area, indicating that no object is displayed in this part.
  • the position of the first display device 110 and/or the second display device 120 can be changed dynamically.
  • the position of the central point P11 of the overlapping area 1010 can move as the position of the second display device 120 moves, wherein the moving direction of the central point P11 is opposite to the moving direction of the second display device 120, so as to compensate for or counteract the positional movement of the second display device 120.
  • Similarly, the position of the center point P12 of the overlapping area 1110 can move as the position of the first display device 110 moves, wherein the direction of the position movement of the center point P12 is opposite to the direction of the position movement of the first display device 110, so as to compensate for or counteract the positional movement of the first display device 110 (see the sketch below).
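• A minimal sketch of this dynamic compensation rule, assuming one-dimensional pixel coordinates and a hypothetical sign convention (rightward positive):

```python
def compensated_overlap_center(overlap_center_x, display_move_dx):
    """When a display device moves by display_move_dx, move the center of the
    overlapping area on the image it shows by the same amount in the opposite
    direction, so the two motions cancel from the eye's point of view."""
    return overlap_center_x - display_move_dx

# Example: the display drifts 15 px to the right, so the overlap center on its
# image is shifted 15 px to the left.
new_center = compensated_overlap_center(overlap_center_x=960, display_move_dx=15)
assert new_center == 945
```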
  • VR glasses with assembly deviation still have assembly deviation when performing IPD adjustment.
  • Suppose the VR glasses perform IPD adjustment (that is, the first display device 110 and the second display device 120 move the same distance in opposite directions; for example, the first display device 110 and the second display device 120 move left and right respectively, or move right and left respectively).
  • During the IPD adjustment, the relative positional relationship of the assembly deviation between the first display device 110 and the second display device 120 remains unchanged (that is, the difference between the offsets does not change), so the positional movement relationship between the two images respectively displayed by the first display device 110 and the second display device 120 also remains unchanged, to ensure that binocular fusion can be achieved both before and after the IPD adjustment.
  • For example, after the IPD adjustment, the distance difference between the distance L11 and the distance L12 remains unchanged compared with that before the IPD adjustment, and the relative relationship between the first direction and the second direction remains unchanged compared with that before the IPD adjustment.
  • the first direction and the second direction remain unchanged compared to before the IPD adjustment.
  • if the VR glasses perform IPD adjustment before displaying the first image and the second image: before the IPD adjustment, when the first display device 110 and the second display device 120 respectively display images, the distance from the center point of the overlapping area on the image displayed by the second display device 120 to the center point of that image is a third distance, the distance from the center point of the overlapping area on the image displayed by the first display device 110 to the center point of that image is a fourth distance, and the difference between the third distance and the fourth distance is a first distance difference.
  • after the IPD adjustment, the first display device 110 and the second display device 120 respectively display the first image and the second image; the distance from the center point of the overlapping area on the first image to the center point of the first image is the distance L11, the distance from the center point of the overlapping area on the second image to the center point of the second image is the distance L12, and the difference between the distance L11 and the distance L12 is a second distance difference, which is equal to the first distance difference.
  • if the VR glasses perform IPD adjustment after displaying the first image and the second image: before the IPD adjustment, the first display device 110 and the second display device 120 respectively display the first image and the second image; the distance from the center point of the overlapping area on the first image to the center point of the first image is the distance L11, the distance from the center point of the overlapping area on the second image to the center point of the second image is the distance L12, and the difference between the distance L11 and the distance L12 is the second distance difference.
  • after the IPD adjustment, when the first display device 110 and the second display device 120 respectively display images, the distance from the center point of the overlapping area on the image displayed by the second display device 120 to the center point of that image is a fifth distance, the distance from the center point of the overlapping area on the image displayed by the first display device 110 to the center point of that image is a sixth distance, and the difference between the fifth distance and the sixth distance is a third distance difference, which is equal to the second distance difference.
  • FIG. 11 is a schematic flowchart of a display method provided by an embodiment of the present specification.
  • the method can be used to implement any of the above display methods, for example, the display method shown in FIG. 9 or FIG. 10.
  • the process includes:
  • the VR glasses acquire three-dimensional image data.
  • the 3D image data includes 2D image information and depth information.
  • the depth information includes the depth corresponding to each pixel in the two-dimensional image information.
  • the three-dimensional image data may be generated by a VR application, such as a VR game application, a VR teaching application, a VR viewing application, a VR driving application, and the like.
  • the VR glasses acquire a first coordinate transformation matrix and a second coordinate transformation matrix.
  • the first coordinate transformation matrix is used to transform the three-dimensional image data into the first plane image
  • the second coordinate transformation matrix is used to transform the three-dimensional image data into the second plane image.
  • the first plane image corresponds to the first display device
  • the second plane image corresponds to the second display device.
  • the first display device may be the second display device 120 in the foregoing, corresponding to the left eye
  • the second display device may be the first display device 110 in the foregoing, corresponding to the right eye.
  • the first coordinate transformation matrix is used to convert the three-dimensional image data from the first coordinate system to the second coordinate system
  • the first coordinate system is the coordinate system where the three-dimensional image data is located
  • the second coordinate system is the coordinate system corresponding to the first display device or the left eye.
  • the coordinate system corresponding to the left eye may be the coordinate system of the first virtual camera.
  • the first virtual camera can be understood as a virtual camera created to simulate the left eye. Because the image acquisition principle of the human eye is similar to that of the camera, a virtual camera can be created to simulate the image acquisition process of the human eye. The first virtual camera is the left eye of the simulated person.
  • the position of the first virtual camera is the same as that of the left eye, and/or the field of view of the first virtual camera is the same as that of the left eye.
  • the viewing angle of the human eye is 110 degrees up and down, and 110 degrees left and right, so the field of view of the first virtual camera is 110 degrees up and down, and 110 degrees left and right.
  • the VR glasses can determine the position of the left eye, and the first virtual camera is set at the position of the left eye. There are multiple ways to determine the position of the left eye. For example, in mode 1, the position of the first display device is determined first, and then the position of the left eye can be estimated by adding the distance A to the position of the first display device.
  • the position of the left eye determined in this way is more accurate.
  • the distance A is the distance between the display device and the human eye, which may be stored in advance.
  • Mode 2: the position of the left eye is taken to be equal to the position of the first display device. This mode ignores the distance between the human eye and the display device and is easier to implement; a sketch of both modes follows below.
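  • the following is a minimal sketch of the two eye-position estimates described above, assuming a stored eye-to-display distance A; the names (display_position, EYE_TO_DISPLAY_A) and sample values are illustrative and do not come from the patent text:

```python
# Minimal sketch of the two eye-position estimates described above.
import numpy as np

EYE_TO_DISPLAY_A = 0.018  # assumed eye-to-display distance A (meters), stored in advance

def left_eye_position_mode1(display_position: np.ndarray, toward_eye: np.ndarray) -> np.ndarray:
    """Mode 1: offset the first display device's position by distance A
    along the direction toward the eye to estimate where the left eye sits."""
    toward_eye = toward_eye / np.linalg.norm(toward_eye)
    return display_position + EYE_TO_DISPLAY_A * toward_eye

def left_eye_position_mode2(display_position: np.ndarray) -> np.ndarray:
    """Mode 2: ignore the eye-to-display distance and reuse the display position."""
    return display_position.copy()

if __name__ == "__main__":
    display_pos = np.array([-0.032, 0.0, 0.0])   # hypothetical left display position
    direction = np.array([0.0, 0.0, 1.0])        # hypothetical direction from display to eye
    print(left_eye_position_mode1(display_pos, direction))
    print(left_eye_position_mode2(display_pos))
```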
  • the second coordinate transformation matrix is used to convert the three-dimensional image data from the first coordinate system to the third coordinate system
  • the first coordinate system is the coordinate system where the three-dimensional image data is located
  • the third coordinate system is the coordinate system corresponding to the second display device or the right eye.
  • the coordinate system corresponding to the right eye may be the coordinate system of the second virtual camera.
  • the second virtual camera can be understood as a camera created to simulate the user's right eye.
  • the position of the second virtual camera is the same as that of the right eye, and/or the field of view of the second virtual camera is the same as that of the right eye.
  • the first coordinate transformation matrix and the second coordinate transformation matrix may be stored in the VR glasses in advance.
  • the VR glasses read the first coordinate transformation matrix and the second coordinate transformation matrix from the register.
  • the VR glasses process the 3D image data according to the first coordinate transformation matrix to obtain a first plane image, and process the 3D image data according to the second coordinate transformation matrix to obtain a second plane image.
  • both the first planar image and the second planar image are obtained from three-dimensional image data through coordinate transformation.
  • the coordinate conversion process can be understood as using a virtual camera to capture a 3D image to complete the conversion from 3D to 2D.
  • the first virtual camera shoots three-dimensional image data to obtain a first plane image
  • the second virtual camera shoots three-dimensional image data to obtain a second plane image.
  • FIG. 12A is a schematic diagram of the first virtual camera.
  • the first virtual camera includes four parameters, such as a field of view (Field Of View, FOV) angle, an aspect ratio of an actual shooting window, a near clipping plane, and a far clipping plane.
  • the aspect ratio of the actual shooting window may be the aspect ratio of the far clipping plane.
  • the far clipping plane can be understood as the farthest range that the first virtual camera can capture
  • the near clipping plane can be understood as the closest range that the first virtual camera can capture.
  • objects within the FOV and between the near clipping plane and the far clipping plane can be captured by the first virtual camera.
  • the three-dimensional image data includes multiple objects, such as a sphere 1400 , a sphere 1401 and a sphere 1402 .
  • the sphere 1400 is outside the FOV and cannot be photographed, while the spheres 1401 and 1402 are within the FOV and between the near clipping plane and the far clipping plane, and can be photographed. Therefore, the image captured by the first virtual camera includes the sphere 1401 and the sphere 1402 .
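  • as a rough illustration of the visibility test just described, the sketch below checks whether a point lies inside a symmetric viewing frustum defined by the four virtual-camera parameters (FOV, aspect ratio, near clipping plane, far clipping plane); the function name and the sample coordinates are assumptions, not values from the specification:

```python
# Hedged sketch: test whether a point is inside a symmetric viewing frustum.
# The camera looks down its local -Z axis in this sketch.
import math

def in_frustum(point, fov_deg=110.0, aspect=1.0, near=0.1, far=100.0):
    x, y, z = point
    depth = -z                      # distance in front of the camera
    if not (near <= depth <= far):  # outside the near/far clipping planes
        return False
    half_h = depth * math.tan(math.radians(fov_deg) / 2.0)  # half-height of the frustum at this depth
    half_w = half_h * aspect                                 # half-width of the frustum at this depth
    return abs(x) <= half_w and abs(y) <= half_h

# The sample points are hypothetical stand-ins for objects such as spheres 1400-1402.
print(in_frustum((0.0, 0.0, -5.0)))    # inside the frustum -> True
print(in_frustum((50.0, 0.0, -5.0)))   # far off to the side -> False
```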
  • the image captured by the first virtual camera can be understood as a projected image of the three-dimensional image data on the near clipping plane. The coordinates of each pixel point on the projected image can be determined. For example, referring to FIG. 12B, a three-dimensional coordinate system O-XYZ is established with the center of the first virtual camera as the origin.
  • take the edge point G1, edge point G2, edge point G3 and edge point G4 of an object in the three-dimensional image data, as projected onto the near clipping plane, as an example.
  • the coordinates corresponding to the edge point G1 to the edge point G4 are expressed as (l, r, t, b), where l is left, t is top, r is right, and b is bottom.
  • the coordinates of edge point G1 are (3, 0, 3, 0)
  • the coordinates corresponding to edge point G2 are (0, 3, 3, 0)
  • the coordinates corresponding to edge point G3 are (3, 0, 0, 3)
  • the coordinates corresponding to the edge point G4 are (0, 3, 0, 3).
  • the depths of the four edge points are all n.
  • assume A is (l, r, t, b); the first coordinate transformation matrix H satisfies the following:
  • the above first coordinate transformation matrix H applies offsets in the four directions (up, down, left and right) to obtain positions in the first plane image. Since A is one row by four columns and H is four rows by four columns, the resulting K is one row by four columns; that is, the position of a pixel on the first plane image is described by parameters in the four directions up, down, left and right.
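  • the sketch below only illustrates the shape algebra described above: a one-row-by-four-column vector A = (l, r, t, b) multiplied by a four-row-by-four-column matrix H yields a one-row-by-four-column result K; the identity matrix stands in for H because the concrete entries of H are given by the formula in the original description and are not reproduced here:

```python
# Shape-algebra sketch: (1x4) row vector times (4x4) matrix gives a (1x4) result.
import numpy as np

A = np.array([[3.0, 0.0, 3.0, 0.0]])   # (l, r, t, b) of edge point G1 from the example above
H = np.eye(4)                           # placeholder for the first coordinate transformation matrix

K = A @ H                               # K again describes offsets in the four directions
print(K.shape)  # (1, 4)
print(K)
```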
  • the acquisition principle of the second plane image is the same as that of the first plane image, and will not be repeated.
  • the VR glasses acquire the first offset of the first display device, and/or, the second offset of the second display device.
  • the first offset and the second offset may be the same or different.
  • for example, the first offset of the first display device 110 is the distance N2 by which the first display device 110 is offset to the left relative to the corresponding optical device 130, and the second offset of the second display device 120 is the distance N1 by which the second display device 120 is offset to the right relative to the corresponding optical device 140.
  • as another example, the first offset of the first display device 110 is the distance N3 by which the first display device 110 is offset to the left relative to the corresponding optical device 130, and the second offset of the second display device 120 is the distance N3 by which the second display device 120 is offset to the left relative to the corresponding optical device 140.
  • in this case, the first image and the second image do not need coordinate transformation for the user's eyes to achieve binocular fusion, so the coordinate transformation of the first image and the second image may be skipped.
  • alternatively, the coordinates of the first image and the second image can be shifted to the right by the distance N3, so that the center of the image appears directly in front of the human eye and the user does not need to squint.
  • the VR glasses store the first offset and the second offset in advance. For example, stored in a register.
  • the VR glasses read the first offset and the second offset from the register.
  • the first offset and the second offset may be calibrated and stored in the VR glasses before leaving the factory.
  • the binocular fusion detection device includes a camera 1401 and an optical system 1402 .
  • the images displayed by the two display devices of the VR glasses are captured by the camera 1401 through a series of reflections by the optical system 1402 .
  • the second display device 120 of the VR glasses displays image 1
  • the first display device 110 displays image 2 .
  • there are crosses on image 1 and image 2 (the cross on image 1 is a dotted line, and the cross on image 2 is a solid line); the distance L01 from the intersection point O1 of the cross on image 1 to the center point O2 of image 1 is equal to the distance L02 from the intersection point O3 of the cross on image 2 to the center point O4 of image 2, and the direction from the intersection point O1 to the center point O2 is opposite to the direction from the intersection point O3 to the center point O4.
  • image 3 is captured by the camera 1401 after the light is reflected by the optical system 1402.
  • Image 3 includes two crosses.
  • the intersection point of one cross is O1', which is the image point corresponding to the intersection point O1 on the image 1
  • the intersection point O3' of the other cross is the image point corresponding to the intersection point O3 on the image 2. It should be noted that if there were no assembly displacement deviation, then, since the crosses in image 1 and image 2 are symmetrical (that is, the distance L01 is equal to the distance L02, and the direction from the intersection O1 to the center point O2 is opposite to the direction from the intersection O3 to the center point O4), there should be only one cross on image 3 (the crosses of image 1 and image 2 would fuse into one).
  • the interval between the two crosses on image 3 includes: the interval in the X direction is x1, and the interval in the Y direction is y1.
  • the interval between two crosses can be used to determine the first offset of the first display device and the second offset of the second display device.
  • the first offset of the first display device is (x1/2, y1/2), and the second offset of the second display device is (-x1/2, -y1/2).
  • the first offset of the first display device is (x1, y1), and the second offset of the second display device is 0.
  • the first offset of the first display device is (x1/3, y1/3), and the second offset of the second display device is (-2x1/3, -2y1/3).
  • the sum of the displacements of the first display device and the second display device in the direction of the X axis is x1
  • the sum of the displacements of the first display device and the second display device in the direction of the Y axis is y1.
  • for example, a positive first offset indicates that the first display device is offset in the positive direction of the X axis and the positive direction of the Y axis, and a negative second offset indicates that the second display device is offset in the negative direction of the X axis and the negative direction of the Y axis.
  • the size of the offset of the object on the image is proportional to the size of the position offset of the display device.
  • the offset applied to a display device should not be too large, and the difference between the offsets of the two display devices should not be too large.
  • for example, if the first offset of the first display device is (x1, y1) and the second offset of the second display device is 0, the offset of objects in one image may be too large, or the difference between the object offsets on the images displayed by the two display devices may be too large. Therefore, the total translation to be compensated can be shared between the two display devices, for example, the first offset is (x1/2, y1/2) and the second offset is (-x1/2, -y1/2), so that the object offset on either image is not too large and the offset difference between the images displayed by the two display devices is not too large (see the sketch below).
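  • a minimal sketch of this splitting rule, assuming the interval (x1, y1) measured between the two crosses on image 3; the function name and the sample numbers are illustrative only:

```python
# Sketch: split the measured cross interval (x1, y1) between the two display devices
# so that the signed per-axis offsets compensate the total measured deviation.
def split_offsets(x1: float, y1: float, share_first: float = 0.5):
    """Return (first_offset, second_offset).
    share_first = 0.5 reproduces the (x1/2, y1/2) / (-x1/2, -y1/2) split."""
    first = (share_first * x1, share_first * y1)
    second = (-(1.0 - share_first) * x1, -(1.0 - share_first) * y1)
    return first, second

x1, y1 = 6.0, 2.0                        # hypothetical measured interval in pixels
print(split_offsets(x1, y1))             # ((3.0, 1.0), (-3.0, -1.0)): the even split
print(split_offsets(x1, y1, 1.0 / 3.0))  # the (x1/3, y1/3) vs (-2x1/3, -2y1/3) split
```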
  • the VR glasses process the first plane image based on the first offset to obtain a third plane image, and/or process the second plane image based on the second offset to obtain a fourth plane image.
  • the first plane image is image 1 and the second plane image is image 2 .
  • the first offset of the second display device 120 is (x1/2, y1/2)
  • the second offset of the first display device 110 is (-x1/2, -y1/2).
  • in this case, the image 3 captured by the camera includes a single cross, that is, the crosses on the two images are fused.
  • processing the first plane image based on the first offset to obtain the third plane image may include at least one of Mode 1 or Mode 2.
  • Mode 1: use the first offset to translate each pixel on the first plane image to obtain the third plane image. That is to say, the first plane image moves as a whole (including overlapping and non-overlapping areas).
  • Mode 2: use the first offset to translate only the pixels in the overlapping area on the first plane image to obtain the third plane image.
  • the overlapping area is an overlapping area on the first plane image and the second plane image, and at least one same object is included in the overlapping area.
  • Mode 2 considers that the pixels in the non-overlapping area do not need to be fused, so only the pixels in the overlapping area are translated to ensure binocular fusion; this reduces the workload and improves efficiency.
  • for example, the first point in the overlapping area on the first plane image is at (X1, Y1); after translation by the first offset, the first point is at the corresponding translated position on the third plane image (see the sketch below).
  • the first point may be any point in the overlapping area, such as a central point, an edge point, and the like.
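  • the sketch below illustrates Mode 2 under simplifying assumptions (the overlap is a single axis-aligned rectangle and the offset is an integer number of pixels); names and sizes are illustrative, not taken from the specification:

```python
# Minimal sketch of Mode 2: shift only the pixels inside the overlapping area of a
# plane image by (dx, dy), leaving the non-overlapping pixels untouched.
import numpy as np

def translate_overlap(image: np.ndarray, overlap: tuple, dx: int, dy: int) -> np.ndarray:
    """overlap = (x0, y0, x1, y1) in pixel coordinates, end-exclusive."""
    x0, y0, x1, y1 = overlap
    out = image.copy()
    region = image[y0:y1, x0:x1].copy()
    out[y0:y1, x0:x1] = 0                         # clear the old overlap location
    h, w = image.shape[:2]
    nx0, ny0 = max(0, x0 + dx), max(0, y0 + dy)
    nx1, ny1 = min(w, x1 + dx), min(h, y1 + dy)
    # crop the region if the shift pushes part of it outside the image
    out[ny0:ny1, nx0:nx1] = region[ny0 - (y0 + dy):ny1 - (y0 + dy),
                                   nx0 - (x0 + dx):nx1 - (x0 + dx)]
    return out

img = np.random.randint(0, 255, (1080, 1200), dtype=np.uint8)   # hypothetical plane image
shifted = translate_overlap(img, overlap=(200, 100, 1000, 900), dx=5, dy=-3)
```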
  • processing the second plane image based on the second offset to obtain the fourth plane image may include at least one of Mode 1 or Mode 2.
  • Mode 1: use the second offset to translate each pixel on the second plane image to obtain the fourth plane image. That is to say, the second plane image moves as a whole.
  • Mode 2: use the second offset to translate only the pixels in the overlapping area on the second plane image to obtain the fourth plane image.
  • the overlapping area is an overlapping area on the first plane image and the second plane image, and at least one same object is included in the overlapping area.
  • the second point may be any point in the overlapping area, such as a center point, an edge point, and the like.
  • multiple objects may be included in the overlapping area, such as a first object and a second object.
  • the offsets of the first object and the second object may be different.
  • the first feature point of the first object and the second feature point of the second object are taken as an example for description.
  • the coordinates of the first feature point of the first object are (X5, Y5)
  • the coordinates of the second feature point of the second object are (X6, Y6)
  • the first feature point of the first object The point may be the center point of the first object or a certain vertex of the first object, etc.
  • the second feature point of the second object may be the center point of the second object or a certain vertex of the second object.
  • take the first offset of the first feature point as (x1/2, y1/2) and the first offset of the second feature point as (x1/3, y1/3) as an example, that is, the first offset of the first feature point is larger than that of the second feature point.
  • the third plane image obtained after processing the first plane image is used for displaying on the second display device 120 .
  • coordinates (X5, Y5), coordinates (X6, Y6), coordinates (X7, Y7), and coordinates (X8, Y8) are coordinates in the same coordinate system, and are coordinates on the second display device 120 .
  • the coordinates of the first feature point of the first object are (X9, Y9), and the coordinates of the second feature point of the second object are (X10, Y10).
  • the second offset of the first feature point is (x1/4, y1/4), and the second offset of the second feature point is (x1/5, y1/5), that is, the second offset of the first feature point is greater than that of the second feature point.
  • the fourth plane image is obtained after processing the second plane image, and the fourth plane image is used for displaying on the first display device 110 .
  • the coordinate difference (D1, D2) and the coordinate difference (D3, D4) are different, for example, D1>D3 and/or D2>D4, because the offset of the first object is greater than the offset of the second object, where the offset of the second object may be 0, that is, the second object may not be offset at all.
  • for example, the offset of the first object is greater than that of the second object when at least one of the following conditions is satisfied. The conditions include:
  • the first object is in the area where the user's gaze point is located, and the second object is not in the area where the user's gaze point is located.
  • the gaze point of the user can be obtained according to the information obtained by the eye tracking module 105 .
  • the area where the gaze point is located may be a circular area or a square area centered on the gaze point.
  • users pay more attention to objects inside the area where the gaze point is located and less attention to objects outside that area. Therefore, an object outside the area where the user's gaze point is located can be offset by a small amount or not offset at all, which does not affect the user experience and saves computational effort.
  • the offsets of different objects can change accordingly to match the gaze point change and achieve better visual effects.
  • both the first object and the second object are located in the area where the user's gaze point is located, and the second object is closer to the edge of the area where the user's gaze point is located than the first object.
  • users pay more attention to objects in the middle of the area where the gaze point is located and less attention to objects at the edge of that area. Therefore, the offset of objects at the edge is smaller, which does not affect the user experience and saves workload.
  • the distance between the first object and the center of the first image is greater than the distance between the second object and the center of the second image, and in the fused image, the first object is closer to the center than the second object .
  • users pay more attention to objects in the middle of the image and less attention to objects at the edge. Therefore, the offset of objects at the edge of the image is smaller, which saves workload and does not affect the user experience. For example, when the electronic device is playing a movie, the user pays more attention to the content at the center of the screen, so the second object located at the edge of the image may be offset less or even not offset.
  • the number of user interactions corresponding to the first object is greater than the number of user interactions corresponding to the second object.
  • the objects with many interactions are the objects that the user is interested in, and the objects with few interactions are not the objects that the user is interested in, so the offset of the object with few interactions is small, which can save the workload and will not affect the user experience.
  • the electronic device may record the number of interactions between the user and each object; if the number of interactions between the user and the first object is greater than a first threshold, the first object may always be offset. If the number of interactions between the user and the second object is less than a second threshold, the second object is offset (or offset by a larger amount) only when it is located in the area where the gaze point is located.
  • the first object is a user-specified object
  • the second object is not a user-specified object.
  • the user may choose to offset only the first object, or to offset the first object more, according to his needs.
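  • the following is a hedged sketch of such a per-object policy, combining the conditions listed above (gaze area, position within that area, interaction count, user designation); the weights and thresholds are assumptions for illustration only:

```python
# Hedged sketch of a per-object offset policy: fully offset objects the user is
# likely to attend to, and reduce or skip the offset otherwise.
def object_offset(base_offset, in_gaze_area, near_gaze_center=True,
                  interactions=0, user_specified=False,
                  interaction_threshold=10):
    dx, dy = base_offset
    if user_specified or interactions > interaction_threshold:
        weight = 1.0                                # always fully offset favoured objects
    elif in_gaze_area:
        weight = 1.0 if near_gaze_center else 0.5   # less offset near the edge of the gaze area
    else:
        weight = 0.0                                # outside the gaze area: skip to save computation
    return (weight * dx, weight * dy)

print(object_offset((4.0, 2.0), in_gaze_area=True))                           # (4.0, 2.0)
print(object_offset((4.0, 2.0), in_gaze_area=True, near_gaze_center=False))   # (2.0, 1.0)
print(object_offset((4.0, 2.0), in_gaze_area=False))                          # (0.0, 0.0)
```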
  • the first display device displays the third plane image
  • the second display device displays the fourth plane image
  • the third plane image and the fourth plane image are processed images.
  • therefore, the user will not fail to achieve binocular fusion, and is able to see the virtual environment clearly and comfortably.
  • in the above flow, the coordinate transformation matrix is first used to process the three-dimensional image data to obtain the first plane image and the second plane image, and then the first plane image and/or the second plane image is translated.
  • alternatively, the coordinate transformation matrix can be adjusted first, and the adjusted coordinate transformation matrix can then be used to process the three-dimensional image data to obtain a plane image.
  • the plane image obtained in this way does not need to be translated: because the coordinate transformation matrix has already been adjusted, the obtained plane image is already a translated image.
  • FIG. 14 is another schematic flowchart of the display method provided by the embodiment of the present application.
  • the process includes:
  • the first coordinate transformation matrix is used to transform the three-dimensional image data into the first plane image
  • the second coordinate transformation matrix is used to transform the three-dimensional image data into the second plane image.
  • the first planar image is displayed on the first display device
  • the second planar image is displayed on the second display device.
  • update l, r, t, and b in the above first coordinate transformation matrix to l-x/2, r-x/2, t+y/2 and b+y/2 respectively, and substitute them into the first coordinate transformation matrix above to obtain the third coordinate transformation matrix, as follows:
  • in this way, the first plane image and the second plane image do not need to be translated (a sketch of the parameter update follows below).
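  • a small sketch of this idea, assuming hypothetical values for the per-image offset (x, y); build_matrix() is only a placeholder for the matrix formula in the original description, which is not reproduced here:

```python
# Sketch: fold the offset into the projection parameters so the resulting plane
# image needs no separate translation step.
def adjusted_bounds(l, r, t, b, x, y):
    return l - x / 2.0, r - x / 2.0, t + y / 2.0, b + y / 2.0

def build_matrix(l, r, t, b):
    # Placeholder: the real first/third coordinate transformation matrix is given
    # by the (elided) formula in the original description.
    return [[l, 0, 0, 0], [0, r, 0, 0], [0, 0, t, 0], [0, 0, 0, b]]

l, r, t, b = 3.0, 0.0, 3.0, 0.0        # bounds from the earlier edge-point example
x, y = 0.5, 0.2                        # hypothetical per-image offset
third_matrix = build_matrix(*adjusted_bounds(l, r, t, b, x, y))
```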
  • the distance may be represented by the number of pixels.
  • the distance L9 and the distance L10 can be expressed in pixels, and the distance L9 can be expressed as: there are M1 pixels between the center point P9 of the overlapping area 9110 and the center point R9 of the image 9100; the distance L10 can be expressed as: There are M2 pixels between the center point P10 of the overlapping area 9210 and the center point R1 of the image 9200; if M1 is not equal to M2, then the distance L9 is not equal to the distance L10.
  • the above description takes, as an example, the case where the first display device 110 or the second display device 120 moves in the horizontal direction, so that the eyeball center W1 of the user's left eye cannot be aligned with the center point S1 of the second display device 120, or the eyeball center W2 of the right eye cannot be aligned with the center point S2.
  • the first display device 110 or the second display device 120 can also move in other directions; for example, the first display device 110 or the second display device 120 moves in the up-down direction, so that the eyeball center W1 of the user's left eye cannot be aligned with the center point S1 of the second display device 120, or the eyeball center W2 of the user's right eye cannot be aligned with the center point S2.
  • the first display device 110 may also be rotated by a certain angle; in this case, the image displayed on the first display device 110 has a projected image in the horizontal direction, and the center point of the projected image can be aligned with the eyeball center W2 of the right eye.
  • the above embodiments mainly take a VR scene as an example, that is, a VR head-mounted display device executes the display method in this specification.
  • the display method in this specification can be executed by the AR head-mounted display device.
  • the AR head-mounted display device may be AR glasses or the like.
  • AR glasses include an optical engine that projects the light for images. The light emitted by the optical engine can be coupled in at the in-coupling grating of the lens of the AR glasses and coupled out to the human eye at the out-coupling grating of the lens, so that the user can see the virtual image corresponding to the image.
  • the display method provided in this specification can be used for AR glasses that have an assembly deviation between the optical engine and the coupling grating.
  • AR glasses include monocular display AR glasses and binocular display AR glasses. Wherein, in the monocular display AR glasses, at least part of one of the two lenses adopts an optical waveguide structure; in the binocular display AR glasses, at least part of the two lenses both adopt an optical waveguide structure.
  • an optical waveguide is a dielectric device that guides light waves to propagate in it, also known as a dielectric optical waveguide.
  • an optical waveguide refers to an optical element that uses the principle of total reflection to guide light waves to propagate in itself through total reflection.
  • a common waveguide substrate can be a guiding structure made of an optically transparent medium (such as quartz glass) that transmits optical frequency electromagnetic waves.
  • FIG. 16 shows an electronic device 1600 provided by this application.
  • the electronic device 1600 may be the aforementioned VR head-mounted display device.
  • the electronic device 1600 may include: one or more processors 1601; one or more memories 1602; a communication interface 1603; and one or more computer programs 1604, where the above components may be connected through one or more buses 1605.
  • the one or more computer programs 1604 are stored in the above memory 1602 and are configured to be executed by the one or more processors 1601; the one or more computer programs 1604 include instructions, and the instructions can be used to perform the relevant steps of the electronic device in the foregoing corresponding embodiments.
  • the communication interface 1603 is used to implement communication with other devices, for example, the communication interface may be a transceiver.
  • the methods provided in the embodiments of the present application are introduced from the perspective of an electronic device (such as a VR head-mounted display device) as an execution subject.
  • the electronic device may include a hardware structure and/or a software module, and realize the above-mentioned functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above-mentioned functions is executed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • the terms “when” or “after” may be interpreted to mean “if” or “after” or “in response to determining" or “in response to detecting ".
  • the phrase “in determining” or “if detected (a stated condition or event)” may be interpreted to mean “if determining" or “in response to determining" or “on detecting (a stated condition or event)” or “in response to detecting (a stated condition or event)”.
  • relational terms such as first and second are used to distinguish one entity from another, without limiting any actual relationship and order between these entities.
  • references to "one embodiment” or “some embodiments” or the like in this specification means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases “in one embodiment,” “in some embodiments,” “in other embodiments,” “in other embodiments,” etc. in various places in this specification are not necessarily All refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A display method and an electronic device. The electronic device displays a first image through a first display screen, where the first display screen corresponds to a first eye of a user, and displays a second image through a second display screen, where the second display screen corresponds to a second eye of the user. There is an overlapping area on the first image and the second image, and the overlapping area includes at least one identical object. On the first image, the center point of the overlapping area is at a first position; on the second image, the center point of the overlapping area is at a second position. The distance from the first position to the center point of the first image is not equal to the distance from the second position to the center point of the second image, and/or the direction from the first position to the center point of the first image is different from the direction from the second position to the center point of the second image. In this way, an assembly deviation of the display screens on the electronic device (such as a head-mounted display device) can be compensated, which helps improve the display effect.

Description

一种显示方法与电子设备
相关申请的交叉引用
本申请要求在2021年11月11日提交中国专利局、申请号为202111338178.3、申请名称为“一种显示方法与电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本说明书涉及电子技术领域,尤其涉及一种显示方法与电子设备。
背景技术
随着终端显示技术的发展,虚拟现实(Virtual Reality,VR)、增强现实(Augmented Reality,AR)、混合现实(Mixed Reality,MR)技术的应用场景越来越多。VR设备可以模拟产生一个三维(three-dimensional,3D)的虚拟世界场景,还可以提供在视觉、听觉、触觉或其他感官上的模拟体验,让用户感觉仿佛身历其境。并且,用户也可以与该模拟的虚拟世界场景进行交互。AR设备可以在用户观看现实世界场景的同时,为用户叠加显示虚拟图像,用户还可以与虚拟图像进行交互来实现增强现实的效果。MR融合了AR和VR,可以为用户提供合并现实和虚拟世界后的视界。
头戴式显示设备,是佩戴于用户头部的显示设备,可以为用户提供新的可视化环境。头戴式显示设备可以通过发射光学信号,向用户呈现VR、AR或MR等不同效果。
一般的,头戴式显示设备上设置两个显示器件,一个显示器件对应左眼,另一个显示器件对应右眼。左眼显示器件和右眼显示器件分别显示图像。这样,人的左眼和右眼分别采集到图像并经过大脑融合进而感受到虚拟世界。然而,在实际应用中,用户佩戴头戴式显示设备容易出现图像模糊、头晕或视觉疲劳,严重影响头戴式显示设备的舒适度及体验。
发明内容
本说明书的目的在于提供了一种显示方法与电子设备,用于提升头戴式显示设备的舒适度。
第一方面,提供一种显示方法,应用于电子设备。电子设备包括第一显示屏和第二显示屏,通过第一显示屏显示第一图像,第一显示屏对应用户的第一眼,通过第二显示屏显示第二图像,第二显示屏对应用户的第二眼。其中第一图像和第二图像上存在重叠区域,重叠区域内包括至少一个相同对象。在第一图像上,重叠区域的中心点位于第一位置。在第二图像上,重叠区域的中心点位于第二位置。第一位置到第一图像的中心点的距离为第一距离,第二位置到第二图像的中心点的距离为第二距离。第一位置到第一图像的中心点的方向为第一方向,第二位置到第二图像的中心点的方向为第二方向。第一距离不等于第二距离,和/或,第一方向不同于第二方向。
以电子设备是头戴式显示设备为例,一般来说,头戴式显示设备上第一显示屏所显示的图像和第二显示屏所显示的图像上存在重叠区域,例如,该重叠区域基于人脸中心线(或 电子设备的中心平面)是左右对称的,即,第一位置(第一显示屏上重叠区域中心点的位置)到第一显示屏的中心点的第一距离等于第二位置(第二显示屏上重叠区域中心点的位置)到第二显示屏的中心点的第二距离,而且,第一位置到第一显示屏的中心点的第一方向与第二位置到第二显示屏的中心点的第二方向相反。考虑到生产头戴式显示设备的过程中存在组装偏差会导致第一显示屏不能对齐对应的人眼,和/或,第二显示屏不能对齐对应的人眼,当第一显示屏和第二显示屏分别显示图像时,第一显示屏上的重叠区域与第二显示屏上的重叠区域相对于人脸中心线不对称,这样用户无法将两张图像上的重叠区域融合。
鉴于此,本申请实施例中,两个显示屏上显示屏的图像上的重叠区域的位置不对称,以补偿组装偏差。比如,第一位置(第一图像上重叠区域中心点的位置)到第一图像的中心点的第一距离不等于第二位置(第二图像上重叠区域中心点的位置)到第二图像的中心点的第二距离,和/或,第一位置到第一图像的中心点的第一方向与第二位置到第二图像的中心点的第二方向不同。这样,当第一显示屏显示所述第一图像、第二显示屏显示所述第二图像时,用户可以将第一图像和第二图像上的重叠区域融合,有助于提升头戴式显示的舒适度。
在一种可能的设计中,所述电子设备还包括第一光学器件和第二光学器件,所述第一光学器件对应所述第一显示屏,所述第二光学器件对应所述第二显示屏,所述第一光学器件和所述第二光学器件相对于中间平面对称;所述第一位置和所述第二位置相对于所述中间平面对称。当第一位置和第二位置相对于中间平面对称时,可以使得重叠区域更好地融合,达到更佳的视觉效果。
在一种可能的设计中,所述电子设备为头戴式显示设备,当所述电子设备被用户佩戴时,所述第一位置和所述第二位置相对于所述用户的人脸的中心线对称,可以使得重叠区域更好地融合,达到更佳的视觉效果。
在一种可能的设计中,所述第一位置随着所述第一显示屏的位置变化而变化。例如,所述第一显示屏向第三方向移动的情况下,所述第一图像上所述重叠区域向与所述第三方向相反的方向移动。在一种可能的设计中,所述第二位置随着所述第二显示屏的位置变化而变化。例如,所述第二显示屏向第四方向移动的情况下,所述第一图像上所述重叠区域向与所述第四方向相反的方向移动。
也就是说,第一显示屏和/或第二显示屏的位置可以动态变化。随着显示屏的动态变化,重叠区域的位置动态变化,以保证重叠区域可以融合。
在一种可能的设计中,在所述通过所述第一显示屏显示第一图像和所述通过所述第二显示屏显示第二图像之前,所述方法还包括所述第一显示屏和所述第二显示屏进行瞳距调节,所述瞳距调节包括:所述第一显示屏沿着第五方向移动一定距离,所述第二显示屏沿着与所述第五方向相反的第六方向移动相同距离;其中,所述第五方向为所述第一显示屏远离所述第二显示屏的方向,或,所述第五方向为所述第一显示屏靠近所述第二显示屏的方向。
需要说明的是,具有组装偏差的VR眼镜在经过瞳距(Inter Pupillary Distance,IPD)调节时,仍然存在组装偏差。比如,假设对VR眼镜作IPD调节(即第一显示器件和第二显示器件移动相同距离且移动方向相反,例如第一显示器件和第二显示器件相互靠近移动,或者第一显示器件和第二显示器件相互远离移动)。IPD调节之后再显示第一图像和第二图像时,所述第一距离与所述第二距离之间的距离差相较于所述瞳距调节之前保持不变,且 所述第一方向和所述第二方向之间的相对关系相较于所述瞳距调节之前保持不变,保证IPD调节前后重叠区域均可以融合。
一种可能的设计中,所述至少一个相同对象中包括第一对象和第二对象;在所述第一图像上,所述第一对象的第一特征点处于第一坐标,所述第二对象的第二特征点处于第二坐标;在所述第二图像上,所述第一对象的第一特征点处于第三坐标,所述第二对象的第二特征点处于第四坐标;所述第一坐标与所述第三坐标之间的坐标差与所述第二坐标与所述第四坐标之间的坐标差不同。
也就是说,重叠区域内包括两个对象,这两个对象的偏移量可以不同。比如,在满足如下条件中的至少一种时,这两个对象的偏移量不同,所述条件包括:
条件1,所述第一对象处于用户注视点所在区域内,所述第二对象不处于所述用户注视点所在区域内;
条件2,所述第一对象和所述第二对象均处于所述用户注视点所在区域内,且所述第二对象比所述第一对象靠近所述用户注视点所在区域的边缘;
条件3,所述第一对象与所述第一图像的中心点之间的距离小于所述第二对象与所述第二图像中心之间的距离;
条件4,所述第一对象对应的用户交互次数大于所述第二对象对应的用户交互次数;
条件5,所述第一对象是用户指定对象,所述第二对象不是用户指定对象。
在本申请实施例中,第二对象是用户关注点低的对象或者用户不感兴趣的对象(交互次数低),所以对第二对象的偏移量小些,或者,不对第二对象进行偏移,不会影响用户的观感,而且可以节省电子设备的计算量,提升效率。
在一种可能的设计中,所述方法还包括:所述电子设备包括第一显示模组和第二显示模组;所述第一显示模组包括所述第一显示屏和第一光学器件,所述第二显示模组包括所述第二显示屏和第二光学器件,所述第一显示屏的位置和所述第一光学器件的位置之间存在第一偏移量,所述第二显示屏的位置和所述第二光学器件的位置之间存在第二偏移量;所述方法还包括:获取三维图像数据;获取第一坐标转换矩阵和第二坐标转换矩阵,所述第一坐标转换矩阵对应第一光学器件,所述第二坐标转换矩阵对应第二光学器件;获取所述第一偏移量和所述第二偏移量;基于所述第一坐标转换矩阵和所述第一偏移量,将所述三维图像数据处理为所述第一图像;基于所述第二坐标转换矩阵和所述第二偏移量,将所述三维图像数据处理为所述第二图像。可以理解的是,第一图像和第二图像取自同一三维图像数据,分别根据第一光学器件(对应第一人眼的位置)和第二光学器件(对应第二人眼的位置)的位置转化得到,有助于第一图像和第二图像上重叠区域的融合,保证用户清楚的看到虚拟环境。
在一种可能的设计中,当所述第一显示模组的位置变化时,所述第一坐标转换矩阵变化;或者,当所述第二显示模组的位置变化时,所述第二坐标转换矩阵变化。当人眼的位置变化时,一般显示模组会随人眼的位置变化而变化,从而可以根据人眼的位置变化,调整显示屏的视角变化。
第二方面,还提供一种标定方法,应用于标定装置,所述标定装置中包括图像拍摄模块,所述方法包括:在待标定电子设备的第一显示屏显示第一图像、第二显示屏显示第二图像的情况下,通过所述图像拍摄模块对所述第一显示屏进行拍摄得到第三图像,对所述第二显示屏进行拍摄得到第四图像;其中,所述第一图像和所述第二图像上存在重叠区域, 所述重叠区域内包括至少一个相同对象,所述至少一个相同对象中包括一个标定对象,而且所述第一图像上所述重叠区域的中心点位于第一位置,所述第二图像上所述重叠区域的中心点位于第二位置;所述第一位置到所述第一图像的中心的距离等于所述第二位置到所述第二图像的中心的距离,所述第一位置到所述第一图像的中心的方向等于所述第二位置到所述第二图像的中心的方向;将所述第三图像与所述第四图像融合得到第五图像,所述第五图像上包括两个所述标定对象;基于所述两个标定对象之间的距离差确定所述第一显示屏的第一偏移量,和/或,所述第二显示屏的第二偏移量。
通过这种方式,可以标定出电子设备的组装偏差,即两个显示器件的偏移量,以便实现电子设备对组装偏差的补偿。
在一种可能的设计中,所述第一偏移量包括第一位移和第一方向,所述第二偏移量包括第二位移和第二方向;其中,所述第一位移与所述第二位移的位移总和等于所述距离差;所述第一方向与所述第二方向相反。
在一种可能的设计中,所述第一位移为所述距离差的二分之一,所述第二位移为所述距离差的二分之一。
在一种可能的设计中,所述方法还包括:将所述第一偏移量和所述第二偏移量写入所述待标定电子设备中,以使所述待标定电子设备基于所述第一偏移量对所述第一显示屏上所显示的图像进行处理,基于所述第二偏移量对所述第二显示屏上所述显示的图像进行处理。
第三方面,还提供一种显示方法,所述电子设备包括第一显示模组和第二显示模组;所述第一显示模组包括所述第一显示屏和第一光学器件,所述第二显示模组包括所述第二显示屏和第二光学器件,所述第一显示屏的位置和所述第一光学器件的位置之间存在第一偏移量,所述第二显示屏的位置和所述第二光学器件的位置之间存在第二偏移量;所述方法还包括:获取三维图像数据;获取第一坐标转换矩阵和第二坐标转换矩阵,所述第一坐标转换矩阵对应第一光学器件,所述第二坐标转换矩阵对应第二光学器件;获取所述第一偏移量和所述第二偏移量;基于所述第一坐标转换矩阵和所述第一偏移量,将所述三维图像数据处理为第一图像,所述第一显示模组显示所述第一图像;基于所述第二坐标转换矩阵和所述第二偏移量,将所述三维图像数据处理为第二图像,所述第二显示模组显示所述第二图像。可以理解的是,第一图像和第二图像取自同一三维图像数据,分别根据第一光学器件(对应第一人眼的位置)和第二光学器件(对应第二人眼的位置)的位置转化得到,有助于第一图像和第二图像上重叠区域的融合,保证用户清楚的看到虚拟环境。
在一种可能的设计中,当所述第一显示模组的位置变化时,所述第一坐标转换矩阵变化;或者,当所述第二显示模组的位置变化时,所述第二坐标转换矩阵变化。当人眼的位置变化时,一般显示模组会随人眼的位置变化而变化,从而可以根据人眼的位置变化,调整显示屏的视角变化。
第四方面,还提供一种电子设备,包括:
处理器,存储器,以及,一个或多个程序;
其中,所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述处理器执行时,使得所述电子设备执行如上述第一方面提供的方法步骤。
第五方面,还提供一种标定装置,包括:
处理器,存储器,以及,一个或多个程序;
其中,所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述处理器执行时,使得所述电子设备执行如上述第二方面提供的方法步骤。
第六方面,还提供一种***,包括:
用于执行如上述第一方面所述的方法步骤的电子设备,以及,
用于执行如上述第二方面所述的标定装置。
第七方面,还提供一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如上述第一方面或第二方面所述的方法。
第八方面,还提供一种计算机程序产品,包括计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如上述第一方面或第二方面所述的方法。
附图说明
图1为本说明书一实施例提供的VR***的示意图;
图2A和图2B为本说明书一实施例提供的VR头戴式显示设备的一种结构示意图;
图3A为本说明书一实施例提供的VR头戴式显示设备的另一种结构示意图;
图3B为本说明书一实施例提供的VR头戴式显示设备的一种软件结构示意图;
图4A至图4B为本说明书一实施例提供的人眼观察机制的示意图;
图5为本说明书一实施例提供的VR眼镜的显示原理的示意图;
图6A至图6B为本说明书一实施例提供的双目不融合的一种示意图;
图7A至图7B为本说明书一实施例提供的双目不融合的另一种示意图;
图8A至图8B为本说明书一实施例提供的一种显示方法的示意图;
图9至图10为本说明书一实施例提供的另一种显示方法的示意图;
图11为本说明书一实施例提供的显示方法的一种流程示意图;
图12A至图12B为本说明书一实施例提供的三维图像得到二维图像的示意图;
图12C至图13为本说明书一实施例提供的一种标定方法的示意图;
图14为本说明书一实施例提供的显示方法的另一种流程示意图;
图15为本说明书一实施例提供的存在组装偏差的一种示意图;
图16为本说明书一实施例提供的电子设备的示意图。
具体实施方式
以下,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。
本申请实施例涉及的至少一个,包括一个或者多个;其中,多个是指大于或者等于两个。另外,需要理解的是,在本说明书的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为明示或暗示相对重要性,也不能理解为明示或暗示顺序。比如,第一对象和第二对象并不代表二者的重要程度或者代表二者的顺序,仅仅是为了区分描述。在本申请实施例中,“和/或”,仅仅是描述关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
在本申请实施例的描述中,需要说明的是,除非另有明确的规定和限定,术语“安装”、“连接”应做广义理解,例如,“连接”可以是可拆卸地连接,也可以是不可拆卸地连接;可以是直接连接,也可以通过中间媒介间接连接。本申请实施例中所提到的方位用语,例如,“上”、“下”、“左”、“右”、“内”、“外”等,仅是参考附图的方向,因此,使用的方位用语是为了更好、更清楚地说明及理解本申请实施例,而不是指示或暗指所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本申请实施例的限制。“多个”是指至少两个。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本说明书的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
虚拟现实(Virtual Reality,VR)技术是借助计算机及传感器技术创造的一种人机交互手段。VR技术综合了计算机图形技术、计算机仿真技术、传感器技术、显示技术等多种科学技术,可以创建虚拟环境。虚拟环境包括由计算机生成的、并实时动态播放的二维或三维虚拟对象,提供用户关于视觉等感官的模拟,让用户感觉仿佛身历其境。而且,除了计算机图形技术所生成的视觉感知外,还有听觉、触觉、力觉、运动等感知,甚至还包括嗅觉和味觉等,也称为多感知。此外,还可以检测用户的头部转动,眼睛、手势、或其他人体行为动作,由计算机来处理与用户的动作相适应的数据,并对用户的动作实时响应,并分别反馈到用户的五官,进而形成虚拟环境。示例性的,用户佩戴VR头戴式显示设备(如,VR眼镜、VR头盔等)可以看到VR游戏界面,通过手势、手柄等操作,可以与VR游戏界面交互,仿佛身处游戏中。
增强现实(Augmented Reality,AR)技术是指将计算机生成的虚拟对象叠加到真实世界的场景之上,从而实现对真实世界的增强。也就是说,AR技术中需要采集真实世界的场景,然后在真实世界上增加虚拟环境。因此,VR技术与AR技术的区别在于,VR技术创建的是完全的虚拟环境,用户看到的全部是虚拟对象;而AR技术是在真实世界上叠加了虚拟对象,即既可以看到真实世界中对象也可以看到虚拟对象。比如,用户佩戴透明眼镜,通过该眼镜的镜片可以看到周围的真实环境,而且该镜片上还可以显示虚拟对象,这样,用户既可以看到真实对象也可以看到虚拟对象。
混合现实技术(Mixed Reality,MR),是通过在虚拟环境中引入现实场景信息(或称为真实场景信息),将虚拟环境、现实世界和用户之间搭起一个交互反馈信息的桥梁,从而增强用户体验的真实感。具体来说,把现实对象虚拟化,(比如,使用摄像头来扫描现实对象进行三维重建,生成虚拟对象),经过虚拟化的真实对象引入到虚拟环境中,这样,用户在虚拟环境中可以看到真实对象。
双目融合(binocular fusion),又可以称为双眼视象融合,是一种视觉现象。即两眼同时观察同一对象时,在各自视网膜上形成该对象的两个像,然后分别经两侧视神经传到皮层视中枢同一区域,而融合成完整、单一物像的知觉经验。
在本文中,虚拟图像或虚拟环境可以包括各种对象(object),对象又可以称为目标。对象可以包括人、动物或家具等可以出现在真实世界环境中的物体或东西,对象也可以包 括虚拟图标、导航栏、软件按钮或窗口等虚拟元素,这些虚拟元素可以用于与用户进行交互。
需要说明的是,本申请实施例提供的技术方案可以适用于VR、AR或MR等头戴式显示设备中;或者,还可以适用于除了VR、AR和MR之外,其它需要向用户展示三维立体环境的场景或电子设备,本申请实施例对电子设备的具体类型不作任何限制。
为了方便理解,下文主要以VR头戴式显示设备为例进行介绍。
示例性的,VR头戴式显示设备100可以应用于如图1所示的VR***中。VR***中包括VR头戴式显示设备100,以及处理设备200,该VR***可以称为VR分体机。VR头戴式显示设备100可以与处理设备200连接。VR头戴式显示设备100与处理设备200之间的连接包括有线或无线连接,无线连接可以是蓝牙(bluetooth,BT),可以是传统蓝牙或者低功耗BLE蓝牙,无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),Zigbee,调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR),或通用2.4G/5G频段无线通信连接等。
在一些实施例中,处理设备200可以进行处理计算,例如,处理设备200可以生成图像并对图像处理(处理方式将在后文介绍),然后将处理后的图像发送给VR头戴式显示设备进行显示。其中,处理设备200可以包括主机(例如VR主机)或服务器(例如VR服务器)。VR主机或VR服务器可以是具有较大计算能力的设备。例如,VR主机可以是手机、平板电脑、笔记本电脑等设备,VR服务器可以是云服务器等。
在一些实施例中,VR头戴式显示设备100可以是眼镜、头盔等。VR头戴式显示设备100上一般设置有两个显示器件,即第一显示器件110和第二显示器件120。VR头戴式显示设备100的显示器件可以向人眼显示图像。在图1所示的实施例中,第一显示器件110和第二显示器件120被包裹在VR眼镜内部,所以图1中用于指示第一显示器件110和第二显示器件120的箭头使用虚线表示。
在一些实施例中,VR头戴式显示设备100本机还具有图像生成、处理等功能,即VR头戴式显示设备100不需要图1中的处理设备200,这样的VR头戴式显示设备100可以称为VR一体机。
请参见图2A,为VR头戴式显示设备100的一种示意图。如图2A所示,VR头戴式显示设备100包括显示模组1和显示模组2。其中,显示模组1包括第一显示器件110和光学器件130。显示模组2包括第二显示器件120和光学器件140。其中,显示模组1和显示模组2也可以称为镜筒。当用户佩戴VR头戴式显示设备100时,显示模组1用于向用户右眼展示图像。显示模组2用于向用户左眼展示图像。可以理解的是,图2A所示的VR头戴式显示设备100种还可以包括其它部件,比如,还包括支撑部30和支架20,其中支撑部30用于将VR头戴式显示设备100支撑在鼻梁上,支架20用于将VR头戴式显示设备100支撑在双耳上,以保证VR头戴式显示设备100稳定佩戴。
在一些实施例中,光学器件130和光学器件140相对于中间平面C对称,在图2A中,中间平面C为垂直于纸面的平面。在一些实施例中,VR头戴式显示设备100可以为左右对称结构,支撑部30和/或支架20可以分别相对于中间平面C左右对称,支撑部30可以固定人脸的位置,有利于光学器件130和光学器件140分别对齐用户的左眼和右眼。
当VR头戴式显示设备100被用户佩戴时,光学器件130和光学器件140分别对齐用户的左眼和右眼。一般,人脸基本为左右对称的,人的左眼和右眼相对于人脸的中心线左右对称。当VR头戴式显示设备100被用户佩戴时,左眼和右眼相对于中间平面C对称,光学器件130和光学器件140相对于人脸的中心线对称,人脸的中心线在中间平面C内,即人脸的中心线在中间平面C重叠。
在本申请的实施例中,“对称”可以是严格的对称,也可以存在微小的偏差。例如,光学器件130和光学器件140可以相对于中间平面C严格对称,或者,光学器件130和光学器件140相对于中间平面C基本对称,基本对称可以存在一定的偏差,该偏差在微小的范围之内。
为了方便描述,请参见图2B,图2B可以理解为对图2A中的VR头戴式显示设备100的一种简化,比如,图2B中仅示出了显示模组1和显示模组2,其它部件未示出。如图2B,用户佩戴VR头戴式显示设备100的情况下,第二显示器件120位于光学器件140背离左眼的一侧,第一显示器件110位于光学器件130背离右眼的一侧,光学器件130和光学器件140相对于人脸的中心线D对称。当第一显示器件110在显示图像时,第一显示器件110发出的光线经过光学器件130汇聚到人的右眼,当第二显示器件120在显示图像时,第二显示器件120发出的光线经过光学器件140汇聚到人的左眼。
需要说明的是,图2A或图2B所示的VR头戴式显示设备100的组成,仅为一种逻辑的示意。在具体的实现中,光学器件和/或显示器件的数量可以根据不同需求灵活设置。比如,在一些实施例中,第一显示器件110和第二显示器件120可以是两个独立的显示器件,或者,是同一块显示器件上的两个显示区域。其中,在一些实施例中,第一显示器件110和第二显示器件120可以分别为显示屏,例如液晶显示屏、发光二极管(light emitting diode,LED)显示屏或者其它类型的显示器件,本申请实施例不作限定。在另一些实施例中,光学器件130和光学器件140可以是两个独立的光学器件,或者是同一光学器件上的不同部分。在一些实施例中,光学器件130或140可以分别是反射镜、透射镜或光波导等中的一种或几种光学器件,也可以提高视场角,例如,光学器件130或140可以分别是多个透射镜组成的目镜,示例性的,光学器件可以是菲涅尔透镜和/或非球面透镜等,本申请实施例不作限定。一般情况下,光学器件130和光学器件140分别对准用户的两只眼睛,当进行IPD调节时,两个光学器件调整的距离相同,方向相反。
为了方便理解本说明书的技术方案,下文中主要以图2B的VR头戴式显示设备100为例进行介绍,但图2A中的VR头戴式显示设备也可以实施该技术方案(图2B仅是对图2A的一种简化)。
可以理解的是,VR头戴式显示设备100中还可以包括更多器件,具体参见图3A。
示例性的,请参考图3A,为本申请实施例提供的一种头戴式显示设备100的结构示意图。头戴式显示设备100可以是VR头戴式显示设备、AR头戴式显示设备、MR头戴式显示设备等等。以VR头戴式显示设备为例,如图3A所示,VR头戴式显示设备100可以包括处理器101,存储器102,传感器模块103(例如可以用于获取用户的姿态等),麦克风104,按键150,输入输出接口160,通信模块170,摄像头180,电池190、光学显示模组106以及眼动追踪模组105等。
处理器101通常用于控制VR头戴式显示设备100的整体操作,可以包括一个或多个处理单元,例如:处理器101可以包括应用处理器(application processor,AP),调制解调 处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),视频处理单元(video processing unit,VPU)控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
处理器101中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器101中的存储器为高速缓冲存储器。该存储器可以保存处理器101刚用过或循环使用的指令或数据。如果处理器101需要再次使用该指令或数据,可从该存储器中直接调用。避免了重复存取,减少了处理器101的等待时间,因而提高了***的效率。
在本说明书的一些实施例中,处理器101可以用于控制VR头戴式显示设备100的光焦度。示例性的,处理器101可以用于控制光学显示模组106的光焦度,实现对头戴式显示设备100的光焦度的调整的功能。例如,处理器101可以通过调整光学显示模组106中各个光学器件(如透镜等)之间的相对位置,使得光学显示模组106的光焦度得到调整,进而使得光学显示模组106在向人眼成像时,对应的虚像面的位置可以得到调整。从而达到控制头戴式显示设备100的光焦度的效果。
在一些实施例中,处理器101可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口,串行外设接口(serial peripheral interface,SPI)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器101可以包含多组I2C总线。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器101与通信模块170。例如:处理器101通过UART接口与通信模块170中的蓝牙模块通信,实现蓝牙功能。
MIPI接口可以被用于连接处理器101与光学显示模组106中的显示器件,摄像头180等***器件。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器101与摄像头180,光学显示模组106中的显示器件,通信模块170,传感器模块103,麦克风104等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。在一些实施例中,摄像头180可以采集包括真实对象的图像,处理器101可以将摄像头采集的图像与虚拟对象融合,通过光学显示模组106现实融合得到的图像。在一些实施例中,摄像头180还可以采集包括人眼的图像。处理器101通过该图像进行眼动追踪。
USB接口是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口可以用于连接充电器为VR头戴式显示设备100充电,也可以用于VR头戴式显示设备100与***设备之间传输数据。也可以用于连接耳机,通过耳 机播放音频。该接口还可以用于连接其他电子设备,例如手机等。USB接口可以是USB3.0,用于兼容高速显示接口(display port,DP)信号传输,可以传输视音频高速数据。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对头戴式显示设备100的结构限定。在本说明书另一些实施例中,头戴式显示设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
另外,VR头戴式显示设备100可以包含无线通信功能,比如,VR头戴式显示设备100可以从其它电子设备(比如VR主机)接收图像进行显示,或者VR头戴式显示设备100可以直接从基站等站点获取数据。通信模块170可以包含无线通信模块和移动通信模块。无线通信功能可以通过天线(未示出)、移动通信模块(未示出),调制解调处理器(未示出)以及基带处理器(未示出)等实现。天线用于发射和接收电磁波信号。VR头戴式显示设备100中可以包含多个天线,每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块可以提供应用在VR头戴式显示设备100上的包括第二代(2th generation,2G)网络/第三代(3th generation,3G)网络/***(4th generation,4G)网络/第五代(5th generation,5G)网络/第六代(6th generation,6G)网络等无线通信的解决方案。移动通信模块可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块可以由天线接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块还可以对经调制解调处理器调制后的信号放大,经天线转为电磁波辐射出去。在一些实施例中,移动通信模块的至少部分功能模块可以被设置于处理器101中。在一些实施例中,移动通信模块的至少部分功能模块可以与处理器101的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器等)输出声音信号,或通过光学显示模组106中的显示器件显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器101,与移动通信模块或其他功能模块设置在同一个器件中。
无线通信模块可以提供应用在VR头戴式显示设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星***(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块经由天线接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器101。无线通信模块还可以从处理器101接收待发送的信号,对其进行调频,放大,经天线转为电磁波辐射出去。
在一些实施例中,VR头戴式显示设备100的天线和移动通信模块耦合,使得VR头戴式显示设备100可以通过无线通信技术与网络以及其他设备通信。该无线通信技术可以包括全球移动通讯***(global system for mobile communications,GSM),通用分组无线 服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),5G,6G,BT,GNSS,WLAN,NFC,FM,和/或IR技术等。GNSS可以包括全球卫星定位***(global positioning system,GPS),全球导航卫星***(global navigation satellite system,GLONASS),北斗卫星导航***(beidou navigation satellite system,BDS),准天顶卫星***(quasi-zenith satellite system,QZSS)和/或星基增强***(satellite based augmentation systems,SBAS)。
VR头戴式显示设备100通过GPU,光学显示模组106,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接光学显示模组106和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器101可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
存储器102可以用于存储计算机可执行程序代码,该可执行程序代码包括指令。处理器101通过运行存储在存储器102的指令,从而执行VR头戴式显示设备100的各种功能应用以及数据处理。存储器102可以包括存储程序区和存储数据区。其中,存储程序区可存储操作***,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储头戴式显示设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,存储器102可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
VR头戴式显示设备100可以通过音频模块,扬声器,麦克风104,耳机接口,以及应用处理器等实现音频功能。例如音乐播放,录音等。音频模块用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块还可以用于对音频信号编码和解码。在一些实施例中,音频模块可以设置于处理器101中,或将音频模块的部分功能模块设置于处理器101中。扬声器,也称“喇叭”,用于将音频电信号转换为声音信号。头戴式显示设备100可以通过扬声器收听音乐,或收听免提通话。
麦克风104,也称“话筒”,“传声器”,用于将声音信号转换为电信号。VR头戴式显示设备100可以设置至少一个麦克风104。在另一些实施例中,VR头戴式显示设备100可以设置两个麦克风104,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,VR头戴式显示设备100还可以设置三个,四个或更多麦克风104,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口用于连接有线耳机。耳机接口可以是USB接口,也可以是3.5毫米(mm)的开放移动头戴式显示设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
在一些实施例中,VR头戴式显示设备100可以包括一个或多个按键150,这些按键可以控制VR头戴式显示设备,为用户提供与VR头戴式显示设备100进行交互的功能。按键150的形式可以是按钮、开关、刻度盘和触摸或近触摸传感设备(如触摸传感器)。具体的,例如,用户可以通过按下按钮来打开VR头戴式显示设备100的光学显示模组106。按键150包括开机键,音量键等。按键150可以是机械按键。也可以是触摸式按键。头戴 式显示设备100可以接收按键输入,产生与头戴式显示设备100的用户设置以及功能控制有关的键信号输入。
在一些实施例中,VR头戴式显示设备100可以包括输入输出接口160,输入输出接口160可以通过合适的组件将其他装置连接到VR头戴式显示设备100。组件例如可以包括音频/视频插孔,数据连接器等。
光学显示模组106用于在处理器101的控制下,为用户呈现图像。光学显示模组106可以通过反射镜、透射镜或光波导等中的一种或几种光学器件,将实像素图像显示转化为近眼投影的虚拟图像显示,实现虚拟的交互体验,或实现虚拟与现实相结合的交互体验。例如,光学显示模组106接收处理器101发送的图像数据信息,并向用户呈现对应的图像。
示例性的,光学显示模组106可以参见前文图2A所示的结构,比如光学显示模组106中包括两个显示屏显示器件,即第一显示器件110和第二显示器件120。或者,光学显示模组106还可以参见前文图2B所示的结构,比如,光学显示模组106中包括显示模组1和显示模组2,显示模组1包括第一显示器件110和光学器件130,显示模组2包括第二显示器件120和光学器件140。
在一些实施例中,VR头戴式显示设备100还可以包括眼动跟踪模组1200,眼动跟踪模组1200用于跟踪人眼的运动,进而确定人眼的注视点。如,可以通过图像处理技术,定位瞳孔位置,获取瞳孔中心坐标,进而计算人的注视点。在一些实施例中,该眼动追踪***可以通过视频眼图法或者光电二极管响应法或者瞳孔角膜反射法等方法,确定用户的注视点位置(或者确定用户的视线方向),从而实现用户的眼动追踪。
在一些实施例中,以采用瞳孔角膜反射法确定用户的视线方向为例。眼动追踪***可以包括一个或多个近红外发光二极管(Light-Emitting Diode,LED)以及一个或多个近红外相机。该近红外LED和近红外相机未在图3A中示出。在不同的示例中,该近红外LED可以设置在光学器件周围,以便对人眼进行全面的照射。在一些实施例中,近红外LED的中心波长可以为850nm或940nm。该眼动追踪***可以通过如下方法获取用户的视线方向:由近红外LED对人眼进行照明,近红外相机拍摄眼球的图像,然后根据眼球图像中近红外LED在角膜上的反光点位置以及瞳孔的中心,确定眼球的光轴方向,从而得到用户的视线方向。
需要说明的是,在本说明书的一些实施例中,可以分别为用户的双眼设置各自对应的眼动追踪***,以便同步或异步地对双眼进行眼动追踪。在本说明书的另一些实施例中,也可以仅在用户的单只眼睛附近设置眼动追踪***,通过该眼动追踪***获取对应人眼的视线方向,并根据双眼注视点的关系(如用户在通过双眼观察物体时,两个眼睛的注视点位置一般相近或相同),结合用户的双眼间距,即可确定用户的另一只眼睛的视线方向或者注视点位置。
可以理解的是,本申请实施例示意的结构并不构成对VR头戴式显示设备100的具体限定。在本说明书另一些实施例中,VR头戴式显示设备100可以包括比图3A更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置,本申请实施例不作限定。
图3B是本申请实施例的VR头戴式显示设备100的软件结构框图。
如图3B所示,VR头戴式显示设备100的软件结构可以是分层架构,例如可以将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实 施例中,分为五层,从上至下分别为应用程序层210,应用程序框架层(framework,FWK)220,安卓运行时(Android runtime)230和***库240,内核层250以及硬件层260。
其中,应用程序层210可以包括一系列应用程序包。示例性的,如图3B所示,应用程序层中包括图库211应用、游戏212应用,等等。
应用程序框架层220为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层可以包括一些预先定义的函数。如图3B所示,应用程序框架层可以包括资源管理器221,视图***222等。比如,视图***222包括可视控件,例如显示文字的控件,显示图片的控件等。视图***222可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括消息通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。资源管理器221为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
Android runtime 230包括核心库和虚拟机。Android runtime 230负责安卓***的调度和管理。核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
***库240可以包括多个功能模块。例如:表面管理器(surface manager)241,媒体库(media libraries)242,三维图形处理库(例如:OpenGL ES)243,2D图形引擎244(例如:SGL)等。其中,表面管理器241用于对显示子***进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库242可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。三维图形处理库243用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎244是2D绘图的绘图引擎。
此外,***库240还可以包括VR算法集成模块245。VR算法集成模块245中包括第一显示器件的第一偏移量、第二显示器件的第二偏移量、坐标转换矩阵,以及基于坐标转换矩阵进行坐标转换的相关算法,等等。关于第一偏移量、第二偏移量、坐标转换矩阵等将在后文进行详细的介绍。需要说明的是,图3B中以VR算法集成模块245位于***库中为例,可以理解的是,VR算法集成模块245还可以位于其他层,比如应用程序框架层220,本申请实施例不作限定。
内核层250是硬件和软件之间的层。内核层250至少包含显示驱动251,摄像头驱动252,音频驱动253,传感器驱动254等等。
硬件层可以包括第一第一显示器件110、第二第二显示器件120,以及各类传感器模块,例如加速度传感器201、重力传感器202、触摸传感器203等。
可以理解的是,图3B所示的软件结构并不构成对VR头戴式显示设备100的软件结构的具体限定。比如,在另一些实施例中,VR头戴式显示设备100的软件结构可以包括比图3B更多或更少的层,比如还包括适配层,该适配层位于应用程序框架层220与***库240之间,用于实现上层(即应用程序框架层)与下层(即***库)之间的适配,比如,实现上层与下层之间的接口匹配,以保证上层与下层能够进行数据通信。
以图3B所示的软件结构为例,本申请提供的显示方法的一种示例性的流程包括:
应用程序层210中的游戏212应用启动后,通过框架层220调用***库240。***库 240将该游戏212应用产生的三维图像转换为第一平面图像和第二平面图像,其中,第一平面图像对应第一第一显示器件110,第二平面图像对应第二第二显示器件120。***库获取VR算法集成模块240中的第一偏移量和第二偏移量,使用第一偏移量对第一平面图像处理(比如坐标转换处理)得到第三平面图像,使用第二偏移量对第二平面图像处理(比如坐标转换处理)得到第四平面图像。***库240通过内核层内的显示驱动251来驱动第一第一显示器件110显示第三平面图像,驱动第二第二显示器件120显示第四平面图像,以通过第三平面图像和第四平面图像向用户展示虚拟环境。
为了方便描述,下文以VR头戴式显示设备100是VR眼镜为例进行介绍。
为了能够清楚的说明本说明书的技术方案,以下首先对人眼视觉产生机制进行简单说明。
图4A为人眼的组成示意图。如图4A,人眼中可以包括晶状体和睫状肌,以及位于眼底的视网膜。其中,晶状体可以起到变焦透镜的作用,对射入人眼的光线进行汇聚处理,以便将入射光线汇聚到人眼眼底的视网膜上,使得实际场景中的景物能够在视网膜上成清晰的像。睫状肌可以用于调节晶状体的形态,比如睫状肌可以通过收缩或放松,调节晶状体的屈光度,达到调整晶状体焦距的效果。从而使得实际场景中不同距离的物体,都可以通过晶状体清晰地在视网膜上的成像。
在真实世界中,用户(未佩戴VR眼镜等头戴式电子设备)观看物体时,左眼和右眼的视角有不同,所以左眼和右眼采集的图像不同。一般,左眼和右眼存在视场范围的重叠,所以左眼和右眼采集的图像上存在重叠区域,重叠区域内包括位于用户双眼视场重叠范围的物体的像。
举例来说,请参见图4B,真实环境400中包括多个被观察物体,比如,树410、足球420、狗430等。用户观察真实环境400时,假设足球420处于左眼的视场范围内但不处于右眼的视场范围内,树410处于左眼和右眼的视场重叠范围440内,狗430处于右眼的视场范围内但不处于左眼的视场范围内。那么,左眼捕捉到的是图像4100,即图像4100为左眼视网膜上形成的像。右眼捕捉到的是图像4200,即图像4200为右眼视网膜上形成的像。图像4100与图像4200上存在重叠区域。重叠区域内包括视场重叠范围440内的物体的像。比如,图像4100上重叠区域为区域4110。图像4200上重叠区域为区域4210。其中,区域4110内包括树410的像4101,区域4210内包括树410的像4201(因为树410处于左眼和右眼的重叠的视场范围内)。图像4100的非重叠区域4120内包括足球420的像4102(因为足球420位于左眼的视场范围内)。由于狗430不处于左眼的视场范围内,所以图像4100上不包括狗430的像。图像4200的非重叠区域4220内包括狗430的像4202(因为狗430位于右眼的视场范围内)。由于足球420不处于右眼的视场范围,所以图像4200中不包括足球420的像。
可以理解的是,图像4100的中心点Q1对齐左眼眼球中心W1,即,中心点Q1与左眼眼球中心W1点处于同一直线K1上,直线K1是经过左眼眼球中心W1且与左眼眼球垂直的线。左眼眼球中心W1可以理解为左眼瞳孔的中心。图像4200的中心点Q2对齐右眼球中心W2,即,Q2与W2处于同一直线K2上,直线K2是经过右眼眼球中心W2且与右眼眼球垂直的线。右眼眼球中心W2可以理解为右眼瞳孔的中心。
继续如图4B,双眼的视场重叠区域440的中心点P到直线K1的距离为L1,到直线 K2的距离为L2,距离L1等于距离L2,而且,中心点P到直线K1的方向与到直线K2的方向相反。对应的,重叠区域4110中包括中心点P1,P1点是P点在图像4100中对应的像点。重叠区域4210中包括中心点P2,P2点是P点在图像4200中对应的像点。因此,P1点距离图像4100的中心点Q1的距离为L1’,P2点距离图像4200的中心点Q2的距离为L2’,距离L1’等于距离L2’,而且,P1点到Q1点的方向与P2点到Q2点的方向相反,中心点P1和中心点P2相对于人脸的中心线D对称,即中心点P1和中心点P2相对于中间平面也对称。在本申请的实施例中,图像的中心点可以理解为图像在上下方向的正中心,并且是图像在左右方向的正中心。同理,重叠区域的中心点可以理解为重叠区域在上下方向的正中心,并且是在左右方向的正中心。
左眼采集到图像4100,右眼采集到图像4200之后,大脑会融合图像4100与图像4200,得到一张图像,该图像即用户实际看到的图像。其中,图像4100与图像4200融合包括重叠区域4110内的像与重叠区域4210内的像融合。比如,同一被观察物体在重叠区域4110内的像与在重叠区域4210内的像会融合为一个,比如,像4101与像4201会融为一个,这样用户看到图像中包括一棵树,符合真实环境的情况。
这种人眼视觉机制下,左眼和右眼看到的图像上重叠区域可以融合(即可以实现双目融合),所以用户看到的景象是清楚的,不会出现重影(比如树的像有重影),而且人眼状态是舒适的。
在一些实施例中,VR眼镜等头戴式显示设备利用上述人眼视觉产生机制向用户展示虚拟环境。
为了方便对比,以VR眼镜向用户展示的虚拟环境为图4B所示的环境400为例,即,VR眼镜的两个显示器件上分别显示一张图像,这两张图像中包括环境400中的各个对象(比如树410、足球420、狗430)的像,如此,用户才能通过这两张图像感受到环境400。
示例性的,如图5,VR眼镜生成两张图像,即图像5100和图像5200。图像5100和图像5200中包括环境400中各个对象(比如树410、足球420、狗430)的像。其中,图像5100与图像5200上包括重叠区域。图像5100上的重叠区域为区域5110,图像5200上的重叠区域为区域5210,区域5110中的对象都包含在区域5210内,且区域5210中的对象也都包含在区域5110内。图像5100在第二显示器件120上显示。图像5200在第一显示器件110上显示。其中,当第二显示器件120显示图像5100时,图像5100的中心点R1对齐第二显示器件120的中心点S1,即R1与S1处于同一直线K3上,直线K3是经过第二显示器件120中心点S1且与第二显示器件120垂直的线。当第一显示器件110显示图像5200时,图像5200的中心点R2对齐第一显示器件110的中心点S2,即R2与S2处于同一直线K4上。直线K4是经过第一显示器件110中心点S2且与第一显示器件110垂直的线。其中,图像5100上重叠区域5110的中心点P3到直线K3的距离为L3。图像5200上重叠区域5210的中心点P4到直线K4的距离为L4。距离L3等于距离L4,而且,中心点P3到直线K3的方向与中心点P4到直线K4的方向相反。
当用户佩戴VR眼镜时,用户左眼眼球中心W1对齐第二显示器件120的中心点S1,采集到图像5300。即,图像5300的中心点T1与显示器件中心点S1对齐。也就是说,图像5100的中心点R1点、第二显示器件120中心点S1点、用户左眼眼球中心W1点、图像5300的中心点T1点处于同一直线K3上。其中,图像5300上的T1点是图像5100上的 R1对应的像点。
当用户佩戴VR眼镜时,用户右眼眼球中心W2对齐第一显示器件110的中心点S2,采集的图像5400。即,图像5400的中心点T2与第一显示器件110的中心点S2对齐。也就是说,图像5200的中心点R2点、第一显示器件110中心点S2点、用户右眼眼球中心W2点、图像5400的中心点T2点处于同一直线K4上。其中,图像5400上的T2点是图像5200上的R2对应的像点。
左眼采集的图像5300与右眼采集的图像5400上包括重叠区域。图像5300上的重叠区域为区域5310。图像5400上的重叠区域为区域5410。重叠区域5310中包括中心点P3’,P3’点是图像5100上P3点在图像5300上对应的像点。重叠区域5410中包括中心点P4’,P4’点是图像5200上P4点在图像5400上对应的像点。P3’点到直线K3的距离为L3’。P4’点到直线K4的距离为L4’。距离L3’等于距离L4’,而且,P3’点到直线K3的方向与P4’点到直线K4的方向相反。中心点P3’和中心点P4’相对于人脸的中心线D对称,即中心点P3’和中心点P4’相对于中间平面也对称。这样,大脑将图像5300与图像5400融合能够得到如图4B所示的环境400,从而可以模拟真实的环境,而且,因为图像5300与图像5400能够融合,用户眼睛是舒适的。也就是说,用户佩戴VR眼镜时,双眼采集的图像能够融合,而且眼睛舒适。
需要说明的是,在图5所示的实施例中,以用户左眼眼球中心W1与第二显示器件120的中心点S1对齐,并且用户右眼眼球中心W2与第一显示器件110的中心点S2对齐为例,在一些实施例中,可能存在情况:至少一个眼球的中心点无法与对应的显示器件的中心点对齐。比如,包括三种情况。情况1:左眼眼球中心W1点与中心点S1点能够对齐,但右眼眼球中心W2与中心点S2无法对齐。情况2:左眼眼球中心W1点与中心点S1点无法对齐,但右眼眼球中心W2点与中心点S2点能够对齐。情况3:左眼眼球中心W1点与中心点S1点无法对齐,且右眼眼球中心W2与中心点S2也无法对齐。
一种可能的场景为,VR眼镜的生产商在生成VR眼镜时,因为组装误差,导致至少一个显示器件(即显示屏)的中心无法对准眼睛。或者,因为用户使用VR眼镜的过程中,因为零件松动,导致至少一个显示器件的中心无法对准眼睛,例如,在组装时,显示屏与对应的光学器件没有对齐;在VR眼镜被佩戴时,光学器件一般会对齐眼睛,但显示屏的中心无法对准眼睛。
在一些实施例中,VR眼镜的两个显示器件之间的距离可调节,简称瞳距(Inter Pupillary Distance,IPD)调节,以适应不同用户之间的瞳距变化。比如,VR眼镜上设置按钮或手柄等可以用于调整VR眼镜的两个显示模组的间距,当显示屏和光学器件的位置随着显示模组的位置发生变化。举例来说,当家庭中成员A佩戴VR眼镜时,该成员A可能会通过按钮或手柄调节两个显示器件之间的距离,在该成员A结束佩戴之后,家庭中成员B佩戴该VR眼镜时,VR眼镜上的两个显示器件之间的距离可能不匹配该成员B的眼间距,此时,成员B可以重新调整两个显示器件的间距(可以理解为两个显示模组的间距)。一般,IPD调节时,两个显示器件调整的距离相同,方向相反。比如第一显示器件110向左移动距离1厘米,则第二显示器件120向右移动距离1厘米;或者,第一显示器件110右移距离2厘米,则第二显示器件120左移距离2厘米。应理解,对于具有组装偏差的VR眼镜,由于该VR眼镜本身就存在至少一个显示器件无法对齐眼睛的情况,所以该VR眼镜经过 IPD调节(两个显示器件调整相同距离)之后,仍然存在至少一个显示器件无法对齐眼睛的情况。因此,对于有组装偏差的VR眼镜,在经过IPD调节后仍然可以使用于本申请的技术方案。
总之,对于任何能够导致W1点与S1点无法对齐和/或W2点与S2点无法对齐的场景,本申请实施例提供的技术方案都可以适用。
下面先以情况1(左眼眼球中心W1点与中心点S1点能够对齐,但右眼眼球中心W2与中心点S2无法对齐)为例进行介绍。
示例性的,请参见图6A,用户佩戴VR眼镜时,左眼眼球中心W1对齐第二显示器件120的中心点S1,即中心点S1点与左眼眼球中心W1点处于同一直线K5上。直线K5是经过左眼眼球中心W1且垂直于左眼眼球的线。右眼眼球中心W2无法对齐第一显示器件110的中心点S2。比如,右眼眼球中心W2点对齐第一显示器件110上的S2’点(S2’点在S2点右侧距离N处),即,W2点与S2’点处于同一直线K6’上。直线K6’是经过右眼眼球中心W2且垂直于右眼眼球的直线。
继续以通过图6A所示的VR眼镜向用户展示的虚拟环境为图4B所示的环境400为例,即,VR眼镜的两个显示器件上分别显示一张图像,这两张图像中包括环境400中的各个对象(比如树410、足球420、狗430的像),如此,用户才能通过这两张图像感受到环境400。
示例性的,如图6B,VR眼镜生成两张图像,即图像6100和图像6200。其中,图像6100和图像6200上包括重叠区域。图像6100上的重叠区域为区域6110,图像6200上的重叠区域为区域6210,区域6110中的对象都包含在区域6210内,且区域6210中的对象也都包含在区域6110内。图像6100在第二显示器件120上显示。图像6200在第一显示器件110上显示。其中,当第二显示器件120显示图像6100时,图像6100的中心点R3对齐第二显示器件120的中心点S1,即中心点R3与中心点S1处于同一直线K5上。当第一显示器件110显示图像6200时,图像6200的中心点R4对齐第一显示器件110的中心点S2,即中心点R4与中心点S2处于同一直线K6上。直线K6是经过第一显示器件110中心点S2且与第一显示器件110垂直的线。其中,直线K6与直线K6’是不同直线,二者之间的距离为N。其中,图像6100上重叠区域6110的中心点P5到直线K5的距离为L5。图像6200上重叠区域6210的中心点P6到直线K6的距离为L6。距离L5等于距离L6,而且,中心点P5到直线K5的方向与中心点P6到直线K6的方向相反。
用户佩戴VR眼镜时,左眼眼球中心W1对齐第二显示器件120的中心点S1。左眼采集到图像6300。图像6300上包括中心点T3点。T3点是图像6100上中心点R3对应的像点。也就是说,R3点、S1点、W1点、T3点处于同一直线K5上。右眼眼球中心W2对齐第一显示器件110上的点S2’。S2’对齐图像6200上的R4’点。R4’点在中心点R4点右侧距离N处。右眼采集到图像6400。图像6400上包括中心点T4点。T4点是图像6200上R4’点对应的像点。也就是说,R4’点、S2’点、W2点、T4点处于同一直线K6’上。
需要说明的是，由于右眼眼球中心W2无法对齐第一显示器件110的中心点S2，导致无法对齐图像6200的中心点R4，而是对齐R4点右侧距离N处的R4’点。可以理解为，图像6200随着第一显示器件110向左偏移了距离N，使得右眼无法对齐图像6200的中心点R4，而是对齐R4’点。这样，图像6200上的部分区域会移出右眼视线范围内。比如图像6200上左侧区域6201（阴影区域）移出右眼的视线范围内。所以右眼采集的图像6400上不包括这部分。由于右眼的视场角大小不变（比如110度），所以即便区域6201移出右眼视线范围，右眼采集的图像尺寸大小仍然不变。比如，图像6400上包括右侧区域6430（阴影区域），该区域6430不是图像6200的像，但是属于右眼采集的图像上的一部分。示例性的，区域6430中没有任何物体的像，比如是黑色区域。
可以理解的是,区域6201(阴影区域)移出右眼的视线范围内,会使得用户无法获得舒适的视野,而且用户所观察到的视野会变小。
请对比图6B与图5理解。图5中用户佩戴VR眼镜时，左眼眼球中心W1与第二显示器件120的中心点S1对齐，而且右眼眼球中心W2与第一显示器件110的中心点S2对齐的情况下，用户左眼看到图像5300，右眼看到图像5400，大脑可以将重叠区域5310与重叠区域5410融合，所以能够看清楚环境400，并且人眼是舒适的。图6B中，用户佩戴VR眼镜时，左眼眼球中心W1点与第二显示器件120的中心点S1点对齐，但是右眼眼球中心W2点与第一显示器件110的中心点S2点无法对齐的情况下，左眼看到图像6300，右眼看到图像6400，大脑无法将重叠区域6310与重叠区域6410融合。这是因为，图像6400中重叠区域6410比图5中图像5400上重叠区域5410缺失了部分内容，导致重叠区域6310与重叠区域6410无法完全融合。由于重叠区域无法完全融合，所以用户无法看清楚环境400，此时，大脑本能地会控制右眼肌肉运动以使右眼眼球向左转动试图对准图像6200的中心点R4，这样会导致左眼向正前方看，右眼向左方看，双眼视线方向不一致会产生眩晕感，体验较差。
图6A和图6B是以情况1(即W1点与S1点能够对齐,但W2与S2无法对齐)为例,可以理解的是,对于情况2(左眼眼球中心W1点与中心点S1点无法对齐,但右眼眼球中心W2点与中心点S2点能够对齐)是相同原理,不重复赘述。
下面以情况3(左眼眼球中心W1点与中心点S1点无法对齐,且右眼眼球中心W2与中心点S2也无法对齐)为例进行介绍。
示例性的,请参见图7A,用户佩戴VR眼镜时,第二显示器件120的中心点S1与第一显示器件110的中心点S2之间的距离B1小于左眼眼球中心W1与右眼眼球中心W2之间的距离B2,其中,左眼眼球中心W1与右眼眼球中心W2之间的距离B2还可以理解为瞳孔之间的距离,也称为瞳距(Inter Pupillary Distance,IPD)。即,如图7A所示,左眼眼球中心W1与中心点S1无法对齐,右眼眼球中心W2与中心点S2也无法对齐。
比如，左眼眼球中心W1对齐显示器件上的S1’点（S1’点在S1点左侧距离N1处），即，左眼眼球中心W1与S1’点处于同一直线K7上，直线K7是经过左眼眼球中心W1且与左眼眼球垂直的线。右眼眼球中心W2对齐显示器件上的S2’点（S2’点在S2点右侧距离N2处），即，右眼眼球中心W2与S2’点处于同一直线K8上，直线K8是经过右眼眼球中心W2且与右眼眼球垂直的线。其中，S1’点到S1点的距离N1加上S2’点到S2点的距离N2等于B1与B2之间的距离差。距离N1可以等于或不等于距离N2。在一些实施例中，由于组装偏差等原因（例如显示器件组装偏差），距离N1可以不等于距离N2。
下文以距离N2大于距离N1为例进行介绍。
以通过图7A所示的VR眼镜向用户展示的虚拟环境为图4B所示的环境400为例,即, VR眼镜的两个显示器件上分别显示一张图像,这两张图像中包括环境400中的各个对象(比如树410、足球420、狗430)的像,如此,用户才能通过这两张图像感受到环境400,以模拟真实的环境。
示例性的，请参见图7B，VR眼镜生成两张图像，即图像7100和图像7200。其中，图像7100和图像7200上包括重叠区域。图像7100上的重叠区域为区域7110，图像7200上的重叠区域为区域7210。其中，重叠区域7110的中心点P7到图像7100的中心点R5的距离L7等于重叠区域7210的中心点P8到图像7200的中心点R6的距离L8，而且P7点到R5点的方向与P8点到R6点的方向相反。
图像7100在第二显示器件120上显示。图像7200在第一显示器件110上显示。其中,当第二显示器件120显示图像7100时,第二显示器件120的中心点S1对齐图像7100的中心点R5,即中心点R5与中心点S1处于同一直线K7’上。其中,直线K7’与直线K7是不同直线,两者之间的距离为N1。当第一显示器件110显示图像7200时,第一显示器件110的中心点S2对齐图像7200的中心点R6,即中心点R6与中心点S2处于同一直线K8’上。其中,直线K8’与直线K8是不同直线,两者之间的距离为N2。N2大于N1。
用户佩戴VR眼镜时,左眼眼球中心W1对齐显示器件上的S1’点。S1’点对齐图像7100上的R5’点。R5’点在中心点R5左侧距离N1处。左眼采集到图像7300。图像7300上包括中心点T5。T5点是图像7100上R5’点对应的像点。也就是说,R5’点、S1’点、W1点、T5点处于同一直线K7上。
用户佩戴VR眼镜时,右眼眼球中心W2对齐第一显示器件110上的S2’点。S2’点对齐图像7200上的R6’点。R6’点在图像7200的中心点R6点右侧距离N2处。右眼采集到图像7400。图像7400上包括中心点T6。T6点是图像7200上R6’对应的像点。也就是说,R6’点、S2’点、W2点、T6点处于同一直线K8上。
请对比图像7100与图像7300。由于图像7100随着第二显示器件120一起相较于对应的光学器件140向右偏移距离N1，使得左眼无法对齐图像7100的中心点R5，而是对齐中心点R5左侧距离N1处的R5’点。这样，图像7100上右侧区域7101（阴影区域）会移出左眼的视线范围。比如，右侧区域7101中包括小树的像7102（可以理解为对象7102），该像7102移出左眼的视线范围，所以左眼采集的图像7300上不包括该小树的像7102。由于左眼的视场角大小不变（比如110度），所以即便区域7101移出左眼视线范围，左眼采集的图像尺寸大小仍然不变。比如，图像7300上包括左侧区域7330（阴影区域），该区域7330不是图像7100的像，但是属于左眼采集的图像上的一部分。示例性的，区域7330中没有任何物体的像，比如是黑色区域。可以理解的是，区域7101（阴影区域）移出左眼的视线范围内，会使得用户无法获得舒适的视野，而且用户所观察到的视野会变小，即在对图像7300和图像7400进行融合后，无法看到小树的像7102，影响VR体验。
请对比图像7200与图像7400。由于图像7200随着第一显示器件110一起相较于对应的光学器件130向左偏移距离N2，使得右眼无法对齐图像7200的中心点R6，而是对齐中心点R6右侧距离N2处的R6’点。所以图像7200上左侧区域7201（阴影区域）会移出右眼的视线范围。比如，左侧区域7201内包括树的像7202的一部分（如，左侧部分），这部分会移出右眼的视线范围，所以右眼采集的图像7400上只包括树的像7202未处于区域7201的部分。由于右眼的视场角大小不变（比如110度），所以即便区域7201移出右眼视线范围，右眼采集的图像尺寸大小仍然不变。比如，图像7400上包括右侧区域7430（阴影区域），该区域7430不是图像7200的像，但是属于右眼采集的图像上的一部分。示例性的，区域7430中没有任何物体的像，比如是黑色区域。可以理解的是，区域7201（阴影区域）移出右眼的视线范围内，会使得用户无法获得舒适的视野，而且用户所观察到的视野会变小，影响VR体验。
以N2大于N1为例,图像7200上移出右眼视线范围的区域7201的宽度比图像7100上移出左眼视线范围的区域7101的宽度大。对应的,右眼采集的图像7400上区域7430的宽度比左眼采集的图像7300上区域7330的宽度大。
图7B中，大脑无法融合图像7300和图像7400上的重叠区域。因为，图像7300上重叠区域7310与图像7400上重叠区域7410包含的对象不完全相同。比如，重叠区域7410上只包括树的像的一半。所以重叠区域7310与重叠区域7410无法完全融合。由于重叠区域7310与重叠区域7410无法完全融合，所以用户无法看清楚环境400，此时，大脑本能地会控制左眼肌肉运动带动左眼眼球向右转动试图对准图像7100的中心R5，还会控制右眼肌肉运动以使右眼眼球向左转动试图对准图像7200的中心R6，这样会导致左眼朝右看，右眼朝左看，双眼视线不同会产生眩晕感。并且用户所观察到的视野变小，在对图像7300和图像7400进行融合后，无法看到小树的像7102。
以上列举了导致用户双目不融合的三种情况,即情况1至情况3。为了解决双目不融合或视野差等问题,图8A和图8B示出了一种可能的实施方式。
如图8A,VR眼镜的第一显示器件110包括两个区域,区域1和区域2。区域1可以是中心区域,区域2可以是边缘区域(阴影区域),区域2围绕区域1。也就是说,区域1的中心点可以与第一显示器件110的中心点重叠,该中心点为S2。其中,区域1的面积可以是预先设置好的,区域2的内边缘到区域1的外边缘之间的距离N4可以是预设的。第二显示器件120上包括两个区域,区域3和区域4。区域3可以是中心区域,区域4可以是边缘区域(阴影区域)。区域4围绕区域3。也就是说,区域3的中心点可以与第二显示器件120的中心点重叠,该中心点为S1。其中,区域3的面积可以是预先设置好的,区域4的内边缘到区域3的外边缘之间的距离N5可以是预设的。
假设当用户佩戴VR眼镜时,左眼眼球中心W1对齐第二显示器件120的中心点S1。第二显示器件120上区域3中显示图像8100。区域4中不显示任何内容(比如,区域4处于黑屏状态)。即,左眼眼球中心W1可以对齐图像8100的中心点S1。左眼采集到图像8300。当用户佩戴VR眼镜时,右眼眼球中心W2对齐第一显示器件110的中心点S2。第一显示器件110上区域1中显示图像8200。区域2中不显示任何内容(比如,区域2处于黑屏状态)。即,W2点可以对齐图像8200的中心点S2。右眼采集到图像8400。由于W1点与S1点对齐,而且W2点与S2点对齐,所以图像8300和图像8400可以融合,不会出现眩晕感。
图8A是左眼眼球中心W1与第二显示器件120的中心点S1对齐,右眼眼球中心W2与第一显示器件110的中心点S2对齐的情况,在一些实施例中,图8A所示的VR眼镜可能会出现前文所述的三种情况中的某一种,导致左眼眼球中心W1与第二显示器件120的中心点S1无法对齐,和/或,右眼眼球中心W2与第一显示器件110的中心点S2无法对齐,当出现该三种情况中的一种时,可以使用图8B的解决方式。为了方便描述,以情况1(左眼眼球中心W1与第二显示器件120的中心点S1对齐,但右眼眼球中心W2与第一显示器件110的中心点S2无法对齐)为例进行介绍。
如图8B,用户右眼眼球中心W2无法对齐第一显示器件110的中心点S2。比如,右眼眼球中心W2对齐第一显示器件110上的S2’点。S2’点在S2点左侧距离N6处。此时,不在第一显示器件110上的区域1内显示图像8200,而是在第一显示器件110上的区域5内显示图像8200。区域5的中心点是S2’点,这样区域5显示图像8200时,图像8200的中心点就是S2’点,所以W2点可以对齐图像8200的中心点。因此,右眼采集的图像8400能够与左眼采集的图像8300融合。
因此,通过在显示器件上预留显示区域的方式,可以解决W1点与S1点无法对齐和/或W2点与S2点无法对齐的问题。
图9示出解决双目不融合的另一种可实施方式。这种方式不需要在显示器件上预留显示区域，可以通过图像处理方式解决双目不融合问题，更有利于减小显示器件的尺寸，实现设备的小型化和轻便化。
继续以情况1(即图6A的情况)为例,请参见图9,左眼眼球中心W1点可以对齐第二显示器件120的中心点S1,右眼眼球中心W2无法对齐第一显示器件110的中心点S2,而是对齐显示器件上的S2’点(S2’点在S2点右侧距离N处)。其中,第一显示器件110和第二显示器件120相对于中间平面(或人脸的中心线)不对称。
以图9的VR眼镜向用户展示环境400为例,VR眼镜产生两张图像,即图像9100和图像9200。其中,第二显示器件120上全屏显示图像9100。第一显示器件110上全屏显示图像9200。即,显示器件上不需要预留显示区域,这样显示器件尺寸可以相对较小,可以节省成本,而且,显示器件尺寸小有利于设备轻小型的设计趋势。在第二显示器件120显示图像9100时,第二显示器件120的中心点S1对齐图像9100的中心点R9,即S1点与R9点处于同一直线K9上。在第一显示器件110显示图像9200时,第一显示器件110的中心点S2对齐图像9200的中心点R10,即S2点与R10点处于同一直线K10上。
其中，图像9100和图像9200上有重叠区域。图像9100上的重叠区域为区域9110。图像9200上的重叠区域为区域9210。其中，重叠区域9110的中心点P9到图像9100的中心点R9的距离为L9，重叠区域9110的中心点P9到图像9100的中心点R9的第一方向为左，重叠区域9210的中心点P10到图像9200的中心点R10的距离为L10，重叠区域9210的中心点P10到图像9200的中心点R10的第二方向为右。距离L9不等于距离L10。其中，距离L9与距离L10之间的距离差为N。需要说明的是，图9中以图像9100的中心点R9和图像9200的中心点R10为基准，在另一些实施例中，还可以以图像9100上其它点（比如左顶点）和图像9200上其它点（比如右顶点）为基准，即重叠区域9110的中心点P9到图像9100的左顶点的距离，不等于重叠区域9210的中心点P10到图像9200的右顶点的距离。其中，中心点P9和中心点P10相对于中间平面（或人脸的中心线）对称。
如图9,当用户佩戴VR眼镜时,左眼眼球中心W1点可以对齐第二显示器件120的中心S1。左眼采集的是图像9300。图像9300上包括中心点T9。T9点是图像9100上中心点R9对应的像点。即,R9点、S1点、W1点、T9点处于同一直线K9上。
如图9,右眼眼球中心W2对齐显示器件上的S2’点(S2’点在S2点右侧距离N处)。S2’点对齐图像9200上的R10’点,R10’点位于中心点R10右侧距离N处。右眼采集到图像9400。图像9400上包括中心点T10。T10点是图像9200上R10’点对应的像点。即T10点、W2点、S2’点、R10’点处于同一直线K10’上。其中,直线K10’与直线K10不同, 二者距离为N。
其中,图像9300上重叠区域9310中包括中心点P9’。P9’点是图像9100上P9点对应的像点。P9’点到直线K9的距离为L9’。图像9400上重叠区域9410中包括中心点P10’,P10’点是图像9200上P10点对应的像点。P10’点到直线K10’的距离为L10’。距离L9’等于L10’。而且,P9’点到直线K9的方向与P10’点到直线K10’的方向相反,中心点P9’和中心点P10’相对于中间平面(或人脸的中心线)对称。
虽然图9中第一显示器件110相较于对应的光学器件130向左偏移了距离N,但是由于重叠区域9210右移了距离N,补偿(或可以称为抵消)了第一显示器件110的偏移,所以右眼采集的图像上的重叠区域9410与左眼采集的图像上重叠区域9310可以融合。
当重叠区域9210右移了距离N后,第一显示器件110的最左侧会留出一个区域9230,区域9230的宽度为N。在一些实施例中,区域9230显示重叠区域9110左侧的部分图像,例如,在图像9100中,靠近重叠区域9110左侧宽度为N的区域显示背景对象(比如背景是蓝天和白云),则区域9230也显示背景对象(比如背景是蓝天和白云)。则区域9230和重叠区域9110左侧的部分区域的对象相同,这两个区域也可以理解为重叠区域。在一些实施例中,区域9230内包括新的对象,该对象在图像9100上没有。比如,如果第一显示器件110相较于对应的光学器件130向上偏移了距离N,则重叠区域9210向下移动距离N,区域9230会出现在第一显示器件110上面部分。由于图像9200是一张全景图像上的一个图像块,该区域9230内的对象可以是该全景图像块中位于重叠区域9210上方区域内的对象(在图像9100内没有的对象)。在一些实施例中,区域9230可以显示第一颜色,第一颜色的类型不作限定,比如,黑色、白色等等。可以理解的是,随着第一显示器件110相较于对应的光学器件(130)向左偏移距离N,区域9230可能会移出右眼视线范围内,所以区域9230也可以不显示任何内容,比如,区域9230可以不上电,即区域9230黑屏,可以节约耗电。需要说明的是,当区域9230移出右眼视线范围时,由于右眼视场角大小不变,所以右眼采集的图像9400上包括右侧区域9430(阴影区域),区域9430不是图像9200的像,比如是黑色区域,代表这部分没有任何物体的像展示。
需要说明的是,图9以情况1为例进行介绍,下面以情况3(即图7A)为例进行介绍。
如图10,以第二显示器件120相对于对应的光学器件140向右偏移距离N1,第一显示器件110相对于对应的光学器件130向左偏移距离N2为例,且N1小于N2。因此,当用户佩戴VR眼镜时,左眼眼球中心W1点无法对齐第二显示器件120的中心S1,而是对齐第二显示器件120上的S1’点,S1’点在S1点左侧距离N1处。右眼眼球中心W2无法对齐第一显示器件110的中心点S2,而是对齐显示器件上的S2’点,S2’点在S2点右侧距离N2处。其中,第一显示器件110和第二显示器件120相对于中间平面(或人脸的中心线)不对称。
以图10的VR眼镜向用户展示环境400为例,VR眼镜产生两张图像,即图像1000和图像1100。其中,第二显示器件120上全屏显示图像1000。第一显示器件110上全屏显示图像1100。即,显示器件上不需要预留显示区域,这样显示器件尺寸可以相对较小,可以节省成本,而且有利于设备轻小型的设计趋势。在第二显示器件120显示图像1000时,第二显示器件120的中心点S1对齐图像1000的中心点R11,即S1与R11处于同一直线K11上。在第一显示器件110显示图像1100时,第一显示器件110的中心点S2对齐图像1100的中心点R12,即S2与R12处于同一直线K12上。
其中,图像1000和图像1100上有重叠区域。图像1000上的重叠区域为区域1010,在一些实施例中,区域1030也可以是重叠区域,区域1030与区域1110右侧的部分图像重叠。在一些实施例中,为了省电,区域1030也可以不显示任何内容。在图10中,以区域1030也可以不显示任何内容为例进行说明。图像1100上的重叠区域为区域1110,在一些实施例中,区域1130也可以是重叠区域,区域1130与区域1010左侧的部分图像重叠。在一些实施例中,为了省电,区域1130也可以不显示任何内容。在图10中,以区域1130也可以不显示任何内容为例进行说明。
以下，以重叠区域为区域1010和区域1110为例进行说明。重叠区域1010的中心点P11到图像1000的中心点R11的距离为L11，重叠区域1010的中心点P11到图像1000的中心点R11的第一方向为左。重叠区域1110的中心点P12到图像1100的中心点R12的距离为L12，重叠区域1110的中心点P12到图像1100的中心点R12的第二方向为右。由于N1不等于N2，所以距离L11不等于距离L12。以N1小于N2为例，则距离L11大于距离L12。其中距离L11与距离L12之间的距离差等于N1与N2之间的距离差。在另一些实施例中，P11点到R11点的方向不同于P12到R12点的方向，比如，方向相反。其中，中心点P11和中心点P12相对于中间平面（或人脸的中心线）对称。
如图10,当用户佩戴VR眼镜时,左眼眼球中心W1点对齐第二显示器件120的S1’点。S1’点对齐图像1000上的R11’点,R11’点位于中心点R11左侧距离N1处。左眼采集到图像1200。图像1200上包括中心点T11。T11点是图像1000上R11’点对应的像点。即T11点、W1点、S1’点、R11’点处于同一直线K11’上。其中,直线K11’与直线K11不同,二者距离为N1。
如图10,右眼眼球中心W2对齐第一显示器件110上的S2’点。S2’点对齐图像1100上的R12’点,R12’点位于中心点R12右侧距离N2处。右眼采集到图像1300。图像1300上包括中心点T12。T12点是图像1100上R12’点对应的像点。即T12点、W2点、S2’点、R12’点处于同一直线K12’上。其中,直线K12’与直线K12不同,二者距离为N2。
其中,左眼采集的图像1200上重叠区域1210中包括中心点P11’。P11’点是图像1000上P11点对应的像点。P11’点到直线K11’的距离为L11’。右眼采集的图像1300上重叠区域1310中包括中心点P12’,P12’点是图像1100上P12点对应的像点。P12’点到直线K12’的距离为L12’。距离L11’等于L12’。而且,P11’点到直线K11’的方向与P12’点到直线K12’的方向相反,中心点P11’和中心点P12’相对于中间平面(或人脸的中心线)对称。
也就是说，虽然图10中第二显示器件120相对于对应的光学器件140向右偏移了距离N1，但是重叠区域1010向左移动了距离N1，补偿（或抵消）了第二显示器件120的偏移。同理，第一显示器件110相对于对应的光学器件130向左偏移了距离N2，但是重叠区域1110向右移动了距离N2，补偿（或抵消）了第一显示器件110的偏移，所以重叠区域1210与重叠区域1310可以融合。
继续参见图10,图像1000上的非重叠区域包括两个区域,即区域1030和区域1040。重叠区域1010位于区域1030和区域1040之间。其中,区域1030中可以显示第二颜色,第二颜色的类型不作限定,比如,黑色、白色等等,或者还可以是图像1000的背景色。图像1100上的非重叠区域包括两个区域,即区域1130和区域1140。重叠区域1110位于区域1130和区域1140之间。其中,区域1130中可以显示第一颜色,第一颜色的类型不作 限定,比如,黑色、白色等等,或者还可以是图像1100的背景色。第一颜色与第二颜色可以相同或不同。
需要说明的是,由于N1不等于N2,所以图像1000上非重叠区域1030的面积(或宽度)与图像1100上非重叠区域1130的面积(或宽度)不同。以N1小于N2为例,则区域1030的宽度小于区域1130的宽度。同理,图像1000上非重叠区域1040的面积(或宽度)与图像1100上非重叠区域1140的面积(或宽度)不同。以N1小于N2为例,则区域1140的宽度小于区域1040的宽度。
在一些实施例中,图像1000和图像1100分别是同一张全景图像上的不同区域内的图像块,比如图像1000是全景图像上位于第一显示区域内的图像块;图像1100是全景图像上位于第二显示区域内的图像块;其中,重叠区域为该第一显示区域和该第二显示区域的重叠区域。其中,区域1030可以是全景图像上位于区域1010右侧的区域。区域1130可以是全景图像上位于区域1110左侧的区域。
可以理解的是,随着第一显示器件110左移距离N2,区域1130会移出右眼视线范围内,所以区域1130也可以不显示任何内容,比如,不显示任何颜色。需要说明的是,当区域1130移出右眼视线范围时,由于右眼视场角大小不变,所以右眼采集的图像1300上包括右侧区域1330(阴影区域),区域1330不是图像1100的像,比如是黑色区域,代表这部分没有任何物体的像展示。同理,随着第二显示器件120相对于对应的光学器件140向右偏移距离N1,区域1030会移出左眼视线范围内,所以区域1030也可以不显示任何内容,比如,不显示任何颜色。需要说明的是,当区域1030移出左眼视线范围时,由于左眼视场角大小不变,所以左眼采集的图像1200上包括左侧区域1230(阴影区域),区域1230不是图像1000的像,比如是黑色区域,代表这部分没有任何物体的像展示。
可以理解的是,第一显示器件110和/或第二显示器件120的位置可以动态变化。以图10为例,在一些实施例中,重叠区域1010的中心点P11的位置可以随着第二显示器件120的位置移动而移动,其中,中心点P11的位置移动方向与第二显示器件120的移动方向相反,以补偿或抵消第二显示器件120的位置移动。同理,重叠区域1110的中心点P12的位置可以随着第一显示器件110的位置移动而移动,其中,中心点P12的位置移动方向与第一显示器件110的位置移动方向相反,以补偿或抵消第一显示器件110的位置移动。
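为便于理解上述“重叠区域中心点的移动方向与显示器件的移动方向相反，以补偿或抵消显示器件的位置移动”的思路，下面给出一段示意性的Python代码。该代码仅是基于上述文字描述的一个简化示意，其中的函数名、坐标约定均为便于说明而假设的，并非本申请限定的实现方式。

```python
# 示意性代码：根据显示器件相对于对应光学器件的位置偏移，
# 反向移动该显示器件所显示图像中重叠区域的中心点，以补偿（抵消）偏移。
# 坐标以像素为单位，约定向右为X正方向、向上为Y正方向（该约定为假设）。

def compensated_overlap_center(base_center, display_offset):
    """base_center: 显示器件无偏移时重叠区域中心点的坐标 (x, y)；
    display_offset: 显示器件相对于对应光学器件的偏移量 (dx, dy)；
    返回补偿后重叠区域中心点应当位于的坐标。"""
    bx, by = base_center
    dx, dy = display_offset
    # 中心点的移动方向与显示器件的移动方向相反，移动距离相同
    return (bx - dx, by - dy)

# 示例：第一显示器件110向左偏移N2个像素（dx = -N2），
# 则其所显示图像上重叠区域的中心点向右移动N2个像素
N2 = 12
print(compensated_overlap_center((960, 540), (-N2, 0)))  # 输出 (972, 540)
```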
如前文所述，具有组装偏差的VR眼镜在进行IPD调节时，仍然存在组装偏差。以图10为例，假设对该VR眼镜作IPD调节（即第一显示器件110和第二显示器件120移动相同距离且移动方向相反，例如，第一显示器件110和第二显示器件120分别向左移和右移，或第一显示器件110和第二显示器件120分别向右移和左移）。在IPD调节前后，第一显示器件110和第二显示器件120组装偏差的相对位置关系不变（即偏移量之差不变），故第一显示器件110和第二显示器件120分别显示的两个图像的位置移动关系不变，以保证在IPD调节前后双眼均能实现双目融合。例如，VR眼镜在显示第一图像和第二图像之前进行了IPD调节，与未进行IPD调节相比，距离L11与距离L12之间的距离差保持不变，且第一方向和第二方向之间的相对关系相较于IPD调节之前保持不变。在一些实施例中，当第一显示器件110和第二显示器件120分别向左移和右移，或第一显示器件110和第二显示器件120分别向右移和左移时，第一方向和第二方向相较于IPD调节之前均保持不变。
在一些实施例中,VR眼镜在显示第一图像和第二图像之前进行IPD调节,在IPD调节之前,当第一显示器件110和第二显示器件120分别显示图像时,第二显示器件120显 示的图像上重叠区域的中心点到第二显示器件120显示图像的中心点的距离为第三距离,第一显示器件110显示的图像上重叠区域的中心点到第一显示器件110显示图像的中心点的距离为第四距离,第三距离与第四距离之差为第一距离差;在经过IPD调节之后,第一显示器件110和第二显示器件120分别显示第一图像和第二图像,第一图像上重叠区域的中心点到第一图像的中心点的距离为距离L11,第二图像上重叠区域的中心点到第二图像的中心点的距离为距离L12,距离L11与距离L12之差为第二距离差,其中,第一距离差等于第二距离差。
在一些实施例中,VR眼镜在显示第一图像和第二图像之后进行IPD调节,在IPD调节之前,第一显示器件110和第二显示器件120分别显示第一图像和第二图像,第一图像上重叠区域的中心点到第一图像的中心点的距离为距离L11,第二图像上重叠区域的中心点到第二图像的中心点的距离为距离L12,距离L11与距离L12之差为第二距离差;在经过IPD调节之后,当第一显示器件110和第二显示器件120分别显示图像时,第二显示器件120显示的图像上重叠区域的中心点到第二显示器件120显示图像的中心点的距离为第五距离,第一显示器件110显示的图像上重叠区域的中心点到第一显示器件110显示图像的中心点的距离为第六距离,第五距离与第六距离之差为第三距离差;其中,第三距离差等于第二距离差。
示例性的,图11为本说明书一实施例提供的显示方法的一种流程示意图,该方法可以应用于上述任一显示方法,例如可以应用于图9或图10所示的显示方法。如图11所示,该流程包括:
S1101,VR眼镜获取三维图像数据。
其中,三维图像数据中包括二维图像信息以及深度信息。深度信息包括二维图像信息中每个像素点所对应的深度。三维图像数据可以是VR应用生成的,该VR应用比如VR游戏应用、VR教学应用、VR观影应用、VR驾驶应用,等等。
S1102,VR眼镜获取第一坐标转换矩阵和第二坐标转换矩阵。第一坐标转换矩阵是用于将三维图像数据转换为第一平面图像,第二坐标转换矩阵是用于将三维图像数据转换为第二平面图像。第一平面图像对应第一显示器件,第二平面图像对应第二显示器件。
其中,第一显示器件可以是前文中第二显示器件120,对应左眼,第二显示器件可以是前文中第一显示器件110,对应右眼。
第一坐标转换矩阵用于将三维图像数据从第一坐标系转换为第二坐标系,第一坐标系是三维图像数据所在坐标系,第二坐标系是第一显示器件或者左眼对应的坐标系。其中,左眼对应的坐标系可以是第一虚拟摄像头的坐标系。第一虚拟摄像头可以理解为模拟左眼而创建的虚拟摄像头。因为人眼的图像采集原理与摄像头的图像拍摄原理类似,所以可以创建虚拟摄像头以模拟人眼的图像采集过程。第一虚拟摄像头是模拟人的左眼。比如,第一虚拟摄像头的位置与左眼位置相同,和/或,第一虚拟摄像头的视场角与左眼的视场角相同。示例性的,一般来说,人眼的视角上下110度,左右110度,那么第一虚拟摄像头的视场角上下110度,左右110度。再比如,VR眼镜可以确定左眼位置,第一虚拟摄像头设置在左眼位置处。其中,确定左眼位置的方式有多种。比如,方式1、先确定第一显示器件所在位置,然后,第一显示器件所在位置加上间距A可以估算出左眼所在位置。这种方式确定的左眼位置比较准确。其中,间距A是显示器件与人眼之间的间距,可以是事先 存储好的。方式2、左眼位置等于第一显示器件所在位置。这种方式忽略了人眼与显示器件之间的间距,实现难度较低。
第二坐标转换矩阵用于将三维图像数据从第一坐标系转换为第三坐标系，第一坐标系是三维图像数据所在坐标系，第三坐标系是第二显示器件或者右眼对应的坐标系。其中，右眼对应的坐标系可以是第二虚拟摄像头的坐标系。该第二虚拟摄像头可以理解为模拟用户右眼而创建的虚拟摄像头。比如，第二虚拟摄像头的位置与右眼位置相同，和/或，第二虚拟摄像头的视场角与右眼的视场角相同。
示例性的,第一坐标转换矩阵和第二坐标转换矩阵可以是事先存在VR眼镜中的。比如,存储在寄存器中,VR眼镜从寄存器中读取第一坐标转换矩阵和第二坐标转换矩阵。
S1103,VR眼镜根据第一坐标转换矩阵对三维图像数据处理得到第一平面图像,根据第二坐标转换矩阵对三维图像数据处理得到第二平面图像。
如前文所述，第一平面图像和第二平面图像均是由三维图像数据经过坐标转换而来。该坐标转换过程可以理解为使用虚拟摄像头拍摄三维图像，完成三维到二维的转换。比如，通过第一虚拟摄像头拍摄三维图像数据，得到第一平面图像，通过第二虚拟摄像头拍摄三维图像数据得到第二平面图像。以第一虚拟摄像头为例，请参见图12A，为第一虚拟摄像头的示意图。第一虚拟摄像头包括四个参数，如视场(Field Of View,FOV)角、实际拍摄窗口的长宽比、近裁剪面、远裁剪面。实际拍摄窗口的长宽比可以是远裁剪面的长宽比。其中，远裁剪面可以理解为第一虚拟摄像头能够拍摄到的最远范围，近裁剪面可以理解为第一虚拟摄像头能够拍摄的最近范围。可以理解的是，三维图像数据中处于FOV内，且在近裁剪面与远裁剪面之间的对象，可以被第一虚拟摄像头拍摄到。比如，三维图像数据中包括多个对象，例如球体1400、球体1401以及球体1402。其中，球体1400在FOV外，无法被拍摄到，球体1401和球体1402在FOV内且在近裁剪面与远裁剪面之间，可以被拍摄到。因此，第一虚拟摄像头拍摄的图像上包括球体1401和球体1402。第一虚拟摄像头拍摄的图像可以理解为三维图像数据在近裁剪面的投影图像。该投影图像上的各个像素点的坐标可以确定。比如，请参见图12B，以第一虚拟摄像头的中心建立三维坐标系O-XYZ。在X-Y平面内包括四个象限，即四个区域，分别是左上区域、右上区域、左下区域、右下区域。以三维图像数据中对象在近裁剪面的投影图像上的边缘点G1、边缘点G2、边缘点G3和边缘点G4为例。边缘点G1至边缘点G4对应的坐标为(l,r,t,b)。其中，l是左(left)，t是上(top)，r是右(right)，b是下(bottom)。比如，边缘点G1坐标为(3、0、3、0)，边缘点G2对应的坐标为(0、3、3、0)，边缘点G3对应的坐标为(3、0、0、3)，边缘点G4对应的坐标为(0、3、0、3)。四个边缘点的深度均为n。
因此,以第一平面图像为例,第一平面图像满足公式:A*H=K;其中,A是三维图像数据对应的矩阵,H是第一坐标转换矩阵,K是第一平面图像对应的矩阵。示例性的,A是(l,r,t,b),第一坐标转换矩阵H满足如下:
（此处为第一坐标转换矩阵H的公式附图 PCTCN2022127013-appb-000001）
其中，l表示左(left)、r表示右(right)、t表示高或上(top)、b表示底或下(bottom)，n为在z轴方向上近裁剪面的深度。f为在z轴方向上远裁剪面的深度。对于三维图像数据中像素点，可以使用上述第一坐标转换矩阵H向上、下、左、右四个方向进行偏移，得到在第一平面图像中的位置。由于A是一行四列，H是四行四列，所以得到的K是一行四列，即第一平面图像上像素点的位置用上、下、左、右四个方向的参数描述。
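上述第一坐标转换矩阵H的具体形式以上述公式附图为准。作为参考，下面给出计算机图形学中常见的一种由l、r、t、b、n、f构成的透视投影矩阵形式；该形式仅是一个示意性假设，用于说明这六个参数如何进入矩阵，并不代表本申请附图中矩阵的实际内容。

```latex
% 常见的透视投影矩阵形式（OpenGL约定），仅作示意性假设，非本申请附图公式的复现
H_{\text{persp}} =
\begin{pmatrix}
\dfrac{2n}{r-l} & 0 & \dfrac{r+l}{r-l} & 0 \\[1ex]
0 & \dfrac{2n}{t-b} & \dfrac{t+b}{t-b} & 0 \\[1ex]
0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n} \\[1ex]
0 & 0 & -1 & 0
\end{pmatrix}
```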
第二平面图像的获取原理与第一平面图像的获取原理相同,不重复赘述。
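针对上述“处于FOV内且位于近裁剪面与远裁剪面之间的对象才能被虚拟摄像头拍摄到”的判断过程，下面给出一段示意性的Python代码。代码假设虚拟摄像头位于坐标原点、朝向-Z方向，且水平与垂直FOV均为110度，这些假设仅用于说明，并非本申请限定的参数。

```python
import math

# 示意性代码：判断三维图像数据中的某个点是否落在虚拟摄像头的视锥体内，
# 即是否同时满足“处于FOV内”且“位于近裁剪面与远裁剪面之间”。

def in_frustum(point, fov_deg=110.0, near=0.1, far=100.0):
    x, y, z = point
    depth = -z                          # 假设摄像头朝向-Z方向，深度取-z
    if not (near <= depth <= far):      # 必须位于近裁剪面与远裁剪面之间
        return False
    half_extent = math.tan(math.radians(fov_deg / 2.0)) * depth
    return abs(x) <= half_extent and abs(y) <= half_extent  # 水平、垂直均处于FOV内

# 示例：类似球体1401、1402（可被拍摄到）与球体1400（FOV外，不可被拍摄到）的情形
print(in_frustum((1.0, 0.5, -5.0)))   # True
print(in_frustum((50.0, 0.0, -5.0)))  # False
```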
S1104,VR眼镜获取第一显示器件的第一偏移量,和/或,第二显示器件的第二偏移量。
示例性的，第一偏移量和第二偏移量可以相同或不同。比如，以图10为例，第一显示器件110的第一偏移量为第一显示器件110相对于对应的光学器件130向左偏移距离N2，第二显示器件120的第二偏移量为第二显示器件120相对于对应的光学器件140向右偏移距离N1。在一些实施例中，如果第一偏移量和第二偏移量的距离和方向均相等，比如第一显示器件110的第一偏移量为第一显示器件110相对于对应的光学器件130向左偏移距离N3，第二显示器件120的第二偏移量为第二显示器件120相对于对应的光学器件140向左偏移距离N3，则用户的双眼可以进行融合，此时，可以不对第一图像和第二图像进行坐标转换。这种情形下，可选地，可以将第一图像和第二图像坐标向右移动距离N3，可以使图像的中心出现在人眼的正前方，避免斜视。
在一些实现方式中,VR眼镜中事先存储第一偏移量和第二偏移量。比如,存储在寄存器中。VR眼镜从寄存器中读取第一偏移量和第二偏移量。示例性的,第一偏移量和第二偏移量可以是VR眼镜出厂之前就标定出并存储在VR眼镜中的。
一种可实现的标定方式为，VR眼镜组装完成之后，使用双目融合检测装置标定VR眼镜上显示器件的位置偏移量。示例性的，如图12C，双目融合检测装置中包括相机1401以及光学系统1402。VR眼镜的两个显示器件所显示的图像经过光学系统1402的一系列的反射被相机1401捕捉。比如，VR眼镜的第二显示器件120显示图像1，第一显示器件110显示图像2。图像1和图像2上都有十字（图像1上的十字为虚线，图像2上的十字为实线），而且，图像1上十字的交叉点O1到图像1的中心点O2的距离L01等于图像2上的十字的交叉点O3到图像2的中心点O4的距离L02，而且，交叉点O1到中心点O2的方向与交叉点O3到中心点O4的方向相反。经过光学系统1402的反射，相机1401捕捉到图像3。图像3上包括两个十字。其中，一个十字的交叉点为O1’，是图像1上交叉点O1对应的像点，另一个十字的交叉点为O3’，是图像2上交叉点O3对应的像点。需要说明的是，如果没有出现组装的位移偏差，由于图像1和图像2中十字是左右对称的（即距离L01等于距离L02，且交叉点O1到中心点O2的方向与交叉点O3到中心点O4的方向相反），所以图像3上应该是一个十字（图像1和图像2的十字融合）。但由于两个显示器件发生位置偏移（比如存在组装偏差），导致图像1和图像2上的十字无法融合，所以图像3中出现两个十字。比如，图像3上两个十字之间的间隔包括：在X方向上的间隔为x1，在Y轴上的间隔为y1。其中，两个十字之间的间隔可以用于确定第一显示器件的第一偏移量，以及第二显示器件的第二偏移量。
在一些实施例中，第一显示器件的第一偏移量为(x1/2,y1/2)，第二显示器件的第二偏移量为(-x1/2,-y1/2)。或者，第一显示器件的第一偏移量为(x1,y1)，第二显示器件的第二偏移量为0。或者，第一显示器件的第一偏移量为(x1/3,y1/3)，第二显示器件的第二偏移量为(-2x1/3,-2y1/3)。总之，第一显示器件和第二显示器件在X轴方向上的位移偏移总和为x1，在Y轴方向上的位移偏移总和是y1即可。需要说明的是，以第一显示器件的第一偏移量为(x1/2,y1/2)，第二显示器件的第二偏移量为(-x1/2,-y1/2)为例，表征第一显示器件向X轴正方向、Y轴正方向偏移，因为第一偏移量是正数；第二显示器件向X轴反方向、Y轴反方向偏移，因为第二偏移量是负数。
在一些实施例中，图像上物体的偏移量大小与显示器件的位置偏移量大小呈正比，为了避免图像上物体偏移量过大，显示器件的偏移量不宜太大或者两个显示器件的偏移量差值不宜过大，比如，第一显示器件的第一偏移量为(x1,y1)，第二显示器件的第二偏移量为0，会导致第一显示器件上图像中的物体偏移量过大或者两个显示器件所显示的图像上物体偏移量差值过大，所以两个显示器件对应的偏移量可以分摊补偿总平移量，比如第一显示器件的第一偏移量为(x1/2,y1/2)，第二显示器件的第二偏移量为(-x1/2,-y1/2)，以保证图像上物体偏移量不至于过大或者两个显示器件所显示的图像上物体偏移量差值不至于过大。
S1105，VR眼镜基于第一偏移量对第一平面图像处理得到第三平面图像，和/或，根据第二偏移量对第二平面图像处理得到第四平面图像。
继续以图12C为例，第一平面图像是图像1，第二平面图像是图像2。假设第二显示器件120的第一偏移量为(x1/2,y1/2)，第一显示器件110的第二偏移量为(-x1/2,-y1/2)。那么，如图13，图像1中十字的位置(A1,B1)按照(x1/2,y1/2)偏移到(A1’,B1’)，以补偿组装位置偏移，其中，A1’=A1+x1/2，B1’=B1+y1/2。同理，图像2中十字的位置(A2,B2)按照(-x1/2,-y1/2)偏移到(A2’,B2’)，以补偿组装位置偏移，其中，A2’=A2-x1/2，B2’=B2-y1/2。当第二显示器件120显示经过处理后的图像1，第一显示器件110显示经过处理后的图像2时，相机捕捉到的图像3中包括一个十字，即图像1和图像2上的十字线融合。
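结合上述标定与补偿过程，下面给出一段示意性的Python代码，演示“根据图像3中两个十字交叉点的间隔(x1,y1)计算两个显示器件的偏移量，并据此平移图像1、图像2中十字的位置”这一计算步骤。其中的函数名、数值均为说明而假设的，分摊方式对应上文对半分摊的例子。

```python
# 示意性代码：由相机捕捉的图像3中两个十字交叉点的间隔(x1, y1)，
# 计算两个显示器件各自的补偿偏移量，并平移图像1、图像2中十字的位置。

def split_offsets(x1, y1):
    """将总的位移偏差对半分摊到两个显示器件（对应上文的例子）。"""
    offset_display_120 = (x1 / 2.0, y1 / 2.0)    # 第二显示器件120（显示图像1）
    offset_display_110 = (-x1 / 2.0, -y1 / 2.0)  # 第一显示器件110（显示图像2）
    return offset_display_120, offset_display_110

def shift_point(point, offset):
    """按照偏移量平移图像上某一点（例如十字交叉点）的坐标。"""
    return (point[0] + offset[0], point[1] + offset[1])

# 示例：假设测得两个十字在X方向间隔6个像素、Y方向间隔4个像素
off_120, off_110 = split_offsets(6, 4)
print(shift_point((100, 200), off_120))  # 图像1中十字 (A1, B1) -> (A1', B1')
print(shift_point((100, 200), off_110))  # 图像2中十字 (A2, B2) -> (A2', B2')
```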
其中,基于第一偏移量对第一平面图像处理得到第三平面图像,可以包括方式1或方式2中的至少一种。
方式1,使用第一偏移量对第一平面图像上每个像素点进行平移,得到第三平面图像。也就是说,第一平面图像作整体(包括重叠区域和非重叠区域)移动。
方式2,使用第一偏移量对第一平面图像上重叠区域内的像素点进行平移,得到第三平面图像。该重叠区域是第一平面图像和第二平面图像上的重叠区域,重叠区域内包括至少一个相同对象。方式2考虑到非重叠区域内的像素点不需要进行融合,所以只对重叠区域内的像素点平移以保证双目融合,这种方式可以降低工作量,提升效率。
举例来说，以第一偏移量是(x1/2,y1/2)为例，第一平面图像上重叠区域内的第一点处于(X1,Y1)，该第一点在第三平面图像上对应的位置为(X2,Y2)，其中，X2=X1+x1/2，Y2=Y1+y1/2。其中，第一点可以是重叠区域内的任意一点，比如中心点，边缘点等。
其中，基于第二偏移量对第二平面图像处理得到第四平面图像，可以包括方式1或方式2中的至少一种。方式1，使用第二偏移量对第二平面图像上每个像素点进行平移，得到第四平面图像。也就是说，第二平面图像作整体移动。方式2，使用第二偏移量对第二平面图像上重叠区域内的像素点进行平移，得到第四平面图像。该重叠区域是第一平面图像和第二平面图像上的重叠区域，重叠区域内包括至少一个相同对象。举例来说，以第二偏移量是(-x1/2,-y1/2)为例，第二平面图像上重叠区域内的第二点处于(X3,Y3)，该第二点在第四平面图像上对应的位置为(X4,Y4)，其中，X4=X3-x1/2，Y4=Y3-y1/2。其中，第二点可以是重叠区域内的任意一点，比如中心点，边缘点等。
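针对上述方式2（仅对重叠区域内的像素点进行平移，重叠区域之外的像素保持不动），下面给出一段示意性的Python代码。其中图像用按[y][x]索引的二维数组表示、重叠区域用矩形范围表示、腾出的位置填充为0（如黑色），这些均是为说明而作的简化假设。

```python
# 示意性代码：仅对平面图像中重叠区域内的像素点进行平移（对应上文的方式2）。

def shift_overlap_region(image, overlap_box, offset, fill=0):
    """image: 二维像素数组（按[y][x]索引）；overlap_box: (x0, y0, x1, y1)；
    offset: (dx, dy) 偏移量，单位为像素；fill: 腾出位置的填充值。"""
    h, w = len(image), len(image[0])
    x0, y0, x1, y1 = overlap_box
    dx, dy = offset
    out = [row[:] for row in image]       # 非重叠区域原样保留
    # 先将原重叠区域清为填充值（对应上文中腾出的区域可以不显示内容的做法）
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = fill
    # 再将重叠区域内的每个像素搬移到平移后的位置
    for y in range(y0, y1):
        for x in range(x0, x1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                out[ny][nx] = image[y][x]
    return out
```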
在一些实施例中,重叠区域内可以包括多个对象,比如第一对象和第二对象。第一对象和第二对象的偏移量可以不同。
比如,以第一对象的第一特征点和第二对象的第二特征点为例进行说明。在第一平面图像上,第一对象的第一特征点的坐标为(X5,Y5),第二对象的第二特征点的坐标为(X6,Y6),其中,第一对象的第一特征点可以是第一对象的中心点或第一对象的某个顶点等,第二对象的第二特征点可以是第二对象的中心点或第二对象的某个顶点等。以第一特征点的第一偏移量为(x1/2,y1/2),第二特征点的第一偏移量为(x1/3,y1/3)为例,即第一特征点的第一偏移量大于第二特征点。这样,经过对第一平面图像处理后得到的第三平面图像,第三平面图像用于在第二显示器件120上显示。在一些实施例中,第三平面图像上第一对象的第一特征点的坐标为(X7,Y7),其中,X7=X5+x1/2,Y7=Y5+y1/2。在一些实施例中,第三平面图像上第二对象的第二特征点的坐标为(X8,Y8),其中,X8=X6+x1/3,Y8=Y6+y1/3。其中,坐标(X5,Y5)、坐标(X6,Y6)、坐标(X7,Y7)、坐标(X8,Y8)为同一个坐标系中的坐标,均是在第二显示器件120上的坐标。
再比如,在第二平面图像上,第一对象的第一特征点的坐标为(X9,Y9),第二对象的第二特征点的坐标为(X10,Y10)。第一特征点的第二偏移量为(x1/4,y1/4),第二特征点的第二偏移量为(x1/5,y1/5),即第一特征点的第二偏移量大于第二对象。这样,经过对第二平面图像处理后得到的第四平面图像,第四平面图像用于在第一显示器件110上显示。在一些实施例中,第四平面图像上第一对象的第一特征点的坐标为(X11,Y11),其中,X11=X9+x1/4,Y11=Y9+y1/4。在一些实施例中,第四平面图像上第二对象的第二特征点的坐标为(X12,Y12),其中,X12=X10+x1/5,Y12=Y10+y1/5,其中,坐标(X9,Y9)、坐标(X10,Y10)、坐标(X11,Y11)、坐标(X12,Y12)为同一个坐标系中的坐标,均是在第一显示器件110上的坐标。
假设第三平面图像上第一对象的第一特征点的坐标为(X7,Y7),第四平面图像上第一对象的第一特征点的坐标为(X11,Y11),则坐标(X7,Y7)和坐标(X11,Y11)之间的坐标差为(D1,D2),其中D1=X7-X11,D2=Y7-Y11。假设第三平面图像上第二对象的第二特征点的坐标为(X8,Y8),第四平面图像上第二对象的第二特征点的坐标为(X12,Y12),则坐标(X8,Y8)和坐标(X12,Y12)之间的坐标差为(D3,D4),其中D3=X8-X12,D4=Y8-Y12。其中,坐标差(D1,D2)和坐标差(D3,D4)不同,例如,D1>D3和/或D2>D4,因为第一对象的偏移量大于第二对象的偏移量,其中,第二对象的偏移量可以为0,即第二对象可以不进行偏移。
上面的例子中，以第一对象的偏移量大于第二对象的偏移量为例，在一些实施例中，在满足如下条件中的至少一种时，第一对象的偏移量大于第二对象的偏移量，或者只对第一对象进行偏移（条件列举之后给出一个示意性的判断示例），该条件包括：
条件1，该第一对象处于用户注视点所在区域内，该第二对象不处于该用户注视点所在区域内。例如，可以根据眼动追踪模组105获取的信息，得到用户的注视点。注视点所在区域可以是以注视点为中心的圆形区域或方形区域等。一般用户对注视点所在区域内的对象关注度高，对非注视点所在区域内的对象关注度低，所以非用户注视点所在区域内的对象偏移量小些甚至不偏移，不太会影响用户体验，而且可以节省计算的工作量。当用户的注视点变化时，不同对象的偏移量可以随之变化，以匹配注视点的变化，达到更佳的视觉效果。
条件2，该第一对象和该第二对象均处于该用户注视点所在区域内，且该第二对象比该第一对象靠近该用户注视点所在区域的边缘。一般用户对注视点所在区域内的中间位置的对象关注度高，边缘位置的对象的关注度低，所以边缘位置的对象偏移量小些，不太会影响用户体验，而且可以节省工作量。
条件3，该第一对象与该第一图像的中心之间的距离小于该第二对象与该第二图像中心之间的距离，在融合的图像中，第一对象比第二对象更靠近中心。一般用户对图像上的中间位置的对象关注度高，边缘位置的对象的关注度低，所以图像上边缘位置的对象偏移量小些，可以节省工作量，而且不太会影响用户体验。例如，在电子设备播放影片时，用户会更关注屏幕中心的内容，则位于图像上边缘位置的第二对象可以少偏移甚至不偏移。
条件4,该第一对象对应的用户交互次数大于该第二对象对应的用户交互次数。一般,交互次数多的对象是用户感兴趣的对象,交互次数少的对象不是用户感兴趣的对象,所以交互次数少的对象偏移量小点,可以节省工作量,而且不太会影响用户体验。例如,电子设备可以记录用户与各对象之间的交互次数,如果用户与第一对象的交互次数大于第一阈值,则可以一直对第一对象进行偏移。如果用户与第二对象的交互次数小于第二阈值,则只有当第二对象位于注视点所在区域时,再对第二对象进行偏移,或进行较多的偏移。
条件5，该第一对象是用户指定对象，该第二对象不是用户指定对象。例如，当用户更关注第一对象时，用户可以根据自己的需要，选择只对第一对象进行偏移，或者第一对象的偏移较大。
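结合上述条件1至条件5，下面给出前文提到的示意性判断示例（Python代码），用于决定某个对象采用完整偏移量还是按比例缩小的偏移量。其中的阈值、缩放系数、字段名均为说明而假设的，实际策略可以结合产品需求另行设计。

```python
import math

# 示意性代码：根据对象是否处于注视点所在区域、与图像中心的距离、
# 交互次数以及是否被用户指定，为对象选择完整偏移量或缩小后的偏移量。

def object_offset(base_offset, obj, gaze_region, image_center,
                  interaction_threshold=5, reduce_scale=0.5):
    """base_offset: (dx, dy) 完整偏移量；obj: 含position、interactions等字段的字典；
    gaze_region: (cx, cy, radius) 注视点所在的圆形区域；image_center: 图像中心坐标。"""
    ox, oy = obj["position"]
    cx, cy, radius = gaze_region
    in_gaze = math.hypot(ox - cx, oy - cy) <= radius          # 条件1、条件2的区域判断

    # 条件1/4/5：处于注视点所在区域、交互次数多或被用户指定的对象采用完整偏移量
    if obj.get("user_selected") or in_gaze or \
       obj.get("interactions", 0) > interaction_threshold:
        return base_offset

    # 条件3：远离图像中心（靠近边缘）的对象采用缩小后的偏移量，甚至可以不偏移
    far_from_center = math.hypot(ox - image_center[0],
                                 oy - image_center[1]) > radius
    scale = reduce_scale if far_from_center else 1.0
    return (base_offset[0] * scale, base_offset[1] * scale)

# 用法示例（数值均为假设）
obj = {"position": (400, 300), "interactions": 2}
print(object_offset((3, 2), obj, gaze_region=(640, 360, 80),
                    image_center=(640, 360)))
```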
S1106,第一显示器件显示第三平面图像,第二显示器件显示第四平面图像。
第三平面图像和第四平面图像是经过处理后的图像,当第一显示器件显示第三平面图像,第二显示器件显示第四平面图像时,用户不会出现无法实现双目融合的情况,能够清楚且舒适的看到虚拟环境。
需要说明的是，在图11所示的实施例中，先使用坐标转换矩阵对三维图像数据处理得到第一平面图像和第二平面图像，然后对第一平面图像和/或第二平面图像进行平移。在另一些实施例中，还可以先对坐标转换矩阵进行调整，然后利用经过调整后的坐标转换矩阵对三维图像数据处理得到平面图像，通过这种方式所得到的平面图像无需再进行平移，因为已经对坐标转换矩阵进行过调整，所以得到的平面图像是已经平移过的图像。
示例性的,如图14,为本申请实施例提供的显示方法的另一种流程示意图。该流程包括:
S1501,获取三维图像数据。
S1502,获取第一坐标转换矩阵和第二坐标转换矩阵。第一坐标转换矩阵是用于将三维图像数据转换为第一平面图像,第二坐标转换矩阵是用于将三维图像数据转换为第二平面图像。第一平面图像在第一显示器件上显示,第二平面图像在第二显示器件上显示。
S1503，获取VR眼镜上第一显示器件的第一偏移量，和/或，第二显示器件的第二偏移量。
其中,S1501至S1503的实现原理请参见图11中S1101至S1103的实现原理,不重复赘述。
S1504，根据第一偏移量对第一坐标转换矩阵进行处理，得到第三坐标转换矩阵，和/或，根据第二偏移量对第二坐标转换矩阵进行处理，得到第四坐标转换矩阵。
示例性的,以第一坐标转换矩阵是如下矩阵为例:
（此处为第一坐标转换矩阵的公式附图 PCTCN2022127013-appb-000002）
假设第一偏移量为(-x/2,y/2)，将上述第一坐标转换矩阵中的l、r、t、b更新为l-x/2,r-x/2,t+y/2,b+y/2，代入上述第一坐标转换矩阵中，得到第三坐标转换矩阵，如下：
（此处为第三坐标转换矩阵的公式附图 PCTCN2022127013-appb-000003）
上面以根据第一坐标转换矩阵得到第三坐标转换矩阵为例,根据第二坐标转换矩阵得到第四坐标转换矩阵的原理相同,不重复赘述。
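针对上述“先用偏移量更新l、r、t、b，再据此构造坐标转换矩阵”的做法，下面给出一段示意性的Python代码，演示参数更新这一步骤。矩阵的具体构造方式以原公式附图为准，此处仅沿用前文示意的常见透视投影矩阵形式作为假设。

```python
# 示意性代码：按偏移量更新投影参数(l, r, t, b)，再用更新后的参数构造坐标转换矩阵。
# 矩阵形式沿用前文示意的常见透视投影矩阵，仅作为假设，并非本申请附图中的公式。

def adjust_params(l, r, t, b, offset):
    """offset: (ox, oy)，例如上文中的(-x/2, y/2)；
    X方向偏移量累加到l、r上，Y方向偏移量累加到t、b上。"""
    ox, oy = offset
    return l + ox, r + ox, t + oy, b + oy

def perspective_matrix(l, r, t, b, n, f):
    return [
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0],
    ]

# 示例：第一偏移量为(-x/2, y/2)时，l、r减去x/2，t、b加上y/2
x, y = 0.04, 0.02
l2, r2, t2, b2 = adjust_params(-1.0, 1.0, 1.0, -1.0, (-x / 2, y / 2))
third_matrix = perspective_matrix(l2, r2, t2, b2, n=0.1, f=100.0)
```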
S1505，根据第三坐标转换矩阵对三维图像数据进行处理得到第一平面图像，和/或，根据第四坐标转换矩阵对三维图像数据进行处理得到第二平面图像。
其中，第一平面图像和第二平面图像无需再进行平移。
S1506,在第一显示器件上显示第一平面图像,和/或,在第二显示器件上显示第二平面图像。
在本申请的各实施例中，距离可以用像素的个数表示。例如，在图9中，距离L9和距离L10可以用像素表示，距离L9可以表示为：重叠区域9110的中心点P9到图像9100的中心点R9之间有M1个像素；距离L10可以表示为：重叠区域9210的中心点P10到图像9200的中心点R10之间有M2个像素；如果M1不等于M2，则距离L9不等于距离L10。
需要说明的是，前面的实施例，以第一显示器件110或第二显示器件120在水平方向上移动导致用户左眼眼球中心W1与第二显示器件120的中心点S1无法对齐，或者右眼眼球中心W2与中心点S2无法对齐为例，在其它实施例中，第一显示器件110或第二显示器件120还可以在其它方向移动，比如，第一显示器件110或第二显示器件120在上下方向上移动，使得用户左眼眼球中心W1与第二显示器件120的中心点S1无法对齐或者右眼眼球中心W2与中心点S2无法对齐。或者，请参见图15，第一显示器件110旋转一定角度（旋转角度见公式附图 PCTCN2022127013-appb-000004），这种情况下，第一显示器件110上显示的图像在水平方向上具有投影图像，该投影图像的中心点能够对齐右眼眼球中心W2。
以上实施例主要以VR场景为例,即由VR头戴式显示设备执行本说明书的显示方法。对于AR场景,可以由AR头戴式显示设备执行本说明书的显示方法。AR头戴式显示设备可以是AR眼镜等。比如,AR眼镜包括用于投射图像光线的光机。其中,光机发出的光线能够在AR眼镜的镜片的耦入光栅上导入,并在镜片的耦出光栅位置导出进入人眼,从而使用户看到图像对应的虚像。对于光机和耦入光栅之间存在组装偏差的AR眼镜可以使用本说明书提供的显示方法。
AR眼镜包括单目显示AR眼镜和双目显示AR眼镜。其中，单目显示AR眼镜中，两个镜片的其中一个镜片的至少部分区域采用光波导结构；在双目显示AR眼镜中，两个镜片的至少部分区域均采用光波导结构。其中，光波导（optical waveguide）是引导光波在其中传播的介质装置，又称介质光波导。在一些实施例中，光波导是指利用全反射原理引导光波在其本身中进行全反射传播的光学元件。常见的波导基底可以为由光透明介质（如石英玻璃）构成的传输光频电磁波的导行结构。
虽然本说明书的描述将结合一些实施例一起介绍，但这并不代表此申请的特征仅限于该实施方式。恰恰相反，结合实施方式对本申请进行介绍的目的是为了覆盖基于本说明书的权利要求而有可能延伸出的其它选择或改造。为了提供对本说明书的深度了解，以下描述中将包含许多具体的细节。本说明书也可以不使用这些细节实施。此外，为了避免混乱或模糊本说明书的重点，有些具体细节将在描述中被省略。需要说明的是，在不冲突的情况下，本说明书中的实施例及实施例中的特征可以相互组合。
图16所示为本申请提供的一种电子设备1600。该电子设备1600可以是前文中的VR头戴式显示设备。如图16所示，电子设备1600可以包括：一个或多个处理器1601；一个或多个存储器1602；通信接口1603，以及一个或多个计算机程序1604，上述各器件可以通过一个或多个通信总线1605连接。其中该一个或多个计算机程序1604被存储在上述存储器1602中并被配置为被该一个或多个处理器1601执行，该一个或多个计算机程序1604包括指令，上述指令可以用于执行如上面相应实施例中电子设备的相关步骤。通信接口1603用于实现与其他设备的通信，比如通信接口可以是收发器。
上述本申请提供的实施例中,从电子设备(例如VR头戴式显示设备)作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,电子设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
以上实施例中所用，根据上下文，术语“当…时”或“当…后”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地，根据上下文，短语“在确定…时”或“如果检测到（所陈述的条件或事件）”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到（所陈述的条件或事件）时”或“响应于检测到（所陈述的条件或事件）”。另外，在上述实施例中，使用诸如第一、第二之类的关系术语来区分一个实体和另一个实体，而并不限制这些实体之间的任何实际的关系和顺序。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时，全部或部分地产生按照本发明实施例所述的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，该计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线（DSL））或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质（例如，软盘、硬盘、磁带）、光介质（例如，DVD）、或者半导体介质（例如固态硬盘Solid State Disk（SSD））等。在不冲突的情况下，以上各实施例的方案都可以组合使用。
需要指出的是,本专利申请文件的一部分包含受著作权保护的内容。除了对专利局的专利文件或记录的专利文档内容制作副本以外,著作权人保留著作权。

Claims (20)

  1. 一种显示方法,其特征在于,应用于电子设备,所述电子设备包括第一显示屏和第二显示屏,所述方法包括:
    通过所述第一显示屏显示第一图像,所述第一显示屏对应用户的第一眼,
    通过所述第二显示屏显示第二图像,所述第二显示屏对应所述用户的第二眼,其中:
    所述第一图像和所述第二图像上存在重叠区域,所述重叠区域内包括至少一个相同对象;
    在所述第一图像上,所述重叠区域的中心点位于第一位置;
    在所述第二图像上,所述重叠区域的中心点位于第二位置;
    所述第一位置到所述第一图像的中心点的距离为第一距离,所述第二位置到所述第二图像的中心点的距离为第二距离,所述第一位置到所述第一图像的中心点的方向为第一方向,所述第二位置到所述第二图像的中心点的方向为第二方向;
    所述第一距离不等于所述第二距离,和/或,所述第一方向不同于所述第二方向。
  2. 根据权利要求1所述的方法,其特征在于,所述电子设备还包括第一光学器件和第二光学器件,所述第一光学器件对应所述第一显示屏,所述第二光学器件对应所述第二显示屏,所述第一光学器件和所述第二光学器件相对于中间平面对称;
    所述第一位置和所述第二位置相对于所述中间平面对称。
  3. 根据权利要求2所述的方法,其特征在于,所述第一显示屏和所述第二显示屏相对于所述中间平面不对称。
  4. 根据权利要求2或3所述的方法,其特征在于,所述电子设备为头戴式显示设备,当所述电子设备被所述用户佩戴时,所述第一显示屏位于所述第一光学器件背离所述第一眼的一侧,所述第二显示屏位于所述第二光学器件背离所述第二眼的一侧。
  5. 根据权利要求1所述的方法,其特征在于,所述电子设备为头戴式显示设备,当所述电子设备被用户佩戴时,所述第一位置和所述第二位置相对于所述用户的人脸的中心线对称。
  6. 根据权利要求5所述的方法,其特征在于,所述第一显示屏和所述第二显示屏相对于所述人脸的中心线不对称。
  7. 根据权利要求1-6任一所述的方法,其特征在于,所述第一位置随着所述第一显示屏的位置变化而变化。
  8. 根据权利要求7所述的方法,其特征在于,所述第一显示屏向第三方向移动的情况下,所述第一图像上所述重叠区域向与所述第三方向相反的方向移动。
  9. 根据权利要求1-8任一所述的方法,其特征在于,所述第二位置随着所述第二显示屏的位置变化而变化。
  10. 根据权利要求9所述的方法,其特征在于,所述第二显示屏向第四方向移动的情况下,所述第二图像上所述重叠区域向与所述第四方向相反的方向移动。
  11. 根据权利要求1-10任一所述的方法,其特征在于,
    在所述通过所述第一显示屏显示第一图像和所述通过所述第二显示屏显示第二图像之前,所述方法还包括所述第一显示屏和所述第二显示屏进行瞳距调节,所述瞳距调节包括:所述第一显示屏沿着第五方向移动一定距离,所述第二显示屏沿着与所述第五方向相反的 第六方向移动相同距离;其中,所述第五方向为所述第一显示屏远离所述第二显示屏的方向,或,所述第五方向为所述第一显示屏靠近所述第二显示屏的方向;
    所述第一距离与所述第二距离之间的距离差相较于所述瞳距调节之前保持不变,且所述第一方向和所述第二方向之间的相对关系相较于所述瞳距调节之前保持不变。
  12. 根据权利要求1-11中任一所述的方法,其特征在于,所述至少一个相同对象中包括第一对象和第二对象;
    在所述第一图像上,所述第一对象的第一特征点处于第一坐标,所述第二对象的第二特征点处于第二坐标;
    在所述第二图像上,所述第一对象的第一特征点处于第三坐标,所述第二对象的第二特征点处于第四坐标;
    所述第一坐标与所述第三坐标之间的坐标差与所述第二坐标与所述第四坐标之间的坐标差不同。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    在满足如下条件中的至少一种时,所述第一坐标与所述第三坐标之间的坐标差大于所述第二坐标与所述第四坐标之间的坐标差,所述条件包括:
    所述第一对象处于用户注视点所在区域内,所述第二对象处于所述用户注视点所在区域外;
    所述第一对象和所述第二对象均处于所述用户注视点所在区域内,且所述第二对象比所述第一对象靠近所述用户注视点所在区域的边缘;
    所述第一对象与所述第一图像的中心点之间的距离小于所述第二对象与所述第二图像中心之间的距离;
    所述第一对象对应的用户交互次数大于所述第二对象对应的用户交互次数;或,
    所述第一对象是用户指定对象,所述第二对象不是用户指定对象。
  14. 根据权利要求1所述的方法,其特征在于,所述电子设备包括第一显示模组和第二显示模组;所述第一显示模组包括所述第一显示屏和第一光学器件,所述第二显示模组包括所述第二显示屏和第二光学器件,所述第一显示屏的位置和所述第一光学器件的位置之间存在第一偏移量,所述第二显示屏的位置和所述第二光学器件的位置之间存在第二偏移量;所述方法还包括:
    获取三维图像数据;
    获取第一坐标转换矩阵和第二坐标转换矩阵,所述第一坐标转换矩阵对应第一光学器件,所述第二坐标转换矩阵对应第二光学器件;
    获取所述第一偏移量和所述第二偏移量;
    基于所述第一坐标转换矩阵和所述第一偏移量,将所述三维图像数据处理为所述第一图像;
    基于所述第二坐标转换矩阵和所述第二偏移量,将所述三维图像数据处理为所述第二图像。
  15. 根据权利要求14所述的方法,其特征在于,
    当所述第一显示模组的位置变化时,所述第一坐标转换矩阵变化;或者,
    当所述第二显示模组的位置变化时,所述第二坐标转换矩阵变化。
  16. 一种显示方法,其特征在于,应用于电子设备,所述电子设备包括第一显示模组和第二显示模组;所述第一显示模组包括第一显示屏和第一光学器件,所述第二显示模组包括第二显示屏和第二光学器件,所述第一显示屏的位置和所述第一光学器件的位置之间存在第一偏移量,所述第二显示屏的位置和所述第二光学器件的位置之间存在第二偏移量;所述方法还包括:
    获取三维图像数据;
    获取第一坐标转换矩阵和第二坐标转换矩阵,所述第一坐标转换矩阵对应第一光学器件,所述第二坐标转换矩阵对应第二光学器件;
    获取所述第一偏移量和所述第二偏移量;
    基于所述第一坐标转换矩阵和所述第一偏移量,将所述三维图像数据处理为第一图像,所述第一显示模组显示所述第一图像;
    基于所述第二坐标转换矩阵和所述第二偏移量,将所述三维图像数据处理为第二图像,所述第二显示模组显示所述第二图像。
  17. 根据权利要求16所述的方法,其特征在于,当所述第一显示模组的位置变化时,所述第一坐标转换矩阵变化;
    当所述第二显示模组的位置变化时,所述第二坐标转换矩阵变化。
  18. 一种电子设备,其特征在于,包括:
    处理器,存储器,以及,一个或多个程序;
    其中,所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述处理器执行时,使得所述电子设备执行如权利要求1-15或权利要求16-17任一项所述的方法。
  19. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质用于存储计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如权利要求1-15或权利要求16-17中任意一项所述的方法。
  20. 一种计算机程序产品,其特征在于,包括计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如上述权利要求1-15或权利要求16-17中任意一项所述的方法。
PCT/CN2022/127013 2021-11-11 2022-10-24 一种显示方法与电子设备 WO2023082980A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22891777.9A EP4400941A1 (en) 2021-11-11 2022-10-24 Display method and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111338178.3A CN116107421A (zh) 2021-11-11 2021-11-11 一种显示方法与电子设备
CN202111338178.3 2021-11-11

Publications (1)

Publication Number Publication Date
WO2023082980A1 true WO2023082980A1 (zh) 2023-05-19

Family

ID=86253322

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/127013 WO2023082980A1 (zh) 2021-11-11 2022-10-24 一种显示方法与电子设备

Country Status (3)

Country Link
EP (1) EP4400941A1 (zh)
CN (1) CN116107421A (zh)
WO (1) WO2023082980A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117631871A (zh) * 2023-10-25 2024-03-01 南京凯影医疗科技有限公司 一种交互式触控液晶显示屏

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204431A (zh) * 2016-08-24 2016-12-07 中国科学院深圳先进技术研究院 智能眼镜的显示方法及装置
US20190349576A1 (en) * 2018-05-14 2019-11-14 Dell Products L.P. Systems and methods for automatic adjustment for vertical and rotational imbalance in augmented and virtual reality head-mounted displays

Also Published As

Publication number Publication date
EP4400941A1 (en) 2024-07-17
CN116107421A (zh) 2023-05-12

Similar Documents

Publication Publication Date Title
US11838518B2 (en) Reprojecting holographic video to enhance streaming bandwidth/quality
JP6391685B2 (ja) 仮想オブジェクトの方向付け及び可視化
US10955665B2 (en) Concurrent optimal viewing of virtual objects
CN102591449B (zh) 虚拟内容和现实内容的低等待时间的融合
US10175483B2 (en) Hybrid world/body locked HUD on an HMD
US20130293468A1 (en) Collaboration environment using see through displays
CN102566049A (zh) 用于扩展现实显示的自动可变虚拟焦点
WO2022252924A1 (zh) 图像传输与显示方法、相关设备及***
WO2021103990A1 (zh) 显示方法、电子设备及***
WO2018149267A1 (zh) 一种基于增强现实的显示方法及设备
CN112835445A (zh) 虚拟现实场景中的交互方法、装置及***
WO2023082980A1 (zh) 一种显示方法与电子设备
CN103018914A (zh) 一种眼镜式3d显示头戴电脑
WO2023001113A1 (zh) 一种显示方法与电子设备
US20230063078A1 (en) System on a chip with simultaneous usb communications
WO2021057420A1 (zh) 控制界面显示的方法及头戴式显示器
WO2023035911A1 (zh) 一种显示方法与电子设备
WO2023116541A1 (zh) 眼动追踪装置、显示设备及存储介质
US11994751B1 (en) Dual system on a chip eyewear
US20230034649A1 (en) Electronic device virtual machine operating system
WO2023040562A1 (zh) 信息显示方法、近眼显示设备以及电子设备
WO2023116571A1 (zh) 眼动追踪装置和眼动追踪方法
CN202929296U (zh) 一种眼镜式3d显示头戴电脑
CN107544661B (zh) 一种信息处理方法及电子设备
KR20240097658A (ko) 외부 전자 장치에 의해 제공되는 멀티미디어 콘텐트를 표시하기 위한 웨어러블 장치 및 그 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22891777; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022891777; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2022891777; Country of ref document: EP; Effective date: 20240411)