WO2021103990A1 - Display method, electronic device, and system - Google Patents

Display method, electronic device, and system

Info

Publication number
WO2021103990A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
head-mounted display
display device
display screen
Application number
PCT/CN2020/127413
Other languages
English (en)
French (fr)
Inventor
朱帅帅
曾以亮
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20892723.6A (EP4044000A4)
Priority to US17/780,409 (US20220404631A1)
Publication of WO2021103990A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0179 Display position adjusting means not related to the information to be displayed
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/0093 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G02B 27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0132 Head-up displays characterised by optical features comprising binocular systems
    • G02B 2027/0134 Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N 13/117 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Definitions

  • This application relates to the field of virtual reality (VR) and terminal technologies, and in particular to display methods, electronic devices and systems.
  • VR technology uses computer simulation to generate a three-dimensional (3D) virtual scene and provides visual, auditory, tactile, or other sensory simulation, making users feel as if they were in that environment.
  • IPD: inter-pupillary distance.
  • the embodiments of the present application provide a display method, electronic device, and system. This method can measure the user's IPD and correct the image displayed on the head-mounted display device according to the user's IPD, so that the user can comfortably and truly feel the 3D scene when wearing the head-mounted display device.
  • an embodiment of the present application provides a system that includes an electronic device and a head-mounted display device; the electronic device is connected to the head-mounted display device, and the head-mounted display device is configured to be worn on the user's head.
  • the electronic device is used to send a user interface to the head-mounted display device;
  • the head-mounted display device is used to display the user interface on a display screen;
  • the electronic device is also used to obtain the IPD of the user, where the IPD of the user is obtained according to the user operation input by the user based on the user interface;
  • the electronic device is also used to obtain a source image, correct the source image according to the IPD of the user to obtain a target image, and send the target image to the head-mounted display device;
  • the head-mounted display device is also used to display the target image on the display screen.
  • the electronic device can measure the user's IPD and correct the image displayed on the head-mounted display device according to the user's IPD, so that the user can comfortably and realistically feel the 3D scene when wearing the head-mounted display device.
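  • As a concrete illustration of the flow just described, the following minimal Python sketch models the interaction between the electronic device and the head-mounted display device; all class, method, and variable names are assumptions introduced here for illustration only, not interfaces defined by this application.

```python
class HeadMountedDisplay:
    def show(self, left_image, right_image):
        """Placeholder: drive the first and second display screens with the target images."""
        pass

class ElectronicDevice:
    def __init__(self, hmd, iod_mm):
        self.hmd = hmd
        self.iod_mm = iod_mm  # distance between the two display-screen centers (IOD)

    def run(self, source_image):
        self.send_user_interface()                        # 1. display the measurement UI on the HMD
        ipd_mm = self.measure_ipd()                       # 2. obtain the user's IPD from user operations
        left, right = self.correct(source_image, ipd_mm)  # 3. correct the source image into target images
        self.hmd.show(left, right)                        # 4. the HMD displays the target images

    def send_user_interface(self):
        pass  # send the first/second user interface to the HMD's screens

    def measure_ipd(self):
        return 63.0  # placeholder value (mm); derived from the positions the user indicates

    def correct(self, source_image, ipd_mm):
        return source_image, source_image  # placeholder; see the center-offset sketches below
```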
  • the display screen includes a first display screen and a second display screen
  • the head-mounted display device further includes a first optical component corresponding to the first display screen and a second optical component corresponding to the second display screen.
  • a first straight line on which the center of the first display screen and the center of the first optical component are located is perpendicular to the third straight line, and a second straight line on which the center of the second display screen and the center of the second optical component are located is perpendicular to the third straight line;
  • the third straight line is the straight line on which the center of the first optical component and the center of the second optical component are located.
  • the user interface includes a first user interface and a second user interface.
  • the head-mounted display device is specifically configured to display the first user interface on the first display screen and display the second user interface on the second display screen.
  • the first display screen and the first optical component correspond to the left eye of the user, and the light emitted by the first display screen propagates to the left eye of the user through the first optical component.
  • the second display screen and the second optical component correspond to the user's right eye, and the light emitted by the second display screen propagates to the user's right eye through the second optical component.
  • the head-mounted display device is further used to: obtain a first position and a second position, where the first position and the second position are obtained according to the user's action when the first user interface is displayed; obtain a third position and a fourth position, where the third position and the fourth position are obtained according to the user's action when the second user interface is displayed; and send the first position, the second position, the third position, and the fourth position to the electronic device.
  • the electronic device is further configured to determine the offset Δi1 of the user's eye relative to the first straight line according to the first position and the second position, determine the offset Δi2 of the user's eye relative to the second straight line according to the third position and the fourth position, and obtain the IPD of the user according to Δi1 and Δi2. In this way, the user can indicate the above-mentioned positions through the head-mounted display device, so that the electronic device can measure the IPD of the user.
  • the user's motion when the head-mounted display device displays the first user interface may be a turning motion of the user's eyes (for example, the left eye).
  • the user's motion when the head-mounted display device displays the second user interface may be a turning motion of the user's eyes (for example, the right eye).
  • the head-mounted display device is further used to send, to the electronic device, the user's operation data collected when the first user interface is displayed and the user's operation data collected when the second user interface is displayed.
  • the electronic device is further configured to: obtain a first position and a second position, where the first position and the second position are obtained according to the user's operation data collected when the head-mounted display device displays the first user interface; obtain a third position and a fourth position, where the third position and the fourth position are obtained according to the user's operation data collected when the head-mounted display device displays the second user interface; determine the offset Δi1 of the user's eye (for example, the left eye) relative to the first straight line according to the first position and the second position; determine the offset Δi2 of the user's eye (for example, the right eye) relative to the second straight line according to the third position and the fourth position; and obtain the IPD of the user according to Δi1 and Δi2. In this way, the user can indicate the above-mentioned positions through the head-mounted display device, so that the electronic device can measure the IPD of the user.
  • the user's operation data collected when the first user interface is displayed may be images of the user's eyeball captured by the head-mounted display device.
  • the user's operation data collected when the second user interface is displayed may likewise be images of the user's eyeball captured by the head-mounted display device.
  • the system further includes an input device.
  • the input device is configured to send, to the electronic device, the user operations detected when the head-mounted display device displays the first user interface and the user operations detected when the head-mounted display device displays the second user interface.
  • the electronic device is further configured to: obtain a first position and a second position, where the first position and the second position are obtained according to the user operations detected by the input device when the head-mounted display device displays the first user interface; obtain a third position and a fourth position, where the third position and the fourth position are obtained according to the user operations detected by the input device when the head-mounted display device displays the second user interface; determine the offset Δi1 of the user's eye (for example, the left eye) relative to the first straight line according to the first position and the second position; determine the offset Δi2 of the user's eye (for example, the right eye) relative to the second straight line according to the third position and the fourth position; and obtain the IPD of the user according to Δi1 and Δi2.
  • the user can indicate the above several locations through the input device, so that the electronic device can measure the IPD of the user.
  • the first position is the position that the user's eye (for example, the left eye) sees when looking toward the left side of the first display screen while the head-mounted display device displays the first user interface;
  • the second position is the position that the user's eye (for example, the left eye) sees when looking toward the right side of the first display screen while the head-mounted display device displays the first user interface;
  • the third position is the position that the user's eye (for example, the right eye) sees when looking toward the left side of the second display screen while the head-mounted display device displays the second user interface;
  • the fourth position is the position that the user's eye (for example, the right eye) sees when looking toward the right side of the second display screen while the head-mounted display device displays the second user interface.
  • the electronic device is specifically configured to calculate Δi1 according to the following formula, in which:
  • JO' is the distance from the first position to the first straight line
  • KO' is the distance from the second position to the first straight line
  • M is the magnification of the first optical component
  • L is the diameter of the first optical component.
  • the electronic device is specifically configured to calculate Δi2 according to a formula similar to the above formula.
  • the electronic device is specifically configured to calculate Δi2 according to the following formula, in which:
  • jo' is the distance from the third position to the second straight line
  • ko' is the distance from the fourth position to the second straight line
  • m is the magnification of the second optical component
  • l is the diameter of the second optical component.
  • the electronic device is specifically configured to calculate the interpupillary distance IPD of the user according to the following formula:
  • IPD = IOD - Δi1 + Δi2
  • the IOD is the distance between the center of the first display screen and the center of the second display screen.
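  • As a minimal numeric illustration of this formula (the sign convention for Δi1 and Δi2 and the example values below are assumptions used only for illustration):

```python
def estimate_ipd(iod_mm: float, delta_i1_mm: float, delta_i2_mm: float) -> float:
    """IPD = IOD - Δi1 + Δi2, where Δi1/Δi2 are the offsets of the left/right eye
    relative to the first/second straight line (sign convention assumed here)."""
    return iod_mm - delta_i1_mm + delta_i2_mm

# e.g. IOD = 63 mm, left-eye offset Δi1 = 1.0 mm, right-eye offset Δi2 = 1.5 mm
print(estimate_ipd(63.0, 1.0, 1.5))  # -> 63.5
```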
  • the electronic device is specifically configured to: use the source image to generate a first image and a second image according to the IPD of the user; generate a first target image according to the first image, where the center of the first target image is offset from the center of the first image by Δi1; and generate a second target image according to the second image, where the center of the second target image is offset from the center of the second image by Δi2.
  • In this way, the electronic device can correct the image displayed on the head-mounted display device when the head-mounted display device provides a game scene or other similar scenes, so that the user can feel the 3D scene comfortably and realistically when wearing the head-mounted display device.
  • the source image includes: a third image and a fourth image.
  • the electronic device is specifically configured to: generate a first target image according to the third image, where the center of the first target image is offset from the center of the third image by Δi1;
  • and generate a second target image according to the fourth image, where the center of the second target image is offset from the center of the fourth image by Δi2.
  • In this way, the electronic device can correct the image displayed on the head-mounted display device when the head-mounted display device provides a 3D movie scene or other similar scenes, so that the user can feel the 3D scene comfortably and realistically when wearing the head-mounted display device.
  • the third image and the fourth image may be two images with parallax that are taken in advance by two cameras for the same object.
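  • One way the center-offset correction described above could be applied to such a pre-shot pair is sketched below; the pixel-pitch conversion, the sign convention, and the function names are assumptions, not an implementation defined by this application.

```python
import numpy as np

def shift_horizontally(img: np.ndarray, dx_px: int) -> np.ndarray:
    """Return a copy of img whose content is translated by dx_px pixels
    (positive = toward the right edge), padding the exposed border with black."""
    width = img.shape[1]
    dx_px = int(np.clip(dx_px, -width, width))
    out = np.zeros_like(img)
    if dx_px > 0:
        out[:, dx_px:] = img[:, :width - dx_px]
    elif dx_px < 0:
        out[:, :dx_px] = img[:, -dx_px:]
    else:
        out[:] = img
    return out

def correct_stereo_pair(third_img, fourth_img, delta_i1_mm, delta_i2_mm, px_per_mm):
    """Produce the first/second target images by moving each image's center
    by Δi1 / Δi2 (converted from millimeters to pixels)."""
    first_target = shift_horizontally(third_img, round(delta_i1_mm * px_per_mm))
    second_target = shift_horizontally(fourth_img, round(delta_i2_mm * px_per_mm))
    return first_target, second_target
```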
  • an embodiment of the present application provides a display method, which is applied to an electronic device.
  • the method includes: the electronic device sends a user interface to a head-mounted display device, where the user interface is configured to be displayed on a display screen of the head-mounted display device; the electronic device obtains the IPD of the user, where the IPD of the user is obtained according to the user operation input by the user based on the user interface; the electronic device obtains a source image, corrects the source image according to the IPD of the user to obtain a target image, and sends the target image to the head-mounted display device; and the target image is used to be displayed on the display screen.
  • For the steps performed by the electronic device in the display method of the second aspect, reference may be made to the steps performed by the electronic device in the system of the first aspect when implementing the corresponding functions, and to the related descriptions there.
  • In this way, the electronic device can cooperate with a head-mounted display device to provide the user with a 3D scene, so that the user can comfortably and realistically feel the 3D scene when wearing the head-mounted display device.
  • the display screen includes a first display screen and a second display screen
  • the head-mounted display device further includes a first optical component corresponding to the first display screen and a second optical component corresponding to the second display screen.
  • the user interface includes a first user interface and a second user interface, the first user interface is configured to be displayed on the first display screen, and the second user interface is configured to be displayed on the second display screen .
  • the target image includes a first target image and a second target image, the first target image is used for display on the first display screen, and the second target image is used for display on the second display screen .
  • the electronic device can obtain a first position, a second position, a third position, and a fourth position; determine, according to the first position and the second position, the offset Δi1 of the user's eye (for example, the left eye) relative to the first straight line; determine, according to the third position and the fourth position, the offset Δi2 of the user's eye (for example, the right eye) relative to the second straight line; and obtain the IPD of the user according to Δi1 and Δi2.
  • For the first position, the second position, the third position, and the fourth position, reference may be made to the related descriptions in the first aspect.
  • the electronic device may obtain the first position, the second position, the third position, and the fourth position according to the following methods:
  • the electronic device receives the first position, the second position, the third position, and the fourth position sent by the head-mounted display device.
  • the first position and the second position are acquired by the head-mounted display device according to the user's action when the first user interface is displayed; the third position and the fourth position It is acquired by the head-mounted display device according to the user's action when the second user interface is displayed.
  • For the user's actions when the head-mounted display device displays the first user interface or the second user interface, reference may be made to the related descriptions of the first aspect.
  • In another manner, the electronic device receives the user's operation data collected when the head-mounted display device displays the first user interface and the user's operation data collected when the head-mounted display device displays the second user interface;
  • the electronic device acquires a first position and a second position, where the first position and the second position are acquired according to the user's operation data collected when the head-mounted display device displays the first user interface;
  • and the electronic device acquires a third position and a fourth position, where the third position and the fourth position are acquired according to the user's operation data collected when the head-mounted display device displays the second user interface.
  • For the user's operation data when the head-mounted display device displays the first user interface or the second user interface, reference may be made to the related descriptions in the first aspect.
  • In yet another manner, the electronic device receives the user operations detected by the input device when the head-mounted display device displays the first user interface and the user operations detected by the input device when the head-mounted display device displays the second user interface; the electronic device acquires a first position and a second position, where the first position and the second position are acquired according to the user operations detected by the input device when the head-mounted display device displays the first user interface; and the electronic device acquires a third position and a fourth position, where the third position and the fourth position are acquired according to the user operations detected by the input device when the head-mounted display device displays the second user interface.
  • For the user operations detected by the input device when the head-mounted display device displays the first user interface or the second user interface, reference may be made to the related descriptions of the first aspect.
  • the electronic device may calculate Δi1 according to the following formula, in which:
  • JO' is the distance from the first position to the first straight line
  • KO' is the distance from the second position to the first straight line
  • M is the magnification of the first optical component
  • L is the diameter of the first optical component.
  • the electronic device may calculate Δi2 according to the following formula, in which:
  • jo' is the distance from the third position to the second straight line
  • ko' is the distance from the fourth position to the second straight line
  • m is the magnification of the second optical component
  • l is the diameter of the second optical component.
  • the electronic device may calculate the interpupillary distance IPD of the user according to the following formula:
  • IPD = IOD - Δi1 + Δi2
  • the IOD is the distance between the center of the first display screen and the center of the second display screen.
  • the electronic device may obtain the target image in the following manner: the electronic device uses the source image to generate a first image and a second image according to the user's IPD; the electronic device generates a first target image according to the first image, where the center of the first target image is offset from the center of the first image by Δi1; and the electronic device generates a second target image according to the second image, where the center of the second target image is offset from the center of the second image by Δi2.
  • In this way, the electronic device can correct the image displayed on the head-mounted display device when the head-mounted display device provides a game scene or other similar scenes, so that the user can feel the 3D scene comfortably and realistically when wearing the head-mounted display device.
  • the source image includes: a third image and a fourth image.
  • the electronic device may obtain the target image in the following manner: the electronic device generates a first target image according to the third image, where the center of the first target image is offset from the center of the third image by Δi1;
  • and the electronic device generates a second target image according to the fourth image, where the center of the second target image is offset from the center of the fourth image by Δi2.
  • In this way, the electronic device can correct the image displayed on the head-mounted display device when the head-mounted display device provides a 3D movie scene or other similar scenes, so that the user can feel the 3D scene comfortably and realistically when wearing the head-mounted display device.
  • the third image and the fourth image may be two images with parallax that are taken in advance by two cameras for the same object.
  • an embodiment of the present application provides a display method, which is applied to a head-mounted display device.
  • the display method includes: the head-mounted display device displays a user interface on the display screen; the head-mounted display device obtains the IPD of the user, where the IPD of the user is obtained according to a user operation input by the user based on the user interface; the head-mounted display device obtains a source image and corrects the source image according to the user's IPD to obtain a target image; and the head-mounted display device displays the target image on the display screen.
  • In this way, the head-mounted display device can measure the IPD of the user and independently provide the user with a 3D scene based on the user's IPD, so that the user can comfortably and realistically feel the 3D scene when wearing the head-mounted display device.
  • the display screen includes a first display screen and a second display screen
  • the head-mounted display device further includes a first optical component corresponding to the first display screen and a second optical component corresponding to the second display screen.
  • the user interface includes a first user interface and a second user interface, the first user interface is displayed on the first display screen, and the second user interface is displayed on the second display screen.
  • the target image includes a first target image and a second target image, the first target image is displayed on the first display screen, and the second target image is displayed on the second display screen.
  • the head-mounted display device can acquire a first position, a second position, a third position, and a fourth position; determine, according to the first position and the second position, the offset Δi1 of the user's eye (for example, the left eye) relative to the first straight line; determine, according to the third position and the fourth position, the offset Δi2 of the user's eye (for example, the right eye) relative to the second straight line; and obtain the IPD of the user according to Δi1 and Δi2.
  • For the first position, the second position, the third position, and the fourth position, reference may be made to the related descriptions in the first aspect.
  • the head-mounted display device may obtain the first position, the second position, the third position, and the fourth position according to the following methods:
  • In a first manner, the head-mounted display device acquires the first position and the second position according to the user's action when the first user interface is displayed, and acquires the third position and the fourth position according to the user's action when the second user interface is displayed.
  • In a second manner, the head-mounted display device acquires the first position and the second position according to the user's operation data collected when the first user interface is displayed, and acquires the third position and the fourth position according to the user's operation data collected when the second user interface is displayed.
  • For the user's operation data when the head-mounted display device displays the first user interface or the second user interface, reference may be made to the related descriptions in the first aspect.
  • In a third manner, the head-mounted display device is connected to the input device; the head-mounted display device acquires the first position and the second position according to the user operation detected by the input device when the head-mounted display device displays the first user interface, and acquires the third position and the fourth position according to the user operation detected by the input device when the head-mounted display device displays the second user interface.
  • the head-mounted display device calculates Δi1 according to the following formula, in which:
  • JO' is the distance from the first position to the first straight line
  • KO' is the distance from the second position to the first straight line
  • M is the magnification of the first optical component
  • L is the diameter of the first optical component
  • the head-mounted display device calculates Δi2 according to the following formula, in which:
  • jo' is the distance from the third position to the second straight line
  • ko' is the distance from the fourth position to the second straight line
  • m is the magnification of the second optical component
  • l is the diameter of the second optical component.
  • the head-mounted display device calculates the interpupillary distance IPD of the user according to the following formula:
  • IPD = IOD - Δi1 + Δi2
  • the IOD is the distance between the center of the first display screen and the center of the second display screen.
  • the head-mounted display device may use the source image to generate a first image and a second image according to the IPD of the user; generate a first target image according to the first image, where the center of the first target image is offset from the center of the first image by Δi1; and generate a second target image according to the second image, where the center of the second target image is offset from the center of the second image by Δi2.
  • the source image includes: a third image and a fourth image.
  • the head-mounted display device may generate a first target image according to the third image, where the center of the first target image is offset from the center of the third image by Δi1;
  • and generate a second target image according to the fourth image, where the center of the second target image is offset from the center of the fourth image by Δi2.
  • In this way, when the head-mounted display device provides a 3D movie scene or other similar scenes, the displayed image can be corrected, so that the user can comfortably and realistically feel the 3D scene when wearing the head-mounted display device.
  • an embodiment of the present application provides an electronic device that includes one or more processors and a memory; the memory is coupled with the one or more processors, and the memory is used to store computer program codes,
  • the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to execute the display method in the second aspect or any one of the implementation manners of the second aspect.
  • an embodiment of the present application provides a head-mounted display device, and the head-mounted display device includes: one or more processors, a memory, and a display screen; the memory is coupled to the one or more processors and is used to store computer program code; the computer program code includes computer instructions; and the one or more processors invoke the computer instructions to cause the head-mounted display device to execute the display method in the third aspect or any one of the implementation manners of the third aspect.
  • the display screen includes a first display screen and a second display screen
  • the head-mounted display device further includes a first optical component corresponding to the first display screen and a second optical component corresponding to the second display screen.
  • an embodiment of the present application provides a chip, which is applied to an electronic device.
  • the chip includes: one or more processors and an interface; the interface is used to receive code instructions and transmit the code instructions to the processor, and the processor is used to run the code instructions to cause the electronic device to execute the display method provided by the second aspect or any one of the possible implementation manners of the second aspect.
  • the embodiments of the present application provide a computer program product containing instructions.
  • When the computer program product is run on an electronic device, the electronic device is caused to execute the display method provided by the second aspect or any one of the possible implementation manners of the second aspect.
  • an embodiment of the present application provides a computer-readable storage medium, including instructions, which, when run on an electronic device, cause the electronic device to execute the display method provided by the second aspect or any one of the possible implementation manners of the second aspect.
  • an embodiment of the present application provides a chip, which is applied to a head-mounted display device.
  • the chip includes: one or more processors and an interface; the interface is used to receive code instructions and transmit the code instructions to the processor, and the processor is used to run the code instructions so that the head-mounted display device executes the display method provided by the third aspect or any one of the possible implementation manners of the third aspect.
  • an embodiment of the present application provides a computer program product containing instructions.
  • When the computer program product is run on a head-mounted display device, the head-mounted display device is caused to execute the display method provided by the third aspect or any one of the possible implementation manners of the third aspect.
  • an embodiment of the present application provides a computer-readable storage medium, including instructions, which, when run on a head-mounted display device, cause the head-mounted display device to execute the display method provided by the third aspect or any one of the possible implementation manners of the third aspect.
  • FIG. 1 is a schematic diagram of a principle for a user to experience a 3D scene using a head-mounted display device according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the architecture of a system provided by an embodiment of the present application.
  • FIG. 3A is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 3B is a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application.
  • FIG. 3C is a schematic structural diagram of a head-mounted display device provided by an embodiment of the present application.
  • FIGS. 4A and 4B are schematic diagrams of the positional relationship between the user's eyes and the optical components of the head-mounted display device according to an embodiment of the present application;
  • FIGS. 5A-5C show user interfaces displayed on the head-mounted display device in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of geometric relationships used when calculating the user's IPD provided by an embodiment of the present application.
  • FIG. 7A is a schematic diagram of a 3D scene constructed by an electronic device provided by an embodiment of the present application and a simulating user being placed in the 3D scene;
  • FIG. 7B is a first image, a second image, a first target image, and a second target image determined according to the first IPD after the electronic device provided by the embodiment of the present application constructs the 3D scene as shown in FIG. 7A;
  • FIG. 7C is a first image, a second image, a first target image, and a second target image determined according to a second IPD after the electronic device provided by an embodiment of the present application constructs a 3D scene as shown in FIG. 7A;
  • FIG. 8A is an image synthesized by a user after the head-mounted display device according to an embodiment of the present application displays the first target image and the second target image as shown in FIG. 7B;
  • FIG. 8B is an image synthesized by a user after the head-mounted display device according to an embodiment of the present application displays the first target image and the second target image as shown in FIG. 7C;
  • FIG. 9A is a schematic diagram of the architecture of another system provided by an embodiment of the present application.
  • FIG. 9B is a schematic diagram of the hardware structure of another head-mounted display device provided by an embodiment of the present application.
  • first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, the features defined with “first” and “second” may explicitly or implicitly include one or more of these features. In the description of the embodiments of the present application, unless otherwise specified, “plurality” means two or more.
  • FIG. 1 is a schematic diagram of the principle of a user using a head-mounted display device to experience a 3D scene.
  • the head-mounted display device may include: a display screen 101, an optical component 102, a display screen 103, and an optical component 104.
  • the material, size, resolution, etc. of the display screen 101 and the display screen 103 are the same.
  • the material and structure of the optical component 102 and the optical component 104 are the same.
  • Both the optical component 102 and the optical component 104 are composed of one or more lenses, and the lenses may include one or more of a convex lens, a Fresnel lens, or other types of lenses.
  • the first display screen may be the display screen 101
  • the second display screen may be the display screen 103
  • the first optical component may be the optical component 102
  • the second optical component may be the optical component 104.
  • the following embodiments take the display screen 101, the display screen 103, the optical assembly 102, and the optical assembly 104 as examples for description.
  • the first line where the center of the display screen 101 and the center of the optical component 102 are located is perpendicular to the third line where the center of the optical component 102 and the center of the optical component 104 are located.
  • the display screen 101 and the optical assembly 102 correspond to the left eye of the user.
  • an image a1 may be displayed on the display screen 101.
  • the light emitted when the display screen 101 displays the image a1 is transmitted by the optical assembly 102 to form a virtual image a1' of the image a1 in front of the user's left eye.
  • the second straight line where the center of the display screen 103 and the center of the optical component 104 are located is perpendicular to the third straight line where the center of the optical component 102 and the center of the optical component 104 are located.
  • the display screen 103 and the optical assembly 104 correspond to the user's right eye.
  • the display screen 103 may display an image a2.
  • the light emitted when the display screen 103 displays the image a2 is transmitted by the optical component 104 to form a virtual image a2' of the image a2 in front of the user's right eye.
  • the center of the display screen may be the center of symmetry of the display screen, such as the center of a circular display screen, the center of symmetry of a rectangular display screen, and so on.
  • the center of the optical component can be the optical center, and usually the optical center is also the center of symmetry of the optical component.
  • the fourth straight line may be a straight line where the center of the display screen 101 and the center of the display screen 103 are located.
  • the image a1 and the image a2 are two images with parallax for the same object, such as the object a.
  • Parallax refers to the difference in the position of the object in the field of view when the same object is viewed from two points with a certain distance.
  • the virtual image a1' and the virtual image a2' are located on the same plane, and this plane may be called a virtual image plane.
  • the user's left eye will focus on the virtual image a1', and the user's right eye will focus on the virtual image a2'. Then, the virtual image a1' and the virtual image a2' will be superimposed in the user's brain to form a complete and three-dimensional image. This process is called convergence.
  • the intersection of the lines of sight of the two eyes will be regarded by the user as the actual position of the object described by the image a1 and the image a2. Due to the convergence process, the user can feel the 3D scene provided by the head-mounted display device.
  • the manner in which the head-mounted display device generates the images displayed on the display screen 101 and the display screen 103 will be described below.
  • the head-mounted display device will make the following assumption: when the user wears the head-mounted display device, the center of the left eye pupil, the center of the display screen 101, and the center of the optical component 102 are located on the same straight line, and the center of the right eye pupil, the center of the display screen 103, and the center of the optical component 104 are located on the same straight line. That is, the head-mounted display device assumes that the IPD of the user is equal to the distance between the center of the display screen 101 and the center of the display screen 103, which is also equal to the distance between the center of the optical component 102 and the center of the optical component 104 (inter-optics distance, IOD).
  • the head-mounted display device generates images displayed on the display screen 101 and the display screen 103 based on this assumption. Specifically, the head-mounted display device first obtains 3D scene information, and constructs a 3D scene according to the 3D scene information.
  • the 3D scene information describes the 3D scene that the user is intended to perceive; that is, the 3D scene information indicates the objects that the user can see when the user is in the 3D scene and the position of each object relative to the user.
  • the head-mounted display device may simulate or assume that a user whose IPD is equal to the IOD is naturally in the constructed 3D scene, obtain the image seen by the user's left eye and display that image on the display screen 101, and obtain the image seen by the user's right eye and display that image on the display screen 103.
  • the head-mounted display device may acquire the images displayed on the display screen 101 and the display screen 103 through two imaging cameras. For example, the head-mounted display device places two imaging cameras in a constructed 3D scene, and assumes that a user whose IPD is equal to IOD is naturally in the 3D scene.
  • An imaging camera is located at the position of the user's left eye, and is used to obtain the image that the user sees when viewing the 3D scene from the position, and the image is the image that the user sees with the left eye.
  • the other imaging camera is located at the position of the user's right eye, and is used to obtain the image that the user sees when viewing the 3D scene from this position, and the image is the image seen by the user's right eye.
  • the distance between the two imaging cameras is the same as the assumed user IPD, which is equal to the IOD.
  • the imaging camera is a virtual concept, not actual hardware.
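  • A minimal sketch of this rendering assumption — two virtual imaging cameras separated by the assumed IPD (equal to the IOD) — is given below; the vector math and the render() placeholder are assumptions used only for illustration.

```python
import numpy as np

def eye_camera_positions(head_pos, right_dir, iod_m):
    """Place two virtual imaging cameras at the assumed eye positions: half the IOD
    to the left and to the right of the head position, along the head's right axis."""
    right_dir = right_dir / np.linalg.norm(right_dir)
    left_cam = head_pos - right_dir * (iod_m / 2.0)
    right_cam = head_pos + right_dir * (iod_m / 2.0)
    return left_cam, right_cam

def render_stereo(scene, head_pos, right_dir, iod_m, render):
    """render(scene, camera_pos) stands in for whatever renderer produces the per-eye
    image; the left image is shown on display screen 101 and the right on 103."""
    left_cam, right_cam = eye_camera_positions(head_pos, right_dir, iod_m)
    return render(scene, left_cam), render(scene, right_cam)
```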
  • After the head-mounted display device generates images based on this assumption and displays them on the display screens, when a user whose IPD equals the IOD wears the head-mounted display device, the head-mounted display device can provide the user with a realistic and immersive 3D scene;
  • the convergence process of the user when viewing objects in the 3D scene is also natural and comfortable, and the 3D scene actually felt by the user after convergence is consistent with the 3D scene constructed by the head-mounted display device.
  • Based on the manner described above in which the head-mounted display device generates the images displayed on the display screen 101 and the display screen 103, the following describes, with reference to FIG. 1, how the IPD affects the user's experience of the 3D scene provided by the head-mounted display device.
  • the head-mounted display device displays the image a1 on the display screen 101 and displays the image a2 on the display screen 103.
  • For the method of generating the image a1 and the image a2, reference may be made to the previous related description.
  • When the user's IPD is equal to the IOD, the user's left eye rotates to the right and focuses on the virtual image a1', and the right eye rotates to the left and focuses on the virtual image a2', thereby completing convergence.
  • convergence is natural, comfortable and relaxing for the user.
  • the position of point A1 in the figure relative to the user will be regarded by the user as the position of the object a relative to the user.
  • the 3D scene actually felt by the user is consistent with the 3D scene constructed by the head-mounted display device.
  • When the user's IPD is not equal to the IOD, for example, when the user's IPD is greater than the IOD, the user's left eye rotates to the right and focuses on the virtual image a1', and the user's right eye rotates to the left and focuses on the virtual image a2' to complete convergence.
  • In FIG. 1, the rotation angle of the eyeball when the dotted-line eyeballs converge is different from the rotation angle of the eyeball when the solid-line eyeballs converge. Therefore, the convergence process is not necessarily natural and comfortable for the user.
  • the position of point A2 in the figure relative to the user will be regarded by the user as the position of the object a relative to the user. Therefore, if the IPD of the user is not equal to the IOD, the 3D scene felt by the user is inconsistent with the 3D scene constructed by the head-mounted display device, resulting in distortion.
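  • The distortion described above can be made concrete by intersecting the two lines of sight (each eye toward its virtual-image point) in a top view; the 2-D coordinates and the parallax value below are assumptions chosen only to show how the perceived point shifts when the IPD differs from the IOD.

```python
import numpy as np

def perceived_point(ipd_mm, iod_mm, parallax_mm, plane_dist_mm):
    """Top view: eyes at x = ±IPD/2 (z = 0); the virtual-image points of object a lie
    on the virtual image plane (z = plane_dist) at x = -IOD/2 + parallax and
    x = +IOD/2 - parallax, per the rendering assumption that IPD equals IOD.
    Returns the intersection of the two lines of sight (the perceived position)."""
    e_l = np.array([-ipd_mm / 2.0, 0.0]); p_l = np.array([-iod_mm / 2.0 + parallax_mm, plane_dist_mm])
    e_r = np.array([+ipd_mm / 2.0, 0.0]); p_r = np.array([+iod_mm / 2.0 - parallax_mm, plane_dist_mm])
    d_l, d_r = p_l - e_l, p_r - e_r
    # Solve e_l + t*d_l == e_r + s*d_r for t and s.
    t, _ = np.linalg.solve(np.column_stack((d_l, -d_r)), e_r - e_l)
    return e_l + t * d_l

print(perceived_point(ipd_mm=63, iod_mm=63, parallax_mm=10, plane_dist_mm=2000))  # [0., 6300.]
print(perceived_point(ipd_mm=68, iod_mm=63, parallax_mm=10, plane_dist_mm=2000))  # [0., 5440.] -> perceived nearer: depth is distorted
```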
  • an embodiment of the present application provides a display method.
  • the image displayed on the head-mounted display device is determined according to the IPD of the user, so that the user can comfortably and truly feel the 3D scene constructed by the electronic device.
  • For details of this method, reference may be made to the related descriptions in the subsequent embodiments, which will not be repeated here.
  • Fig. 2 exemplarily shows a system 10 provided by an embodiment of the present application.
  • the system 10 can use VR, augmented reality (AR), mixed reality (mixed reality, MR) and other technologies to display images, so that users can feel a 3D scene and provide users with a VR/AR/MR experience.
  • the system 10 may include: an electronic device 100, a head-mounted display device 200, and an input device 300.
  • the head-mounted display device 200 is worn on the head of the user, and the input device 300 is held by the user. It is understandable that the input device 300 is an optional device, that is, the system 10 may not include the input device 300.
  • the electronic device 100 and the head-mounted display device 200 may be connected in a wired or wireless manner.
  • the wired connection may include a wired connection for communication through an interface such as a USB interface and an HDMI interface.
  • the wireless connection may include one or more of wireless connections that communicate through technologies such as Bluetooth, Wi-Fi direct connection (such as Wi-Fi p2p), Wi-Fi softAP, Wi-Fi LAN, and radio frequency.
  • the electronic device 100 and the input device 300 can be wirelessly connected and communicated with each other through short-distance transmission technologies such as Bluetooth (BT), near field communication (NFC), ZigBee, etc., and can also communicate through a USB interface, HDMI interface or custom interface, etc. for wired connection and communication.
  • the electronic device 100 may be a portable terminal device running iOS, Android, Microsoft, or another operating system, such as a mobile phone, a tablet computer, or a laptop computer with a touch-sensitive surface or touch panel, or a non-portable terminal device such as a desktop computer with a touch-sensitive surface or touch panel.
  • the electronic device 100 can run an application program to generate an image for transmission to the head-mounted display device 200 for display.
  • the application program may be, for example, a video application, a game application, a desktop application, and so on.
  • Realizable forms of the head-mounted display device 200 include helmets, glasses, earphones, and other electronic devices that can be worn on a user's head.
  • the head-mounted display device 200 is used to display images, thereby presenting a 3D scene to the user, and bringing a VR/AR/MR experience to the user.
  • the 3D scene may include 3D images, 3D videos, audios, and so on.
  • the implementation form of the input device 300 may be a physical device, such as a physical handle, a mouse, a keyboard, a stylus, or a wristband, or a virtual device, such as a virtual keyboard generated by the electronic device 100 and displayed by the head-mounted display device 200.
  • the input device 300 may be configured with various sensors, such as an acceleration sensor, a gyroscope sensor, a magnetic sensor, and a pressure sensor.
  • the pressure sensor can be arranged under the confirmation button of the input device 300.
  • the confirmation button can be a physical button or a virtual button.
  • the input device 300 is used to collect motion data of the input device 300 and data indicating whether the confirmation key of the input device 300 is pressed.
  • the motion data includes data collected by the sensors of the input device 300, for example, the acceleration of the input device 300 collected by the acceleration sensor, the movement speed of the input device 300 collected by the gyroscope sensor, and the like.
  • the data indicating whether the confirmation button of the input device 300 is pressed includes the pressure value collected by the pressure sensor provided under the confirmation button, the level generated by the input device 300, and the like.
  • For example, a non-zero pressure value collected by the pressure sensor disposed under the confirmation button indicates that the confirmation button of the input device 300 is pressed, and a pressure value of 0 indicates that the confirmation button of the input device 300 has not been pressed.
  • For another example, a high level generated by the input device 300 indicates that the confirmation button of the input device 300 is pressed, and a low level generated by the input device 300 indicates that the confirmation button of the input device 300 is not pressed.
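  • As an illustration of how the electronic device 100 might interpret such data, the following sketch is a minimal, assumed parsing routine; the field names ("pressure", "level") and the threshold are assumptions, not formats defined by this application.

```python
def confirm_button_pressed(sample: dict) -> bool:
    """Decide whether the confirmation button of input device 300 is pressed, from either
    the pressure value reported by the sensor under the button or the level signal
    generated by the device (field names are assumed for this sketch)."""
    if "pressure" in sample:
        return sample["pressure"] > 0.0   # non-zero pressure -> button is pressed
    return sample.get("level", 0) == 1    # high level -> button is pressed

print(confirm_button_pressed({"pressure": 2.3}))  # True
print(confirm_button_pressed({"level": 0}))       # False
```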
  • the input device 300 may send the collected motion data of the input device 300 and the data indicating whether the confirmation button of the input device 300 is pressed to the electronic device 100 for analysis.
  • the electronic device 100 can determine the movement and state of the input device 300 according to the data collected by the input device 300.
  • the movement of the input device 300 may include, but is not limited to: whether it moves, the direction of the movement, the speed of the movement, the distance of the movement, the trajectory of the movement, and so on.
  • the state of the input device 300 may include: whether the confirmation key of the input device 300 is pressed.
  • the electronic device 100 can adjust the image displayed on the head-mounted display device 200 and/or activate corresponding functions according to the movement and/or state of the input device 300, for example, moving a cursor in the image, where the movement track of the cursor is determined by the movement of the input device 300, or, for example, enabling the function of measuring the IPD when the confirmation button of the input device 300 is pressed.
  • the user can trigger the electronic device 100 to perform the corresponding function by inputting a user operation on the input device 300.
  • the user can hold the input device 300 and move it 3 cm to the left, so that the electronic device 100 moves the cursor displayed on the head-mounted display device 200 to the left by 6 cm.
  • the user can move the cursor to any position on the display screen of the head-mounted display device 200 by manipulating the input device 300.
  • the user can press the confirmation button of the input device 300 to cause the electronic device 100 to activate the function corresponding to the control at which the cursor is located.
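  • A minimal sketch of the motion-to-cursor mapping described above is given below; the 2x gain reproduces the "3 cm to the left moves the cursor 6 cm" example, while the pixel pitch and the clamping to the screen bounds are assumptions used only for illustration.

```python
def move_cursor(cursor_xy, handle_delta_cm, screen_px, gain=2.0, px_per_cm=40.0):
    """Map a displacement of input device 300 (in cm) to a cursor displacement on the
    display screen of head-mounted display device 200, clamped to the screen bounds."""
    x = cursor_xy[0] + handle_delta_cm[0] * gain * px_per_cm
    y = cursor_xy[1] + handle_delta_cm[1] * gain * px_per_cm
    x = min(max(x, 0), screen_px[0] - 1)
    y = min(max(y, 0), screen_px[1] - 1)
    return x, y

# Moving the handle 3 cm to the left moves the cursor 6 cm (240 px at 40 px/cm) to the left.
print(move_cursor((960, 540), (-3.0, 0.0), (1920, 1080)))  # -> (720.0, 540.0)
```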
  • the head-mounted display device 200 is used to display images. After the user sees the image displayed by the head-mounted display device 200, the user can indicate, by inputting user operations on the input device 300 or on the head-mounted display device 200, the edge that the user can see on the display screen of the head-mounted display device 200. For the manner in which the user inputs user operations on the head-mounted display device 200, reference may be made to the subsequent related descriptions of the head-mounted display device 200, which will not be repeated here.
  • the head-mounted display device 200 or the input device 300 can send the collected data to the electronic device 100, and the electronic device 100 performs calculations based on the data to determine the edge that the user can see on the display screen of the head-mounted display device 200 and calculates the IPD of the user based on that edge.
  • the electronic device 100 can determine, according to the user's IPD, the image to be displayed on the head-mounted display device 200, and display that image on the display screen of the head-mounted display device 200. In this way, the convergence process of the user when viewing objects in the 3D scene is natural and comfortable, the 3D scene that the user actually feels after convergence is consistent with the 3D scene constructed by the electronic device, the user's wearing comfort is improved, and scene distortion is avoided.
  • For the manner in which the user indicates the edge that he can see on the display screen of the head-mounted display device 200, the manner in which the electronic device 100 calculates the user's IPD, and the manner in which the electronic device 100 determines the image to be displayed on the head-mounted display device 200, reference may be made to the related descriptions in the subsequent embodiments.
  • FIG. 3A shows a schematic structural diagram of the electronic device 100 provided by an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, Antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, sensor module 180, camera 193, display screen 194.
  • the sensor module 180 can include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
  • the processor 110 may include one or more interfaces.
  • Interfaces can include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect earphones and play audio through earphones.
  • the interface can also be used to connect other head-mounted display devices, such as VR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • in other embodiments, the electronic device 100 may also adopt an interface connection manner different from that in the foregoing embodiment, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and so on.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and transforms it into an image visible to the naked eye.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 can implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device 100 answers a call or receives a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
  • the microphone 170C is used to convert sound signals into electrical signals.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
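  • As a reading aid only: the short-message example above dispatches the same touch position to different instructions by comparing the touch force against a pressure threshold. The sketch below illustrates that dispatch; the threshold value, the normalised force scale, and the action names are assumptions for illustration, not values from this document.

```python
# Illustrative only: dispatch a touch on the Messages icon by touch force.
# FIRST_PRESSURE_THRESHOLD and the 0..1 force scale are assumed values.

FIRST_PRESSURE_THRESHOLD = 0.5  # normalised touch force, assumed scale 0..1

def on_message_icon_touch(force: float) -> str:
    """Return the instruction triggered by a touch of the given force."""
    if force < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"        # lighter press: view messages
    return "create_new_short_message"      # firmer press: compose a new message

print(on_message_icon_touch(0.2))  # view_short_message
print(on_message_icon_touch(0.8))  # create_new_short_message
```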
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 around three axes (ie, x, y, and z axes) can be determined by the gyro sensor 180B. The gyro sensor 180B can be used for image stabilization.
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected.
  • the electronic device 100 can measure the distance by infrared or laser.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the internal memory 121 is used to store application programs of one or more applications, and the application programs include instructions.
  • the application program is executed by the processor 110, the electronic device 100 is caused to generate content for presentation to the user.
  • the application may include an application for managing the head-mounted display device 200, a game application, a conference application, a video application, a desktop application, or other applications, and so on.
  • the processor 110 is configured to determine, according to the data collected by the head-mounted display device 200 or the input device 300, the edge that the user can see on the display screen of the head-mounted display device 200.
  • the processor 110 is further configured to calculate the user's IPD according to the edges that the user can see on the display screen of the head-mounted display device 200.
  • the way in which the processor 110 determines the edge that the user can see on the display screen of the head-mounted display device 200, and the way in which the user's IPD is calculated can refer to the description of the subsequent embodiments.
  • the GPU is used to perform mathematical and geometric operations based on data obtained from the processor 110 (for example, data provided by an application program), render images by using computer graphics technology, computer simulation technology, and the like, and determine the image to be displayed on the head-mounted display device 200.
  • the GPU may add correction or pre-distortion to the rendering process of the image to compensate or correct the distortion caused by the optical components of the head-mounted display device 200.
  • the GPU is also used to determine the image to be displayed on the head-mounted display device 200 according to the user's IPD obtained from the processor 110.
  • the manner in which the GPU determines the image displayed on the head-mounted display device 200 can refer to the related description of the subsequent embodiments, which will not be repeated here.
  • the electronic device 100 may send the image processed by the GPU to the head-mounted display device 200 through the mobile communication module 150, the wireless communication module 160, or a wired interface.
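  • To summarize the preceding paragraphs as a reading aid: the electronic device 100 collects data, determines the edges the user can see, computes the IPD, renders accordingly, and sends the result to the head-mounted display device 200. The skeleton below uses hypothetical placeholder functions; none of the names are from this document, and each step stands for logic described above only in prose.

```python
# Hypothetical skeleton of the flow on the electronic device 100; every function
# is a placeholder for a step that the document describes only in prose.

def determine_visible_edges(collected_data):
    """Determine which leftmost/rightmost display positions each eye can see,
    from data reported by the head-mounted display device or the input device."""
    raise NotImplementedError

def compute_ipd(edges, iod):
    """Derive the user's interpupillary distance from the visible edges and the
    fixed distance between the centres of the two optical components (IOD)."""
    raise NotImplementedError

def render_for_ipd(scene, ipd):
    """Render left-eye and right-eye images whose parallax matches the IPD (GPU step)."""
    raise NotImplementedError

def update_headset(scene, collected_data, iod, send_frame):
    """One pass of the pipeline: measure, render, and transmit the frame."""
    edges = determine_visible_edges(collected_data)
    ipd = compute_ipd(edges, iod)
    left_image, right_image = render_for_ipd(scene, ipd)
    send_frame(left_image, right_image)   # e.g. over Wi-Fi, Bluetooth or a wired link
```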
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes a layered Android system as an example to illustrate the software structure of the electronic device 100 by way of example.
  • FIG. 3B is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a screen manager (display manager), an activity manager (activity manager service), an input manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the window manager, the screen manager, and the activity manager may cooperate to generate an image for display on the head-mounted display device 200.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager may also present notifications in the status bar at the top of the system in the form of a chart or scroll bar text, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog window.
  • For example, text information is prompted in the status bar, a prompt tone is sounded, the head-mounted display device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the head-mounted display device 200 may include: a processor 201, a memory 202, a communication module 203, a sensor system 204, a camera 205, a display device 206, and an audio device 207.
  • the above components can be coupled and connected and communicate with each other.
  • the structure shown in FIG. 3C does not constitute a specific limitation on the head-mounted display device 200.
  • the head-mounted display device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the head-mounted display device 200 may also include physical keys such as an on-off key, a volume key, various interfaces, such as a USB interface for supporting the connection between the head-mounted display device 200 and the electronic device 100, and so on.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 201 may include one or more processing units.
  • the processor 201 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals, complete the control of fetching and executing instructions, so that each component performs corresponding functions, such as human-computer interaction, motion tracking/prediction, rendering display, audio processing, etc.
  • the memory 202 stores executable program code used to execute the display method provided in the embodiments of the present application, and the executable program code includes instructions.
  • the memory 202 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.) and so on.
  • the data storage area can store data (such as audio data, etc.) created during the use of the head-mounted display device 200.
  • the memory 202 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 201 executes various functional applications and data processing of the head-mounted display device 200 by running instructions stored in the memory 202 and/or instructions stored in a memory provided in the processor.
  • the communication module 203 may include a wireless communication module.
  • the wireless communication module can provide a wireless communication solution such as WLAN, BT, GNSS, FM, IR, etc. applied to the head-mounted display device 200.
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the communication module 203 may support the head-mounted display device 200 and the electronic device 100 to communicate. It is understandable that, in some embodiments, the head-mounted display device 200 may not include the communication module 203, which is not limited in the embodiment of the present application.
  • the sensor system 204 may include an accelerometer, a compass, a gyroscope, a magnetometer, or other sensors for detecting motion, and so on.
  • the sensor system 204 is used to collect corresponding data, for example, an acceleration sensor collects the acceleration of the head-mounted display device 200, a gyroscope sensor collects the movement speed of the head-mounted display device 200, and so on.
  • the data collected by the sensor system 204 may reflect the head movement of the user wearing the head-mounted display device 200.
  • the sensor system 204 may be an inertial measurement unit (IMU) provided in the head-mounted display device 200.
  • the head-mounted display device 200 may send the data acquired by the sensor system to the electronic device 100 for analysis.
  • the electronic device 100 can determine the movement of the user's head according to the data collected by each sensor, and execute corresponding functions according to the movement of the user's head, such as starting the function of measuring IPD. That is, the user can trigger the electronic device 100 to perform the corresponding function by inputting a head movement operation on the head-mounted display device 200.
  • the movement of the user's head may include: whether to rotate, the direction of rotation, and so on.
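  • As a reading aid only: deciding "whether the head rotates and in which direction" from IMU samples can be done by integrating the angular velocity around the vertical axis and comparing it with a threshold. The sketch below illustrates this; the 20-degree threshold, the sign convention (positive meaning a turn to the right), and the sample format are assumptions for illustration, not values from this document.

```python
# Illustrative head-turn detector: integrate yaw angular velocity from the IMU
# and report a left/right turn once the accumulated angle passes a threshold.
import math

TURN_THRESHOLD_DEG = 20.0   # assumed threshold, not specified in the document

def detect_head_turn(yaw_rate_samples, dt):
    """yaw_rate_samples: angular velocity around the vertical axis in rad/s,
    one sample every dt seconds. Returns 'left', 'right' or None."""
    angle_deg = math.degrees(sum(w * dt for w in yaw_rate_samples))
    if angle_deg >= TURN_THRESHOLD_DEG:
        return "right"
    if angle_deg <= -TURN_THRESHOLD_DEG:
        return "left"
    return None

samples = [0.6] * 15                       # ~0.6 rad/s held for 15 samples
print(detect_head_turn(samples, 0.05))     # 0.45 rad ~ 25.8 degrees -> 'right'
```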
  • the sensor system 204 may also include an optical sensor, which is used in conjunction with the camera 205 to track the user's eye position and capture eye movement data.
  • the eye movement data can be used, for example, to determine the distance between the user's eyes, the 3D position of each eye relative to the head-mounted display device 200, the magnitude of each eye's torsion and rotation (that is, turning, pitching, and rolling), the gaze direction, and so on.
  • infrared light is emitted in the head-mounted display device 200 and reflected from each eye, the reflected light is detected by the camera 205 or an optical sensor, and the detected data is transmitted to the electronic device 100 to The electronic device 100 is enabled to analyze the position, pupil diameter, movement state, etc. of the user's eyes from the changes in the infrared light reflected by each eye.
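  • As a reading aid only: once the pupil of each eye has been located in the eye images, the separation between the eyes can be estimated. The sketch below assumes the pupil pixels have already been segmented, that both pupil centres are expressed in one common image coordinate frame, and that a millimetre-per-pixel calibration factor is known; in a real headset each eye usually has its own camera, so an extrinsic calibration step would be needed first. None of these names or values come from this document.

```python
# Reduced sketch: estimate the distance between the eyes from pupil centres.

def pupil_centre(pupil_pixels):
    """Average the coordinates of pixels classified as 'pupil' in one eye image."""
    xs = [p[0] for p in pupil_pixels]
    ys = [p[1] for p in pupil_pixels]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def eye_separation_mm(left_pupil_px, right_pupil_px, mm_per_px):
    """Horizontal distance between the two pupil centres, converted to millimetres.
    Assumes both centres share one coordinate frame and a known pixel scale."""
    return abs(right_pupil_px[0] - left_pupil_px[0]) * mm_per_px

left = pupil_centre([(100, 200), (102, 201), (101, 199)])
right = pupil_centre([(740, 201), (742, 200), (741, 202)])
print(eye_separation_mm(left, right, mm_per_px=0.1))  # ~64.0 mm with the assumed scale
```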
  • the camera 205 can be used to capture still images or videos.
  • the still image or video may be an outward-facing image or video of the user's surroundings, or an inward-facing image or video.
  • the camera 205 can track the movement of the user's single eye or both eyes.
  • the camera 205 includes, but is not limited to, a traditional color camera (RGB camera), a depth camera (RGB depth camera), a dynamic vision sensor (DVS) camera, and the like.
  • the depth camera can obtain the depth information of the subject.
  • the camera 205 can be used to capture images of the user's eyes and send the images to the electronic device 100 for analysis.
  • the electronic device 100 can determine the state of the user's eyes according to the image collected by the camera 205, and perform corresponding functions according to the state of the user's eyes. That is, the user can trigger the electronic device 100 to perform the corresponding function by inputting an eye movement operation on the head-mounted display device 200.
  • the state of the user's eyes may include: whether the eyes rotate, the direction of rotation, whether the eyes have not rotated for a long time, the angle at which the eyes look outward, and so on.
  • the head-mounted display device 200 uses a GPU, a display device 206, and an application processor to present or display images.
  • the GPU is an image processing microprocessor, which is connected to the display device 206 and the application processor.
  • the processor 201 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display device 206 may include: one or more display screens, and one or more optical components.
  • the one or more display screens include the display screen 101 and the display screen 103.
  • the one or more optical components include the optical component 102 and the optical component 104.
  • the structure of the display screen 101, the display screen 103, the optical assembly 102, and the optical assembly 104 and the positional relationship between them can refer to the related description in FIG. 1.
  • the labels of the various devices in the display device 206 of the head-mounted display device 200 follow the labels in FIG. 1, that is, the head-mounted display device 200 includes a display screen 101, a display screen 103, Optical component 102 and optical component 104.
  • the display screens in the head-mounted display device 200 are used to receive data or content processed by the GPU of the electronic device 100 (for example, a rendered image) and display it.
  • the head-mounted display device 200 may be a terminal device with limited computing power, such as VR glasses, which needs to cooperate with the electronic device 100 to present a 3D scene to the user and provide the user with a VR/AR/MR experience.
  • the images displayed on the display screen 101 and the display screen 103 have parallax, thereby simulating binocular vision, so that the user can feel the depth of the object corresponding to the image, thereby generating a real 3D feeling.
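  • As a reading aid only: the parallax mentioned above arises from rendering the same scene from two virtual camera positions separated horizontally, typically by the IPD used for rendering. The sketch below assumes a simple pinhole projection and an illustrative focal length; none of the names or values come from this document. It also shows why an incorrect assumed IPD changes the disparity and hence the perceived depth.

```python
# Illustrative stereo projection of one 3D point for the left and right eye,
# with the two virtual cameras separated horizontally by the rendering IPD.
# A pinhole model is assumed; real rendering also applies distortion correction.

def project(point_xyz, eye_offset_x, focal_px):
    """Project a point in head coordinates (metres) for one eye.
    eye_offset_x shifts the camera left (-) or right (+) of the head centre."""
    x, y, z = point_xyz
    x_cam = x - eye_offset_x          # express the point in this eye's camera frame
    return (focal_px * x_cam / z, focal_px * y / z)

def stereo_pair(point_xyz, ipd_m, focal_px=1000.0):
    half = ipd_m / 2.0
    left = project(point_xyz, -half, focal_px)
    right = project(point_xyz, +half, focal_px)
    return left, right

# A point 2 m straight ahead: the horizontal disparity scales with the IPD used.
print(stereo_pair((0.0, 0.0, 2.0), ipd_m=0.063))
# -> ((15.75, 0.0), (-15.75, 0.0)); a wrong IPD gives a wrong disparity and depth cue.
```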
  • the display screens may include a display panel, and the display panel may be used to display images, thereby presenting a three-dimensional virtual scene to the user.
  • the display panel may adopt an LCD, an OLED, an AMOLED, an FLED, a Miniled, a MicroLed, a Micro-oLed, a QLED, etc.
  • Optical components, such as the optical component 102 and the optical component 104, are used to guide the light from the display screen to the exit pupil for the user to perceive.
  • an optical component may include one or more optical elements (e.g., lenses).
  • the optical assembly may have one or more coatings, such as anti-reflective coatings.
  • the magnification of the image light by the optical components allows the display screen to be physically smaller, lighter, and consume less power.
  • the enlargement of the image light can increase the field of view of the content displayed on the display screen.
  • the optical component can make the field of view of the content displayed on the display screen the entire field of view of the user.
  • the optical assembly can also be used to correct one or more optical errors.
  • optical errors include barrel distortion, pincushion distortion, longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, coma aberration, field curvature, astigmatism, and the like.
  • the content provided to the display screen for display is pre-distorted, and the distortion is corrected by the optical component when the image light generated based on the content is received from the display screen.
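  • As a reading aid only: pre-distortion of the kind described above is often modelled as a radial (barrel/pincushion) remapping of normalised image coordinates. The sketch below illustrates one such remapping; the polynomial form and the coefficients are illustrative assumptions, and a real device would use values matched to its own lenses so that the lens distortion and the pre-distortion cancel.

```python
# Illustrative radial pre-distortion of normalised image coordinates (centre = 0, 0).
# k1 and k2 are placeholder coefficients, not values from this document.

def predistort(x, y, k1=-0.25, k2=0.05):
    """Scale a point radially: r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * scale, y * scale)

print(predistort(0.5, 0.0))   # point halfway to the right edge, pulled inward
print(predistort(0.9, 0.9))   # corner point, displaced more strongly
```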
  • the audio device 207 is used to implement audio collection and output.
  • the audio device 207 may include, but is not limited to: a microphone, a speaker, a headset, and so on.
  • based on the electronic device 100 described in the embodiments of FIG. 3A and FIG. 3B and the head-mounted display device 200 described in the embodiment of FIG. 3C, the following takes the cooperation of the electronic device 100 and the head-mounted display device 200 in providing a 3D scene as an example to describe in detail the display method provided in the embodiments of the present application.
  • the electronic device 100 may determine the IPD of the user when the user wears the head-mounted display device 200, determine the image to be displayed on the head-mounted display device 200 according to the IPD of the user, and display the image on the head-mounted display device 200.
  • This display method takes the user's IPD into account when displaying images on the head-mounted display device 200, which makes the user's convergence process natural and comfortable when viewing objects in the 3D scene, and the 3D scene actually perceived by the user after convergence is consistent with the 3D scene constructed by the electronic device, improving the user's wearing comfort and avoiding scene distortion. Subsequent embodiments will describe the display method in detail through the following three parts (1), (2), and (3).
  • (1) The electronic device 100 obtains the user's IPD
  • the electronic device 100 can calculate the user's IPD based on the leftmost edge and the rightmost edge that the user can see on the display screen of the head-mounted display device. This is because, in the head-mounted display device 200, the relative positions of the display screen, the optical assembly, and the lens barrel are fixed, and the user's IPD is the main factor affecting the leftmost edge and the rightmost edge that the user can see on the display screen. Therefore, the electronic device 100 can obtain the user's IPD according to the leftmost edge and the rightmost edge that the user can see on the display screen of the head-mounted display device 200.
  • FIG. 4A exemplarily shows a diagram of the positional relationship between the eyes and the optical component 102 and the optical component 104 when a user with an IPD less than IOD wears the head-mounted display device 200.
  • FIG. 4B exemplarily shows a diagram of the positional relationship between the eyes and the optical component 102 and the optical component 104 when a user with an IPD greater than IOD wears the head-mounted display device 200.
  • the offset of the user's left eye relative to the first straight line may be referred to as ⁇ i1.
  • the ⁇ i1 is a signed value.
  • the distance between the left eye of the user and the first straight line is equal to the absolute value of the ⁇ i1.
  • the following settings can be made: when the Δi1 is a positive value, it means that the user's left eye is offset to the right relative to the first straight line; when the Δi1 is a negative value, it means that the user's left eye is offset to the left relative to the first straight line.
  • other settings can also be made, which is not limited in the embodiments of the present application.
  • the offset of the user's right eye relative to the second straight line may be referred to as ⁇ i2.
  • the ⁇ i2 is a signed value.
  • the distance between the user's right eye and the second straight line is equal to the absolute value of the ⁇ i2.
  • the following settings can be made: when the Δi2 is a positive value, it means that the user's right eye is offset to the right with respect to the second straight line; when the Δi2 is a negative value, it means that the user's right eye is offset to the left with respect to the second straight line.
  • other settings can also be made, which is not limited in the embodiments of the present application.
  • the electronic device 100 can calculate the user's IPD according to formula 1.
  • the IPD is the actual interpupillary distance of the user currently wearing the head-mounted display device 200.
  • the IOD is the distance between the center of the optical component 102 and the center of the optical component 104.
  • the Δi1 is associated with the leftmost edge and the rightmost edge that the user's left eye can see on the display screen 101; that is, the value of Δi1 can be determined by the leftmost edge and the rightmost edge that the user's left eye can see on the display screen 101.
  • the Δi2 is associated with the leftmost edge and the rightmost edge that the user's right eye can see on the display screen 103; that is, the value of Δi2 can be determined by the leftmost edge and the rightmost edge that the user's right eye can see on the display screen 103.
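  • Formula 1 itself is not reproduced in this excerpt. As a reading aid, the sign conventions given above (Δi1 and Δi2 are rightward-positive offsets of the left eye and the right eye from the centres of the optical component 102 and the optical component 104, and those two centres are IOD apart) imply the relation below; this is a derivation from those conventions, not a quotation of the patent:

```latex
% Left-eye position  = (centre of optical component 102) + \Delta i_1
% Right-eye position = (centre of optical component 104) + \Delta i_2
% The two centres are IOD apart, so the pupil separation is
\mathrm{IPD} = \mathrm{IOD} - \Delta i_1 + \Delta i_2
```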
  • when the electronic device 100 obtains the user's IPD according to formula 1, the values of the following three parameters need to be determined: the IOD, the Δi1, and the Δi2. The following describes in detail how the electronic device 100 determines the values of these three parameters.
  • the electronic device 100 determines the IOD
  • the IOD is fixed.
  • head-mounted display devices of the same model have the same IOD.
  • the electronic device 100 may obtain the specific value of the IOD in a pre-installed installation package of an application program for managing the head-mounted display device 200.
  • the electronic device 100 may also obtain the IOD of the head-mounted display device 200 from the Internet after it is connected to the head-mounted display device 200 and obtains the model of the head-mounted display device 200. The specific value.
  • the electronic device 100 determines the ⁇ i1
  • since the value of Δi1 affects the leftmost edge and the rightmost edge that the user's left eye can see on the display screen 101, the electronic device 100 can determine the Δi1 according to the leftmost edge and the rightmost edge that the user's left eye can see on the display screen 101.
  • the electronic device 100 acquires a first position and a second position, where the first position is located at the leftmost edge that the user's left eye can see on the display screen 101, and the second position is located at the rightmost edge that the user's left eye can see on the display screen 101.
  • the electronic device 100 may display a user interface through the head-mounted display device 200, and the user interface may be used for the user to indicate the first position and the second position.
  • the electronic device 100 may obtain the first position and the second position according to a user's instruction.
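  • As a reading aid only: the first position and the second position bound the part of the display screen 101 that the left eye can see, and Δi1 is later derived from them. The sketch below shows one simplified way of doing so, under the assumption that the visible span is symmetric about the eye's axis so that its midpoint, relative to the centre of the optical component 102, tracks the eye offset up to a scale factor. Both the symmetry assumption and the scale factor are illustrative and are not taken from this document, which gives the exact relation elsewhere.

```python
# Illustrative estimate of the left-eye offset delta_i1 (mm, rightward positive).
# ASSUMPTION: the midpoint of the visible span shifts with the eye by a constant
# optical scale factor; the document's own derivation is not reproduced here.

def estimate_delta_i1(first_pos_mm, second_pos_mm, optical_centre_mm, scale=1.0):
    """first_pos_mm / second_pos_mm: leftmost and rightmost visible positions on
    display screen 101, measured along the horizontal axis of the screen.
    optical_centre_mm: horizontal position of the centre of optical component 102."""
    visible_midpoint = (first_pos_mm + second_pos_mm) / 2.0
    return scale * (visible_midpoint - optical_centre_mm)

# Example: the midpoint of the visible span sits 1.5 mm right of the optical centre.
print(estimate_delta_i1(10.0, 53.0, 30.0))   # -> 1.5
```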
  • the following takes the user interface shown in FIGS. 5A-5C as an example to describe in detail how the electronic device 100 obtains the first position and the second position.
  • FIG. 5A exemplarily shows the user interface 51 and the user interface 52 displayed on the head-mounted display device 200.
  • the user interface 51 and the user interface 52 are generated by the electronic device 100 and transmitted to the head-mounted display device 200.
  • the user interface 51 may be displayed on the display screen 101
  • the user interface 52 may be displayed on the display screen 103.
  • the electronic device 100 may construct a 3D scene according to pre-stored 3D scene information.
  • the 3D scene information describes some information of the 3D scene that the user feels. That is to say, the 3D scene information indicates the objects that the user can see when the user is in the 3D scene, and the relative positions between the objects and the user.
  • the electronic device 100 can simulate or assume that the user is naturally in the constructed 3D scene, obtain the image seen by the user's left eye and display it on the display screen 101, so that the user interface 51 is displayed on the display screen 101; and obtain the image seen by the user's right eye and display it on the display screen 103, so that the user interface 52 is displayed on the display screen 103.
  • the image of the user interface 51 displayed on the display screen 101 and the image of the user interface 52 displayed on the display screen 103 have parallax.
  • both the user interface 51 and the user interface 52 display: a prompt box 501, a control 502, a control 503, and a cursor 504.
  • the cursor 504 is located on a certain position of the user interface 51.
  • the user can adjust the position of the cursor 504 in the user interface 51 through the input device 300, that is, the user can move the cursor 504 through the input device 300.
  • the implementation form of the cursor 504 may include an arrow, a circle, or other icons.
  • the cursor 504 shown in FIG. 5A is located on the control 503.
  • the prompt box 501 is used to display prompt information.
  • the prompt information can be used to prompt the user.
  • for example, the prompt information may be the text "Do you want to measure the interpupillary distance? Measuring the interpupillary distance can present you with a better visual effect!", so as to prompt the user about measuring the IPD and the effect obtained after the IPD is measured.
  • the control 502 is used to give up starting the function of measuring the IPD of the electronic device 100.
  • the electronic device 100 does not measure the user's IPD in response to the user's interaction with the control 502.
  • the control 503 is used to activate the function of measuring the IPD of the electronic device 100.
  • the electronic device 100 may start to measure the user's IPD in response to the user's interaction with the control 503.
  • after the electronic device 100 displays the user interface 51 and the user interface 52 shown in FIG. 5A through the head-mounted display device 200, it may activate the function of measuring the IPD of the electronic device 100 in response to a user operation for starting the function of measuring the IPD of the electronic device 100.
  • the user operation for starting the function of measuring the IPD of the electronic device 100 is input by the user according to the user interface 51 and the user interface 52.
  • the user operation for starting the function of measuring the IPD of the electronic device 100 may be detected by the input device 300: a user operation in which the confirmation key of the input device 300 is pressed after the input device 300 moves. Starting from the starting point of the cursor 504 in the user interface 51 and the user interface 52, the movement of the input device 300 causes the end point of the cursor 504, after it moves in the user interface 51 and the user interface 52, to coincide with the position of the control 503. In other words, the cursor 504 is located on the control 503 when the movement of the input device 300 ends. In response to the movement of the input device 300, the electronic device 100 moves the cursor 504 onto the control 503, and the movement track of the cursor 504 moving onto the control 503 is determined by the movement track of the input device 300 when the movement occurs.
  • the input device 300 can collect specific data (for example, acceleration collected by an acceleration sensor, movement speed and movement direction collected by a gyroscope sensor), and can send the specific data to the electronic device 100,
  • the specific data indicates that after the movement of the input device 300, the confirmation button of the input device 300 was pressed.
  • the electronic device 100 can start the function of measuring the IPD according to the specific data. That is, the user can manipulate the input device 300 to move so as to trigger the electronic device 100 to move the cursor 504 onto the control 503, and then press the confirmation button of the input device 300 to trigger the electronic device 100 to start the function of measuring the IPD.
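  • As a reading aid only: the decision just described ("a confirm press following a movement, with the cursor on the control 503, starts the IPD measurement") can be sketched as below. The event format, the hit-test rectangle for the control 503, and the function names are assumptions for illustration, not from this document.

```python
# Illustrative decision logic on the electronic device 100.
CONTROL_503_RECT = (800, 600, 1120, 680)   # (left, top, right, bottom), assumed layout

def cursor_on_control(cursor_xy, rect=CONTROL_503_RECT):
    x, y = cursor_xy
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def should_start_ipd_measurement(events, cursor_xy):
    """events: chronological list such as ['move', 'move', 'confirm_pressed'].
    True when a confirm press follows at least one movement and the cursor is on control 503."""
    if "confirm_pressed" not in events:
        return False
    press_index = events.index("confirm_pressed")
    moved_before_press = "move" in events[:press_index]
    return moved_before_press and cursor_on_control(cursor_xy)

print(should_start_ipd_measurement(["move", "move", "confirm_pressed"], (950, 640)))  # True
```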
  • the user operation for starting the function of measuring the IPD of the electronic device 100 may be detected by the head-mounted display device 200: an operation of turning the user's head in a specific direction.
  • the specific direction may be the left, right, up, or down of the user.
  • the sensors in the sensor system 204 of the head-mounted display device 200 can collect specific data (for example, the movement speed and the movement direction collected by the gyroscope sensor), and send the specific data to the electronic device 100.
  • the specific data indicates that the user's head rotates in a specific direction.
  • the electronic device 100 can start the function of measuring IPD according to the specific data. In other words, the user can trigger the electronic device 100 to start the function of measuring IPD by turning the head in a specific direction.
  • the user operation for starting the function of measuring the IPD of the electronic device 100 may be a voice command detected by the head-mounted display device 200.
  • the voice command may be "start measurement", for example.
  • the microphone of the head-mounted display device 200 can collect voice data input by the user and send the voice data to the electronic device 100, and the voice data indicates the voice instruction.
  • the electronic device 100 can start the function of measuring IPD according to the voice data. In other words, the user can trigger the electronic device 100 to start the function of measuring IPD by speaking a voice command.
  • the user operation for starting the function of measuring the IPD of the electronic device 100 may be detected by the head-mounted display device 200: a user operation in which the user's left eye looks at the control 503 and does not rotate within a preset time period.
  • the camera of the head-mounted display device 200 can collect a specific image of the user's eyeball, and send the specific image to the electronic device 100.
  • the specific image indicates that the user's left eye is looking at the control 503 and does not rotate within the preset time period.
  • the electronic device 100 can activate the function of measuring IPD according to the specific image. That is, the user can trigger the electronic device 100 to start the function of measuring IPD by looking at the control 503 for a long time.
  • the user operation for starting the function of measuring the IPD of the electronic device 100 may also be in other forms.
  • the user operation for starting the function of measuring the IPD of the electronic device 100 may also be detected by the head-mounted display device 200: an operation of blinking twice while the user's left eye looks at the control 503, and so on.
  • other forms of user operations for starting the function of measuring the IPD of the electronic device 100 are not listed one by one.
  • the electronic device 100 may start the function of measuring the user's IPD and start measuring the user's IPD.
  • FIG. 5B exemplarily shows the user interface 53 and the user interface 54 displayed on the head-mounted display device 200.
  • the user interface 53 and the user interface 54 may be generated and transmitted by the electronic device 100 to the head-mounted display device 200 when the electronic device 100 starts the function of measuring the user's IPD, that is, the The user interface 53 and the user interface 54 may be generated by the electronic device 100 in response to a user operation for starting the function of measuring the IPD of the electronic device 100 and transmitted to the head-mounted display device 200.
  • the manner in which the electronic device 100 generates the user interface 53 and the user interface 54 may refer to the manner in which the electronic device 100 generates the user interface 51 and the user interface 52, which will not be repeated here.
  • the user interface 53 is displayed on the display screen 101, and the user interface 54 is displayed on the display screen 103.
  • in some embodiments, when the user interface 53 is displayed on the display screen 101, the display screen 103 may be black (that is, the head-mounted display device 200 stops supplying power to the display screen 103) or may display a black image. This helps the user to focus on the left eye.
  • the user interface 53 displays: a cursor 504, an image 505, and a prompt box 506. Among them:
  • the cursor 504 is located at a certain position in the user interface 53.
  • the cursor 504 may refer to the cursor 504 in the user interface 51 and the user interface 52 shown in FIG. 5A, which will not be repeated here.
  • the prompt box 506 is used to display prompt information.
  • the prompt information can be used to prompt the user.
  • for example, the prompt message may be the text "Please look to the left as far as possible with your left eye, drag the slider to the leftmost position you can see, and confirm", so as to prompt the user to indicate the first position.
  • the first prompt information may be the prompt information. It is not limited to the prompt information displayed in the user interface 53, and the first prompt information may also be voice or other types of prompt information output by the electronic device 100, which is not limited in the embodiment of the present application.
  • the embodiment of the present application does not limit the content of the image 505.
  • the image 505 may be, for example, an image of a ruler with a slider.
  • in some embodiments, the ruler with a slider is parallel to the third straight line and passes through the midpoint of the display screen 101, that is, the ruler with a slider is located on the fourth straight line.
  • in the following, the image 505 is described by taking an image of a ruler with a slider as an example.
  • the electronic device 100 may obtain the first position in response to a user operation for indicating the first position after displaying the user interface as shown in FIG. 5B.
  • the user operation for indicating the first position is input by the user according to the user interface 53 displayed on the display screen 101.
  • the user operation for indicating the first position may be detected by the input device 300: after the input device 300 moves along a first track, the confirmation button of the input device 300 is pressed and, while it is being pressed, the input device 300 moves along a second track, after which the confirmation button of the input device 300 stops being pressed.
  • After the input device 300 moves along the first trajectory, the end point of the cursor 504 after moving in the user interface 53 coincides with the position of the image of the slider; after the input device 300 moves along the second trajectory, the end point of the cursor 504 after moving in the user interface 53 is the position where the image of the slider stops moving.
  • The input device 300 can collect specific data (for example, acceleration collected by an acceleration sensor, and movement speed and movement direction collected by a gyroscope sensor) and send the specific data to the electronic device 100. The specific data indicates that the input device 300 moved along the first trajectory, that the confirmation button of the input device 300 was then pressed while the input device 300 moved along the second trajectory, and that the confirmation button then stopped being pressed.
  • In response to the specific data, the electronic device 100 moves the cursor 504 to the image of the slider, then moves the cursor 504 together with the image of the slider, and determines the position where the image of the slider stops moving as the first position. That is, the user can manipulate the input device 300 to move along the first trajectory to trigger the electronic device 100 to move the cursor 504 onto the image of the slider, then press the confirmation button of the input device 300 while manipulating the input device 300 to move along the second trajectory, and then stop pressing the confirmation button to indicate the first position.
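  • The following is a minimal, non-authoritative sketch of how the interaction just described could be handled in code. It is not taken from this application; the names MotionSample, Slider and track_first_position are hypothetical, and the motion samples are assumed to already be converted into horizontal displacements of the cursor on the display screen 101.

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    dx: float           # horizontal cursor displacement derived from the IMU data
    confirm_down: bool  # current state of the confirmation button

@dataclass
class Slider:
    x: float            # x coordinate of the slider image in the user interface 53

def track_first_position(samples, cursor_x, slider):
    """Return the x coordinate indicated by the user, or None if never confirmed."""
    dragging = False
    for s in samples:
        if not s.confirm_down and not dragging:
            # First trajectory: only the cursor moves, toward the slider image.
            cursor_x += s.dx
        elif s.confirm_down:
            # Second trajectory: the cursor and the slider image move together.
            dragging = True
            cursor_x += s.dx
            slider.x += s.dx
        else:
            # Confirmation button released: where the slider stopped is the first position.
            return slider.x
    return None
```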
  • the user operation for indicating the first position may be detected by the head-mounted display device 200: a user operation in which the left eye of the user does not rotate within a preset period of time.
  • The camera of the head-mounted display device 200 can collect a specific image of the user's eyeball and send the specific image to the electronic device 100. The specific image indicates that the user's left eye does not rotate within a preset period of time.
  • The electronic device 100 may determine, according to the specific image, the position on the display screen 101 at which the user's left eye gazes while not rotating within the preset period of time as the first position. That is, the user can indicate a certain position on the display screen 101 as the first position by gazing at that position for a long time.
  • Alternatively, after acquiring the specific image, the head-mounted display device 200 may itself determine, according to the specific image, the position on the display screen 101 at which the user's left eye gazes while not rotating within the preset period of time as the first position. After that, the head-mounted display device 200 may send the determined first position to the electronic device 100, so that the electronic device 100 obtains the first position.
  • the image of the eyeball of the user collected by the head-mounted display device 200 is used to obtain the user's operation data.
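  • As an illustration of this gaze-based alternative, the sketch below detects that the left eye has not rotated for a preset period of time from a stream of gaze estimates. It is an assumption-laden example, not the patent's algorithm: gaze_points is assumed to be a time-ordered list of (timestamp_seconds, x, y) positions on the display screen 101 derived from the collected eyeball images.

```python
import math

def detect_dwell(gaze_points, dwell_seconds=2.0, max_step_px=5.0):
    """Return (x, y) once the gaze has stayed nearly still for dwell_seconds, else None."""
    still_since = None
    for i in range(1, len(gaze_points)):
        t, x, y = gaze_points[i]
        _, px, py = gaze_points[i - 1]
        if math.hypot(x - px, y - py) <= max_step_px:
            if still_since is None:
                still_since = gaze_points[i - 1][0]   # start of the still period
            if t - still_since >= dwell_seconds:
                return (x, y)                         # candidate first position
        else:
            still_since = None                        # the eye rotated; start over
    return None
```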
  • the user operation for indicating the first position may also be in other forms.
  • This embodiment of the present application does not list other forms of user operations for indicating the first position one by one.
  • the electronic device 100 may also obtain the second position in the same manner as that shown in the above-mentioned embodiment in FIG. 5B to FIG. 5C.
  • The prompt information of the prompt box in the user interface 53 displayed by the head-mounted display device 200 can be changed to the text "please look to the right with your left eye as much as possible, drag the slider to the right edge position where you can see and confirm".
  • The head-mounted display device 200 or the input device 300 may detect the user operation for indicating the second position and send the specific data or the specific image indicating that user operation to the electronic device 100; the electronic device 100 may obtain the second position according to the specific data or the specific image.
  • the user operation for indicating the second position is input by the user according to the user interface 53 displayed on the display screen 101.
  • the user operation for indicating the second position is similar to the user operation for indicating the first position, and reference may be made to the user operation for indicating the first position, which will not be described in detail here.
  • The second prompt information may be the prompt information displayed in the user interface 53 for prompting the user to indicate the second position, for example, the above text "please look to the right with your left eye as much as possible, drag the slider to the right edge position where you can see and confirm". It is not limited to this; the second prompt information may also be voice or another type of prompt information output by the electronic device 100, which is not limited in this embodiment of the present application.
  • the first user interface may be a user interface displayed on the display screen 101 for the user to indicate the first position and the second position.
  • the first user interface may be the user interface 53 as shown in FIG. 5B. It should be noted that the user interface 53 is only an example, and the first user interface may also be implemented in other forms, which is not limited in the embodiment of the present application.
  • the embodiment of the present application does not limit the time sequence in which the electronic device 100 obtains the first position and the second position.
  • The electronic device 100 may first obtain the first position according to the detected user operation for indicating the first position, and then obtain the second position according to the detected user operation for indicating the second position.
  • Alternatively, the electronic device 100 may first obtain the second position according to the detected user operation for indicating the second position, and then obtain the first position according to the detected user operation for indicating the first position.
  • The electronic device 100 may output the user interface 51 and the user interface 52 when the head-mounted display device 200 is turned on for the first time. In this way, the electronic device 100 can obtain the first position and the second position when the head-mounted display device 200 is turned on for the first time, so as to obtain the user's IPD and display images on the head-mounted display device 200 according to the user's IPD. Thus, from the first time the head-mounted display device 200 is turned on, it can be ensured that the user converges comfortably, easily and naturally when wearing the head-mounted display device 200, and that the 3D scene actually perceived is consistent with the scene constructed by the electronic device 100.
  • the electronic device 100 may output the user interface 51 and the user interface 52 periodically.
  • For example, the electronic device 100 may display the user interface 51 and the user interface 52 on the head-mounted display device 200 once a month or once a week, thereby periodically acquiring the user's IPD and displaying images on the head-mounted display device 200 according to the user's IPD. In this way, even if the user's IPD changes, it can be ensured that the user converges comfortably, easily and naturally when wearing the head-mounted display device 200, and that the 3D scene actually perceived is consistent with the scene constructed by the electronic device 100.
  • The electronic device 100 may output the user interface 51 and the user interface 52 according to user requirements. For example, after not using the head-mounted display device 200 for a long time, the user may actively trigger the electronic device 100, in the setting interface displayed by the head-mounted display device 200, to output the user interface 51 and the user interface 52.
  • The electronic device 100 may output the user interface 51 and the user interface 52 when a new user wears the head-mounted display device 200. Specifically, the electronic device 100 may recognize whether the current user is a new user when the user wears the head-mounted display device 200. The electronic device 100 may recognize the user through biological characteristics such as the iris, a fingerprint, a voiceprint or the face, and the biological characteristics may be collected by the head-mounted display device 200 or the electronic device 100. In this way, the electronic device 100 can determine the IPD of each user and adapt the images displayed on the head-mounted display device 200 to the IPD of different users, ensuring that each user converges comfortably, easily and naturally when wearing the head-mounted display device 200, and that the 3D scene each user actually perceives is consistent with the scene constructed by the electronic device 100, which brings a good visual experience to each user.
  • The electronic device 100 may also obtain the first position and the second position in other ways. For example, the head-mounted display device 200 may use a camera to collect eyeball images of the user over a period of time (for example, within a week or a month) during which the user uses the head-mounted display device 200 and send the images to the electronic device 100; the electronic device 100 determines, according to the images, the leftmost edge and the rightmost edge on the display screen 101 that the user's left eye can see during this period of time, so as to obtain the first position and the second position. In this way, the electronic device 100 can determine the Δi1, obtain the user's IPD, and display an image on the head-mounted display device 200 according to the user's IPD. This method is simpler and more convenient for users, and the experience is better.
  • the electronic device 100 determines the ⁇ i1 according to the first position and the second position.
  • the electronic device 100 may calculate the ⁇ i1 according to the geometric relationship when the user wears the head-mounted display device 200.
  • the geometric relationship will be described in detail below, and the calculation formula of ⁇ i1 will be derived.
  • FIG. 6 is a schematic diagram of the geometric relationship, obtainable by the electronic device 100, when the user wears the head-mounted display device 200 in an embodiment of this application. Among them:
  • C' is the position of the left eye when the user wears the head-mounted display device 200.
  • O' is the center of the display screen 101.
  • J is the first position
  • K is the second position.
  • the way to determine J and K can refer to the relevant description in point (1) above.
  • D is the intersection of the left edge of the optical component 102 and the third straight line
  • E is the intersection of the right edge of the optical component 102 and the third straight line.
  • O is the center of the virtual image plane, which is also the imaging point corresponding to O' on the virtual image plane, that is, the intersection point of the first straight line and the virtual image plane.
  • A' and B' are the imaging points corresponding to J and K on the virtual image plane, respectively.
  • C is a point on the first straight line.
  • A and B are the imaging points on the virtual image plane that would correspond to the first position and the second position if the user's left eye were located at point C.
  • A, D, and C are on the same straight line, and B, E, and C are on the same straight line.
  • F is the foot of the perpendicular drawn from D to the virtual image plane.
  • H is the foot of the perpendicular drawn from E to the virtual image plane.
  • G is the foot of the perpendicular drawn from C' to the virtual image plane.
  • the ⁇ i1 calculated according to Formula 2 is a value with a sign. When the value of ⁇ i1 is positive, it means that the user's left eye is shifted to the right relative to the first straight line; when the value of ⁇ i1 is negative, it means that the user's left eye is shifted to the left relative to the first straight line. Offset. The offset distance of the user's left eye relative to the first straight line is the absolute value of the ⁇ i1.
  • In some cases, the geometric relationship shown in FIG. 6 does not strictly hold. Even so, it can be assumed that the geometric relationship shown in FIG. 6 holds, and the Δi1 can still be calculated according to Formula 2. The calculated Δi1, and the offset of the user's left eye relative to the first straight line that it represents, then have a slight error, but the user can indicate the first position and the second position in a richer form. For example, when the electronic device 100 displays the user interface 53 through the display screen 101 of the head-mounted display device 200, the ruler in the user interface 53 may be parallel to the third straight line but deviate from the midpoint of the display screen 101; the user can still indicate the first position and the second position according to this user interface 53.
  • When the electronic device 100 calculates the Δi1 according to Formula 2, the values of the following parameters need to be determined: M, L, JO' and KO'. The following describes in detail how the electronic device 100 determines the values of these parameters.
  • M is the magnification of the optical component 102.
  • The M value of some head-mounted display devices is fixed and is the ratio of the virtual image height to the real image height.
  • In this case, the electronic device 100 may obtain the value of M from the installation package of a pre-installed application program used to manage the head-mounted display device 200; alternatively, after obtaining the model of the head-mounted display device 200, the electronic device 100 may obtain the value of M from the Internet according to the model.
  • The M value of some other head-mounted display devices is adjustable. In this case, the electronic device 100 may first obtain the focus adjustment information of the head-mounted display device (for example, the current resistance value of a sliding rheostat), and then calculate the current M value of the head-mounted display device 200 according to the focus adjustment information.
  • L is the diameter of the optical component 102. After the head-mounted display device leaves the factory, L is fixed, and in general, head-mounted display devices of the same model have the same L.
  • the electronic device 100 may obtain the value of L in an installation package of an application program installed in advance for managing the head-mounted display device 200. In other embodiments, the electronic device 100 may also obtain the head-mounted display device 200 from the Internet according to the model after it is connected to the head-mounted display device 200 and obtains the model of the head-mounted display device 200. The value of L of 200.
  • JO' is the distance from the first position to the center of the display screen 101 when the user wears the head-mounted display device 200.
  • The electronic device 100 may calculate this value according to the first position. For the determination method of the first position, reference may be made to the related description of point (1) above. In a specific embodiment, the electronic device 100 can calculate the number of pixels from the first position to the center of the display screen 101, and then multiply the number of pixels by the size of each pixel to obtain the value of JO'.
  • KO' is the distance from the second position to the center of the display screen 101 when the user wears the head-mounted display device 200.
  • the electronic device 100 may calculate the value according to the determined second position.
  • the method for determining the second position can refer to the related description of the first point above.
  • Similarly, the electronic device 100 can calculate the number of pixels from the second position to the center of the display screen 101, and then multiply the number of pixels by the size of each pixel to obtain the value of KO'.
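  • To make the parameter handling concrete, the sketch below computes JO' and KO' from pixel counts and then a left-eye offset from them. Formula 2 itself appears as an image in the original publication and is not reproduced in this text, so the closed-form expression used here is only an illustrative reconstruction derived from the geometry described for FIG. 6 (sight lines from the eye through the lens edges D and E, lens diameter L, and magnification M about the first straight line); it should not be read as the patent's exact formula. The pixel counts, pixel size, M and L below are hypothetical example values.

```python
def distance_to_center_mm(pixel_count, pixel_size_mm):
    """JO' or KO': pixels from the indicated position to the screen center, times pixel size."""
    return pixel_count * pixel_size_mm

def left_eye_offset_mm(jo_mm, ko_mm, m, l_mm):
    """Signed offset of the eye relative to the first straight line, in mm.
    Positive: shifted right; negative: shifted left (matching the sign convention above)."""
    # On the virtual image plane, J and K image at -M*JO' and +M*KO'.
    # Requiring the sight lines through the two lens edges (diameter L) to reach
    # those two points, and eliminating the eye-to-lens and image distances, gives:
    return (l_mm * m * (ko_mm - jo_mm)) / (2.0 * (l_mm - m * (jo_mm + ko_mm)))

jo = distance_to_center_mm(1000, 0.01)   # JO' = 10.0 mm (hypothetical)
ko = distance_to_center_mm(900, 0.01)    # KO' =  9.0 mm (hypothetical)
print(round(left_eye_offset_mm(jo, ko, m=10.0, l_mm=40.0), 3))  # 1.333 -> eye shifted right
```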
  • the electronic device 100 determines the ⁇ i2
  • ⁇ i2 Since the value of ⁇ i2 affects the leftmost edge and the rightmost edge that the user's right eye can see on the display screen 103, the electronic device 100 can see according to the user's right eye on the display screen 103. To the leftmost edge and the rightmost edge to determine the ⁇ i2.
  • the electronic device 100 may acquire the third position and the fourth position, and the third position is located at the leftmost edge that the user's right eye can see on the display screen 103, and the The fourth position is located at the rightmost edge that the user's right eye can see on the display screen 103.
  • the manner in which the electronic device 100 obtains the third position and the fourth position is similar to the manner in which the electronic device 100 obtains the first position and the second position in point (1) above, and may Refer to the related description above, so I won't repeat it here.
  • the electronic device 100 may display a user interface for the user to indicate the third position and the fourth position through the head-mounted display device 200.
  • For example, the electronic device 100 may display the user interface 54 shown in FIG. 5B through the display screen 101 of the head-mounted display device 200, and display the user interface 53 shown in FIG. 5B through the display screen 103.
  • the electronic device 100 may detect the user operation for indicating the third position, and obtain the third position in response to the user operation for indicating the third position; and may also detect A user operation indicating the fourth position, and the fourth position is acquired in response to the user operation indicating the fourth position.
  • the user operation for indicating the third position and the user operation for indicating the fourth position are input by the user according to the user interface 53 displayed on the display screen 103.
  • For the user operation for indicating the third position and the user operation for indicating the fourth position, reference may be made to the descriptions of the user operation for indicating the first position and the user operation for indicating the second position.
  • the user interface displayed by the electronic device 100 through the display screen 101 may not be the user interface 53, which is not limited in the embodiment of the present application.
  • the second user interface may be a user interface displayed by the electronic device 100 through the display screen 103 for the user to indicate the third position and the fourth position.
  • the embodiment of the present application does not limit the time sequence for the electronic device 100 to display the first user interface through the display screen 101 and to display the second user interface through the display screen 103. It can be in order or at the same time.
  • the embodiment of the present application does not limit the time sequence for the electronic device 100 to confirm the first position, the second position, the third position, and the fourth position. It can be in order or at the same time.
  • the manner in which the electronic device 100 determines the ⁇ i2 according to the third position and the fourth position, and the electronic device 100 in the aforementioned point (1) determines according to the first position and the second position The manner of the ⁇ i1 is similar, and reference may be made to the related description above, which will not be repeated here.
  • the electronic device 100 may calculate the ⁇ i2 according to a geometric relationship similar to that in FIG. 6.
  • the electronic device 100 can determine the specific values of the three parameters in Formula 1, and can obtain the user's IPD according to Formula 1.
  • In some embodiments, the user's left eye and right eye are symmetrical; that is, the perpendicular bisector of the line connecting the user's left eye and right eye is also the perpendicular bisector of the line connecting the center of the optical component 102 and the center of the optical component 104.
  • the ⁇ i1 and the ⁇ i2 are equal in magnitude and opposite in sign, and the electronic device 100 can calculate the user's IPD through the following formula 3 or formula 4:
  • the electronic device 100 can obtain the user's IPD after determining the ⁇ i1 or the ⁇ i2, which reduces the calculation process of the electronic device 100.
  • user operations can also be reduced, which is simpler and more convenient for users, and can improve user experience.
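  • The sketch below shows how the IPD could then be computed. Formula 1 of this application relates the IPD to the IOD and the two offsets; the expressions below assume the form IPD = IOD - Δi1 + Δi2 for Formula 1, and the two single-offset expressions are only the assumed reading of Formulas 3 and 4 for the symmetric case (Δi2 = -Δi1), since those formulas are not reproduced in this text. The 63 mm value follows the example given below for the distance between the centers of the optical component 102 and the optical component 104.

```python
def ipd_general(iod_mm, delta_i1_mm, delta_i2_mm):
    # Assumed form of Formula 1: both eye offsets have been measured.
    return iod_mm - delta_i1_mm + delta_i2_mm

def ipd_from_left_offset(iod_mm, delta_i1_mm):
    # Assumed reading of Formula 3: only Δi1 is measured and Δi2 = -Δi1.
    return iod_mm - 2.0 * delta_i1_mm

def ipd_from_right_offset(iod_mm, delta_i2_mm):
    # Assumed reading of Formula 4: only Δi2 is measured and Δi1 = -Δi2.
    return iod_mm + 2.0 * delta_i2_mm

print(ipd_general(63.0, 1.3, -1.3))       # 60.4 mm
print(ipd_from_left_offset(63.0, 1.3))    # 60.4 mm
```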
  • the electronic device 100 stores the user's IPD
  • The electronic device 100 may store the IPDs of one or more users. In this way, when different users wear the head-mounted display device 200, the electronic device 100 can determine the image to be displayed on the head-mounted display device 200 according to the IPD of the current user and display that image on the head-mounted display device 200, so that the convergence process of the user when viewing objects in the 3D scene is natural and comfortable, and the 3D scene actually perceived by the user after convergence is consistent with the 3D scene constructed by the electronic device 100.
  • the electronic device 100 may store the user's IPD locally or in the cloud, which is not limited in the embodiment of the present application.
  • the electronic device 100 may store the obtained IPD of the user and the user ID in association. In some other embodiments, in addition to the user's IPD, the electronic device 100 may also store one or more of the ⁇ i1 and the ⁇ i2 in association with the user identification.
  • the user identification may include the user's name, nickname, fingerprint information, voiceprint information, face information, and so on.
  • Table 1 shows multiple possible user identifiers and the corresponding IPD, Δi1 and Δi2 stored in association by the electronic device 100.
  • the distance between the center of the optical component 102 and the center of the optical component 104 of the head-mounted display device 200 may be 63 mm.
  • Table 1 (column headers): User ID | IPD | Δi1 | Δi2. (The example entries of Table 1 are not reproduced in this text.)
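  • A minimal sketch of the stored association is given below. The dictionary-based store and the string user identifier are hypothetical stand-ins; as noted above, the identification may equally be fingerprint, voiceprint or face information, and the records may be kept locally or in the cloud.

```python
user_ipd_store = {}   # user identification -> stored record

def save_user_record(user_id, ipd_mm, delta_i1_mm=None, delta_i2_mm=None):
    user_ipd_store[user_id] = {
        "ipd_mm": ipd_mm,
        "delta_i1_mm": delta_i1_mm,
        "delta_i2_mm": delta_i2_mm,
    }

def load_user_record(user_id):
    """Return the stored record, or None so that the IPD can be (re)measured."""
    return user_ipd_store.get(user_id)

save_user_record("user_a", 60.4, 1.3, -1.3)
print(load_user_record("user_a"))
print(load_user_record("user_b"))   # None -> a new user, trigger IPD measurement
```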
  • The electronic device 100 corrects the source image according to the IPD of the user to obtain a target image and sends the target image to the head-mounted display device 200; the head-mounted display device 200 displays the target image on the display screen.
  • the electronic device 100 corrects the source image according to the user's IPD to obtain the target image.
  • the target image is an image sent by the electronic device 100 to the head-mounted display device 200 for the head-mounted display device 200 to display on the display screen.
  • the target image includes a first target image and a second target image; the first target image is displayed on the display screen 101, and the second target image is displayed on the display screen 103.
  • the size of the first target image is equal to the size of the display screen 101
  • the size of the second target image is equal to the size of the display screen 103.
  • the electronic device 100 first obtains a source image, and corrects the source image according to the user's IPD, so as to obtain the target image.
  • the source image can be preset in the installation package of the application program installed on the electronic device 100
  • the source image includes multiple sets of data; one set of the data corresponds to one IPD, and is used to construct a 3D scene for a user with the one IPD.
  • the 3D scene is the 3D scene that the electronic device 100 wants to present to the user.
  • the source image can indicate the objects that the user can see when the user is in the 3D scene, as well as the relative positions between the objects and the user.
  • Specifically, the electronic device 100 may first generate a first image and a second image using the source image according to the IPD of the user, where the first image and the second image present the 3D scene for a user with that IPD; the set of data corresponding to the user's IPD is one of the multiple sets of data included in the source image.
  • In other words, the electronic device 100 simulates, according to the source image, the situation in which the user is naturally placed in the 3D scene, obtains the image seen by the user's left eye and the image seen by the user's right eye according to the user's IPD, regards the image seen by the user's left eye as the first image, and regards the image seen by the user's right eye as the second image.
  • the electronic device 100 may use two imaging cameras to obtain the image seen by the user's left eye and the image seen by the right eye when the user is naturally in the 3D scene.
  • the principle that the electronic device 100 obtains the image seen by the left eye and the image seen by the right eye of the user through the imaging camera can refer to the related description of the previous embodiment, which will not be repeated here.
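  • The following engine-agnostic sketch illustrates the two-imaging-camera idea: a left and a right virtual camera are placed half an IPD to either side of the head position, and each projects the scene onto its own image plane, producing the first image and the second image with IPD-dependent parallax. The pinhole projection, the coordinate convention and the numeric values are all assumptions made for illustration only.

```python
def stereo_camera_positions(head_xyz, ipd_mm):
    hx, hy, hz = head_xyz
    half = ipd_mm / 2.0
    return (hx - half, hy, hz), (hx + half, hy, hz)   # left-eye camera, right-eye camera

def project_x(camera_xyz, point_xyz, focal_mm):
    """Horizontal pinhole projection of a scene point for a camera looking along +z."""
    cx, _, cz = camera_xyz
    px, _, pz = point_xyz
    return focal_mm * (px - cx) / (pz - cz)

left_cam, right_cam = stereo_camera_positions((0.0, 0.0, 0.0), ipd_mm=60.4)
tree = (100.0, 0.0, 2000.0)                       # a scene object 2 m in front of the user
x_left = project_x(left_cam, tree, focal_mm=20.0)
x_right = project_x(right_cam, tree, focal_mm=20.0)
print(round(x_left - x_right, 3))                 # 0.604 -> parallax grows with the IPD
```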
  • FIG. 7A exemplarily shows a 3D scene that the electronic device 100 wants to present to the user and a scene that simulates the user being placed in the 3D scene.
  • the 3D scene includes the sun, mountains, trees, and grass.
  • Refer to FIG. 7B, which exemplarily shows the first image and the second image generated by the electronic device 100 using the source image according to a first IPD.
  • the first IPD is equal to the IOD.
  • Refer to FIG. 7C, which exemplarily shows the first image and the second image generated by the electronic device 100 using the source image according to a second IPD.
  • the second IPD is not equal to the IOD.
  • the first image in FIG. 7B and the first image in FIG. 7C have parallax
  • the second image in FIG. 7B and the second image in FIG. 7C have parallax
  • After that, the electronic device 100 generates a first target image according to the first image. The first target image is a part of the first image, the first target image includes the center of the first image, and the offset of the center of the first image relative to the center of the first target image in the first target image is the Δi1. The electronic device 100 also generates a second target image according to the second image. The second target image is a part of the second image, the second target image includes the center of the second image, and the offset of the center of the second image relative to the center of the second target image in the second target image is the Δi2.
  • In other words, the center of the first target image is obtained by offsetting the center of the first image by the Δi1, and the center of the second target image is obtained by offsetting the center of the second image by the Δi2.
  • When the Δi1 is a positive value, the center of the first image in the first target image is shifted to the right relative to the center of the first target image; when the Δi1 is a negative value, the center of the first image in the first target image is shifted to the left relative to the center of the first target image; the offset distance is the absolute value of the Δi1.
  • the ⁇ i2 is a positive value
  • the center of the second image in the second target image is shifted to the right relative to the center of the second target image
  • the ⁇ i2 is a negative value
  • the center of the second image in the second target image is offset to the left with respect to the center of the second target image
  • the offset distance is the absolute value of the ⁇ i2.
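  • The sketch below shows one way the cropping just described could be implemented; it is an assumed illustration, not the patent's code. The crop window has the size of the display screen, and it is shifted so that, inside the crop, the center of the source image ends up offset by the given number of pixels (positive meaning to the right) relative to the crop center; Δi1 or Δi2 in millimetres would first be converted to pixels by dividing by the pixel size.

```python
def crop_target_image(source_image, screen_w, screen_h, delta_px):
    """source_image: an H x W (x C) array-like; delta_px > 0 shifts the source center right."""
    src_h, src_w = source_image.shape[:2]
    src_cx, src_cy = src_w // 2, src_h // 2
    # For the source center to sit delta_px to the right of the crop center,
    # the crop window itself must be shifted delta_px to the left.
    x0 = (src_cx - delta_px) - screen_w // 2
    y0 = src_cy - screen_h // 2
    if not (0 <= x0 and x0 + screen_w <= src_w and 0 <= y0 and y0 + screen_h <= src_h):
        raise ValueError("source image is too small for this screen size and offset")
    return source_image[y0:y0 + screen_h, x0:x0 + screen_w]

# Hypothetical example: Δi1 = 1.3 mm with 0.01 mm pixels -> 130 px.
# first_target = crop_target_image(first_image, 1440, 1600, delta_px=130)
```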
  • FIG. 7B also exemplarily shows the first target image generated by the electronic device 100 according to the first image shown in FIG. 7B, and the second target image generated according to the second image in FIG. 7B.
  • the ⁇ i1 and the ⁇ i2 are both zero.
  • FIG. 7C exemplarily shows the first target image generated by the electronic device 100 according to the first image shown in FIG. 7C, and the second target image generated according to the second image in FIG. 7C. As shown in the figure, neither the Δi1 nor the Δi2 is zero.
  • the source image includes a third image and a fourth image; the third image and the fourth image are used to present a 3D scene to the user.
  • the third image and the fourth image may be two images with parallax that are taken in advance by two cameras for the same object.
  • The electronic device 100 generates a first target image according to the third image; the first target image is a part of the third image, the first target image includes the center of the third image, and the offset of the center of the third image relative to the center of the first target image in the first target image is the Δi1. The electronic device 100 generates a second target image according to the fourth image; the second target image is a part of the fourth image, the second target image includes the center of the fourth image, and the offset of the center of the fourth image relative to the center of the second target image in the second target image is the Δi2.
  • In other words, the center of the first target image is obtained by offsetting the center of the third image by the Δi1, and the center of the second target image is obtained by offsetting the center of the fourth image by the Δi2.
  • For the manner in which the electronic device 100 generates the first target image according to the third image, reference may be made to FIG. 7B and the manner in which the electronic device 100 generates the first target image according to the first image; for the manner in which the electronic device 100 generates the second target image according to the fourth image, reference may be made to FIG. 7C and the manner in which the electronic device 100 generates the second target image according to the second image, which will not be repeated here.
  • the electronic device 100 sends the target image to the head-mounted display device 200, and the head-mounted display device 200 displays the target image on the display screen.
  • Specifically, the electronic device 100 sends the first target image and the second target image to the head-mounted display device 200, so that the head-mounted display device 200 displays the first target image on the display screen 101 and displays the second target image on the display screen 103.
  • Refer to FIG. 8A, which shows the image synthesized in the mind of a user whose actual IPD is equal to the first IPD when that user uses the head-mounted display device 200 and the head-mounted display device 200 displays the first target image and the second target image shown in FIG. 7B; this synthesized image is the 3D scene that the user actually perceives.
  • the 3D scene actually felt by the user whose actual IPD is equal to the first IPD is consistent with the 3D scene that the electronic device 100 wants to present to the user in FIG. 7A.
  • Refer to FIG. 8B, which shows the image synthesized in the mind of a user whose actual IPD is equal to the second IPD when that user uses the head-mounted display device 200 and the head-mounted display device 200 displays the first target image and the second target image shown in FIG. 7C; this synthesized image is the 3D scene that the user actually perceives.
  • The 3D scene actually perceived by the user whose actual IPD is equal to the second IPD is consistent with the 3D scene that the electronic device 100 wants to present to the user in FIG. 7A.
  • By implementing the display method provided in the embodiments of this application, the user, when wearing the head-mounted display device 200, can feel as if actually placed in the 3D scene constructed by the electronic device 100.
  • The convergence process of the user when viewing objects in the 3D scene is natural and comfortable, and the 3D scene actually perceived by the user after convergence is consistent with the 3D scene that the electronic device 100 wants to present to the user.
  • FIG. 5A-5C, FIG. 6, FIG. 7A-7C, FIG. 8A-8B and the related descriptions describe the case in which the electronic device 100 and the head-mounted display device 200 cooperate to provide a VR scene.
  • The mentioned images displayed by the head-mounted display device 200 are all generated by the electronic device 100 and sent to the head-mounted display device 200 for display.
  • For the principle by which the electronic device 100 and the head-mounted display device 200 cooperate to provide a VR scene, reference may be made to the related descriptions of the foregoing embodiments in FIG. 2, FIG. 3A-3B and FIG. 3C.
  • FIG. 9A shows another system 20 provided by an embodiment of the present application.
  • the system 20 can display images by using technologies such as VR, AR, MR, etc., so that users can feel a 3D scene, and provide users with a VR/AR/MR experience.
  • the system 20 may include: a head-mounted display device 400 and an input device 500.
  • the head-mounted display device 400 is worn on the head of the user, and the input device 500 is held by the user. It is understandable that the input device 500 is an optional device, that is, the system 20 may not include the input device 500.
  • the difference between the system 20 and the system 10 is that the system 20 does not include electronic equipment, and the head-mounted display device 400 in the system 20 integrates the electronic equipment 100 in the above implementation.
  • the head-mounted display device 400 and the input device 500 can be wirelessly connected and communicated through short-distance transmission technologies such as Bluetooth, NFC, ZigBee, etc., and can also be wired and communicated through a USB interface, an HDMI interface, or a custom interface, etc. Communication.
  • For the achievable form of the head-mounted display device 400, reference may be made to the head-mounted display device 200.
  • For the input device 500, reference may be made to the input device 300.
  • the user can trigger the head-mounted display device 400 to perform corresponding functions by inputting user operations on the input device 500.
  • specific implementation principles please refer to the relevant description in the system 10.
  • the operations performed by the electronic device 100 and the head-mounted display device 400 in the above-mentioned embodiments can all be performed by the head-mounted display device 400 alone.
  • For example, the head-mounted display device 400 may obtain the first position, the second position, the third position and the fourth position according to the user's indications, calculate the Δi1 according to the first position and the second position, calculate the Δi2 according to the third position and the fourth position, calculate the IPD according to Formula 1, generate the first image and the second image according to the user's IPD, and display images on the display screens according to the first image and the second image, and so on.
  • For each step performed when the head-mounted display device 400 executes the display method of the embodiments of this application, reference may be made to the above-mentioned FIG. 5A-5C, FIG. 6, FIG. 7A-7C, FIG. 8A-8B and the related descriptions.
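  • As a tying-together illustration only, and reusing the hypothetical helpers sketched earlier (left_eye_offset_mm, ipd_general, crop_target_image), the standalone pipeline could look roughly as follows; none of these names come from this application.

```python
def measure_user(jo, ko, jo_r, ko_r, m, l_mm, iod_mm):
    d1 = left_eye_offset_mm(jo, ko, m, l_mm)      # Δi1 from the first and second positions
    d2 = left_eye_offset_mm(jo_r, ko_r, m, l_mm)  # Δi2: the same geometry applied to the right eye
    return ipd_general(iod_mm, d1, d2), d1, d2

# ipd, d1, d2 = measure_user(...)
# The first image and the second image are then generated with this IPD and cropped with
# crop_target_image(..., delta_px=round(d1 / pixel_size_mm)) before being displayed.
```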
  • Refer to FIG. 9B, which shows a schematic structural diagram of the head-mounted display device 400 provided by an embodiment of this application.
  • the head-mounted display device 400 may include: a processor 401, a memory 402, a communication module 403, a sensor system 404, a camera 405, a display device 406, and an audio device 407.
  • the above components can be coupled and connected and communicate with each other.
  • the structure shown in FIG. 9B does not constitute a specific limitation on the head-mounted display device 400.
  • the head-mounted display device 400 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the head-mounted display device 400 may also include physical keys such as an on-off key, a volume key, various interfaces such as a USB interface, and so on.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • The processor 401 may include one or more processing units. For example, the processor 401 may include an AP, a modem processor, a GPU, an ISP, a controller, a video codec, a DSP, a baseband processor, and/or an NPU, etc. The different processing units may be independent devices or may be integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals, complete the control of fetching and executing instructions, so that each component performs corresponding functions, such as human-computer interaction, motion tracking/prediction, rendering display, audio processing, etc.
  • the memory 402 stores executable program code used to execute the display method provided in the embodiments of the present application, and the executable program code includes instructions.
  • the memory 402 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.) and so on.
  • the data storage area can store data (such as audio data, etc.) created during the use of the head-mounted display device 400.
  • the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 401 executes various functional applications and data processing of the head-mounted display device 400 by running instructions stored in the memory 402 and/or instructions stored in a memory provided in the processor.
  • the communication module 403 may include a mobile communication module and a wireless communication module.
  • the mobile communication module can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the head-mounted display device 400.
  • the wireless communication module can provide wireless communication solutions including WLAN, BT, GNSS, FM, IR, etc., which are applied to the head-mounted display device 400.
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the sensor system 404 may include an accelerometer, a compass, a gyroscope, a magnetometer, or other sensors for detecting motion, and so on.
  • the sensor system 404 is used to collect corresponding data, for example, an acceleration sensor collects the acceleration of the head-mounted display device 400, a gyroscope sensor collects the movement speed of the head-mounted display device 400, and so on.
  • the data collected by the sensor system 404 may reflect the head movement of the user wearing the head-mounted display device 400.
  • the sensor system 404 may be an inertial measurement unit (IMU) provided in the head-mounted display device 400.
  • the head-mounted display device 400 may send the data acquired by the sensor system to the processor 401 for analysis.
  • the processor 401 may determine the movement of the user's head according to the data collected by each sensor, and execute corresponding functions according to the movement of the user's head, such as starting the function of measuring IPD. That is, the user can trigger the head-mounted display device 400 to perform the corresponding function by inputting a head movement operation on the head-mounted display device 400.
  • the movement of the user's head may include: whether to rotate, the direction of rotation, and so on.
  • the sensor system 404 may also include an optical sensor, which is used in conjunction with the camera 405 to track the user's eye position and capture eye movement data.
  • the eye movement data can be used, for example, to determine the distance between the eyes of the user, the 3D position of each eye relative to the head-mounted display device 400, the magnitude of the twist and rotation (i.e., turn, pitch, and shake) and gaze of each eye.
  • In one example, infrared light is emitted inside the head-mounted display device 400 and reflected from each eye; the reflected light is detected by the camera 405 or the optical sensor, and the detected data is transmitted to the processor 401, so that the processor 401 analyzes the position of the user's eyes, the pupil diameter, the movement state and the like from changes in the infrared light reflected by each eye.
  • the camera 405 can be used to capture still images or videos.
  • The still image or video may be an outward-facing image or video of the surroundings of the user, or may be an inward-facing image or video.
  • the camera 405 can track the movement of the user's single eye or both eyes.
  • the camera 405 includes but is not limited to a traditional color camera (RGB camera), a depth camera (RGB depth camera), a dynamic vision sensor (DVS) camera, etc.
  • the depth camera can obtain the depth information of the subject.
  • the camera 405 can be used to capture images of the user's eyes and send the images to the processor 401 for analysis.
  • the processor 401 may determine the state of the user's eyes according to the images collected by the camera 405, and execute corresponding functions according to the state of the user's eyes.
  • the user can trigger the head-mounted display device 400 to perform the corresponding function by inputting an eye movement operation on the head-mounted display device 400.
  • The state of the user's eyes may include: whether the eyes rotate, the direction of rotation, whether the eyes have not rotated for a long time, the angle at which they look outward, and so on.
  • the head-mounted display device 400 presents or displays images through a GPU, a display device 406, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display device 406 and the application processor.
  • The processor 401 may include one or more GPUs that execute program instructions to generate or change display information.
  • the GPU is used to perform mathematical and geometric calculations according to the data obtained from the processor 401, and to render images using computer graphics technology, computer simulation technology, etc., to provide content for display on the display device 406.
  • the GPU is also used to add correction or pre-distortion to the rendering process of the image to compensate or correct the distortion caused by the optical components in the display device 406.
  • the GPU may also adjust the content provided to the display device 406 based on the data from the sensor system 404. For example, the GPU may add depth information in the content provided to the display device 406 based on the 3D position, interpupillary distance, etc. of the user's eyes.
  • the display device 406 may include: one or more display screens, and one or more optical components.
  • the one or more display screens include the display screen 101 and the display screen 103.
  • the one or more optical components include the optical component 102 and the optical component 104.
  • the structure of the display screen 101, the display screen 103, the optical assembly 102, and the optical assembly 104 and the positional relationship between them can refer to the related description in FIG. 1.
  • The labels of the various components in the display device 406 of the head-mounted display device 400 follow the labels in FIG. 1; that is, the head-mounted display device 400 includes a display screen 101, a display screen 103, an optical component 102 and an optical component 104.
  • The display screens in the head-mounted display device 400 are used to receive data or content (such as a rendered image) processed by the GPU of the head-mounted display device 400 itself and to display it.
  • the head-mounted display device 400 itself has a relatively powerful calculation function and can independently render and generate an image.
  • the head-mounted display device 400 may be an all-in-one machine with powerful computing capabilities, etc., and can independently present a 3D scene to the user without using the electronic device 100, and provide the user with a VR/AR/MR experience.
  • the processor 401 may be configured to determine the IPD of the user according to the interaction between the user and the head-mounted display device 400.
  • The GPU of the head-mounted display device 400 may also be used to determine, according to the user's IPD obtained from the processor 401, the image to be displayed on the head-mounted display device 400, and the head-mounted display device 400 may display the image determined by the GPU on the display screens.
  • the images displayed on the display screen 101 and the display screen 103 have parallax, thereby simulating binocular vision, so that the user can feel the depth of the object corresponding to the image, thereby generating a real 3D feeling.
  • the display screens may include a display panel, and the display panel may be used to display images, thereby presenting a three-dimensional virtual scene to the user.
  • The display panel can adopt an LCD, an OLED, an AMOLED, an FLED, a mini-LED, a micro-LED, a micro-OLED, a QLED, or the like.
  • The optical components, such as the optical component 102 and the optical component 104, are used to guide the light from the display screen to the exit pupil for the user to perceive.
  • An optical component may include one or more optical elements (for example, lenses).
  • the optical assembly may have one or more coatings, such as anti-reflective coatings.
  • the magnification of the image light by the optical components allows the display screen to be physically smaller, lighter, and consume less power.
  • the enlargement of the image light can increase the field of view of the content displayed on the display screen.
  • the optical component can make the field of view of the content displayed on the display screen the entire field of view of the user.
  • the optical assembly can also be used to correct one or more optical errors.
  • optical errors include barrel distortion, pincushion distortion, longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, coma aberration, field curvature, astigmatism, and the like.
  • the content provided to the display screen for display is pre-distorted, and the distortion is corrected by the optical component when the image light generated based on the content is received from the display screen.
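  • The sketch below illustrates the pre-distortion idea in its simplest radial form: each display pixel samples the rendered content at a radially scaled position so that the pincushion distortion added by the optical component is cancelled out. The coefficients k1 and k2 are hypothetical, the nearest-neighbour sampling and the pure-Python loops are for clarity only, and a real pipeline would run this (often per colour channel) on the GPU.

```python
import numpy as np

def predistort(rendered, k1=0.22, k2=0.24):
    """Apply a barrel pre-warp to a rendered H x W (x C) image."""
    h, w = rendered.shape[:2]
    out = np.zeros_like(rendered)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    for y in range(h):
        for x in range(w):
            nx, ny = (x - cx) / cx, (y - cy) / cy      # normalized offset from the center
            r2 = nx * nx + ny * ny
            scale = 1.0 + k1 * r2 + k2 * r2 * r2       # sample further out near the edges
            sx = int(round(cx + nx * scale * cx))
            sy = int(round(cy + ny * scale * cy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = rendered[sy, sx]
    return out
```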
  • the audio device 407 is used to implement audio collection and output.
  • the audio device 407 may include but is not limited to: a microphone, a speaker, a headset, and so on.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk).
  • the process can be completed by a computer program instructing relevant hardware.
  • The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be performed.
  • The aforementioned storage media include media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

A display method, an electronic device and a system. In the method, the electronic device (100) can obtain a user's IPD according to user operations input by the user on the basis of a user interface displayed on a head-mounted display device (200), correct a source image according to the user's IPD to obtain a target image to be displayed on the head-mounted display device (200), and send the target image to the head-mounted display device (200). In this way, the user can perceive the 3D scene comfortably and realistically when wearing the head-mounted display device (200).

Description

显示方法、电子设备及***
本申请要求于2019年11月30日提交中国专利局、申请号为201911208308.4、申请名称为“显示方法、电子设备及***”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及虚拟现实(virtual reality,VR)及终端技术领域,尤其涉及显示方法、电子设备及***。
背景技术
随着计算机图形技术的发展,VR技术逐渐应用到人们的生活中。VR技术利用计算机模拟产生一个三维(three-dimensional,3D)的虚拟现实场景,并提供在视觉、听觉、触觉或其他感官上的模拟体验,让用户感觉仿佛身历其境。
用户双眼瞳孔中心之间的距离,即瞳距(inter-pupillary distance,IPD),是影响该用户能否舒适、真实地体验头戴式显示设备提供的3D场景的关键。由于不同的用户可能有不同的IPD,同一用户的IPD在不同年龄段也可能发生变化,如何保证用户使用头戴式显示设备时可以舒适、真实地体验3D场景,是业界研究的方向。
发明内容
本申请实施例提供了显示方法、电子设备及***。该方法可以测量用户的IPD,并根据用户的IPD校正头戴式显示设备上所显示的图像,使得用户佩戴头戴式显示设备时能够舒适、真实地感受到3D场景。
第一方面,本申请实施例提供了一种***,该***包括:电子设备和头戴式显示设备,所述电子设备和所述头戴式显示设备连接,所述头戴式显示设备用于佩戴于用户头部。其中,所述电子设备用于将用户界面发送给所述头戴式显示设备;所述头戴式显示设备用于在显示屏上显示所述用户界面;所述电子设备还用于获取所述用户的IPD,所述用户的IPD是根据所述用户基于所述用户界面输入的用户操作获取的;所述电子设备还用于获取源图像,根据所述用户的IPD校正所述源图像得到目标图像,并将所述目标图像发送给所述头戴式显示设备;所述头戴式显示设备还用于在所述显示屏上显示所述目标图像。
通过第一方面的***,电子设备可以测量用户的IPD,并根据用户的IPD校正头戴式显示设备上所显示的图像,使得用户佩戴头戴式显示设备时能够舒适、真实地感受到3D场景。
结合第一方面,在一些实施例中,所述显示屏包括第一显示屏和第二显示屏,所述头戴式显示设备还包括对应所述第一显示屏的第一光学组件和对应所述第二显示屏的第二光学组件,所述第一显示屏的中心和所述第一光学组件的中心所处的第一直线垂直于第三直线,所述第二显示屏的中心和所述第二光学组件的中心所处的第二直线垂直于所述第三直线;所述第三直线为所述第一光学组件的中心和所述第二光学组件的中心所处的直线;所 述用户界面包括第一用户界面和第二用户界面,所述头戴式显示设备具体用于在所述第一显示屏上显示所述第一用户界面、在所述第二显示屏上显示所述第二用户界面;所述目标图像包括第一目标图像和第二目标图像,所述头戴式显示设备具体用于在所述第一显示屏上显示所述第一目标图像、在所述第二显示屏上显示所述第二目标图像。
在上述实施例中,所述第一显示屏和所述第一光学组件对应于用户的左眼,所述第一显示屏发出的光经过所述第一光学组件传播到所述用户的左眼;所述第二显示屏和所述第二光学组件对应于用户的右眼,所述第二显示屏发出的光经过所述第二光学组件传播到所述用户的右眼。
结合第一方面的一些实施例,在一些实施例中,所述头戴式显示设备还用于:获取第一位置和第二位置,所述第一位置和所述第二位置是根据显示所述第一用户界面时的所述用户的动作获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据显示所述第二用户界面时的所述用户的动作获取的;将所述第一位置、所述第二位置、所述第三位置和所述第四位置发送给所述电子设备。所述电子设备还用于:根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD。这样,所述用户可以通过所述头戴式显示设备来指示上述几个位置,从而使得所述电子设备测量所述用户的IPD。
这里,所述头戴式显示设备显示所述第一用户界面时的所述用户的动作可以是,所述用户的眼睛(例如左眼)的转动动作。所述头戴式显示设备显示所述第二用户界面时的所述用户的动作可以是,所述用户的眼睛(例如右眼)的转动动作。
结合第一方面的一些实施例,在一些实施例中,所述头戴式显示设备还用于:将在显示所述第一用户界面时采集到的所述用户的操作数据,和,在显示所述第二用户界面时采集到的所述用户的操作数据,发送给所述电子设备。所述电子设备还用于:获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述头戴式显示设备显示所述第一用户界面时的所述用户的操作数据获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述头戴式显示设备显示所述第二用户界面时的所述用户的操作数据获取的;根据所述第一位置和所述第二位置确定所述用户的眼睛(例如左眼)相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛(例如右眼)相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD。这样,所述用户可以通过所述头戴式显示设备来指示上述几个位置,从而使得所述电子设备测量所述用户的IPD。
这里,所述头戴式显示设备用于通过在显示所述第一用户界面时采集到的所述用户的操作数据得到所述用户眼球的图像。所述头戴式显示设备还用于通过在显示所述第二用户界面时采集到的所述用户的操作数据得到所述用户眼球的图像。
结合第一方面的一些实施例,在一些实施例中,所述***还包括输入设备。所述输入设备用于:将在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作,和,在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作,发送给所述电子设备。所述电子设备还用于:获取第一位置和第二位置,所述第一位置和所述第二位置是根据所 述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作获取的;根据所述第一位置和所述第二位置确定所述用户的眼睛(例如左眼)相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛(例如右眼)相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD。这样,所述用户可以通过所述输入设备来指示上述几个位置,从而使得所述电子设备测量所述用户的IPD。
在上述实施例中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛(例如左眼)看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛(例如左眼)看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛(例如右眼)看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛(例如右眼)看向所述第二显示屏的右侧的位置。
结合第一方面的一些实施例,在一些实施例中,所述电子设备具体用于根据以下公式计算所述Δi1:
Figure PCTCN2020127413-appb-000001
其中,JO′为所述第一位置到所述第一直线的距离,KO′为所述第二位置到所述第一直线的距离,M为所述第一光学组件的放大倍数,L为所述第一光学组件的直径。当所述Δi1的值为正时,所述用户的眼睛(例如左眼)相对所述第一直线向右偏移;当所述Δi1的值为负时,所述用户的眼睛(例如左眼)相对所述第一直线向左偏移。
结合第一方面的一些实施例,在一些实施例中,所述电子设备具体用于根据和上述公式类似的公式计算所述Δi2。例如,所述电子设备具体用于根据以下公式计算所述Δi2:
Figure PCTCN2020127413-appb-000002
其中,jo′为所述第三位置到所述第二直线的距离,ko′为所述第四位置到所述第二直线的距离,m为所述第二光学组件的放大倍数,l为所述第二光学组件的直径。当所述Δi2的值为正时,所述用户的眼睛(例如右眼)相对所述第二直线向右偏移;当所述Δi2的值为负时,所述用户的眼睛(例如右眼)相对所述第二直线向左偏移。
结合第一方面的一些实施例,在一些实施例中,所述电子设备具体用于根据以下公式计算所述用户的瞳距IPD:
IPD=IOD-Δi1+Δi2
其中,所述IOD为所述第一显示屏的中心和所述第二显示屏的中心之间的距离。
结合第一方面的一些实施例,在一些实施例中,所述电子设备具体用于:根据所述用户的IPD,利用所述源图像生成第一图像和第二图像;根据所述第一图像生成第一目标图像,所述第一目标图像的中心为将所述第一图像的中心调整所述Δi1的偏移量;根据所述第二图像生成第二目标图像,所述第二目标图像的中心为将所述第二图像的中心调整所述Δi2 的偏移量。这样,所述电子设备可以在所述头戴式显示设备提供游戏场景或其他类似场景时,校正所述头戴式显示设备上所显示的图像,使得用户佩戴所述头戴式显示设备时能够舒适、真实地感受到3D场景。
结合第一方面的一些实施例,在一些实施例中,所述源图像包括:第三图像和第四图像。所述电子设备具体用于:根据所述第三图像生成第一目标图像,所述第一目标图像的中心为将所述第三图像的中心调整所述Δi1的偏移量;根据所述第四图像生成第二目标图像,所述第二目标图像的中心为将所述第四图像的中心调整所述Δi2的偏移量。这样,所述电子设备可以在所述头戴式显示设备提供3D电影场景或其他类似场景时,校正所述头戴式显示设备上所显示的图像,使得用户佩戴所述头戴式显示设备时能够舒适、真实地感受到3D场景。这里,第三图像和第四图像可以是由两个摄像头预先拍摄好的、针对相同物体的具有视差的两幅图像。
第二方面,本申请实施例提供了一种显示方法,应用于电子设备。所述方法包括:所述电子设备将用户界面发送给头戴式显示设备,所述用户界面用于显示在所述头戴式显示设备的显示屏上;所述电子设备获取所述用户的IPD,所述用户的IPD是根据所述用户基于所述用户界面输入的用户操作获取的;所述电子设备获取源图像,根据所述用户的IPD校正所述源图像得到目标图像,并将所述目标图像发送给所述头戴式显示设备;所述目标图像用于显示在所述显示屏上。
可理解的,基于同一发明思想,第二方面的显示方法中所述电子设备执行的各个步骤可参考第一方面的***中的所述电子设备实现对应功能时所执行的步骤,可参考相关描述。
实施第二方面的显示方法,所述电子设备可以和头戴式显示设备配合为所述用户提供3D场景,使得所述用户佩戴所述头戴式显示设备时能够舒适、真实地感受到所述3D场景。
结合第二方面,在一些实施例中,所述显示屏包括第一显示屏和第二显示屏,所述头戴式显示设备还包括对应所述第一显示屏的第一光学组件和对应所述第二显示屏的第二光学组件,所述第一显示屏的中心和所述第一光学组件的中心所处的第一直线垂直于第三直线,所述第二显示屏的中心和所述第二光学组件的中心所处的第二直线垂直于所述第三直线;所述第三直线为所述第一光学组件的中心和所述第二光学组件的中心所处的直线。所述用户界面包括第一用户界面和第二用户界面,所述第一用户界面用于显示在所述第一显示屏上,所述第二用户界面用于显示在所述第二显示屏上。所述目标图像包括第一目标图像和第二目标图像,所述第一目标图像用于显示在所述第一显示屏上,所述第二目标图像用于显示在所述第二显示屏上。
结合第二方面的一些实施例,在一些实施例中,所述电子设备可以获取第一位置、第二位置、第三位置和第四位置,根据所述第一位置和所述第二位置确定所述用户的眼睛(例如左眼)相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛(例如右眼)相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD。这里,所述第一位置、所述第二位置、所述第三位置和所述第四位置可参考第一方面中的相关描述。所述电子设备可以根据以下几种方式获取到所述第一位置、所述第二位置、所述第三位置和所述第四位置:
第一种方式,所述电子设备接收所述头戴式显示设备发送的第一位置、第二位置、第 三位置和第四位置。这里,所述第一位置和所述第二位置是所述头戴式显示设备根据显示所述第一用户界面时的所述用户的动作获取的;所述第三位置和所述第四位置是所述头戴式显示设备根据显示所述第二用户界面时的所述用户的动作获取的。所述头戴式显示设备显示所述第一用户界面或所述第二用户界面时的所述用户的动作,可参考第一方面的相关描述。
第二种方式,所述电子设备接收所述头戴式显示设备在显示所述第一用户界面时采集到的所述用户的操作数据,和,所述头戴式显示设备在显示所述第二用户界面时采集到的所述用户的操作数据;所述电子设备获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述头戴式显示设备显示所述第一用户界面时的所述用户的操作数据获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述头戴式显示设备显示所述第二用户界面时的所述用户的操作数据获取的。所述头戴式显示设备显示所述第一用户界面或所述第二用户界面时的所述用户的操作数据,可参考第一方面中的相关描述。
第三种方式,所述电子设备接收所述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作,和,所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作;所述电子设备获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作获取的。所述输入设备在所述头戴式显示设备显示所述第一用户界面或所述第二用户界面时检测到的用户操作,可参考第一方面的相关描述。
结合第二方面的一些实施例,在一些实施例中,所述电子设备可以根据以下公式计算所述Δi1:
Figure PCTCN2020127413-appb-000003
其中,JO′为所述第一位置到所述第一直线的距离,KO′为所述第二位置到所述第一直线的距离,M为所述第一光学组件的放大倍数,L为所述第一光学组件的直径。当所述Δi1的值为正时,所述用户的眼睛(例如左眼)相对所述第一直线向右偏移;当所述Δi1的值为负时,所述用户的眼睛(例如左眼)相对所述第一直线向左偏移。
结合第二方面的一些实施例,在一些实施例中,所述电子设备可以根据以下公式计算所述Δi2:
Figure PCTCN2020127413-appb-000004
其中,jo′为所述第三位置到所述第二直线的距离,ko′为所述第四位置到所述第二直线的距离,m为所述第二光学组件的放大倍数,l为所述第二光学组件的直径。当所述Δi2的值为正时,所述用户的眼睛(例如右眼)相对所述第二直线向右偏移;当所述Δi2的值为负时,所述用户的眼睛(例如右眼)相对所述第二直线向左偏移。
结合第二方面的一些实施例,在一些实施例中,所述电子设备可以根据以下公式计算 所述用户的瞳距IPD:
IPD=IOD-Δi1+Δi2
其中,所述IOD为所述第一显示屏的中心和所述第二显示屏的中心之间的距离。
结合第二方面的一些实施例,在一些实施例中,所述电子设备可以通过以下方式来得到目标图像:根据所述用户的IPD,利用所述源图像生成第一图像和第二图像;所述电子设备根据所述第一图像生成第一目标图像,所述第一目标图像的中心为将所述第一图像的中心调整所述Δi1的偏移量;所述电子设备根据所述第二图像生成第二目标图像,所述第二目标图像的中心为将所述第二图像的中心调整所述Δi2的偏移量。这样,所述电子设备可以在所述头戴式显示设备提供游戏场景或其他类似场景时,校正所述头戴式显示设备上所显示的图像,使得用户佩戴所述头戴式显示设备时能够舒适、真实地感受到3D场景。
结合第二方面的一些实施例,在一些实施例中,所述源图像包括:第三图像和第四图像。所述电子设备可以通过以下方式来得到目标图像:所述电子设备根据所述第三图像生成第一目标图像,所述第一目标图像的中心为将所述第三图像的中心调整所述Δi1的偏移量;所述电子设备根据所述第四图像生成第二目标图像,所述第二目标图像的中心为将所述第四图像的中心调整所述Δi2的偏移量。这样,所述电子设备可以在所述头戴式显示设备提供3D电影场景或其他类似场景时,校正所述头戴式显示设备上所显示的图像,使得用户佩戴所述头戴式显示设备时能够舒适、真实地感受到3D场景。这里,第三图像和第四图像可以是由两个摄像头预先拍摄好的、针对相同物体的具有视差的两幅图像。
第三方面,本申请实施例提供了一种显示方法,应用于头戴式显示设备。所述显示方法包括:所述头戴式显示设备在显示屏上显示用户界面;所述头戴式显示设备获取所述用户的IPD,所述用户的IPD是根据所述用户基于所述用户界面输入的用户操作获取的;所述头戴式显示设备获取源图像,根据所述用户的IPD校正所述源图像得到目标图像;在所述显示屏上显示所述目标图像。
实施第三方面的方法,所述头戴式显示设备可以测量用户的IPD并根据所述用户逇IPD独立为所述用户提供3D场景,使得所述用户佩戴所述头戴式显示设备时能够舒适、真实地感受到所述3D场景。
结合第三方面,在一些实施例中,所述显示屏包括第一显示屏和第二显示屏,所述头戴式显示设备还包括对应所述第一显示屏的第一光学组件和对应所述第二显示屏的第二光学组件,所述第一显示屏的中心和所述第一光学组件的中心所处的第一直线垂直于第三直线,所述第二显示屏的中心和所述第二光学组件的中心所处的第二直线垂直于所述第三直线;所述第三直线为所述第一光学组件的中心和所述第二光学组件的中心所处的直线。所述用户界面包括第一用户界面和第二用户界面,所述第一用户界面显示在所述第一显示屏上,所述第二用户界面显示在所述第二显示屏上。所述目标图像包括第一目标图像和第二目标图像,所述第一目标图像显示在所述第一显示屏上,所述第二目标图像显示在所述第二显示屏上。
结合第三方面的一些实施例,在一些实施例中,所述头戴式显示设备可以获取第一位置、第二位置、第三位置和第四位置,根据所述第一位置和所述第二位置确定所述用户的 眼睛(例如左眼)相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛(例如右眼)相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD。这里,所述第一位置、所述第二位置、所述第三位置和所述第四位置可参考第一方面中的相关描述。所述头戴式显示设备可以根据以下几种方式获取到所述第一位置、所述第二位置、所述第三位置和所述第四位置:
第一种方式:所述头戴式显示设备根据显示所述第一用户界面时的所述用户的动作获取所述第一位置和所述第二位置;根据显示所述第二用户界面时的所述用户的动作获取所述第三位置和所述第四位置。所述头戴式显示设备显示所述第一用户界面或所述第二用户界面时的所述用户的动作,可参考第一方面的相关描述。
第二种方式:所述头戴式显示设备根据显示所述第一用户界面时采集到的所述用户的操作数据获取所述第一位置和所述第二位置,根据显示所述第二用户界面时采集到的所述用户的操作数据获取所述第三位置和所述第四位置。所述头戴式显示设备显示所述第一用户界面或所述第二用户界面时的所述用户的操作数据,可参考第一方面中的相关描述。
第三种方式:所述头戴式显示设备和输入设备连接,所述头戴式显示设备根据所述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作获取所述第一位置和所述第二位置,根据所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作获取所述第三位置和所述第四位置。所述输入设备在所述头戴式显示设备显示所述第一用户界面或所述第二用户界面时检测到的用户操作,可参考第一方面的相关描述。
结合第三方面的一些实施例,在一些实施例中,所述头戴式显示设备根据以下公式计算所述Δi1:
Δi1=(M×(JO′-KO′)×L)/(2×(M×(JO′+KO′)-L))
其中,JO′为所述第一位置到所述第一直线的距离,KO′为所述第二位置到所述第一直线的距离,M为所述第一光学组件的放大倍数,L为所述第一光学组件的直径;当所述Δi1的值为正时,所述用户的眼睛(例如左眼)相对所述第一直线向右偏移;当所述Δi1的值为负时,所述用户的眼睛(例如左眼)相对所述第一直线向左偏移。
结合第三方面的一些实施例,在一些实施例中,所述头戴式显示设备根据以下公式计算所述Δi2:
Δi2=(m×(jo′-ko′)×l)/(2×(m×(jo′+ko′)-l))
其中,jo′为所述第三位置到所述第二直线的距离,ko′为所述第四位置到所述第二直线的距离,m为所述第二光学组件的放大倍数,l为所述第二光学组件的直径。当所述Δi2的值为正时,所述用户的眼睛(例如右眼)相对所述第二直线向右偏移;当所述Δi2的值为负时,所述用户的眼睛(例如右眼)相对所述第二直线向左偏移。
结合第三方面的一些实施例,在一些实施例中,所述头戴式显示设备根据以下公式计算所述用户的瞳距IPD:
IPD=IOD-Δi1+Δi2
其中,所述IOD为所述第一显示屏的中心和所述第二显示屏的中心之间的距离。
结合第三方面的一些实施例,在一些实施例中,所述头戴式显示设备可以根据所述用户的IPD,利用所述源图像生成第一图像和第二图像;根据所述第一图像生成第一目标图像,所述第一目标图像的中心为将所述第一图像的中心调整所述Δi1的偏移量;根据所述第二图像生成第二目标图像,所述第二目标图像的中心为将所述第二图像的中心调整所述Δi2的偏移量。这样,所述头戴式显示设备在提供游戏场景或其他类似场景时,可以校正所显示的图像,使得用户佩戴所述头戴式显示设备时能够舒适、真实地感受到3D场景。
结合第三方面的一些实施例,在一些实施例中,所述源图像包括:第三图像和第四图像。所述头戴式显示设备可以根据所述第三图像生成第一目标图像,所述第一目标图像的中心为将所述第三图像的中心调整所述Δi1的偏移量;根据所述第四图像生成第二目标图像,所述第二目标图像的中心为将所述第四图像的中心调整所述Δi2的偏移量。这样,所述头戴式显示设备在提供3D电影场景或其他类似场景时,可以校正所显示的图像,使得用户佩戴所述头戴式显示设备时能够舒适、真实地感受到3D场景。
第四方面,本申请实施例提供了一种电子设备,所述电子设备包括一个或多个处理器、存储器;该存储器与该一个或多个处理器耦合,该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令,该一个或多个处理器调用该计算机指令以使得该电子设备执行第二方面或第二方面任意一种实施方式中的显示方法。
第五方面,本申请实施例提供了一种头戴式显示设备,所述头戴式显示设备包括:一个或多个处理器、存储器、显示屏;该存储器与该一个或多个处理器耦合,该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令,该一个或多个处理器调用该计算机指令以使得该头戴式显示设备执行第三方面或第三方面任意一种实施方式中的显示方法。
结合第五方面,在一些实施例中,所述显示屏包括第一显示屏和第二显示屏,所述头戴式显示设备还包括对应所述第一显示屏的第一光学组件和对应所述第二显示屏的第二光学组件,所述第一显示屏的中心和所述第一光学组件的中心所处的第一直线垂直于第三直线,所述第二显示屏的中心和所述第二光学组件的中心所处的第二直线垂直于所述第三直线;所述第三直线为所述第一光学组件的中心和所述第二光学组件的中心所处的直线。
第六方面,本申请实施例提供了一种芯片,该芯片应用于电子设备。该芯片包括:一个或多个处理器、接口;该接口用于接收代码指令并将该代码指令传输至该处理器,该处理器用于运行该代码指令以使得该电子设备执行如第二方面或第二方面可能的实施方式中的任意一种所提供的显示方法。
第七方面,本申请实施例提供了一种包含指令的计算机程序产品,当上述计算机程序产品在电子设备上运行时,使得上述电子设备执行如第二方面或第二方面可能的实施方式中的任意一种所提供的显示方法。
第八方面,本申请实施例提供一种计算机可读存储介质,包括指令,当上述指令在电子设备上运行时,使得上述电子设备执行如第二方面或第二方面可能的实施方式中的任意一种所提供的显示方法。
第九方面,本申请实施例提供了一种芯片,该芯片应用于头戴式显示设备。该芯片包括:一个或多个处理器、接口;该接口用于接收代码指令并将该代码指令传输至该处理器,该处理器用于运行该代码指令以使得该头戴式显示设备执行如第三方面或第三方面可能的实施方式中的任意一种所提供的显示方法。
第十方面,本申请实施例提供了一种包含指令的计算机程序产品,当上述计算机程序产品在头戴式显示设备上运行时,使得上述头戴式显示设备执行如第三方面或第三方面可能的实施方式中的任意一种所提供的显示方法。
第十一方面,本申请实施例提供一种计算机可读存储介质,包括指令,当上述指令在头戴式显示设备上运行时,使得上述头戴式显示设备执行如第三方面或第三方面可能的实施方式中的任意一种所提供的显示方法。
实施本申请实施例提供的技术方案,可以测量用户的IPD,并根据用户的IPD校正头戴式显示设备上所显示的图像,使得用户佩戴头戴式显示设备时能够舒适、真实地感受到3D场景。
附图说明
图1是本申请实施例提供的用户使用头戴式显示设备体验3D场景的原理示意图;
图2是本申请实施例提供的一种***的架构示意图;
图3A是本申请实施例提供的电子设备的硬件结构示意图;
图3B是本申请实施例提供的电子设备的软件结构示意图;
图3C是本申请实施例提供的一种头戴式显示设备的结构示意图;
图4A和图4B是本申请实施例提供的用户双眼和头戴式显示设备的光学组件之间的位置关系示意图;
图5A-图5C是本申请实施例中的头戴式显示设备上显示的用户界面;
图6是本申请实施例提供的计算用户的IPD时所使用的几何关系示意图;
图7A是本申请实施例提供的电子设备构造的3D场景以及模拟用户置身于该3D场景中的示意图;
图7B是本申请实施例提供的电子设备构造如图7A所示的3D场景后,根据第一IPD确定的第一图像、第二图像、第一目标图像和第二目标图像;
图7C是本申请实施例提供的电子设备构造如图7A所示的3D场景后,根据第二IPD确定的第一图像、第二图像、第一目标图像和第二目标图像;
图8A是本申请实施例提供的头戴式显示设备显示如图7B所示的第一目标图像及第二目标图像后,用户合成的图像;
图8B是本申请实施例提供的头戴式显示设备显示如图7C所示的第一目标图像及第二目标图像后,用户合成的图像;
图9A是本申请实施例提供的另一种***的架构示意图;
图9B是本申请实施例提供的另一种头戴式显示设备的硬件结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
参考图1,图1为用户使用头戴式显示设备体验3D场景的原理示意图。
如图1所示,头戴式显示设备可包括:显示屏101、光学组件102、显示屏103、光学组件104。其中,所述显示屏101和所述显示屏103的材质、大小、分辨率等相同。所述光学组件102和所述光学组件104的材质、结构等相同。所述光学组件102和所述光学组件104均由一个或多个透镜组成,该透镜可包括凸透镜、菲涅尔透镜或其他类型的透镜中的一个或多个。
在本申请实施例中,第一显示屏可以为所述显示屏101,第二显示屏可以为所述显示屏103。第一光学组件可以为所述光学组件102,第二光学组件可以为所述光学组件104。以下实施例以所述显示屏101、所述显示屏103、所述光学组件102、所述光学组件104为例进行说明。
所述显示屏101的中心和所述光学组件102的中心所处的第一直线,垂直于,所述光学组件102的中心和所述光学组件104的中心所处的第三直线。所述显示屏101和所述光学组件102对应于用户的左眼。用户佩戴头戴式显示设备时,所述显示屏101上可以显示有图像a1。所述显示屏101显示所述图像a1时发出的光经过所述光学组件102的透射后将在用户左眼前方形成该图像a1的虚像a1’。
所述显示屏103的中心和所述光学组件104的中心所处的第二直线,垂直于,所述光学组件102的中心和所述光学组件104的中心所处的第三直线。所述显示屏103和所述光学组件104对应于用户的右眼。用户佩戴头戴式显示设备时,所述显示屏103可以显示有图像a2。所述显示屏103显示所述图像a2时发出的光经过所述光学组件104的透射后将在用户右眼前方形成该图像a2的虚像a2’。
在本申请实施例中,显示屏的中心可以为该显示屏的对称中心,例如圆形显示屏的圆心,长方形显示屏的对称中心等等。光学组件的中心可以为光学中心,通常情况下光学中心也是光学组件的对称中心。
在本申请实施例中,第四直线可以为所述显示屏101的中心和所述显示屏103的中心所处的直线。
所述图像a1和所述图像a2为针对同一物体例如物体a的具有视差的两幅图像。视差是指从有一定距离的两个点上观察同一个物体时,该物体在视野中位置的差异。所述虚像a1’和所述虚像a2’位于同一平面上,该平面可以被称为虚像面。
在佩戴头戴式显示设备时,用户的左眼会聚焦到所述虚像a1’上,用户的右眼会聚焦到所述虚像a2’上。然后,所述虚像a1’和所述虚像a2’会在用户的大脑中叠加成为一幅完整且 具有立体感的图像,该过程被称为辐辏。在辐辏过程中,双眼视线的交汇点会被用户认为是所述图像a1和所述图像a2所描述的物体实际所在的位置。由于辐辏过程,用户可以感受到所述头戴式显示设备提供的3D场景。
基于图1所示的用户体验3D场景的原理,下面介绍所述头戴式显示设备生成所述显示屏101和所述显示屏103上显示的图像的方式。
通常情况下,所述头戴式显示设备会作出以下假定:用户佩戴该头戴式显示设备时,左眼瞳孔中心和所述显示屏101的中心、所述光学组件102的中心位于同一条直线上,右眼瞳孔中心和所述显示屏103的中心、所述光学组件104的中心位于同一条直线上。即,所述头戴式显示设备假定用户的IPD等于所述显示屏101的中心和所述显示屏103的中心之间的距离,也等于所述光学组件102的中心和所述光学组件104的中心之间的距离(inter-optics distance,IOD)。
所述头戴式显示设备会基于该假定,生成所述显示屏101和所述显示屏103上显示的图像。具体的,所述头戴式显示设备会先获取3D场景信息,根据该3D场景信息构造3D场景。该3D场景信息描述了想要让用户感受到的3D场景的一些信息,也就是说,该3D场景信息指示用户处于该3D场景中时能够看到的物体,以及,各个物体和用户之间的相对位置。然后,所述头戴式显示设备可以模拟或者假定IPD等于IOD的用户自然地身处构造的该3D场景中,获取该用户左眼看到的图像,将该图像显示于所述显示屏101上;获取该用户右眼看到的图像,将该图像显示于所述显示屏103上。在一些实施例中,所述头戴式显示设备可以通过两个取像相机来获取所述显示屏101和所述显示屏103上显示的图像。例如,所述头戴式显示设备将两个取像相机放置于构建的3D场景中,并假定IPD等于IOD的用户自然地身处该3D场景中。一个取像相机位于该用户的左眼的位置,用于获取该用户从该位置观看3D场景时所看到的图像,该图像即用户左眼看到的图像。另一个取像相机位于该用户的右眼的位置,用于获取该用户从该位置观看3D场景时所看到的图像,该图像即用户右眼看到的图像。该两个取像相机的间距和假定的用户IPD相同,即等于IOD。取像相机为虚拟概念,而并非实际存在的硬件。
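作为示意,下面给出一段Python代码,说明上述“两个取像相机”的生成方式:以假定的用户双眼连线中点为基准,左右各偏移IOD/2放置取像相机并分别渲染。其中的render_from函数、场景与坐标表示均为本示例为便于说明而假设的,并非头戴式显示设备的实际实现。

```python
# 示意性代码:按照“IPD等于IOD”的假定,用两个虚拟取像相机分别生成左右眼图像。
# render_from(scene, eye_pos)为假设的渲染函数,scene与坐标表示也仅为说明用。

def render_stereo_pair(scene, center_pos, iod_mm, render_from):
    """以center_pos为双眼连线的中点,左右各偏移IOD/2放置取像相机。"""
    half = iod_mm / 2.0
    left_eye = (center_pos[0] - half, center_pos[1], center_pos[2])   # 左侧取像相机位置
    right_eye = (center_pos[0] + half, center_pos[1], center_pos[2])  # 右侧取像相机位置
    img_left = render_from(scene, left_eye)    # 用于显示屏101的图像
    img_right = render_from(scene, right_eye)  # 用于显示屏103的图像
    return img_left, img_right
```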
所述头戴式显示设备基于该假定生成图像并将该图像显示在显示屏上后,当IPD等于IOD的用户佩戴该头戴式显示设备时,所述头戴式显示设备可以为用户提供3D场景的真实感、沉浸感,还可以使得用户在观看3D场景中的物体时的辐辏过程自然、舒适,且辐辏后用户实际感受到的3D场景和所述头戴式显示设备构造的3D场景一致。
基于上面介绍的头戴式显示设备生成所述显示屏101和所述显示屏103上显示的图像的方式,下面结合图1描述IPD如何影响用户感受所述头戴式显示设备提供的3D场景。
参考图1,所述头戴式显示设备在所述显示屏101上显示所述图像a1,在所述显示屏103上显示所述图像a2。所述图像a1和所述图像a2的生成方式可参考前文相关描述。
参考图1中的实线眼球,当用户的IPD等于IOD时,该用户左眼往右侧旋转并聚焦到虚像a1',右眼往左侧旋转并聚焦到虚像a2',从而完成辐辏。此时,辐辏对用户来说是自然、舒适且轻松的。并且,图中A1点相对于用户的位置会被用户认为是物体a相对于用户 自己的位置。这里,该用户实际感受到的3D场景和所述头戴式显示设备构造的3D场景一致。
参考图1中的虚线眼球,当用户的IPD不等于IOD时,例如用户的IPD大于IOD时,该用户的左眼往右侧旋转并聚焦到虚像a1',该用户的右眼往左侧旋转并聚焦到虚像a2',从而完成辐辏。虚线眼球辐辏时眼球的旋转角度和实线眼球辐辏时眼球的旋转角度不同,因此辐辏过程对用户来说不一定自然、舒适。此外,当用户的IPD不等于IOD时,图中A2点相对于用户的位置会被用户认为是物体a相对于用户自己的位置。因此,用户的IPD不等于IOD会导致该用户感受到的3D场景和所述头戴式显示设备构造的3D场景不一致,出现失真。
为了让用户佩戴所述头戴式显示设备时能够舒适、轻松且自然地辐辏,并且实际感受到的3D场景和所述头戴式显示设备构造的场景一致,本申请实施例提供了一种显示方法。该显示方法中,所述头戴式显示设备上显示的图像是根据用户的IPD确定的,这样可以使得该用户能够舒适、真实地感受到电子设备构建的3D场景。该方法的具体实现可参考后续实施例的相关描述,在此暂不赘述。
为了更加清楚地描述本申请实施例提供的显示方法,下面首先介绍本申请实施例提供的***及装置。
参考图2,图2示例性示出了本申请实施例提供的一种***10。该***10可以利用VR、增强现实(augmented reality,AR)、混合现实(mixed reality,MR)等技术显示图像,使得用户感受到3D场景,为用户提供VR/AR/MR体验。
如图2所示,该***10可包括:电子设备100、头戴式显示设备200、输入设备300。所述头戴式显示设备200佩戴于用户头部,所述输入设备300由用户手持。可理解的,所述输入设备300是可选设备,也就是说,所述***10也可以不包括所述输入设备300。
所述电子设备100和所述头戴式显示设备200之间可以通过有线或者无线的方式连接。有线连接可包括通过USB接口、HDMI接口等接口进行通信的有线连接。无线连接可包括通过蓝牙、Wi-Fi直连(如Wi-Fi p2p)、Wi-Fi softAP、Wi-Fi LAN、射频等技术进行通信的无线连接中一项或多项。
所述电子设备100和所述输入设备300之间可以通过蓝牙(bluetooth,BT)、近场通信(near field communication,NFC)、ZigBee等近距离传输技术无线连接并通信,还可以通过USB接口、HDMI接口或自定义接口等来有线连接并通信。
所述电子设备100可以是搭载iOS、Android、Microsoft或者其它操作***的便携式终端设备,例如手机、平板电脑,还可以是具有触敏表面或触控面板的膝上型计算机(Laptop)、具有触敏表面或触控面板的台式计算机等非便携式终端设备。所述电子设备100可运行应用程序,以生成用于传输给所述头戴式显示设备200显示的图像。该应用程序例如可以是视频应用、游戏应用、桌面应用等等。
所述头戴式显示设备200的可实现形式包括头盔、眼镜、耳机等可以佩戴在用户头部的电子装置。所述头戴式显示设备200用于显示图像,从而向用户呈现3D场景,给用户带来VR/AR/MR体验。该3D场景可包括3D的图像、3D的视频、音频等等。
所述输入设备300的实现形式可以是实体设备,例如实体的手柄、鼠标、键盘、手写 笔、手环等等,也可以是虚拟设备,例如所述电子设备100生成并通过所述头戴式显示设备200显示的虚拟键盘等。
当所述输入设备300为实体设备时,所述输入设备300可配置有多种传感器,例如加速度传感器、陀螺仪传感器、磁传感器、压力传感器等。压力传感器可设置于所述输入设备300的确认按键下。确认按键可以是实体按键,也可以是虚拟按键。
所述输入设备300用于采集所述输入设备300的运动数据,和,表示所述输入设备300的确认按键是否被按压的数据。其中,所述运动数据包括所述输入设备300的传感器例如加速度传感器采集所述输入设备300的加速度、陀螺仪传感器采集所述输入设备300的运动速度等。所述表示所述输入设备300的确认按键是否被按压的数据包括设置于所述确认按键下的压力传感器采集到的压力值,所述输入设备300生成的电平等。设置于所述确认按键下的压力传感器采集到的压力值不为0,则表示所述输入设备300的确认按键被按压;设置于所述确认按键下的压力传感器采集到的压力值为0,则表示所述输入设备300的确认按键没有被按压。在一些实施例中,所述输入设备300生成的高电平表示所述输入设备300的确认按键被按压,所述输入设备300生成的低电平表示所述输入设备300的确认按键没有被按压。
所述输入设备300可以将采集到的所述输入设备300的运动数据,和,表示所述输入设备300的确认按键是否被按压的数据,发送给所述电子设备100进行分析。所述电子设备100可以根据所述输入设备300采集到的数据,确定所述输入设备300的运动情况以及状态。所述输入设备300的运动情况可包括但不限于:是否移动、移动的方向、移动的速度、移动的距离、移动的轨迹等等。所述输入设备300的状态可包括:所述输入设备300的确认按键是否被按压。所述电子设备100可以根据所述输入设备300的运动情况和/或状态,调整所述头戴式显示设备200上显示的图像和/或启动对应的功能,例如移动该图像中的光标,该光标的移动轨迹由所述输入设备300的运动情况确定,又例如根据所述输入设备300的确认按键被按压的操作启用测量IPD的功能等等。
也就是说,用户可通过在所述输入设备300上输入用户操作,来触发所述电子设备100执行对应的功能。例如,用户可以握持所述输入设备300向左移动3cm时,以使得所述电子设备100将所述头戴式显示设备200上显示的光标向左移动6cm。这样可以使得用户通过对所述输入设备300的操控,来将光标移动到所述头戴式显示设备200中显示屏上的任意位置。又例如,在光标被移动至所述头戴式显示设备200所显示的某个控件上后,用户可以按压所述输入设备300的确认按键,以使得所述电子设备100启用和该控件对应的功能。
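作为示意,下面用一段Python代码说明上述交互逻辑:电子设备按固定比例将输入设备的位移映射为光标位移,并在确认按键被按压时触发对应功能。其中的数据字段名(dx_mm、confirm_pressed)与比例系数均为本示例假设,并非所述输入设备300的实际数据格式。

```python
# 示意性代码:电子设备按固定比例把输入设备的位移映射为光标位移,
# 并在确认按键被按压时触发对应功能。字段名与比例系数均为示例假设。

CURSOR_SCALE = 2.0  # 例:输入设备左移3cm,光标左移6cm

def handle_input_sample(sample, cursor_x_mm, on_confirm):
    """sample为一次上报的数据,如{"dx_mm": -30.0, "confirm_pressed": False}。"""
    cursor_x_mm += CURSOR_SCALE * sample.get("dx_mm", 0.0)
    if sample.get("confirm_pressed", False):
        on_confirm(cursor_x_mm)  # 例如:光标位于控件503上时,启动测量IPD的功能
    return cursor_x_mm
```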
在本申请实施例中,所述头戴式显示设备200用于显示图像。用户看到所述头戴式显示设备200显示的图像后,可以通过在所述输入设备300或者所述头戴式显示设备200输入用户操作,来指示自己能够在所述头戴式显示设备200的显示屏上看到的边缘。用户在所述头戴式显示设备200上输入用户操作的方式可参考后续头戴式显示设备200的相关描述,在此暂不赘述。所述头戴式显示设备200或所述输入设备300可以将采集到的数据发送至所述电子设备100,由所述电子设备100根据该数据进行计算,确定用户能够在所述头戴式显示设备200的显示屏上看到的边缘,并根据该边缘计算该用户的IPD。
在获取用户的IPD后,所述电子设备100可以根据该用户的IPD确定用于在所述头戴式显示设备200上显示的图像,并将该图像显示于所述头戴式显示设备200的显示屏上。这样可以使得用户在观看3D场景中的物体时的辐辏过程自然、舒适,且辐辏后用户实际感受到的3D场景和电子设备构造的3D场景一致,提高了用户佩戴的舒适感,避免了场景失真。
这里,用户指示自己能够在所述头戴式显示设备200的显示屏上看到的边缘的方式、所述电子设备100计算用户IPD的方式、所述电子设备100确定所述头戴式显示设备200上显示的图像并将该图像显示于所述头戴式显示设备200上的方式,可参考后续方法实施例的相关描述,在此暂不赘述。
参考图3A,图3A示出了本申请实施例提供的所述电子设备100的结构示意图。
所述电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,传感器模块180,摄像头193,显示屏194。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了***的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB) 接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等***器件。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与***设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他头戴式显示设备,例如VR设备等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以 是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星***(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯***(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位***(global positioning system,GPS),全球导航卫星***(global navigation satellite system,GLONASS),北斗卫星导航***(beidou navigation satellite system,BDS),准天顶卫星***(quasi-zenith satellite system,QZSS)和/或星基增强***(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成 电信号,之后将电信号传递给ISP转换成数字图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作***,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100的各种功能应用以及数据处理。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。
气压传感器180C用于测量气压。
磁传感器180D包括霍尔传感器。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。
在本申请实施例中,内部存储器121用于存储一个或多个应用的应用程序,该应用程序包括指令。当该应用程序被处理器110执行时,使得所述电子设备100生成用于呈现给用户的内容。示例性的,该应用可以包括用于管理头戴式显示设备200的应用、游戏应用、会议应用、视频应用、桌面应用或其他应用等等。
在本申请实施例中,处理器110用于根据所述头戴式显示设备200或所述输入设备300采集到的数据,来确定用户在所述头戴式显示设备200的显示屏上能够看到的边缘。处理器110还用于根据用户在所述头戴式显示设备200的显示屏上能够看到的边缘,来计算用户的IPD。处理器110确定用户在所述头戴式显示设备200的显示屏上能够看到的边缘的方式,以及,计算用户的IPD的方式可参考后续实施例的描述。
在本申请实施例中,GPU用于根据从处理器110处获取到的数据(例如应用程序提供的数据)执行数学和几何运算,利用计算机图形技术、计算机仿真技术等来渲染图像,确定用于在头戴式显示设备200上显示的图像。在一些实施例中,GPU可以将校正或预失真添加到图像的渲染过程中,以补偿或校正由头戴式显示设备200的光学组件引起的失真。
在一些实施例中,GPU还用于根据从处理器110获取到的用户的IPD,确定用于在头戴式显示设备200上显示的图像。GPU确定头戴式显示设备200上显示的图像的方式可参 考后续实施例的相关描述,在此暂不赘述。
在本申请实施例中,电子设备100可通过移动通信模块150、无线通信模块160或者有线接口将GPU处理后得到的图像发送给头戴式显示设备200。
电子设备100的软件***可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android***为例,示例性说明电子设备100的软件结构。
图3B是本申请实施例的电子设备100的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android***分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和***库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图3B所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图3B所示,应用程序框架层可以包括窗口管理器(window manager),内容提供器,视图***,电话管理器,资源管理器,通知管理器、屏幕管理器(display manager)、活动管理器(activity manager service)、输入管理器(input manager)等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。在本申请实施例中,窗口管理器、屏幕管理器和活动管理器可合作生成用于在头戴式显示设备200上所显示的图像。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图***包括可视控件,例如显示文字的控件,显示图片的控件等。视图***可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在***顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,头戴式显示设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓***的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
***库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子***进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
参考图3C,所述头戴式显示设备200可包括:处理器201、存储器202、通信模块203、传感器***204、摄像头205、显示装置206、音频装置207。以上各个部件可以耦合连接并相互通信。
可理解的,图3C所示的结构并不构成对头戴式显示设备200的具体限定。在本申请另一些实施例中,头戴式显示设备200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。例如,头戴式显示设备200还可以包括物理按键如开关键、音量键、各类接口例如用于支持头戴式显示设备200和电子设备100连接的USB接口等等。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器201可以包括一个或多个处理单元,例如:处理器201可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制,使得各个部件执行相应的功能,例如人机交互、运动跟踪/预测、渲染显示、音频处理等。
存储器202存储用于执行本申请实施例提供的显示方法的可执行程序代码,该可执行程序代码包括指令。存储器202可以包括存储程序区和存储数据区。其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储头戴式显示设备200使用过程中所创建的数据(比如音频数据等)等。此外,存储器202可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器201通过运行存储在存储器202的指令,和/或存储在设置于处理器中的存储器的指令,执行头戴式显示设备200的各种功能应用以及数据处理。
通信模块203可包括无线通信模块。无线通信模块可以提供应用在头戴式显示设备200上WLAN,BT,GNSS,FM,IR等无线通信的解决方案。无线通信模块可以是集成至少一个通信处理模块的一个或多个器件。通信模块203可支持头戴式显示设备200和电子设备100进行通信。可理解的,在一些实施例中,所述头戴式显示设备200也可不包括通信模块203,本申请实施例对此不作限制。
传感器***204可包括加速度计、指南针、陀螺仪、磁力计、或用于检测运动的其他传感器等。传感器***204用于采集对应的数据,例如加速度传感器采集头戴式显示设备200加速度、陀螺仪传感器采集头戴式显示设备200的运动速度等。传感器***204采集到的数据可以反映佩戴该头戴式显示设备200的用户头部的运动情况。在一些实施例中,传感器***204可以为设置在头戴式显示设备200内的惯性测量单元(inertial measurement unit,IMU)。在一些实施例中,所述头戴式显示设备200可以将传感器***获取到的数据发送给所述电子设备100进行分析。所述电子设备100可以根据各个传感器采集到的数据,确定用户头部的运动情况,并根据用户头部的运动情况执行对应的功能,例如启动测量IPD的功能等。也就是说,用户可以通过在所述头戴式显示设备200上输入头部运动操作,来触发所述电子设备100执行对应的功能。用户头部的运动情况可包括:是否转动、转动的方向等等。
传感器***204还可以包括光学传感器,用于结合摄像头205来跟踪用户的眼睛位置以及捕获眼球运动数据。该眼球运动数据例如可以用于确定用户的眼间距、每只眼睛相对于头戴式显示设备200的3D位置、每只眼睛的扭转和旋转(即转动、俯仰和摇动)的幅度和注视方向等等。在一个示例中,红外光在所述头戴式显示设备200内发射并从每只眼睛反射,反射光由摄像头205或者光学传感器检测到,检测到的数据被传输给所述电子设备100,以使得所述电子设备100从每只眼睛反射的红外光的变化中分析用户眼睛的位置、瞳孔直径、运动状态等。
摄像头205可以用于捕获静态图像或视频。该静态图像或视频可以是面向外部的用户周围的图像或视频,也可以是面向内部的图像或视频。摄像头205可以跟踪用户单眼或者双眼的运动。摄像头205包括但不限于传统彩色摄像头(RGB camera)、深度摄像头(RGB depth camera)、动态视觉传感器(dynamic vision sensor,DVS)相机等。深度摄像头可以获取被拍摄对象的深度信息。在一些实施例中,摄像头205可用于捕捉用户眼睛的图像,并将图像发送给所述电子设备100进行分析。所述电子设备100可以根据摄像头205采集到的图像,确定用户眼睛的状态,并根据用户眼睛所处的状态执行对应的功能。也就是说,用户可通过在所述头戴式显示设备200上输入眼睛运动操作,来触发所述电子设备100执行对应的功能。用户眼睛的状态可包括:是否转动、转动的方向、是否长时间未转动、看向外界的角度等等。
所述头戴式显示设备200通过GPU,显示装置206,以及应用处理器等来呈现或者显示图像。
GPU为图像处理的微处理器,连接显示装置206和应用处理器。处理器201可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示装置206可包括:一个或多个显示屏、一个或多个光学组件。该一个或多个显示 屏包括所述显示屏101和所述显示屏103。该一个或多个光学组件包括所述光学组件102和所述光学组件104。这里,所述显示屏101、所述显示屏103、所述光学组件102和所述光学组件104的结构以及它们之间的位置关系可参考图1中的相关描述。后续方法实施例中,为了描述方便,所述头戴式显示设备200的显示装置206中各个器件的标号沿用图1中的标号,即头戴式显示设备200包括显示屏101、显示屏103、光学组件102和光学组件104。
在本申请实施例中,头戴式显示设备200中的显示屏,例如所述显示屏101、所述显示屏103,用于接收电子设备100的GPU处理后的数据或内容(例如经过渲染后的图像),并将其显示出来。在这种情况下,所述头戴式显示设备200可以为计算能力有限的VR眼镜等终端设备,需要和所述电子设备100配合为用户呈现3D场景,给用户提供VR/AR/MR体验。
所述显示屏101和显示屏103上显示的图像具有视差,从而模拟双眼视觉,可以使得用户感受到该图像对应物体的深度,从而产生真实的3D感。
显示屏,例如所述显示屏101、所述显示屏103,可包括显示面板,显示面板可以用于显示图像,从而为用户呈现立体的虚拟场景。显示面板可以采用液晶显示装置LCD,OLED,AMOLED,FLED,Miniled,MicroLed,Micro-oLed,QLED等。
光学组件,例如所述光学组件102、所述光学组件104,用于将来自显示屏的光引导至出射光瞳以供用户感知。在一些实施方式中,光学组件中的一个或多个光学元件(例如透镜)可具有一个或多个涂层,诸如,抗反射涂层。光学组件对图像光的放大允许显示屏在物理上更小、更轻、消耗更少的功率。另外,图像光的放大可以增加显示屏显示的内容的视野。例如,光学组件可以使得显示屏所显示的内容的视野为用户的全部视野。
光学组件还可用于校正一个或多个光学误差。光学误差的示例包括:桶形失真、枕形失真、纵向色差、横向色差、球面像差、彗形像差、场曲率、散光等。在一些实施方式中,提供给显示屏显示的内容被预先失真,由光学组件在从显示屏接收基于内容产生的图像光时校正该失真。
音频装置207用于实现音频的采集以及输出。音频装置207可包括但不限于:麦克风、扬声器、耳机等等。
基于上述图2实施例描述的***10、图3A及图3B实施例描述的电子设备100及图3C实施例描述的头戴式显示设备200,下面以所述电子设备100和所述头戴式显示设备200配合提供3D场景为例,详细描述本申请实施例提供的显示方法。
在该显示方法中,所述电子设备100可以在用户佩戴所述头戴式显示设备200时,确定该用户的IPD,根据该用户的IPD确定用于在所述头戴式显示设备200上显示的图像,并将该图像显示在所述头戴式显示设备200上。该显示方法在所述头戴式显示设备200上显示图像时,考虑到了用户的IPD,这使得用户在观看3D场景中的物体时的辐辏过程自然、舒适,且辐辏后用户实际感受到的3D场景和电子设备构造的3D场景一致,提高了用户佩戴的舒适感,避免了场景失真。后续实施例将通过下面(一)、(二)、(三)三个部分来详细描述该显示方法。
(一)所述电子设备100获取用户的IPD
所述电子设备100可通过用户在所述头戴式显示设备的显示屏上能够看到的最左侧边缘和最右侧边缘来计算用户的IPD。因为,在所述头戴式显示设备200中,显示屏、光学组件和镜筒之间的相对位置已经确定,IPD为影响用户在显示屏上能够看到的最左侧边缘和最右侧边缘的主要因素。所以,所述电子设备100可以根据用户在所述头戴式显示设备200的显示屏上能够看到的最左侧边缘和最右侧边缘来获取用户的IPD。
参考图4A,图4A示例性示出了IPD小于IOD的用户佩戴所述头戴式显示设备200时,双眼和所述光学组件102、所述光学组件104之间的位置关系图。
参考图4B,图4B示例性示出了IPD大于IOD的用户佩戴所述头戴式显示设备200时,双眼和所述光学组件102、所述光学组件104之间的位置关系图。
如图4A及图4B所示,可以将用户佩戴所述头戴式显示设备200时,用户左眼相对所述第一直线的偏移量称为Δi1。所述Δi1为带有符号的值。用户左眼和所述第一直线之间的距离等于所述Δi1的绝对值。在本申请的一些实施例中,可以作以下设定:所述Δi1为正值时,表示用户左眼相对所述第一直线向右偏移;所述Δi1为负值时,表示用户左眼相对所述第一直线向左偏移。在其他一些实施例中,也可以作其他设定,本申请实施例对此不作限制。
如图4A及图4B所示,可以将用户佩戴所述头戴式显示设备200时,用户右眼相对所述第二直线的偏移量称为Δi2。所述Δi2为带有符号的值。用户右眼和所述第二直线之间的距离等于所述Δi2的绝对值。在本申请的一些实施例中,可以作以下设定:所述Δi2为正值时,表示用户右眼相对所述第二直线向右偏移;所述Δi2为负值时,表示用户右眼相对所述第二直线向左偏移。在其他一些实施例中,也可以作其他设定,本申请实施例对此不作限制。
由图4A和图4B示出的用户佩戴所述头戴式显示设备200时双眼和所述光学组件102、所述光学组件104之间的位置关系可知,所述电子设备100可以根据公式1来获取用户的IPD。
IPD=IOD-Δi1+Δi2公式1
在公式1中,所述IPD是当前佩戴所述头戴式显示设备200的用户的实际瞳距。
所述IOD是所述光学组件102的中心和所述光学组件104的中心之间的距离。
所述Δi1,和,用户左眼在所述显示屏101上能够看到的最左侧边缘和最右侧边缘相关联,即所述Δi1的值可以由用户左眼在所述显示屏101上能够看到的最左侧边缘和最右侧边缘确定。
所述Δi2,和,用户右眼在所述显示屏103上能够看到的最左侧边缘和最右侧边缘相关联,即所述Δi2的值可以由用户右眼在所述显示屏103上能够看到的最左侧边缘和最右侧边缘确定。
所述电子设备100根据公式1来获取用户的IPD时,需要确定以下3个参数的值:所述IOD、所述Δi1和所述Δi2。下面将详细描述所述电子设备100如何确定该3个参数的值。
(1)所述电子设备100确定所述IOD
所述头戴式显示设备200出厂后,所述IOD即固定。一般情况下,同一型号的头戴式 显示设备具有相同的IOD。
在一些实施例中,所述电子设备100可以在预先安装的用于管理所述头戴式显示设备200的应用程序的安装包中获取所述IOD的具体值。
在另一些实施例中,所述电子设备100还可以在连接到所述头戴式显示设备200并获取该头戴式显示设备200的型号后,从互联网获取该头戴式显示设备200的IOD的具体值。
(2)所述电子设备100确定所述Δi1
由于所述Δi1的值影响用户左眼在所述显示屏101上能够看到的最左侧边缘和最右侧边缘,所述电子设备100可以根据用户左眼在所述显示屏101上能够看到的最左侧边缘和最右侧边缘来确定所述Δi1。
后续实施例将通过下面1、2两个部分来详细描述所述电子设备100如何确定所述Δi1。
1、所述电子设备100获取第一位置和第二位置,所述第一位置位于用户的左眼在所述显示屏101上能够看到的最左侧边缘,所述第二位置位于用户的左眼在所述显示屏101上能够看到的最右侧边缘。
在本申请实施例中,所述电子设备100可以通过所述头戴式显示设备200显示用户界面,该用户界面可用于用户指示所述第一位置和所述第二位置。所述电子设备100可以根据用户的指示来获取所述第一位置和所述第二位置。
下面以图5A-图5C所示的用户界面为例,详细介绍所述电子设备100如何获取所述第一位置和所述第二位置。
图5A示例性示出了所述头戴式显示设备200上显示的用户界面51和用户界面52。所述用户界面51和所述用户界面52是由所述电子设备100生成并传输给该头戴式显示设备200的。示例性的,所述用户界面51可以显示在所述显示屏101上,所述用户界面52可以显示在所述显示屏103上。具体实现中,所述电子设备100可以根据预先存储的3D场景信息构造3D场景。该3D场景信息描述了让用户感受到的3D场景的一些信息。也就是说,该3D场景信息指示用户处于该3D场景中时能够看到的物体,以及,各个物体和用户之间的相对位置。然后,所述电子设备100可以模拟或者假定用户自然地身处构造的该3D场景中,获取该用户左眼看到的图像,将该图像显示于所述显示屏101上,从而通过所述显示屏101显示所述用户界面51;获取该用户右眼看到的图像,将该图像显示于所述显示屏103上,从而通过所述显示屏103显示所述用户界面52。
需要注意的是,为了让用户感受到3D场景,所述显示屏101显示的所述用户界面51的图像和所述显示屏103显示的所述用户界面52的图像具有视差。
如图5A所示,所述用户界面51和所述用户界面52中均显示有:提示框501、控件502、控件503、光标504。
所述光标504位于所述用户界面51的某一个位置上。用户可以通过所述输入设备300来调整所述光标504在所述用户界面51中的位置,即用户可以通过所述输入设备300来移动所述光标504。用户通过所述输入设备300来移动所述光标504的具体实现可参考图2所示***10中的相关描述。所述光标504的实现形式可以包括箭头、圆圈或其他图标。图5A中示出的所述光标504位于所述控件503上。
所述提示框501用于显示提示信息。所述提示信息可以用于提示用户。例如,所述提示信息可以是文本“是否测量瞳距?测量瞳距后可为您呈现更佳的视觉效果!”,用于提示用户测量IPD以及测量IPD后的效果。
所述控件502用于指示不启动所述电子设备100的测量IPD的功能。所述电子设备100响应于用户和所述控件502的交互,不测量用户的IPD。
所述控件503用于启动所述电子设备100的测量IPD的功能。所述电子设备100可以响应于用户和所述控件503的交互,开始测量用户的IPD。
在本申请实施例中,所述电子设备100通过所述头戴式显示设备200显示如图5A所示的所述用户界面51和所述用户界面52后,可以响应于用于启动所述电子设备100的测量IPD的功能的用户操作,启动所述电子设备100的测量IPD的功能。该用于启动所述电子设备100的测量IPD的功能的用户操作由用户根据所述用户界面51和所述用户界面52输入。
在一些实施例中,该用于启动所述电子设备100的测量IPD的功能的用户操作可以是所述输入设备300检测到的:所述输入设备300在发生运动后,所述输入设备300的确认按键被按压的用户操作。基于所述光标504在所述用户界面51和所述用户界面52中的起点,所述输入设备300发生的所述运动使得所述光标504在所述用户界面51和所述用户界面52中移动后的终点,和所述控件503所在位置相同。也就是说,所述输入设备300发生的所述运动结束时所述光标504位于所述控件503上。响应于所述输入设备300发生的所述运动,所述电子设备100将所述光标504移动至所述控件503上,所述光标504移动至所述控件503上的移动轨迹由所述输入设备300发生所述运动时的运动轨迹确定。具体实现中,所述输入设备300可以采集到特定数据(例如加速度传感器采集到的加速度、陀螺仪传感器采集到的运动速度及运动方向),并可以将该特定数据发送给所述电子设备100,该特定数据表明:所述输入设备300在发生所述运动后,所述输入设备300的确认按键被按压。所述电子设备100可以根据该特定数据启动测量IPD的功能。也就是说,用户可以操控所述输入设备300运动以触发所述电子设备100将所述光标504移动至所述控件503上,然后按压所述输入设备300的确认按键,触发所述电子设备100启动测量IPD的功能。
在一些实施例中,该用于启动所述电子设备100的测量IPD的功能的用户操作可以是所述头戴式显示设备200检测到的:用户的头部往特定方向转动的操作。该特定方向可以是该用户的左方、右方、上方或下方等。具体实现中,所述头戴式显示设备200的所述传感器***204中的传感器可以采集到特定数据(例如陀螺仪传感器采集到的运动速度及运动方向),并将该特定数据发送给电子设备100,该特定数据表明:用户的头部往特定方向转动。所述电子设备100可以根据该特定数据启动测量IPD的功能。也就是说,用户可以通过往特定方向转动头部,触发所述电子设备100启动测量IPD的功能。
在一些实施例中,该用于启动所述电子设备100的测量IPD的功能的用户操作可以是所述头戴式显示设备200检测到的语音指令。该语音指令例如可以是“开始测量”。具体实现中,所述头戴式显示设备200的麦克风可以采集用户输入的语音数据,并将该语音数据发送至所述电子设备100,该语音数据表明该语音指令。所述电子设备100可以根据该语音数据启动测量IPD的功能。也就是说,用户可以通过说出语音指令,触发所述电子设备100启动测量IPD的功能。
在一些实施例中,该用于启动所述电子设备100的测量IPD的功能的用户操作可以是所述头戴式显示设备200检测到的:用户的左眼望向所述控件503且在预设时长内未发生转动的用户操作。具体实现中,所述头戴式显示设备200的摄像头可以采集到用户眼球的特定图像,并将该特定图像发送给所述电子设备100,该特定图像表明:用户的左眼望向所述控件503且在预设时长内未发生转动。所述电子设备100可以根据该特定图像启动测量IPD的功能。也就是说,用户可以通过长时间地看向所述控件503,触发所述电子设备100启动测量IPD的功能。
不限于上述实施例示例性列举的方式,在本申请实施例中,用于启动所述电子设备100的测量IPD的功能的用户操作还可以为其他形式。例如用于启动所述电子设备100的测量IPD的功能的用户操作还可以是所述头戴式显示设备200检测到的:用户的左眼望向所述控件503时的两次眨眼操作等等。本申请实施例不再对其他形式的用于启动所述电子设备100的测量IPD的功能的用户操作一一进行列举。
响应于用于启动所述电子设备100的测量IPD的功能的用户操作,所述电子设备100可以启动测量用户IPD的功能,开始测量用户的IPD。
图5B示例性示出了所述头戴式显示设备200上显示的用户界面53和用户界面54。所述用户界面53和所述用户界面54可以是由所述电子设备100启动测量用户IPD的功能时,所述电子设备100生成并传输给所述头戴式显示设备200的,即,所述用户界面53和所述用户界面54可以是由所述电子设备100响应于用于启动所述电子设备100的测量IPD的功能的用户操作生成并传输给该头戴式显示设备200的。所述电子设备100生成所述用户界面53和所述用户界面54的方式,可参考所述电子设备100生成所述用户界面51和所述用户界面52的方式,这里不再赘述。
如图5B所示,所述用户界面53显示在所述显示屏101上,所述用户界面54显示在所述显示屏103上。所述显示屏103显示所述用户界面54时,可以黑屏(即所述头戴式显示设备200停止所述显示屏103的电源供应),也可以显示黑色的图像。这样可以帮助用户将注意力集中在左眼上。
所述用户界面53中显示有:光标504、图像505、提示框506。其中:
所述光标504位于所述用户界面53中的某一位置上。所述光标504可参考图5A所示的所述用户界面51和所述用户界面52中的光标504,这里不再赘述。
所述提示框506用于显示提示信息。所述提示信息可以用于提示用户。例如所述提示信息可以是文本“请您尽量将左眼往左侧看,将滑块拖动到您能看到的左侧边缘位置并确认”,用于提示用户指示第一位置。在本申请实施例中,第一提示信息可以是所述提示信息。不限于在所述用户界面53中显示的所述提示信息,第一提示信息还可以是所述电子设备100输出的语音或其他类型的提示信息等,本申请实施例对此不作限制。
本申请实施例对所述图像505的内容不做限制。所述图像505例如可以是带有滑块的标尺的图像等,该带有滑块的标尺和所述第三直线平行且经过所述显示屏101的中点,即该带有滑块的标尺位于所述第四直线上。以下实施例以所述图像505为带有滑块的标尺的图像为例进行说明。
在本申请实施例中,所述电子设备100可以在显示如图5B所示的用户界面后,响应于 用于指示所述第一位置的用户操作获取所述第一位置。该用于指示所述第一位置的用户操作由用户根据所述显示屏101上显示的所述用户界面53输入。
在一些实施例中,该用于指示所述第一位置的用户操作可以是所述输入设备300检测到的:所述输入设备300发生第一轨迹的运动后,所述输入设备300的确认按键被按压的同时所述输入设备300发生第二轨迹的运动,之后,所述输入设备300的确认按键被停止按压的操作。其中,基于所述光标504在所述用户界面53上的起点,所述输入设备300发生的所述第一轨迹的运动使得所述光标504在所述用户界面53中移动后的终点和所述滑块的图像所在位置相同;所述输入设备300发生所述第二轨迹的运动后,所述光标504在所述用户界面53中移动后的终点为所述滑块的图像结束移动的位置。
具体实现中,所述输入设备300可以采集到特定数据(例如加速度传感器采集到的加速度、陀螺仪传感器采集到的运动速度及运动方向),并可以将该特定数据发送给所述电子设备100,该特定数据表明:所述输入设备300发生所述第一轨迹的运动后,所述输入设备300的确认按键被按压的同时所述输入设备300发生第二轨迹的运动,之后该确认按键被停止按压。
响应于该用于指示所述第一位置的用户操作,所述电子设备100移动所述光标504至所述滑块的图像上,然后移动所述光标504和所述滑块的图像,并将所述滑块的图像结束移动的位置确定为所述第一位置。也就是说,用户可以通过操控所述输入设备300做所述第一轨迹的运动以触发所述电子设备100将所述光标504移动至滑块的图像上,之后按压所述输入设备300的确认按键并同时操控所述输入设备300做所述第二轨迹的运动,然后停止按压所述确认按键,来指示所述第一位置。
在一些实施例中,该用于指示所述第一位置的用户操作可以是所述头戴式显示设备200检测到的:用户的左眼在预设时长内未发生转动的用户操作。具体实现中,所述头戴式显示设备200的摄像头可以采集到用户眼球的特定图像,并将该特定图像发送给所述电子设备100,该特定图像表明:用户的左眼在预设时长内未发生转动。所述电子设备100可根据该特定图像,将用户的左眼在预设时长内未发生转动时望向所述显示屏101的位置确定为所述第一位置。也就是说,用户可以通过长时间地看向所述显示屏101中的某个位置,来将该位置指示为所述第一位置。
在一些实施例中,所述头戴式显示设备200也可以在采集到所述特定图像后,将根据该特定图像确定的用户的左眼在预设时长内未发生转动时看向所述显示屏101的位置确定为所述第一位置。之后,所述头戴式显示设备200可以将确定的所述第一位置发送给所述电子设备100,使得所述电子设备100获取所述第一位置。
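作为示意,下面给出一段Python代码,说明“在预设时长内未发生转动”这一判断的一种可能实现:当注视点在设定时长内保持在一个小范围内时,将其作为用户指示的位置。其中的时长、范围阈值以及注视点数据格式均为本示例假设,并非上述实施例的实际实现。

```python
# 示意性代码:当注视点在预设时长内保持在小范围内(即“未发生转动”)时,
# 将该注视位置作为用户指示的位置(例如第一位置)。阈值与数据格式均为示例假设。

def dwell_position(gaze_samples, dwell_s=2.0, tol_px=5):
    """gaze_samples为按时间递增的(时间秒, 注视点x像素)序列;
    若存在一段时长不小于dwell_s、且其中注视点相对段首偏移不超过tol_px的区间,
    则返回该区间末尾的注视点x坐标,否则返回None。"""
    for start in range(len(gaze_samples)):
        t0, x0 = gaze_samples[start]
        for t, x in gaze_samples[start:]:
            if abs(x - x0) > tol_px:
                break
            if t - t0 >= dwell_s:
                return x
    return None
```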
在本申请实施例中,所述头戴式显示设备200采集到的所述用户眼球的图像,属于所述头戴式显示设备200采集到的所述用户的操作数据。
不限于上述实施例示例性列举的形式,在本申请实施例中,所述用于指示所述第一位置的用户操作还可以为其他形式。本申请实施例不再对其他形式的用于指示所述第一位置的用户操作一一进行列举。
所述电子设备100还可以通过和上述图5B-图5C实施例所示的相同的方式来获取所述第二位置。例如,所述电子设备100获取所述第一位置之后,可以将所述头戴式显示设备 200显示的所述用户界面53中提示框的提示信息更改为文本“请您尽量将左眼往右侧看,将滑块拖动到您能看到的右侧边缘位置并确认”。之后,所述头戴式显示设备200或输入设备300可以检测到用于指示所述第二位置的用户操作,并将检测到的表征该用于指示所述第二位置的用户操作的特定数据或特定图像发送给所述电子设备100,所述电子设备100可以根据该特定数据或特定图像来获取所述第二位置。该用于指示所述第二位置的用户操作由用户根据所述显示屏101上显示的所述用户界面53输入。该用于指示所述第二位置的用户操作和所述用于指示所述第一位置的用户操作类似,可以参考所述用于指示所述第一位置的用户操作,这里不再详细描述。
在本申请实施例中,第二提示信息可以是在所述用户界面53中显示的用于提示用户指示所述第二位置的提示信息,例如可以为上述文本“请您尽量将左眼往右侧看,将滑块拖动到您能看到的右侧边缘位置并确认”。不限于此,第二提示信息还可以是所述电子设备100输出的语音或其他类型的提示信息等,本申请实施例对此不作限制。
在本申请实施例中,第一用户界面可以是所述显示屏101上显示的用于用户指示所述第一位置和所述第二位置的用户界面。例如,所述第一用户界面可以为如图5B所示的用户界面53。需要注意的是,所述用户界面53仅为示例,所述第一用户界面还可以实现为其他形式,本申请实施例对此不作限制。
本申请实施例对所述电子设备100获取所述第一位置和所述第二位置的时间先后顺序不作限制。在一些实施例中,所述电子设备100可以先根据检测到的所述用于指示所述第一位置的用户操作获取所述第一位置,再根据检测到的所述用于指示所述第二位置的用户操作获取所述第二位置。在另一些实施例中,电子设备100可以先根据检测到的所述用于指示所述第二位置的用户操作获取所述第二位置,再根据检测到的所述用于指示所述第一位置的用户操作获取所述第一位置。
在一些实施例中,所述电子设备100可以在所述头戴式显示设备200首次开机时,输出所述用户界面51和所述用户界面52。这样,所述电子设备100可以在所述头戴式显示设备200首次开机时获取所述第一位置和所述第二位置,从而获取用户的IPD,并根据用户的IPD在所述头戴式显示设备200上显示图像。这样在所述头戴式显示设备200首次开机后,就可以保证用户佩戴所述头戴式显示设备200时能够舒适、轻松且自然地辐辏,并且实际感受到的3D场景和所述电子设备100构造的场景一致。
在一些实施例中,所述电子设备100可以周期性地输出所述用户界面51和所述用户界面52。例如,所述电子设备100可以每月一次或每周一次的频率在所述头戴式显示设备200上显示所述用户界面51和所述用户界面52,从而周期性地获取用户的IPD,并根据用户的IPD在所述头戴式显示设备200上显示图像。这样即使用户的IPD有所变化,也能保证用户佩戴所述头戴式显示设备200时能够舒适、轻松且自然地辐辏,并且实际感受到的3D场景和所述电子设备100构造的场景一致。
在一些实施例中,所述电子设备100可以根据用户需求输出所述用户界面51和所述用户界面52。例如,用户可以在长时间未使用所述头戴式显示设备200后,在所述头戴式显示设备200显示的设置界面中主动触发所述电子设备100输出所述用户界面51和所述用户 界面52。
在一些实施例中,所述电子设备100可以在有新用户佩戴该头戴式显示设备200时,输出所述用户界面51和所述用户界面52。具体的,所述电子设备100可以在用户佩戴所述头戴式显示设备200时,识别当前用户是否为新用户。所述电子设备100可以通过生物特征例如虹膜、指纹、声纹、人脸等等来识别用户,生物特征可以由所述头戴式显示设备200或所述电子设备100来采集。这样,所述电子设备100可以确定每一个用户的IPD,并适应于不同用户的IPD在所述头戴式显示设备200上显示图像,保证每个用户佩戴所述头戴式显示设备200时能够舒适、轻松且自然地辐辏,并且实际感受到的3D场景和所述电子设备100构造的场景一致,给每个用户都带来良好的视觉体验。
不限于通过图5A-图5C中所示的输出特殊的所述第一用户界面的方式,在其他一些实施例中,所述电子设备100还可以通过其他方式来获取所述第一位置和所述第二位置。例如,所述头戴式显示设备200可以通过摄像头采集用户使用所述头戴式显示设备200一段时间内(例如一周或一个月内)的眼球图像,并将该图像发送至所述电子设备100,由所述电子设备100根据该图像确定这段时间内用户左眼能够看到的所述显示屏101上的最左侧边缘和最右侧边缘,从而获取所述第一位置和所述第二位置。这样,无需用户特意操作或反馈,所述电子设备100即可确定所述Δi1,从而获取用户的IPD,并根据用户的IPD在所述头戴式显示设备200上显示图像。这种方式对于用户来说更加简单方便,体验更佳。
2、所述电子设备100根据所述第一位置和所述第二位置确定所述Δi1。
所述电子设备100可以根据用户佩戴所述头戴式显示设备200时的几何关系,来计算所述Δi1。下面将详细介绍该几何关系,并推导所述Δi1的计算公式。
参考图6,图6为本申请实施例中所述电子设备100可以获取到的用户佩戴所述头戴式显示设备200时的几何关系示意图。其中:
C’为用户佩戴所述头戴式显示设备200时左眼所在位置。
O’为所述显示屏101的中心。
J为所述第一位置,K为所述第二位置。J、K的确定方式可参考上述第1点中的相关描述。
D为所述光学组件102左侧边缘和所述第三直线的交点,E为所述光学组件102的右侧边缘和所述第三直线的交点。
O为虚像面的中心,也是O’在虚像面上对应的成像点,还是所述第一直线和虚像面的交点。A’、B’分别为J、K在虚像面上对应的成像点。
由于A’是所述第一位置的虚像点,A’、D及C’位于同一条直线上。由于B’是所述第二位置的虚像点,B’、E及C’位于同一条直线上。
C为所述第一直线上的一个点。A、B分别为假定用户的左眼位于C点时,该假定的用户左眼对应的第一位置和第二位置在虚像面上的成像点。A、D及C位于同一条直线上,B、E及C位于同一条直线上。
F为经过D作虚像面的垂线后所得到的垂足。H为经过E作虚像面的垂线后所得到的垂足。G为经过C’作虚像面的垂线后所得到的垂足。
C’相对所述第一直线的偏移量为Δi1。
假定所述第一位置位于所述第四直线上(即J点位于第四直线上),则A’、D、F、C’、G位于同一平面上。假定所述第二位置位于所述第四直线上,(即K点位于第四直线上),则B’、E、H、C’、G位于同一平面上。那么从图6所示的几何关系可知,有以下两对相似三角形:
ΔA′DF~ΔA′C′G
ΔB′EH~ΔB′C′G
因此,可以得到以下比例关系:
A′F/A′G=DF/C′G
B′H/B′G=EH/C′G
由于D、E所在的所述第三直线和虚像面平行,DF=EH。因此,可以得到以下关系:
A′F/A′G=B′H/B′G
由图6可知,A′F=A′O-L/2,A′G=A′O+Δi1,B′H=B′O-L/2,B′G=B′O-Δi1。其中,L为所述光学组件102的直径。可以得到以下关系:
(A′O-L/2)/(A′O+Δi1)=(B′O-L/2)/(B′O-Δi1)
根据成像原理可知,A′O=M×JO′,B′O=M×KO′。M为所述光学组件102对图像光的放大倍数。因此,可以推导出以下公式2,所述电子设备100可以根据公式2来计算得到所述Δi1:
Δi1=(M×(JO′-KO′)×L)/(2×(M×(JO′+KO′)-L))公式2
根据公式2计算出的所述Δi1是具有符号的值。所述Δi1的值为正时,表示用户的左眼相对所述第一直线向右偏移;当所述Δi1的值为负时,表示用户的左眼相对所述第一直线向左偏移。用户的左眼相对所述第一直线的偏移距离为所述Δi1的绝对值。
在本申请的一些实施例中,所述第一位置不在所述第四直线上,或者,所述第二位置不在所述第四直线上时,图6所示的几何关系实际上不成立。在这种情况下,可以假定图6所示的几何关系成立并根据公式2来计算所述Δi1。此时计算出来的所述Δi1和其代表的用户左眼相对所述第一直线的偏移量稍有误差,但可以使得用户通过更丰富的形式来指示所述第一位置和所述第二位置。例如,所述电子设备100通过所述头戴式显示设备200的所述显示屏101显示所述用户界面53时,所述用户界面53中的标尺可以和所述第三直线平行且偏离所述显示屏101的中点,用户可以根据该用户界面53来指示所述第一位置和所述第二位置。
所述电子设备100根据公式2来计算所述Δi1时,需要确定以下几个参数的值:M、L、JO′和KO′。下面详细描述所述电子设备100如何确定这几个参数的值。
M为所述光学组件102的放大倍数。部分头戴式显示设备的M值是固定的,为虚像像高和实像像高的比值,在这种情况下,所述电子设备100可以在预先安装的用于管理所述头戴式显示设备200的应用程序的安装包中获取M的值,也可以在获取该头戴式显示设备200的型号后,根据该型号从互联网获取M的值。部分头戴式显示设备的M值是可调的,在这种情况下,所述电子设备100可以先获取该头戴式显示设备的调焦信息(例如滑动变阻器的当前阻值),并根据该调焦信息来计算所述头戴式显示设备200的当前M值。
L为所述光学组件102的直径。头戴式显示设备出厂后,L即固定,且一般情况下同一型号的头戴式显示设备具有相同的L。在一些实施例中,所述电子设备100可以在预先安装的用于管理所述头戴式显示设备200的应用程序的安装包中获取L的值。在另一些实施例中,所述电子设备100还可以在连接到所述头戴式显示设备200并获取该头戴式显示设备200的型号后,根据该型号从互联网获取该头戴式显示设备200的L的值。
JO′为用户佩戴所述头戴式显示设备200时,所述第一位置到所述显示屏101中心的距离。所述电子设备100可以根据所述第一位置计算得到该值。所述第一位置的确定方式可参照上述第1点的相关描述。在一个具体的实施例中,所述电子设备100可计算所述第一位置距离所述显示屏101中心的像素点个数,再用像素点个数乘以每个像素点的大小即可获取JO′的值。
KO′为用户佩戴所述头戴式显示设备200时,所述第二位置到所述显示屏101中心的距离。所述电子设备100可以根据确定的所述第二位置计算得到该值。所述第二位置的确定方式可参照上述第1点的相关描述。在一个具体的实施例中,所述电子设备100可计算所述第二位置距离所述显示屏101中心的像素点个数,再用像素点个数乘以每个像素点的大小即可获取KO′的值。
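作为示意,下面给出一段Python代码,演示由像素点个数与像素尺寸换算JO′、KO′,再按公式2计算所述Δi1;其中的函数名与参数组织方式为本示例假设。

```python
# 示意性代码:由像素点个数换算JO′、KO′,再按公式2计算Δi1。

def distance_to_center_mm(pixel_count, pixel_size_mm):
    """第一位置/第二位置到所述显示屏101中心的距离 = 像素点个数 × 每个像素点的大小。"""
    return pixel_count * pixel_size_mm

def delta_i1(jo_mm, ko_mm, m, l_mm):
    """公式2:Δi1 = M×(JO′-KO′)×L / (2×(M×(JO′+KO′)-L))。
    返回值为正表示左眼相对第一直线向右偏移,为负表示向左偏移。"""
    return (m * (jo_mm - ko_mm) * l_mm) / (2.0 * (m * (jo_mm + ko_mm) - l_mm))
```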
(3)所述电子设备100确定所述Δi2
由于所述Δi2的值影响用户右眼在所述显示屏103上能够看到的最左侧边缘和最右侧边缘,所述电子设备100可以根据用户右眼在所述显示屏103上能够看到的最左侧边缘和最右侧边缘来确定所述Δi2。
具体的,所述电子设备100可以获取所述第三位置和所述第四位置,所述第三位置位于用户的右眼在所述显示屏103上能够看到的最左侧边缘,所述第四位置位于用户的右眼在所述显示屏103上能够看到的最右侧边缘。
所述电子设备100获取所述第三位置、所述第四位置的方式,和上述第(2)点中所述电子设备100获取所述第一位置、所述第二位置的方式类似,可参考前文相关描述,这里不再赘述。
例如,所述电子设备100可以通过所述头戴式显示设备200显示用于用户指示所述第三位置、所述第四位置的用户界面。具体的,所述电子设备100可以通过所述头戴式显示设备200的所述显示屏101显示如图5B所示的用户界面54,通过所述显示屏103显示如图5B所示的用户界面53。之后,所述电子设备100可以检测到用于指示所述第三位置的用户操作,并响应于该用于指示所述第三位置的用户操作获取所述第三位置;还可以检测 到用于指示所述第四位置的用户操作,并响应于该用于指示所述第四位置的用户操作获取所述第四位置。用于指示所述第三位置的用户操作、用于指示所述第四位置的用户操作由用户根据所述显示屏103上显示的用户界面53输入。所述用于指示所述第三位置的用户操作和所述用于指示所述第四位置的用户操作,可参考所述用于指示所述第一位置的用户操作、所述用于指示所述第二位置的用户操作。
在一些实施例中,所述电子设备100通过所述显示屏101显示的用户界面也可以不是用户界面53,本申请实施例对此不作限制。在本申请实施例中,第二用户界面可以是所述电子设备100通过所述显示屏103显示的用于用户指示所述第三位置、所述第四位置的用户界面。
需要注意的是,本申请实施例对所述电子设备100通过所述显示屏101显示所述第一用户界面、通过所述显示屏103显示所述第二用户界面的时间先后不作限制。可以有先后顺序,也可以同时。
需要注意的是,本申请实施例对所述电子设备100确认所述第一位置、所述第二位置、所述第三位置和所述第四位置的时间先后不作限制。可以有先后顺序,也可以同时。
所述电子设备100根据所述第三位置和所述第四位置确定所述Δi2的方式,和上述第(2)点中所述电子设备100根据所述第一位置和所述第二位置确定所述Δi1的方式类似,可参考前文相关描述,这里不再赘述。例如,所述电子设备100可根据和图6类似的几何关系来计算得到所述Δi2。
至此,通过上述第(1)、(2)、(3)点的描述,所述电子设备100可以确定公式1中的三个参数的具体值,并可以根据上述公式1获取用户的IPD。
在一些可选实施例中,用户的左眼和右眼对称,即用户左眼和右眼所在连线的中垂线,也是所述光学组件102中心和所述光学组件104中心所在连线的中垂线。在这种情况下,所述Δi1和所述Δi2大小相等,符号相反,所述电子设备100可以通过以下公式3或公式4来计算用户的IPD:
IPD=IOD-2×Δi1公式3
IPD=IOD+2×Δi2公式4
这样,所述电子设备100在确定所述Δi1或所述Δi2后,即可获取用户的IPD,减少了所述电子设备100的计算过程。此外,也可以减少用户操作,对于用户来说更加简单方便,可以提升用户体验。
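作为示意,下面用一段Python代码汇总公式1、公式3与公式4的计算;示例数值仅用于说明。

```python
# 示意性代码:由Δi1、Δi2与IOD计算用户的IPD。

def ipd_from_offsets(iod_mm, delta_i1_mm, delta_i2_mm):
    return iod_mm - delta_i1_mm + delta_i2_mm   # 公式1

def ipd_symmetric_from_i1(iod_mm, delta_i1_mm):
    return iod_mm - 2.0 * delta_i1_mm           # 公式3:双眼对称时只需Δi1

def ipd_symmetric_from_i2(iod_mm, delta_i2_mm):
    return iod_mm + 2.0 * delta_i2_mm           # 公式4:双眼对称时只需Δi2

# 例:IOD=63mm、Δi1=1mm、Δi2=-1.03mm时,IPD=63-1+(-1.03)=60.97mm
print(round(ipd_from_offsets(63.0, 1.0, -1.03), 2))
```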
(二)所述电子设备100存储用户的IPD
所述电子设备100可以存储一个或多个用户的IPD。这样,在不同的用户佩戴所述头戴式显示设备200时,所述电子设备100均可以根据用户的IPD确定用于在所述头戴式显示设备200上显示的图像,并将该图像显示在所述头戴式显示设备200上,从而使得用户在观看3D场景中的物体时的辐辏过程自然、舒适,且辐辏后用户实际感受到的3D场景和所述电子设备100构造的3D场景一致。
所述电子设备100可以将用户的IPD存储在本地,也可以存储在云端,本申请实施例 对此不作限制。
具体实现中,所述电子设备100可以将获取到的用户的IPD和用户标识关联存储。在其他一些实施例中,除了用户的IPD,所述电子设备100还可以将所述Δi1、所述Δi2中的一项或多项和用户标识关联存储。用户标识可包括用户的姓名、昵称、指纹信息、声纹信息、人脸信息等等。
参考表1,表1示出了一种可能的所述电子设备100关联存储的多个用户标识和对应的IPD、Δi1以及Δi2。这里,所述头戴式显示设备200的所述光学组件102的中心和所述光学组件104的中心之间的距离可以为63mm。
用户标识 IPD Δi1 Δi2
用户标识1 IPD=60.97mm Δi1=1mm Δi2=-1.03mm
用户标识2 IPD=63mm Δi1=0mm Δi2=0mm
用户标识3 IPD=65.02mm Δi1=-1.02mm Δi2=1mm
表1用户标识和IPD、Δi1以及Δi2的关联存储表
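作为参考,下面给出一种最简单的关联存储示意(Python字典),其键值结构仅为本示例假设;实际实现中既可存储在本地,也可存储在云端。

```python
# 示意性代码:将用户标识与该用户的IPD、Δi1、Δi2关联存储(对应表1中的数据)。

ipd_records = {}

def save_user_ipd(user_id, ipd_mm, delta_i1_mm, delta_i2_mm):
    ipd_records[user_id] = {"IPD": ipd_mm, "delta_i1": delta_i1_mm, "delta_i2": delta_i2_mm}

save_user_ipd("用户标识1", 60.97, 1.0, -1.03)
save_user_ipd("用户标识2", 63.0, 0.0, 0.0)
save_user_ipd("用户标识3", 65.02, -1.02, 1.0)
```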
(三)所述电子设备100根据用户的IPD校正源图像以得到目标图像,并将所述目标图像发送给所述头戴式显示设备200;所述头戴式显示设备200在显示屏上显示所述目标图像
下面通过以下(1)、(2)点详细描述该过程。
(1)所述电子设备100根据用户的IPD校正源图像以得到目标图像。
在本申请实施例中,所述目标图像是所述电子设备100发送给所述头戴式显示设备200,以供所述头戴式显示设备200在显示屏上所显示的图像。所述目标图像包括第一目标图像和第二目标图像;所述第一目标图像显示于所述显示屏101上,所述第二目标图像显示于所述显示屏103上。在一些实施例中,所述第一目标图像的尺寸等于所述显示屏101的尺寸,所述第二目标图像的尺寸等于所述显示屏103的尺寸。
具体的,所述电子设备100先获取源图像,根据用户的IPD校正该源图像,从而得到所述目标图像。通常情况下,所述源图像可以预置在所述电子设备100安装的应用程序的安装包中。
先描述在一些场景例如游戏场景下,所述电子设备100如何根据IPD校正源图像以得到所述目标图像。在一些实施例中,所述源图像包含多组数据;一组所述数据对应一个IPD,用于为具有所述一个IPD的用户构建3D场景。该3D场景是电子设备100想要为用户呈现的3D场景。换句话说,所述源图像可以指示用户处于该3D场景中时能够看到的物体,以及,各个物体和用户之间的相对位置。
电子设备100可以先根据用户的IPD,利用所述源图像生成第一图像和第二图像,所述第一图像和所述第二图像为具有所述IPD的用户呈现所述3D场景;所述用户的IPD在所述源图像中对应的一组数据包含在所述多组数据中。换句话说,所述电子设备100根据所述源图像,模拟用户自然地身处该3D场景中,根据该用户的IPD获取该用户左眼看到的图像和右眼看到的图像,将该用户左眼看到的图像作为第一图像,将该用户右眼看到的图像作为第二图像。在一些实施例中,所述电子设备100可以通过两个取像相机来获取用 户自然地身处该3D场景中时,该用户左眼看到的图像和右眼看到的图像。所述电子设备100通过取像相机获取该用户左眼看到的图像和右眼看到的图像的原理可参照前文实施例的相关描述,在此不再赘述。
示例性地,参考图7A,图7A示例性示出了所述电子设备100想要为用户呈现的3D场景以及模拟用户置身于该3D场景中的情景。如图7A所示,该3D场景中包括:太阳、山、树以及草。
参考图7B,其示例性示出了所述电子设备100根据第一IPD,利用所述源图像生成的第一图像和第二图像。第一IPD等于IOD。
参考图7C,其示例性示出了所述电子设备100根据第二IPD,利用所述源图像生成的第一图像和第二图像。第二IPD不等于IOD。
由于第一IPD和第二IPD不同,图7B中的第一图像和图7C中的第一图像具有视差,图7B中的第二图像和图7C中的第二图像具有视差。
之后,所述电子设备100根据所述第一图像生成第一目标图像,所述第一目标图像为所述第一图像的部分,所述第一目标图像中包含所述第一图像的中心,并且所述第一目标图像中的所述第一图像的中心相对所述第一目标图像的中心的偏移量为所述Δi1;所述电子设备100根据所述第二图像生成第二目标图像,所述第二目标图像为所述第二图像的部分,所述第二目标图像中包含所述第二图像的中心,并且所述第二目标图像中的所述第二图像的中心相对所述第二目标图像的中心的偏移量为所述Δi2。换句话说,所述第一目标图像的中心为将所述第一图像的中心调整所述Δi1的偏移量,所述第二目标图像的中心为将所述第二图像的中心调整所述Δi2的偏移量。
具体的,当所述Δi1为正值时,所述第一目标图像中的所述第一图像的中心相对所述第一目标图像的中心向右偏移;当所述Δi1为负值时,所述第一目标图像中的所述第一图像的中心相对所述第一目标图像的中心向左偏移;并且,偏移的距离为所述Δi1的绝对值。类似的,当所述Δi2为正值时,所述第二目标图像中的所述第二图像的中心相对所述第二目标图像的中心向右偏移;当所述Δi2为负值时,所述第二目标图像中的所述第二图像的中心相对所述第二目标图像的中心向左偏移;并且,偏移的距离为所述Δi2的绝对值。
示例性地,参考图7B,图7B还示例性示出了所述电子设备100根据图7B所示的第一图像生成的第一目标图像、根据图7B中的第二图像生成的第二目标图像。如图7B所示,所述Δi1和所述Δi2均为0。
示例性地,参考图7C,图7C示例性示出了电子设备根据图7C所示的第一图像生成的第一目标图像、根据图7C中的第二图像生成的第二目标图像。如图所示,所述Δi1和所述Δi2均不为0。
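作为示意,下面给出一段Python代码,说明“将源渲染图像的中心相对目标图像的中心调整Δi的偏移量”的一种裁剪实现;图像以行的列表表示,毫米到像素的换算系数为本示例假设,且未处理越界等边界情况。

```python
# 示意性代码:从第一图像/第二图像中裁剪出第一目标图像/第二目标图像,
# 使源图像的中心相对目标图像的中心水平偏移Δi(为正向右、为负向左)。
# 图像用“行的列表”表示;px_per_mm为毫米到像素的换算系数;未做越界检查。

def make_target_image(image, target_w, target_h, delta_i_mm, px_per_mm):
    src_h, src_w = len(image), len(image[0])
    d_px = round(delta_i_mm * px_per_mm)      # 将Δi换算为像素
    x0 = src_w // 2 - target_w // 2 - d_px    # 水平裁剪起点:使源中心落在目标中心右侧d_px处
    y0 = src_h // 2 - target_h // 2           # 垂直方向保持居中
    return [row[x0:x0 + target_w] for row in image[y0:y0 + target_h]]
```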
再描述在一些场景例如3D电影场景下,所述电子设备100如何根据IPD确定目标图像。在一些实施例中,所述源图像包括第三图像和第四图像;所述第三图像和所述第四图像用于为所述用户呈现3D场景。这里,所述第三图像和所述第四图像可以是由两个摄像头预先拍摄好的、针对相同物体的具有视差的两幅图像。
具体的,所述电子设备100根据所述第三图像生成第一目标图像,所述第一目标图像 为所述第三图像的部分,所述第一目标图像中包含所述第三图像的中心,并且所述第一目标图像中的所述第三图像的中心相对所述第一目标图像的中心的偏移量为所述Δi1;所述电子设备100根据所述第四图像生成第二目标图像,所述第二目标图像为所述第四图像的部分,所述第二目标图像中包含所述第四图像的中心,并且所述第二目标图像中的所述第四图像的中心相对所述第二目标图像的中心的偏移量为所述Δi2。换句话说,所述第一目标图像的中心为将所述第三图像的中心调整所述Δi1的偏移量,所述第二目标图像的中心为将所述第四图像的中心调整所述Δi2的偏移量。
这里,所述电子设备100根据所述第三图像生成所述第一目标图像的方式,可参考图7B以及所述电子设备100根据所述第一图像生成所述第一目标图像的方式;所述电子设备100根据所述第四图像生成所述第二目标图像的方式,可参考图7C以及所述电子设备100根据所述第二图像生成所述第二目标图像的方式,这里不再赘述。
(2)所述电子设备100将所述目标图像发送给所述头戴式显示设备200,所述头戴式显示设备200在所述显示屏上显示所述目标图像。
具体的,所述电子设备100将所述第一目标图像和所述第二目标图像发送给所述头戴式显示设备200,以使得所述头戴式显示设备200在所述显示屏101上显示所述第一目标图像,在所述显示屏103显示所述第二目标图像。
参考图8A,其示出了所述头戴式显示设备200显示如图7B所示的第一目标图像和第二目标图像时,实际IPD等于第一IPD的用户使用该头戴式显示设备200在脑海中合成的图像,即该用户实际感受到的3D场景。此时,实际IPD等于第一IPD的用户实际感受到的3D场景和图7A中所述电子设备100想要为用户呈现的3D场景一致。
参考图8B,其示出了所述头戴式显示设备200显示如图7C所示的第一目标图像和第二目标图像时,实际IPD等于第二IPD的用户使用该头戴式显示设备200在脑海中合成的图像,即该用户实际感受到的3D场景。此时,实际IPD等于第二IPD的用户实际感受到的3D场景和图7A中所述电子设备100想要为用户呈现的3D场景一致。
由图8A及图8B可知,无论用户的实际IPD是多少,实施本申请实施例提供的显示方法,都可以使得用户佩戴所述头戴式显示设备200时好像真实地置身于所述电子设备100想要为用户呈现的3D场景中,可让用户在观看3D场景中的物体时的辐辏过程自然、舒适,且辐辏后用户实际感受到的3D场景和所述电子设备100想要为用户呈现的3D场景一致。
可理解的,在上述图5A-图5C、图6、图7A-图7C、图8A-图8B以及相关描述中,在所述电子设备100和所述头戴式显示设备200配合提供VR场景的情况下,所提及的所述头戴式显示设备200显示的图像,均为所述电子设备100生成后发送给所述头戴式显示设备200显示的。所述电子设备100和所述头戴式显示设备200配合提供VR场景的原理可参考前文图2、图3A-图3B以及图3C实施例的相关描述。
不限于上述实施例举例提及的VR场景,本申请实施例提供的显示方法还可以用于AR/MR等场景下,实施原理可参照前文实施例的相关描述。
不限于上述实施例描述的电子设备和头戴式显示设备配合提供VR/AR/MR场景的情况,本申请实施例提供的显示方法还可应用于头戴式显示设备独立提供VR/AR/MR场景的情况。
参考图9A,图9A示出了本申请实施例提供的另一种***20。该***20可以利用VR、AR、MR等技术显示图像,使得用户感受到3D场景,为用户提供VR/AR/MR体验。
如图9A所示,该***20可包括:头戴式显示设备400、输入设备500。所述头戴式显示设备400佩戴于用户头部,所述输入设备500由用户手持。可理解的,所述输入设备500是可选设备,也就是说,所述***20也可以不包括所述输入设备500。
所述***20和所述***10的不同之处在于,所述***20中不包括电子设备,且所述***20中的所述头戴式显示设备400集成了所述电子设备100在上述实施例提供的显示方法中所实现的功能以及相关的硬件装置。
所述头戴式显示设备400和所述输入设备500之间可以通过蓝牙、NFC、ZigBee等近距离传输技术无线连接并通信,还可以通过USB接口、HDMI接口或自定义接口等来有线连接并通信。
所述头戴式显示设备400的可实现形式可参考所述头戴式显示设备200。所述输入设备500的实现形式可参考所述输入设备300。用户可以通过在所述输入设备500上输入用户操作,来触发所述头戴式显示设备400执行对应的功能,具体的实现原理可参考所述***10中的相关描述。
上述实施例中由所述电子设备100和所述头戴式显示设备200执行的操作均可以由该头戴式显示设备400单独执行。例如,所述头戴式显示设备400可以根据用户的指示获取所述第一位置、所述第二位置、所述第三位置和所述第四位置,还可以根据所述第一位置和所述第二位置计算所述Δi1,根据所述第三位置和所述第四位置计算所述Δi2,根据公式1计算IPD,根据用户的IPD生成所述第一图像和所述第二图像并在显示屏上显示所述第一图像和所述第二图像等等。所述头戴式显示设备400执行本申请实施例的显示方法时的各个步骤的具体实现,可参考上述图5A-图5C、图6、图7A-图7C、图8A-图8B以及相关描述。
图9B示出了本申请实施例提供的所述头戴式显示设备400的结构示意图。
如图9B所示,所述头戴式显示设备400可包括:处理器401、存储器402、通信模块403、传感器***404、摄像头405、显示装置406、音频装置407。以上各个部件可以耦合连接并相互通信。
可理解的,图9B所示的结构并不构成对所述头戴式显示设备400的具体限定。在本申请另一些实施例中,所述头戴式显示设备400可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。例如,所述头戴式显示设备400还可以包括物理按键如开关键、音量键、各类接口例如USB接口等等。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器401可以包括一个或多个处理单元,例如:处理器401可以包括AP,调制解调处理器,GPU,ISP,控制器,视频编解码器,DSP,基带处理器,和/或NPU等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制,使得各个部件执行相应的功能,例如人机交互、运动跟踪/预测、渲染显示、音频处理等。
存储器402存储用于执行本申请实施例提供的显示方法的可执行程序代码,该可执行 程序代码包括指令。存储器402可以包括存储程序区和存储数据区。其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储所述头戴式显示设备400使用过程中所创建的数据(比如音频数据等)等。此外,存储器402可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器401通过运行存储在存储器402的指令,和/或存储在设置于处理器中的存储器的指令,执行所述头戴式显示设备400的各种功能应用以及数据处理。
通信模块403可包括移动通信模块和无线通信模块。其中,移动通信模块可以提供应用在所述头戴式显示设备400上的包括2G/3G/4G/5G等无线通信的解决方案。无线通信模块可以提供应用在所述头戴式显示设备400上的包括WLAN,BT,GNSS,FM,IR等无线通信的解决方案。无线通信模块可以是集成至少一个通信处理模块的一个或多个器件。
传感器***404可包括加速度计、指南针、陀螺仪、磁力计、或用于检测运动的其他传感器等。传感器***404用于采集对应的数据,例如加速度传感器采集所述头戴式显示设备400加速度、陀螺仪传感器采集所述头戴式显示设备400的运动速度等。传感器***404采集到的数据可以反映佩戴该头戴式显示设备400的用户头部的运动情况。在一些实施例中,传感器***404可以为设置在所述头戴式显示设备400内的惯性测量单元(inertial measurement unit,IMU)。在一些实施例中,所述头戴式显示设备400可以将传感器***获取到的数据发送给处理器401进行分析。处理器401可以根据各个传感器采集到的数据,确定用户头部的运动情况,并根据用户头部的运动情况执行对应的功能,例如启动测量IPD的功能等。也就是说,用户可以通过在所述头戴式显示设备400上输入头部运动操作,来触发头戴式显示设备400执行对应的功能。用户头部的运动情况可包括:是否转动、转动的方向等等。
传感器***404还可以包括光学传感器,用于结合摄像头405来跟踪用户的眼睛位置以及捕获眼球运动数据。该眼球运动数据例如可以用于确定用户的眼间距、每只眼睛相对于所述头戴式显示设备400的3D位置、每只眼睛的扭转和旋转(即转动、俯仰和摇动)的幅度和注视方向等等。在一个示例中,红外光在所述头戴式显示设备400内发射并从每只眼睛反射,反射光由摄像头405或者光学传感器检测到,检测到的数据被传输给处理器401,以使得处理器401从每只眼睛反射的红外光的变化中分析用户眼睛的位置、瞳孔直径、运动状态等。
摄像头405可以用于捕获静态图像或视频。该静态图像或视频可以是面向外部的用户周围的图像或视频,也可以是面向内部的图像或视频。摄像头405可以跟踪用户单眼或者双眼的运动。摄像头405包括但不限于传统彩色摄像头(RGB camera)、深度摄像头(RGB depth camera)、动态视觉传感器(dynamic vision sensor,DVS)相机等。深度摄像头可以获取被拍摄对象的深度信息。在一些实施例中,摄像头405可用于捕捉用户眼睛的图像,并将图像发送给处理器401进行分析。处理器401可以根据摄像头405采集到的图像,确定用户眼睛的状态,并根据用户眼睛所处的状态执行对应的功能。也就是说,用户可通过在所述头戴式显示设备400上输入眼睛运动操作,来触发所述头戴式显示设备400执行对应的功能。用户眼睛的状态可包括:是否转动、转动的方向、是否长时间未转动、看向外界的角度等等。
所述头戴式显示设备400通过GPU,显示装置406,以及应用处理器等来呈现或者显示图像。
GPU为图像处理的微处理器,连接显示装置406和应用处理器。处理器401可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。GPU用于根据从处理器401处得到的数据执行数学和几何计算,利用计算机图形技术、计算机仿真技术等来渲染图像,以提供用于在显示装置406上显示的内容。GPU还用于将校正或预失真添加到图像的渲染过程中,以补偿或校正由显示装置406中的光学组件引起的失真。GPU还可以基于来自传感器***404的数据来调整提供给显示装置406的内容。例如,GPU可以基于用户眼睛的3D位置、瞳距等在提供给显示装置406的内容中添加景深信息。
显示装置406可包括:一个或多个显示屏、一个或多个光学组件。该一个或多个显示屏包括所述显示屏101和所述显示屏103。该一个或多个光学组件包括所述光学组件102和所述光学组件104。这里,所述显示屏101、所述显示屏103、所述光学组件102和所述光学组件104的结构以及它们之间的位置关系可参考图1中的相关描述。为了描述方便,头戴式显示设备400的显示装置406中各个器件的标号沿用图1中的标号,即头戴式显示设备400包括显示屏101、显示屏103、光学组件102和光学组件104。
在本申请实施例中,所述头戴式显示设备400中的显示屏,例如所述显示屏101、所述显示屏103,用于接收所述头戴式显示设备400本身的GPU处理后的数据或内容(例如经过渲染后的图像),并将其显示出来。可理解的,所述头戴式显示设备400本身具有较为强大的计算功能,可以独立渲染生成图像。在这种情况下,该头戴式显示设备400可以为计算能力强大的一体机等,无需借助电子设备100即可独立为用户呈现3D场景,给用户提供VR/AR/MR体验。
在本申请实施例中,处理器401可用于根据用户和所述头戴式显示设备400之间的交互来确定该用户的IPD。所述头戴式显示设备400的GPU还可用于根据从处理器401获取到的用户IPD,确定用于在所述头戴式显示设备400上显示的图像,所述头戴式显示设备400可以将GPU确定的图像显示在显示屏上。
所述显示屏101和显示屏103上显示的图像具有视差,从而模拟双眼视觉,可以使得用户感受到该图像对应物体的深度,从而产生真实的3D感。
显示屏,例如显示屏101、显示屏103,可包括显示面板,显示面板可以用于显示图像,从而为用户呈现立体的虚拟场景。显示面板可以采用LCD,OLED,AMOLED,FLED,Miniled,MicroLed,Micro-oLed,QLED等。
光学组件,例如光学组件102、光学组件104,用于将来自显示屏的光引导至出射光瞳以供用户感知。在一些实施方式中,光学组件中的一个或多个光学元件(例如透镜)可具有一个或多个涂层,诸如,抗反射涂层。光学组件对图像光的放大允许显示屏在物理上更小、更轻、消耗更少的功率。另外,图像光的放大可以增加显示屏显示的内容的视野。例如,光学组件可以使得显示屏所显示的内容的视野为用户的全部视野。
光学组件还可用于校正一个或多个光学误差。光学误差的示例包括:桶形失真、枕形失真、纵向色差、横向色差、球面像差、彗形像差、场曲率、散光等。在一些实施方式中, 提供给显示屏显示的内容被预先失真,由光学组件在从显示屏接收基于内容产生的图像光时校正该失真。
音频装置407用于实现音频的采集以及输出。音频装置407可包括但不限于:麦克风、扬声器、耳机等等。
本申请的各实施方式可以任意进行组合,以实现不同的技术效果。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk)等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。
总之,以上所述仅为本发明技术方案的实施例而已,并非用于限定本发明的保护范围。凡根据本发明的揭露,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。

Claims (28)

  1. 一种***,其特征在于,所述***包括:电子设备和头戴式显示设备,所述电子设备和所述头戴式显示设备连接,所述头戴式显示设备用于佩戴于用户头部;
    所述电子设备用于将用户界面发送给所述头戴式显示设备;所述头戴式显示设备用于在显示屏上显示所述用户界面;
    所述电子设备还用于获取所述用户的IPD,所述用户的IPD是根据所述用户基于所述用户界面输入的用户操作获取的;所述电子设备还用于获取源图像,根据所述用户的IPD校正所述源图像得到目标图像,并将所述目标图像发送给所述头戴式显示设备;
    所述头戴式显示设备还用于在所述显示屏上显示所述目标图像。
  2. 根据权利要求1所述的***,其特征在于,所述显示屏包括第一显示屏和第二显示屏,所述头戴式显示设备还包括对应所述第一显示屏的第一光学组件和对应所述第二显示屏的第二光学组件,所述第一显示屏的中心和所述第一光学组件的中心所处的第一直线垂直于第三直线,所述第二显示屏的中心和所述第二光学组件的中心所处的第二直线垂直于所述第三直线;所述第三直线为所述第一光学组件的中心和所述第二光学组件的中心所处的直线;
    所述用户界面包括第一用户界面和第二用户界面,所述头戴式显示设备具体用于在所述第一显示屏上显示所述第一用户界面、在所述第二显示屏上显示所述第二用户界面;
    所述目标图像包括第一目标图像和第二目标图像,所述头戴式显示设备具体用于在所述第一显示屏上显示所述第一目标图像、在所述第二显示屏上显示所述第二目标图像。
  3. 根据权利要求2所述的***,其特征在于,
    所述头戴式显示设备还用于:
    获取第一位置和第二位置,所述第一位置和所述第二位置是根据显示所述第一用户界面时的所述用户的动作获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据显示所述第二用户界面时的所述用户的动作获取的;将所述第一位置、所述第二位置、所述第三位置和所述第四位置发送给所述电子设备;
    所述电子设备还用于:
    根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  4. 根据权利要求2所述的***,其特征在于,
    所述头戴式显示设备还用于:
    将在显示所述第一用户界面时采集到的所述用户的操作数据,和,在显示所述第二用户界面时采集到的所述用户的操作数据,发送给所述电子设备;
    所述电子设备还用于:
    获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述头戴式显示设备显示所述第一用户界面时的所述用户的操作数据获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述头戴式显示设备显示所述第二用户界面时的所述用户的操作数据获取的;
    根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  5. 根据权利要求2所述的***,其特征在于,所述***还包括输入设备,
    所述输入设备用于:
    将在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作,和,在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作,发送给所述电子设备;
    所述电子设备还用于:
    获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作获取的;
    根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  6. 根据权利要求3-5任一项所述的***,其特征在于,
    所述电子设备具体用于根据以下公式计算所述Δi1:
    Δi1=(M×(JO′-KO′)×L)/(2×(M×(JO′+KO′)-L))
    其中,JO′为所述第一位置到所述第一直线的距离,KO′为所述第二位置到所述第一直线的距离,M为所述第一光学组件的放大倍数,L为所述第一光学组件的直径;
    当所述Δi1的值为正时,所述用户的眼睛相对所述第一直线向右偏移;当所述Δi1的值为负时,所述用户的眼睛相对所述第一直线向左偏移。
  7. 根据权利要求3-6任一项所述的***,其特征在于,
    所述电子设备具体用于根据以下公式计算所述用户的瞳距IPD:
    IPD=IOD-Δi1+Δi2
    其中,所述IOD为所述第一显示屏的中心和所述第二显示屏的中心之间的距离。
  8. 根据权利要求3-7任一项所述的***,其特征在于,
    所述电子设备具体用于:
    根据所述用户的IPD,利用所述源图像生成第一图像和第二图像;
    根据所述第一图像生成第一目标图像,所述第一目标图像的中心为将所述第一图像的中心调整所述Δi1的偏移量;
    根据所述第二图像生成第二目标图像,所述第二目标图像的中心为将所述第二图像的中心调整所述Δi2的偏移量。
  9. 根据权利要求3-7任一项所述的***,其特征在于,所述源图像包括:第三图像和第四图像;
    所述电子设备具体用于:
    根据所述第三图像生成第一目标图像,所述第一目标图像的中心为将所述第三图像的中心调整所述Δi1的偏移量;
    根据所述第四图像生成第二目标图像,所述第二目标图像的中心为将所述第四图像的中心调整所述Δi2的偏移量。
  10. 一种显示方法,其特征在于,所述方法包括:
    所述电子设备将用户界面发送给头戴式显示设备,所述用户界面用于显示在所述头戴式显示设备的显示屏上;
    所述电子设备获取所述用户的IPD,所述用户的IPD是根据所述用户基于所述用户界面输入的用户操作获取的;
    所述电子设备获取源图像,根据所述用户的IPD校正所述源图像得到目标图像,并将所述目标图像发送给所述头戴式显示设备;所述目标图像用于显示在所述显示屏上。
  11. 根据权利要求10所述的方法,其特征在于,所述显示屏包括第一显示屏和第二显示屏,所述头戴式显示设备还包括对应所述第一显示屏的第一光学组件和对应所述第二显示屏的第二光学组件,所述第一显示屏的中心和所述第一光学组件的中心所处的第一直线垂直于第三直线,所述第二显示屏的中心和所述第二光学组件的中心所处的第二直线垂直于所述第三直线;所述第三直线为所述第一光学组件的中心和所述第二光学组件的中心所处的直线;
    所述用户界面包括第一用户界面和第二用户界面,所述第一用户界面用于显示在所述 第一显示屏上,所述第二用户界面用于显示在所述第二显示屏上;
    所述目标图像包括第一目标图像和第二目标图像,所述第一目标图像用于显示在所述第一显示屏上,所述第二目标图像用于显示在所述第二显示屏上。
  12. 根据权利要求11所述的方法,其特征在于,所述电子设备获取所述用户的IPD,具体包括:
    所述电子设备接收所述头戴式显示设备发送的第一位置、第二位置、第三位置和第四位置;
    所述电子设备根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  13. 根据权利要求11所述的方法,其特征在于,所述电子设备获取所述用户的IPD,具体包括:
    所述电子设备接收所述头戴式显示设备在显示所述第一用户界面时采集到的所述用户的操作数据,和,所述头戴式显示设备在显示所述第二用户界面时采集到的所述用户的操作数据;
    所述电子设备获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述头戴式显示设备显示所述第一用户界面时的所述用户的操作数据获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述头戴式显示设备显示所述第二用户界面时的所述用户的操作数据获取的;
    所述电子设备根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  14. 根据权利要求11所述的方法,其特征在于,所述电子设备获取所述用户的IPD,具体包括:
    所述电子设备接收所述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作,和,所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到 的用户操作;
    所述电子设备获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作获取的;
    所述电子设备根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  15. 根据权利要求12-14任一项所述的方法,其特征在于,所述电子设备根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,具体包括:
    所述电子设备根据以下公式计算所述Δi1:
    Δi1=(M×(JO′-KO′)×L)/(2×(M×(JO′+KO′)-L))
    其中,JO′为所述第一位置到所述第一直线的距离,KO′为所述第二位置到所述第一直线的距离,M为所述第一光学组件的放大倍数,L为所述第一光学组件的直径;
    当所述Δi1的值为正时,所述用户的眼睛相对所述第一直线向右偏移;当所述Δi1的值为负时,所述用户的眼睛相对所述第一直线向左偏移。
  16. 根据权利要求12-15任一项所述的方法,其特征在于,所述电子设备根据所述Δi1和所述Δi2获取所述用户的IPD,具体包括:
    所述电子设备根据以下公式计算所述用户的瞳距IPD:
    IPD=IOD-Δi1+Δi2
    其中,所述IOD为所述第一显示屏的中心和所述第二显示屏的中心之间的距离。
  17. 根据权利要求12-16任一项所述的方法,其特征在于,所述电子设备根据所述用户的IPD校正所述源图像得到目标图像,具体包括:
    所述电子设备根据所述用户的IPD,利用所述源图像生成第一图像和第二图像;
    所述电子设备根据所述第一图像生成第一目标图像,所述第一目标图像的中心为将所述第一图像的中心调整所述Δi1的偏移量;
    所述电子设备根据所述第二图像生成第二目标图像,所述第二目标图像的中心为将所述第二图像的中心调整所述Δi2的偏移量。
  18. 根据权利要求12-16任一项所述的方法,其特征在于,所述源图像包括:第三图像和第四图像;所述电子设备根据所述用户的IPD校正所述源图像得到目标图像,具体包括:
    所述电子设备根据所述第三图像生成第一目标图像,所述第一目标图像的中心为将所述第三图像的中心调整所述Δi1的偏移量;
    所述电子设备根据所述第四图像生成第二目标图像,所述第二目标图像的中心为将所述第四图像的中心调整所述Δi2的偏移量。
  19. 一种电子设备,其特征在于,所述电子设备包括:一个或多个处理器、存储器;
    所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:
    将用户界面发送给头戴式显示设备,所述用户界面用于显示在所述头戴式显示设备的显示屏上;
    获取用户的IPD,所述用户的IPD是根据所述用户基于所述用户界面输入的用户操作获取的;
    获取源图像,根据所述用户的IPD校正所述源图像得到目标图像,并将所述目标图像发送给所述头戴式显示设备;所述目标图像用于显示在所述显示屏上。
  20. 根据权利要求19所述的电子设备,其特征在于,所述显示屏包括第一显示屏和第二显示屏,所述头戴式显示设备还包括对应所述第一显示屏的第一光学组件和对应所述第二显示屏的第二光学组件,所述第一显示屏的中心和所述第一光学组件的中心所处的第一直线垂直于第三直线,所述第二显示屏的中心和所述第二光学组件的中心所处的第二直线垂直于所述第三直线;所述第三直线为所述第一光学组件的中心和所述第二光学组件的中心所处的直线;
    所述用户界面包括第一用户界面和第二用户界面,所述第一用户界面用于显示在所述第一显示屏上,所述第二用户界面用于显示在所述第二显示屏上;
    所述目标图像包括第一目标图像和第二目标图像,所述第一目标图像用于显示在所述第一显示屏上,所述第二目标图像用于显示在所述第二显示屏上。
  21. 根据权利要求20所述的电子设备,其特征在于,所述一个或多个处理器具体用于调用所述计算机指令以使得所述电子设备执行:
    接收所述头戴式显示设备发送的第一位置、第二位置、第三位置和第四位置;
    根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  22. 根据权利要求20所述的电子设备,其特征在于,所述一个或多个处理器具体用于 调用所述计算机指令以使得所述电子设备执行:
    接收所述头戴式显示设备在显示所述第一用户界面时采集到的所述用户的操作数据,和,所述头戴式显示设备在显示所述第二用户界面时采集到的所述用户的操作数据;
    获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述头戴式显示设备显示所述第一用户界面时的所述用户的操作数据获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述头戴式显示设备显示所述第二用户界面时的所述用户的操作数据获取的;
    根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  23. 根据权利要求20所述的电子设备,其特征在于,所述一个或多个处理器具体用于调用所述计算机指令以使得所述电子设备执行:
    接收所述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作,和,所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作;
    获取第一位置和第二位置,所述第一位置和所述第二位置是根据所述输入设备在所述头戴式显示设备显示所述第一用户界面时检测到的用户操作获取的;获取第三位置和第四位置,所述第三位置和所述第四位置是根据所述输入设备在所述头戴式显示设备显示所述第二用户界面时检测到的用户操作获取的;
    根据所述第一位置和所述第二位置确定所述用户的眼睛相对所述第一直线的偏移量Δi1,根据所述第三位置和所述第四位置确定所述用户的眼睛相对所述第二直线的偏移量Δi2,根据所述Δi1和所述Δi2获取所述用户的IPD;
    其中,所述第一位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的左侧的位置,所述第二位置是所述头戴式显示设备显示所述第一用户界面时所述用户的眼睛看向所述第一显示屏的右侧的位置;所述第三位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的左侧的位置,所述第四位置是所述头戴式显示设备显示所述第二用户界面时所述用户的眼睛看向所述第二显示屏的右侧的位置。
  24. 根据权利要求21-23任一项所述的电子设备,其特征在于,所述一个或多个处理器具体用于调用所述计算机指令以使得所述电子设备执行:
    根据以下公式计算所述Δi1:
    Δi1=(M×(JO′-KO′)×L)/(2×(M×(JO′+KO′)-L))
    其中,JO′为所述第一位置到所述第一直线的距离,KO′为所述第二位置到所述第一直线的距离,M为所述第一光学组件的放大倍数,L为所述第一光学组件的直径;
    当所述Δi1的值为正时,所述用户的眼睛相对所述第一直线向右偏移;当所述Δi1的值为负时,所述用户的眼睛相对所述第一直线向左偏移。
  25. 根据权利要求21-24任一项所述的电子设备,其特征在于,所述一个或多个处理器具体用于调用所述计算机指令以使得所述电子设备执行:
    根据以下公式计算所述用户的瞳距IPD:
    IPD=IOD-Δi1+Δi2
    其中,所述IOD为所述第一显示屏的中心和所述第二显示屏的中心之间的距离。
  26. 根据权利要求21-25任一项所述的电子设备,其特征在于,所述一个或多个处理器具体用于调用所述计算机指令以使得所述电子设备执行:
    根据所述用户的IPD,利用所述源图像生成第一图像和第二图像;
    根据所述第一图像生成第一目标图像,所述第一目标图像的中心为将所述第一图像的中心调整所述Δi1的偏移量;
    根据所述第二图像生成第二目标图像,所述第二目标图像的中心为将所述第二图像的中心调整所述Δi2的偏移量。
  27. 根据权利要求21-25任一项所述的电子设备,其特征在于,所述一个或多个处理器具体用于调用所述计算机指令以使得所述电子设备执行:
    根据所述第三图像生成第一目标图像,所述第一目标图像的中心为将所述第三图像的中心调整所述Δi1的偏移量;
    根据所述第四图像生成第二目标图像,所述第二目标图像的中心为将所述第四图像的中心调整所述Δi2的偏移量。
  28. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行如权利要求10-18中任一项所述的方法。
PCT/CN2020/127413 2019-11-30 2020-11-09 显示方法、电子设备及*** WO2021103990A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20892723.6A EP4044000A4 (en) 2019-11-30 2020-11-09 DISPLAY METHOD, ELECTRONIC DEVICE AND SYSTEM
US17/780,409 US20220404631A1 (en) 2019-11-30 2020-11-09 Display method, electronic device, and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911208308.4 2019-11-30
CN201911208308.4A CN111103975B (zh) 2019-11-30 2019-11-30 显示方法、电子设备及***

Publications (1)

Publication Number Publication Date
WO2021103990A1 true WO2021103990A1 (zh) 2021-06-03

Family

ID=70420916

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127413 WO2021103990A1 (zh) 2019-11-30 2020-11-09 显示方法、电子设备及***

Country Status (4)

Country Link
US (1) US20220404631A1 (zh)
EP (1) EP4044000A4 (zh)
CN (1) CN111103975B (zh)
WO (1) WO2021103990A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111103975B (zh) * 2019-11-30 2022-09-23 华为技术有限公司 显示方法、电子设备及***
CN112068326B (zh) * 2020-09-17 2022-08-09 京东方科技集团股份有限公司 3d显示装置
CN114327032A (zh) * 2021-02-08 2022-04-12 海信视像科技股份有限公司 一种虚拟现实设备及vr画面显示方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104076513A (zh) * 2013-03-26 2014-10-01 精工爱普生株式会社 头戴式显示装置、头戴式显示装置的控制方法、以及显示***
CN105190427A (zh) * 2013-03-13 2015-12-23 索尼电脑娱乐公司 数字瞳孔间距离调节
CN105469004A (zh) * 2015-12-31 2016-04-06 上海小蚁科技有限公司 一种显示装置和显示方法
US20170184847A1 (en) * 2015-12-28 2017-06-29 Oculus Vr, Llc Determining interpupillary distance and eye relief of a user wearing a head-mounted display
CN107682690A (zh) * 2017-10-19 2018-02-09 京东方科技集团股份有限公司 自适应视差调节方法和虚拟现实vr显示***
CN207706338U (zh) * 2017-12-04 2018-08-07 深圳市冠旭电子股份有限公司 Vr智能头戴设备和vr图像显示***
CN111103975A (zh) * 2019-11-30 2020-05-05 华为技术有限公司 显示方法、电子设备及***

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5767820A (en) * 1995-05-09 1998-06-16 Virtual Research Systems Head-mounted visual display apparatus
KR100444981B1 (ko) * 2000-12-15 2004-08-21 삼성전자주식회사 착용형 디스플레이 시스템
KR100406945B1 (ko) * 2001-02-19 2003-11-28 삼성전자주식회사 착용형 디스플레이 장치
HUP0203993A2 (hu) * 2002-11-19 2004-08-30 László Domján Binokuláris videoszemüveg optikai rendszere
JP2004333661A (ja) * 2003-05-02 2004-11-25 Nippon Hoso Kyokai <Nhk> 立体画像表示装置、立体画像表示方法および立体画像表示プログラム
US20070273983A1 (en) * 2006-05-26 2007-11-29 Hebert Raymond T Devices, methods, and systems for image viewing
US8348429B2 (en) * 2008-03-27 2013-01-08 Doheny Eye Institute Optical coherence tomography device, method, and system
US20100149073A1 (en) * 2008-11-02 2010-06-17 David Chaum Near to Eye Display System and Appliance
US20120212499A1 (en) * 2010-02-28 2012-08-23 Osterhout Group, Inc. System and method for display content control during glasses movement
US8964298B2 (en) * 2010-02-28 2015-02-24 Microsoft Corporation Video display modification based on sensor input for a see-through near-to-eye display
JP2011205195A (ja) * 2010-03-24 2011-10-13 Nikon Corp 画像処理装置、プログラム、画像処理方法、椅子および観賞システム
CN103930015B (zh) * 2011-02-17 2017-10-20 伟伦公司 光反射照相眼部筛查装置和方法
KR101824501B1 (ko) * 2011-05-19 2018-02-01 삼성전자 주식회사 헤드 마운트 디스플레이 장치의 이미지 표시 제어 장치 및 방법
US20130271575A1 (en) * 2012-04-11 2013-10-17 Zspace, Inc. Dynamically Controlling an Imaging Microscopy System
US9274742B2 (en) * 2012-10-25 2016-03-01 PixiOnCloud, Inc. Visual-symbolic control of remote devices having display-based user interfaces
CN102961119B (zh) * 2012-11-26 2015-01-07 深圳恒兴视光科技有限公司 瞳距仪
WO2014108799A2 (en) * 2013-01-13 2014-07-17 Quan Xiao Apparatus and methods of real time presenting 3d visual effects with stereopsis more realistically and substract reality with external display(s)
US9313481B2 (en) * 2014-02-19 2016-04-12 Microsoft Technology Licensing, Llc Stereoscopic display responsive to focal-point shift
KR102229890B1 (ko) * 2014-05-30 2021-03-19 삼성전자주식회사 Data processing method and electronic device therefor
US10271042B2 (en) * 2015-05-29 2019-04-23 Seeing Machines Limited Calibration of a head mounted eye tracking system
US10304446B2 (en) * 2016-02-03 2019-05-28 Disney Enterprises, Inc. Self calibration for smartphone goggles
WO2017177187A1 (en) * 2016-04-08 2017-10-12 Vizzario, Inc. Methods and systems for obtaining, analyzing, and generating vision performance data and modifying media based on the data
WO2018100875A1 (ja) * 2016-11-30 2018-06-07 ソニー株式会社 Information processing device, information processing method, and program
CN107506036B (zh) * 2017-08-23 2020-10-09 歌尔股份有限公司 VR interpupillary distance adjustment method and device
US10459237B2 (en) * 2018-02-01 2019-10-29 Dell Products L.P. System, head mounted device (HMD) and method for adjusting a position of an HMD worn by a user
CN109856802B (zh) * 2019-04-17 2021-08-31 京东方科技集团股份有限公司 Interpupillary distance adjustment method and device, and virtual display device
CN110162186A (zh) * 2019-06-19 2019-08-23 Oppo广东移动通信有限公司 Control method, head-mounted device, and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105190427A (zh) * 2013-03-13 2015-12-23 索尼电脑娱乐公司 Digital inter-pupillary distance adjustment
CN104076513A (zh) * 2013-03-26 2014-10-01 精工爱普生株式会社 Head-mounted display device, control method for head-mounted display device, and display system
US20170184847A1 (en) * 2015-12-28 2017-06-29 Oculus Vr, Llc Determining interpupillary distance and eye relief of a user wearing a head-mounted display
CN105469004A (zh) * 2015-12-31 2016-04-06 上海小蚁科技有限公司 Display device and display method
CN107682690A (zh) * 2017-10-19 2018-02-09 京东方科技集团股份有限公司 Adaptive parallax adjustment method and virtual reality (VR) display system
CN207706338U (zh) * 2017-12-04 2018-08-07 深圳市冠旭电子股份有限公司 VR smart head-mounted device and VR image display system
CN111103975A (zh) * 2019-11-30 2020-05-05 华为技术有限公司 Display method, electronic device, and system

Also Published As

Publication number Publication date
CN111103975B (zh) 2022-09-23
EP4044000A4 (en) 2022-12-14
US20220404631A1 (en) 2022-12-22
CN111103975A (zh) 2020-05-05
EP4044000A1 (en) 2022-08-17

Similar Documents

Publication Publication Date Title
WO2021103990A1 (zh) Display method, electronic device, and system
KR102289389B1 (ko) Virtual object orientation and visualization
CN109471522B (zh) Method and electronic device for controlling a pointer in virtual reality
WO2020192458A1 (zh) Image processing method and head-mounted display device
US20160063767A1 (en) Method for providing visual reality service and apparatus for the same
KR20180062174A (ko) Haptic signal generation method and electronic device supporting the same
KR20180074369A (ko) Method and apparatus for managing thumbnails of three-dimensional content
CN110503959B (zh) Speech recognition data distribution method and apparatus, computer device, and storage medium
KR20180028796A (ko) Image display method, storage medium, and electronic device
CN112835445B (zh) Interaction method, apparatus, and system in a virtual reality scene
US11887261B2 (en) Simulation object identity recognition method, related apparatus, and system
WO2022160991A1 (zh) Permission control method and electronic device
US20180176536A1 (en) Electronic device and method for controlling the same
WO2022095744A1 (zh) VR display control method, electronic device, and computer-readable storage medium
WO2022252924A1 (zh) Image transmission and display method, related device, and system
US11798234B2 (en) Interaction method in virtual reality scenario and apparatus
CN116703693A (zh) Image rendering method and electronic device
CN112770177A (zh) Multimedia file generation method, multimedia file publishing method, and apparatus
WO2023082980A1 (zh) Display method and electronic device
US20230152900A1 (en) Wearable electronic device for displaying virtual object and method of controlling the same
WO2022111593A1 (zh) Graphical user interface display method and apparatus
CN108476314B (zh) Content display method and electronic device for performing the same
CN114637392A (zh) Display method and electronic device
KR102405385B1 (ko) Method and system for generating multiple objects for 3D content
WO2024046182A1 (zh) Audio playback method, system, and related apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20892723

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020892723

Country of ref document: EP

Effective date: 20220511

NENP Non-entry into the national phase

Ref country code: DE