WO2023001113A1 - Display method and electronic device - Google Patents


Info

Publication number
WO2023001113A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth
frame
field
user
Prior art date
Application number
PCT/CN2022/106280
Other languages
English (en)
Chinese (zh)
Inventor
王松
沈钢
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023001113A1

Classifications

    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01: Head-up displays
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present application relates to the field of electronic technology, in particular to a display method and electronic equipment.
  • VR technology is a means of human-computer interaction created with the help of computer and sensor technology.
  • VR technology integrates computer graphics technology, computer simulation technology, sensor technology, display technology and other scientific technologies to create a virtual environment, and users can immerse themselves in the virtual environment by wearing VR glasses.
  • the virtual environment is presented through continuous refreshing of many three-dimensional images, and the three-dimensional images include objects in different depths of field, giving users a sense of three-dimensionality.
  • Vergence accommodation conflict (VAC) refers to the case where the depth indicated by vergence and the depth indicated by accommodation are inconsistent, so the brain cannot accurately judge the depth of an object and visual fatigue occurs.
  • the purpose of the present application is to provide a display method and an electronic device for improving VR experience.
  • According to a first aspect, a display method includes: displaying N frames of images to the user through a display device, where N is a positive integer. In the i-th frame of the N frames of images, the sharpness of a first object at a first depth of field is a first sharpness; in the j-th frame, the sharpness of the first object at the first depth of field is a second sharpness; and in the k-th frame, the sharpness of the first object at the first depth of field is a third sharpness. The first sharpness is lower than the second sharpness, the second sharpness is higher than the third sharpness, i, j and k are all positive integers less than N, and i < j < k. The first depth of field is greater than a second depth of field, or the distance between the first depth of field and the depth of field where the user's gaze point is located is greater than a first distance.
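  • As a purely illustrative sketch of the behaviour described in the first aspect (the function names, the use of OpenCV's Gaussian blur, and the every-other-frame schedule are assumptions, not part of the claimed method), the per-frame alternation could look like the following Python routine.

```python
# Illustrative sketch only (not the claimed implementation): make the sharpness of
# objects at the "first depth of field" rise and fall across the displayed frames.
import cv2  # assumed dependency, used here only for Gaussian blurring

def render_stream(original_frames, depth_maps, second_depth, blur_every=2, ksize=(15, 15)):
    """Yield display frames in which pixels deeper than `second_depth` are blurred
    on alternate frames (e.g. frames i and k) and left untouched on the others
    (e.g. frame j). All names and the blur operator are assumptions."""
    for idx, (frame, depth) in enumerate(zip(original_frames, depth_maps)):
        out = frame.copy()
        if idx % blur_every == 0:                # frames i, k, ... get the blur
            mask = depth > second_depth          # region at the first depth of field
            blurred = cv2.GaussianBlur(frame, ksize, 0)
            out[mask] = blurred[mask]            # lower the sharpness there
        yield out                                # other frames keep full sharpness
```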
  • That is, in the image stream (the N frames of images) that the VR display device (for example, VR glasses) shows to the user, the sharpness of the first object at the first depth of field alternates between higher and lower values.
  • In other words, the sharpness of the first object with the larger depth of field, or of the first object farther from the user's gaze point, rises and falls across frames.
  • When the VR display device displays the i-th frame or the k-th frame, the sharpness of the first object is relatively low, which relieves fatigue of the human brain.
  • When the VR display device displays the j-th frame, the brain can capture the details of the first object, so those details are not lost. In this way, fatigue of the human brain is relieved while the brain still absorbs enough detail of the object, and the user experience is better.
  • In a possible design, the user's gaze point remains unchanged. That is, when the user watches the virtual environment through VR glasses and the gaze point does not change, the sharpness of the first object far from the gaze point still rises and falls. For example, when the VR glasses display the i-th or the k-th frame, the relatively low sharpness of the first object relieves fatigue of the human brain.
  • When the j-th frame is displayed, the details of the first object are not lost, and the brain captures sufficient detail of the first object. In this way, fatigue is relieved while the brain still absorbs enough detail of the object, and the user experience is better.
  • In a possible design, when the user's gaze point changes, the first depth of field changes accordingly.
  • For example, when the user gazes at object A, the distance between the first depth of field and the depth of field where object A is located is greater than the first distance (for example, 0.5 m), so the first depth of field is greater than or equal to 1 m.
  • When the user's gaze point moves to object B, the distance between the first depth of field and the depth of field where object B is located is greater than the first distance (for example, 0.5 m), so the first depth of field becomes greater than or equal to 1.3 m. Therefore, as the user's gaze point changes, the first depth of field changes.
  • In a possible design, the second depth of field includes at least one of: a depth of field specified by the user, the depth of field where the user's gaze point is located, a system default depth of field, the depth of field of the virtual image plane, a depth of field corresponding to the virtual scene, and the depth of field where the subject (main) object in the i-th frame image is located. That is to say, in the image stream displayed by the display device, the sharpness of the first object whose depth of field is greater than the second depth of field rises and falls.
  • The second depth of field may be determined in multiple ways. Manner 1: the second depth of field is determined according to the VR scene, and the preset depth of field differs for different VR scenes.
  • The VR scene includes, but is not limited to, at least one of a VR game, VR viewing, VR teaching, and the like.
  • Manner 2: the second depth of field may be set by the user; it should be understood that different VR applications may set different second depths of field. Manner 3: the second depth of field may be a default depth of field. Manner 4: the second depth of field may be the depth of field where the virtual image plane is located. Manner 5: the second depth of field may be the depth of field where the main object in the picture currently displayed by the VR display device is located. Manner 6: the second depth of field is the depth of field where the user's gaze point is located. It should be noted that several manners of determining the second depth of field are listed above, but the embodiments of the present application are not limited thereto; other manners of determining the second depth of field are also feasible.
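  • The manners listed above could be combined into a simple selection routine; the priority order, the parameter names, and the 2.0 m fallback below are illustrative assumptions only.

```python
def select_second_depth(user_setting=None, gaze_depth=None, scene_preset=None,
                        virtual_image_depth=None, main_object_depth=None,
                        system_default=2.0):
    """Return the reference ("second") depth of field from whichever sources are
    available; a real device might weigh or combine these sources differently."""
    for candidate in (user_setting, gaze_depth, scene_preset,
                      virtual_image_depth, main_object_depth):
        if candidate is not None:
            return candidate
    return system_default  # fall back to an assumed system default depth (2.0 m)
```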
  • In a possible design, before displaying the N frames of images, the method further includes detecting at least one of the following: the user triggers an operation for starting an eye-protection mode; the user's viewing duration is greater than a first duration; or the number of blinks/squints of the user's eyes within a second duration is greater than a first number.
  • That is, the display device initially displays with uniform sharpness: all objects in the displayed image stream have the same sharpness.
  • When the user's viewing duration exceeds the first duration, or the number of blinks/squints of the user's eyes within the second duration exceeds the first number, the sharpness of the first object at the first depth of field in the displayed image stream begins to rise and fall.
  • In other words, the technical solution of the present application (making the sharpness of the first object at the first depth of field rise and fall in the image stream) is started only when needed, which saves the power consumption caused by image processing (such as blurring the first object in the i-th and k-th frame images).
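  • A minimal sketch of the trigger check described above is shown below; the threshold values are placeholders, since the patent does not specify the first duration or the first number.

```python
def should_start_eye_protection(manual_trigger, viewing_seconds, blink_or_squint_count,
                                first_duration=30 * 60, first_number=20):
    """Start the rise-and-fall sharpness processing if any trigger condition holds.
    The 30-minute and 20-count thresholds are assumed placeholder values."""
    return (manual_trigger
            or viewing_seconds > first_duration
            or blink_or_squint_count > first_number)
```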
  • In a possible design, the sharpness of a second object at a third depth of field is the same across the N frames of images. That is, in the image stream, the sharpness of the first object at the first depth of field rises and falls, while the sharpness of the second object at the third depth of field remains unchanged.
  • In a possible design, the third depth of field is smaller than the first depth of field. That is, the sharpness of the first object, which has the larger depth of field, rises and falls, while the sharpness of the second object, which has the smaller depth of field, remains unchanged.
  • When the human eye looks at nearby objects it sees more detail and they appear clearer, and when it looks at distant objects it sees less detail and they appear more blurred. Therefore, when distant objects are blurred and nearby objects are clear in the image displayed by the display device, the virtual environment perceived by the user better matches reality.
  • Moreover, the sharpness of distant objects in the image stream displayed by the display device rises and falls rather than staying blurred, so the human brain can still obtain sufficient detail of distant objects.
  • Likewise, the sharpness of objects far from the user's gaze point rises and falls rather than staying blurred, so the human brain can still obtain sufficient detail of objects far from the user's gaze point.
  • In a possible implementation, the time interval between the display time of the j-th frame and that of the i-th frame is less than or equal to the user's visual dwell time; and/or the time interval between the display time of the k-th frame and that of the j-th frame is less than or equal to the visual dwell time.
  • The sharpness of the first object in the i-th frame is low and its sharpness in the j-th frame is high. When the i-th and j-th frames are both displayed within the user's visual dwell time, the human brain fuses the two frames, ensuring that it captures enough detail of the first object.
  • In a possible design, n and m may be determined according to the user's visual dwell time and the image refresh rate. Assume the user's visual dwell time is T and the display interval of one frame is P (the reciprocal of the refresh rate); then T/P frames can be displayed within time T, so n is less than or equal to T/P and m is less than or equal to T/P. In this way the i-th and j-th frames are displayed within the user's visual dwell time, so the human brain fuses them and captures enough detail of the first object.
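  • A worked example of this bound, with assumed values for the dwell time and refresh rate (neither figure comes from the patent):

```python
T = 0.1                     # assumed visual dwell time in seconds
refresh_rate = 90           # assumed refresh rate in frames per second
P = 1 / refresh_rate        # display interval of one frame (about 11.1 ms)
max_offset = round(T / P)   # n and m should not exceed this value
print(max_offset)           # 9: at most 9 frames fit within the dwell time
```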
  • the display device includes a first display screen and a second display screen, the first display screen is used to present images to the user's left eye, and the second display screen is used to display images to the user's right eye presenting an image; the images displayed on the first display screen and the second display screen are synchronized.
  • Synchronization of the images displayed on the first display screen and the second display screen can be understood as follows: when the first display screen displays the i-th frame, the second display screen also displays the i-th frame, so the order of the images on the two screens is consistent.
  • In a possible design, the first display screen and the second display screen each display the N frames of images; that is, the two screens display the same group of image streams, and the sharpness of the first object in the image streams rises and falls. Since the image streams on the two screens are the same and are displayed synchronously (for example, the i-th frames are shown at the same time, the j-th frames at the same time, and so on), when the first object is blurred on the first display screen it is also blurred on the second display screen. In other words, the sharpness of the first object changes with the same trend on both screens, for example both follow a "blurred, clear, blurred, clear" alternation.
  • In another possible design, the first display screen displays the N frames of images, and the second display screen displays another N frames of images with the same image content. On the i-th image of the other N frames, the sharpness of the first object at the first depth of field is a fourth sharpness; on the j-th image it is a fifth sharpness; and on the k-th image it is a sixth sharpness, where the fourth sharpness is greater than the fifth sharpness and the fourth sharpness is smaller than the sixth sharpness.
  • That is, when the first object is blurred on the first display screen, it is clear on the second display screen. In other words, the sharpness of the first object may change with opposite trends in the image streams displayed on the two screens.
  • For example, the sharpness of the first object on the first display screen alternates "blurred, clear, blurred, clear", while on the second display screen it alternates "clear, blurred, clear, blurred".
  • In a possible design, the fourth sharpness is greater than the first sharpness; and/or the fifth sharpness is smaller than the second sharpness; and/or the sixth sharpness is greater than the third sharpness.
  • In this way, when the display screens corresponding to the left and right eyes synchronously display an image (for example, the i-th frame), the first object on the left-eye image is blurred while the first object on the right-eye image is clear.
  • When the two screens synchronously display another image (for example, the j-th frame), the first object on the left-eye image is clear while the first object on the right-eye image is blurred.
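  • One way (among others) to realise the opposite-trend design described above is a simple opposite-phase schedule keyed on the frame index; this is a sketch, not the only arrangement the design allows.

```python
def eye_blur_schedule(frame_index):
    """Return (left_blurred, right_blurred) so that whenever the left-eye image
    blurs the first object, the right-eye image keeps it sharp, and vice versa."""
    left_blurred = (frame_index % 2 == 0)   # e.g. frames i and k blur on the left screen
    right_blurred = not left_blurred        # the right screen does the opposite
    return left_blurred, right_blurred

# Frames 0..3 give (True, False), (False, True), (True, False), (False, True).
```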
  • The display method provided by the embodiments of the present application may be applicable to various application scenarios, such as game applications (for example, VR games), simulated driving (for example, VR driving), simulated teaching (for example, VR teaching), and the like. VR games and VR driving are taken as examples below.
  • the N frames of images are images related to games; the games may be VR games.
  • the N frames of images are images generated by a VR game application.
  • In a possible design, the second depth of field includes: in the game scene, the depth of field where the game character corresponding to the user is located, or the depth of field where a body part (such as an arm) of that game character is located, or the depth of field where the game equipment (such as a gun) currently held by that game character is located; and/or the depth of field where the user's gaze point is located includes: in the game scene, the depth of field where the game character corresponding to a game opponent is located, or the depth of field where a building is located, or the depth of field where the user's corresponding
  • the N frames of images are images related to vehicle driving; for example, the N frames of images are images generated by a VR driving application.
  • In a possible design, the second depth of field includes: in the vehicle driving scene, the depth of field of the vehicle currently driven by the user, or of its steering wheel, or of its windshield; and/or the depth of field where the user's gaze point is located includes: in the vehicle driving scene, the depth of field of vehicles driven by other users on the road (such as a vehicle ahead of the user's vehicle), or of objects set along the roadside (such as trees or signs along the road).
  • That is, the sharpness of the first object whose depth of field is greater than the second depth of field, or of the first object far from the user's gaze point, rises and falls, which relieves fatigue while still ensuring that the human brain captures enough detail of the first object, thereby ensuring the vehicle driving (for example, VR driving) experience.
  • In a possible design, the i-th frame image is an image obtained by blurring the first object on the i-th frame original image;
  • the j-th frame image is the j-th frame original image, or an image obtained by sharpening the first object on the j-th frame original image;
  • the k-th frame image is an image obtained by blurring the first object on the k-th frame original image; where all objects on the i-th, j-th and k-th frame original images have the same sharpness.
  • In a possible design, the j-th frame image being an image obtained by sharpening the first object on the j-th frame original image includes: the j-th frame image is obtained by fusing the i-th frame image with the j-th frame original image; or the j-th frame image is obtained by fusing the image information lost during the blurring of the i-th frame image with the j-th frame original image.
  • In a possible design, the j-th frame image being obtained by fusing the i-th frame image with the j-th frame original image includes: the image block in the region where the first object is located on the j-th frame image is obtained by fusing a first image block and a second image block, where the first image block is the image block in the region where the first object is located on the i-th frame image, and the second image block is the image block in the region where the first object is located on the j-th frame original image.
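  • The block fusion described above could be sketched as a weighted blend of the two image blocks; the fixed weight and the averaging operator are assumptions, since the patent does not prescribe the exact fusion.

```python
import numpy as np

def fuse_first_object_block(block_from_frame_i, block_from_original_j, alpha=0.3):
    """Blend the first object's image block from the displayed i-th frame with the
    corresponding block from the j-th original frame (both as uint8 arrays)."""
    a = block_from_frame_i.astype(np.float32)
    b = block_from_original_j.astype(np.float32)
    fused = alpha * a + (1.0 - alpha) * b   # assumed weighting; not specified in the patent
    return np.clip(fused, 0, 255).astype(np.uint8)
```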
  • According to another aspect, an electronic device is provided, including a processor, a memory, and one or more programs.
  • The one or more programs are stored in the memory and include instructions which, when executed by the processor, cause the electronic device to perform the method steps provided in the first aspect above.
  • a computer-readable storage medium is provided, the computer-readable storage medium is used to store a computer program, and when the computer program is run on a computer, the computer executes the method as provided in the above-mentioned first aspect .
  • a computer program product including a computer program, and when the computer program is run on a computer, the computer is made to execute the method provided in the first aspect above.
  • A graphical user interface on an electronic device is also provided; the electronic device has a display screen, a memory, and a processor, and the processor is configured to execute one or more computer programs stored in the memory. The graphical user interface includes the graphical user interface displayed when the electronic device executes the method provided in the first aspect above.
  • the embodiment of the present application further provides a chip system, the chip system is coupled with the memory in the electronic device, and is used to call the computer program stored in the memory and execute the technical solution of the first aspect of the embodiment of the present application.
  • “Coupling” in the embodiments of the application means that two components are directly or indirectly combined with each other.
  • FIG. 1 is a schematic diagram of vergence-accommodation conflict provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a VR system provided by an embodiment of the present application.
  • FIG. 3A is a schematic diagram of the principle of human eye convergence provided by an embodiment of the present application.
  • Fig. 3B is a schematic diagram of the human eye structure provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of the adjustment of the human eye ciliary muscle to the lens provided by an embodiment of the present application;
  • Fig. 5 is a schematic diagram of VR glasses provided by an embodiment of the present application.
  • FIGS. 6A to 6C are schematic diagrams of virtual image planes corresponding to images displayed by VR glasses provided by an embodiment of the present application.
  • FIGS. 7A to 7B are schematic diagrams of a first application scenario provided by an embodiment of the present application.
  • FIGS. 8A to 8B are schematic diagrams of a second application scenario provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a VR wearable device provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of clear gaze points of users and blurred gaze points of non-users in a VR virtual environment provided by an embodiment of the present application;
  • FIG. 11 is a schematic flow chart of an image generation principle provided by an embodiment of the present application.
  • Fig. 14 is a schematic diagram of an image acquired by a user's human brain provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an image stream generation process provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of fuzzy objects in the foreground and clear objects in the foreground in the VR virtual environment provided by an embodiment of the present application;
  • Fig. 17 is another schematic flowchart of the principle of image generation provided by an embodiment of the present application.
  • 20 to 23 are schematic diagrams of image streams displayed on the left-eye display screen and the right-eye display screen on the display device provided by an embodiment of the present application;
  • FIG. 24 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • "At least one" in the embodiments of the present application means one or more, and "a plurality" means two or more.
  • Words such as "first" and "second" are used only to distinguish between descriptions; they are not to be understood as indicating or implying relative importance or order.
  • For example, the first object and the second object do not represent the importance or order of the two objects; the terms merely distinguish them.
  • VR technology is a means of human-computer interaction created with the help of computer and sensor technology.
  • VR technology integrates computer graphics technology, computer simulation technology, sensor technology, display technology and other science and technology to create a virtual environment.
  • the virtual environment includes three-dimensional realistic images generated by computers and dynamically played in real time to bring visual perception to users; moreover, in addition to the visual perception generated by computer graphics technology, there are also perceptions such as hearing, touch, force, and movement.
  • the user can see the VR game interface by wearing the VR wearable device, and can interact with the VR game interface through gestures, handles, and other operations, as if in a game.
  • Augmented Reality (AR) technology refers to superimposing computer-generated virtual objects on real-world scenes to enhance the real world.
  • AR technology needs to collect real-world scenes, and then add a virtual environment to the real world.
  • VR technology creates a completely virtual environment in which everything the user sees is virtual, while AR technology superimposes virtual objects on the real world, so what the user sees includes both real-world objects and virtual objects.
  • the user wears transparent glasses, through which the real environment around can be seen, and virtual objects can also be displayed on the glasses, so that the user can see both real objects and virtual objects.
  • Mixed reality (MR) technology introduces real-scene information into the virtual environment to build a bridge of interactive feedback among the virtual environment, the real world, and the user, thereby enhancing the realism of the user experience.
  • For example, a real object is virtualized (for example, a camera scans the real object for 3D reconstruction to generate a virtual object), and the virtualized real object is introduced into the virtual environment, so that the user can see the real object in the virtual environment.
  • The technical solution provided by the embodiments of the present application may be applicable to VR, AR, or MR scenarios, and may also be applied to other scenarios, such as glasses-free 3D scenes (glasses-free 3D displays, glasses-free 3D projection, and the like), theaters (such as 3D movies), and VR software in electronic equipment; in short, it can be applied to any scenario that needs to display three-dimensional images.
  • The following mainly uses the VR scenario as an example for description.
  • FIG. 2 is a schematic diagram of a VR system according to an embodiment of the present application.
  • the VR system includes a VR wearable device 100 and an image generating device 200 .
  • the image generating device 200 includes a host (such as a VR host) or a server (such as a VR server).
  • the VR wearable device 100 is connected (wired connection or wireless connection) with a VR host or a VR server.
  • the VR host or VR server may be a device with relatively large computing power.
  • the VR host can be a device such as a mobile phone, a tablet computer, or a notebook computer, and the VR server can be a cloud server, etc.
  • the VR host or VR server is responsible for generating images, etc., and then sends the images to the VR wearable device 100 for display, and the user wearing the VR wearable device 100 can see the images.
  • the VR wearable device 100 may be a head mounted device (Head Mounted Display, HMD), such as glasses, a helmet, and the like.
  • the VR system in FIG. 2 may not include the image generating device 200 .
  • the VR wearable device 100 has image generation capabilities locally, and does not need to acquire images from the image generation device 200 (VR host or VR server) for display.
  • the VR wearable device 100 can display a three-dimensional image. Since different objects have different depths of field (refer to the introduction below) on the three-dimensional image, the virtual environment can be shown to the user through the three-dimensional image.
  • a three-dimensional image includes objects at different image depths.
  • a VR wearable device displays a three-dimensional image, and the user wearing the VR wearable device sees a three-dimensional scene (that is, a virtual environment).
  • Different objects in the three-dimensional scene have different distances from the user's eyes, presenting a three-dimensional effect. Therefore, the image depth can be understood as the distance between the object on the 3D image and the user's eyes.
  • The larger the image depth, the farther the object visually appears from the user's eyes, like a distant view; the smaller the image depth, the closer the object visually appears to the user's eyes, like a close-up view.
  • Image depth may also be referred to as "depth of field”.
  • the human eye can obtain a light signal in the actual scene and process the light signal in the brain to realize visual experience.
  • the optical signal in the actual scene may include reflected light from different objects, and/or an optical signal directly emitted by a light source. Since the light signal of the actual scene can carry the relevant information (such as size, position, color, etc.) of each object in the actual scene, the brain can obtain the information of the object in the actual scene by processing the light signal , that is, to obtain visual experience.
  • the left eye and the right eye watch the same object, the viewing angles are slightly different. Therefore, the scene seen by the left eye and the right eye is actually different.
  • the left eye can obtain the light signal of the two-dimensional image (hereinafter referred to as the left eye image) on the plane where the focus of the human eye is perpendicular to the line of sight of the left eye.
  • the right eye can obtain the light signal of the two-dimensional image (hereinafter referred to as the right eye image) on the plane where the focus of the human eye is perpendicular to the line of sight of the right eye.
  • the image for the left eye is slightly different from the image for the right eye.
  • the brain can obtain information about different objects in the current scene by processing the light signals of the left-eye image and the right-eye image.
  • Stereo vision experience can also be called binocular stereo vision.
  • When viewing an object in an actual scene, the user's eyes go through two processes: convergence (vergence) and zooming (accommodation). Convergence may also be called vergence; this application does not distinguish between the two terms.
  • Convergence can be understood as adjusting the lines of sight of the two eyes so that they point to the object.
  • Taking the object in FIG. 3A as a triangle as an example, the lines of sight of the left eye and the right eye each turn toward the object (point to the object).
  • Two related concepts are the vergence angle and the vergence depth (vergence distance).
  • The convergence angle is the angle formed by the line of sight of the left eye and the line of sight of the right eye when the two eyes observe an object.
  • The brain can judge the depth of the object, that is, the convergence depth, from the convergence angle of the eyes. The closer the observed object is to the eyes, the larger the convergence angle and the smaller the convergence depth; correspondingly, the farther the object is from the eyes, the smaller the convergence angle and the greater the convergence depth.
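  • For illustration, the relationship between convergence angle and convergence depth can be made concrete with standard symmetric-viewing geometry (this formula and the 63 mm interpupillary distance are not taken from the patent):

```python
import math

def vergence_depth(convergence_angle_rad, ipd_m=0.063):
    """With interpupillary distance ipd_m and a symmetric gaze, the vergence depth d
    satisfies tan(angle / 2) = (ipd_m / 2) / d, so d = (ipd_m / 2) / tan(angle / 2)."""
    return (ipd_m / 2) / math.tan(convergence_angle_rad / 2)

# Example: a convergence angle of about 3.6 degrees corresponds to a depth of roughly 1 m.
print(round(vergence_depth(math.radians(3.6)), 2))
```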
  • Zooming can be understood as adjusting the human eye to the correct focal length when observing an object.
  • the brain controls the lens to adjust to the correct focus through the ciliary muscle.
  • Figure 3B is a schematic diagram of the composition of the human eye.
  • the human eye may include a lens and ciliary muscle, as well as a retina located in the fundus.
  • The crystalline lens functions as a zoom lens that converges the light entering the eye onto the retina at the fundus, so that the actual scene forms a clear image on the retina.
  • the ciliary muscle can be used to adjust the shape of the lens.
  • the ciliary muscle can adjust the diopter of the lens by contracting or relaxing, so as to achieve the effect of adjusting the focal length of the lens. Therefore, objects at different distances in the actual scene can be clearly imaged on the retina through the lens.
  • FIG. 4 is a diagram illustrating the adjustment of the ciliary muscle to the lens when the human eye observes objects at different distances.
  • As shown in (a) of FIG. 4, when the human eye observes a distant object (taking a non-luminous object as an example), the ciliary muscle relaxes and keeps the lens flat with a small diopter, so that the nearly parallel incident light converges through the lens onto the retina at the fundus.
  • When the human eye observes a relatively close object, as shown in (b) of FIG. 4 (again taking a non-luminous object as an example), the reflected light from the surface of the object enters the eye along the optical path shown in (b) of FIG. 4.
  • The ciliary muscle contracts, the lens bulges, and its diopter increases, so that the incident light shown in (b) of FIG. 4 converges through the lens onto the retina at the fundus. That is to say, when the human eye observes objects at different distances, the contraction or relaxation state of the ciliary muscle differs.
  • The brain can thus perceive a depth from the state of the ciliary muscle; this depth may be referred to as the zoom (accommodation) depth.
  • The human brain determines the convergence depth from the convergence angle, and determines the zoom depth from the contraction or relaxation state of the ciliary muscle. Both the convergence depth and the zoom depth represent the distance between the object and the user's eyes.
  • When viewing a real scene, the convergence depth and the zoom depth are coordinated and consistent.
  • When the two are inconsistent, the brain cannot accurately judge the depth of the object, a sense of fatigue occurs, and the user experience is affected.
  • The inconsistency between the object depth indicated by the vergence depth and that indicated by the zoom depth is also called vergence accommodation conflict (VAC).
  • FIG. 5 is a schematic diagram of VR glasses.
  • Two display screens (such as display screen 501 and display screen 502) may be provided in the VR glasses; display screen 501 and display screen 502 may be independent screens, or may be different display areas of the same screen, and each has a display function.
  • Each display screen can be used to display corresponding content to one eye (such as left eye or right eye) of the user through a corresponding eyepiece.
  • For example, display screen 501 corresponds to eyepiece 503, and display screen 502 corresponds to eyepiece 504.
  • Display screen 501 may display a left-eye image corresponding to the virtual environment; the light of the left-eye image passes through eyepiece 503 and converges at the left eye, so that the left eye sees the left-eye image.
  • Display screen 502 may display a right-eye image corresponding to the virtual environment; the light of the right-eye image passes through eyepiece 504 and converges at the right eye, so that the right eye sees the right-eye image.
  • the brain can fuse the left-eye image and the right-eye image, so that the user can see objects in the virtual environment corresponding to the left-eye image and the right-eye image.
  • the image seen by human eyes is actually an image corresponding to the image displayed on the display screen on the virtual image plane 600 as shown in FIG. 6A .
  • the left-eye image seen by the left eye may be a virtual image corresponding to the left-eye image on the virtual image plane 600 .
  • the right-eye image seen by the right eye may be a virtual image corresponding to the right-eye image on the virtual image plane 600 .
  • the zoom distance may be the distance from the virtual image plane 600 to the human eye (depth 1 as shown in FIG. 6B ).
  • the objects in the virtual environment displayed by the VR glasses to the user are often not on the virtual image plane 600 .
  • the observed object triangle in the virtual environment in FIG. 6B (because it is a virtual environment, the triangle is represented by a dotted line) is not on the virtual image plane 600 .
  • the depth of convergence should be the depth of the observed object (ie, triangle) in the virtual environment.
  • the depth of convergence may be depth 2 as shown in FIG. 6B .
  • depth 1 and depth 2 are not consistent at this time. In this way, the brain cannot accurately judge the depth of the observed object, thereby causing brain fatigue and affecting user experience.
  • a virtual environment includes multiple observed objects. As shown in FIG. 6C , there are two observed objects, wherein the observed object 1 is a triangle (dotted line) as an example, and the observed object 2 is a sphere (dashed line) as an example. For each observed object, there will be cases where the depth of convergence and the depth of zoom are different.
  • Typically, the sharpness of all virtual objects (that is, observed objects) on an image is the same.
  • the observed objects are described by taking triangles and spheres as examples.
  • Because the triangle and the sphere are displayed on the same image with the same sharpness, and the virtual image planes 600 corresponding to both are at the same depth (depth 1), the human brain considers that the zoom depths of the two observed objects, both based on the same virtual image plane, should be the same. In fact, however, the convergence depths of the two observed objects are different.
  • The convergence depth of the triangle is depth 2, and the convergence depth of the sphere is depth 3. Based on these different convergence depths, the human brain considers that the zoom depths of the two observed objects should be different.
  • One existing technology adjusts the virtual image plane 600 to the depth of field where the observed object (such as the triangle) is located in the virtual environment, so that the zoom depth and the convergence depth are consistent.
  • For example, this technology adjusts the virtual image plane corresponding to the triangle to the depth of field where the triangle is located, and adjusts the virtual image plane corresponding to the sphere to the depth of field where the sphere is located.
  • In this way, the zoom depth corresponding to the triangle is consistent with its convergence depth, and the zoom depth and convergence depth corresponding to the sphere are also consistent, thus overcoming VAC.
  • However, this technology of adjusting the position of the virtual image plane requires the support of dedicated optical hardware, such as stepping motors.
  • On the one hand, the additional optical hardware increases cost; on the other hand, it increases the volume of the VR glasses, making the approach difficult to apply to light and small VR wearable devices.
  • In view of this, an embodiment of the present application provides a display method.
  • On the image displayed by a VR display device (such as VR glasses), the sharpness of the sphere and the triangle can be set to be different; for example, the sphere is blurred and the triangle is clear.
  • Based on the same virtual image plane, the human brain would initially consider the zoom depth of the triangle and the sphere to be the same, namely depth 1.
  • Since the triangle is clear, the human brain considers depth 1 accurate for the triangle; but because the sphere is blurred, the brain considers depth 1 inaccurate, or not yet adjusted, for the sphere, and it tries to adjust the ciliary muscle to see the sphere clearly. The brain therefore no longer judges the zoom depths of the triangle and the sphere to be the same, which no longer conflicts with its judgment, based on their different convergence depths, that the zoom depths of the two observed objects should be different. This relieves fatigue of the human brain.
  • Therefore, the display method provided by the embodiments of the present application can alleviate the user's fatigue when viewing the virtual environment through VR glasses and improve the user experience; it does not rely on special optical hardware such as stepping motors, so it is low in cost and helps keep the device light and small.
  • FIG. 7A and FIG. 7B are schematic diagrams of the first application scenario provided by the embodiment of the present application.
  • This application scenario takes a user wearing VR glasses to play a VR game as an example.
  • the VR glasses display an image 701 .
  • The image 701 may be an image generated by a VR game application, and includes objects such as a gun, a container, and trees.
  • The gun is at depth of field 1, the container is at depth of field 2, and the tree is at depth of field 3, where depth of field 3 > depth of field 2 > depth of field 1.
  • The user's gaze point is on the gun (for example, on its scope). Because the depth of field 3 where the tree is located is farther from depth of field 1 than the depth of field 2 where the container is located, the sharpness of the tree in image 701 is lower than that of the container. Therefore, in the VR game picture seen through the VR glasses, the tree is blurred while the container is clear.
  • That is, the sharpness of different objects in the image is different: objects far from the user's gaze point (such as the tree) are relatively blurred, and objects close to the user's gaze point (such as the container) are relatively clear.
  • Because the convergence depths of the tree and the container are different, and their sharpness is also different, the human brain judges that their zoom depths should be different, which matches its judgment that their convergence depths are different; this relieves brain fatigue.
  • Moreover, the tree, which is farther from the user's gaze point, is blurred, and the container, which is closer to the gaze point, is clear, so the virtual environment seen by the human eye better matches the real situation.
  • The above description takes the VR glasses displaying one frame of image (that is, image 701) as an example; in general, VR glasses display an image stream (such as an image stream generated by a VR game application), and an image stream includes multiple frames of images.
  • During this process the user's gaze point may change. The VR glasses can detect the user's gaze point in real time through an eye-tracking module; when the gaze point changes, the sharpness of objects is determined based on the new gaze point. For example, objects far from the new gaze point are blurred, and objects closer to the new gaze point are sharp.
  • One possible implementation is that, while the user's gaze remains on the gun, the VR glasses display an image stream in which trees (objects far from the user's gaze point) are blurred in every frame. That is, during the period when the user's gaze is on the gun, objects far from the gaze point are always blurred. Although this relieves fatigue, it loses object details (for example, details of the tree), so the user may miss some details and the game may be lost.
  • Therefore, in the embodiments of the present application, when the VR glasses display the image stream, the sharpness of the tree can rise and fall, and the tree does not need to stay blurred all the time.
  • the VR glasses display an image stream, and the image stream includes an i-th frame, a j-th frame, and a k-th frame.
  • the tree (the object far away from the user's gaze point) is blurred in the i-th frame image
  • the tree is clear in the j-th frame image
  • the tree is blurred in the k-th frame image.
  • When the VR glasses display the i-th frame, the tree the user sees is blurred, which relieves fatigue; when the VR glasses display the j-th frame, the tree the user sees is clear.
  • The human brain fuses the tree in the i-th frame with the tree in the j-th frame, so even though the tree is blurred in the i-th frame, its details are not lost to the brain. That is to say, in FIG. 7B, when the VR glasses display the image stream, the sharpness of objects far from the user's gaze point (such as the tree) rises and falls rather than staying blurred; this relieves fatigue while ensuring that the brain captures enough detail of those objects, and the user experience is better.
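  • A gaze-point-based variant of the earlier sketch is shown below; the 0.5 m threshold, the blur kernel, and the every-other-frame schedule are assumptions used only to illustrate how the sharpness of objects far from the gaze point could rise and fall.

```python
import cv2
import numpy as np

def render_gaze_aware_frame(frame, depth_map, gaze_depth, frame_index,
                            first_distance=0.5, ksize=(21, 21)):
    """Blur pixels whose depth is more than `first_distance` away from the gaze depth,
    but only on alternate frames, so their sharpness rises and falls over time."""
    out = frame.copy()
    if frame_index % 2 == 0:                                   # blur only frames i, k, ...
        far_from_gaze = np.abs(depth_map - gaze_depth) > first_distance
        blurred = cv2.GaussianBlur(frame, ksize, 0)
        out[far_from_gaze] = blurred[far_from_gaze]
    return out
```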
  • For an object close to the user's gaze point (such as the container), its sharpness may remain unchanged.
  • Since the sharpness of the tree rises and falls, it should not exceed the sharpness of the container; for example, both the tree and the container are clear in frame j of FIG. 7B, but the sharpness of the tree is lower than or equal to that of the container.
  • Alternatively, the sharpness of objects close to the user's gaze point may also rise and fall, as long as, within the same image, their sharpness is higher than or equal to that of objects far from the gaze point.
  • For example, in the i-th frame in FIG. 7B the tree is blurred, and the container may also be blurred, but the degree of blurring of the container is lower than that of the tree, so the sharpness of the container remains higher than that of the tree.
  • The above description takes the case where the user's gaze point is on the gun as an example.
  • the user's gaze point can be on the game equipment (such as a gun) currently held by the game character corresponding to the user.
  • the point of gaze can also be at the game character corresponding to the opponent in the game, or at the building, or at the body part of the game character corresponding to the user, etc., and the embodiments of the present application do not give examples one by one.
  • FIG. 8A and FIG. 8B are examples of the second application scenario provided by the embodiment of the present application.
  • This scenario takes a user wearing VR glasses for VR driving as an example.
  • the VR glasses display an image 801 .
  • the image 801 may be an image generated by a VR driving application.
  • the image 801 includes vehicle information such as a steering wheel and a monitor, and also includes roads, trees on the roads, vehicles in front, and the like.
  • the depth of field 2 where the tree is located is greater than the depth of field 1 where the vehicle in front is located. Therefore, the sharpness of the trees in image 801 is lower than that of the vehicle ahead. That is, objects with a larger depth of field are blurred, and objects with a smaller depth of field are sharper.
  • This application scenario differs from the scenario shown in FIG. 7A and FIG. 7B.
  • In the first scenario, the user's gaze point is the reference: objects far from the gaze point are blurred and objects close to it are clear. In this scenario, whether an object is blurred or clear depends on its depth of field and is unrelated to the user's gaze point.
  • The above description takes the VR glasses displaying one frame of image (that is, image 801) as an example.
  • the VR glasses can display an image stream (such as an image stream generated by a VR driving application).
  • the image stream includes multiple frames of images.
  • a possible implementation manner is that distant objects on each image frame in the image stream are blurred. Although this method can relieve fatigue, it will lose the details of distant objects.
  • Therefore, in the embodiments of the present application, the sharpness of distant objects (such as trees) in the image stream can rise and fall, and they do not need to stay blurred all the time.
  • the VR glasses display an image stream, and the image stream includes an i-th frame, a j-th frame, and a k-th frame.
  • the tree on the i-th frame image is blurred
  • the tree on the j-th frame image is clear
  • the tree on the k-th frame image is blurred.
  • When the VR glasses display the i-th frame, the tree the user sees is blurred, which relieves fatigue; when the VR glasses display the j-th frame, the tree the user sees is clear.
  • The human brain fuses the tree in the i-th frame with the tree in the j-th frame, so even though the tree is blurred in the i-th frame, its details are not lost. That is to say, in FIG. 8B, the sharpness of objects with larger depth of field in the image stream displayed by the VR glasses rises and falls rather than staying blurred; this relieves fatigue while ensuring that the brain captures the details of distant objects, and the user experience is better.
  • For a nearby object (for example, the vehicle in front), its sharpness may remain unchanged.
  • Since the sharpness of the distant object rises and falls, it should not exceed the sharpness of the nearby object; for example, in frame j of FIG. 8B both the tree and the vehicle in front are clear, but the sharpness of the tree is lower than or equal to that of the vehicle in front.
  • Alternatively, the sharpness of nearby objects may also rise and fall, as long as, within the same image, the sharpness of nearby objects is higher than or equal to that of distant objects.
  • For example, in the i-th frame in FIG. 8B the tree is blurred, and the vehicle in front may also be blurred, but the degree of blurring of the vehicle in front is lower than that of the tree, so the sharpness of the tree remains lower than that of the vehicle in front.
  • In short, objects with a larger depth of field are blurred more and appear less sharp, and objects with a smaller depth of field are blurred less and appear sharper.
  • The above description takes the case where the tree is blurred and the vehicle in front is clear (the depth of field 2 of the tree being greater than the depth of field 1 of the vehicle in front) as an example; that is, objects whose depth of field is greater than depth of field 1 are blurred.
  • Alternatively, a depth of field 3 may be used as the reference, and objects at depths of field greater than depth of field 3 (including depth of field 1 and depth of field 2) are blurred.
  • Here depth of field 3 is, for example, the depth of field of the vehicle the user is currently driving; depth of field 3 may also be the depth of field of that vehicle's steering wheel, or of its windshield, and so on.
  • In the first application scenario (taking the VR game application as an example), the user's gaze point is the reference: objects far from the gaze point are blurred and objects close to it are clear, and when the image stream is displayed, the sharpness of the objects far from the gaze point rises and falls.
  • In the second application scenario (taking the VR driving application as an example), distant objects are blurred and nearby objects are clear, and when the image stream is displayed, the sharpness of the distant objects rises and falls.
  • For ease of description, the mode in which objects far from the user's gaze point are blurred and objects close to it are clear is called the first eye-protection mode, and the mode in which distant objects are blurred and nearby objects are clear (the second application scenario) is called the second eye-protection mode.
  • The above are the two application scenarios listed in this application, mainly taking VR games and VR driving as examples.
  • The second application scenario can also apply the solution of the first application scenario (such as that of the VR game application), taking the user's gaze point as the reference: objects farther from the gaze point are blurred, and objects closer to it are sharp.
  • the technical solutions provided by the embodiments of the present application can be applied to other application scenarios, such as VR viewing of cars, VR viewing of houses, VR chatting, VR teaching, VR theaters, and any other scenarios that need to display a virtual environment to the user.
  • FIG. 9 shows a schematic structural diagram of a VR wearable device provided by an embodiment of the present application by taking a VR wearable device (such as VR glasses) as an example.
  • the VR wearable device 100 may include a processor 110, a memory 120, a sensor module 130 (which may be used to acquire the user's posture), a microphone 140, buttons 150, an input and output interface 160, a communication module 170, a camera 180, battery 190 , optical display module 1100 , eye tracking module 1200 and so on.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the VR wearable device 100 .
  • the VR wearable device 100 may include more or fewer components than shown in the illustration, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • The processor 110 is generally used to control the overall operation of the VR wearable device 100 and may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. Different processing units may be independent devices or may be integrated in one or more processors.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • the processor 110 may be used to control the optical power of the VR wearable device 100 .
  • the processor 110 may be used to control the optical power of the optical display module 1100 to realize the function of adjusting the optical power of the VR wearable device 100 .
  • Specifically, the processor 110 can adjust the relative positions of the optical devices (such as lenses) in the optical display module 1100, so that the position of the virtual image plane formed for the human eye can be adjusted, thereby controlling the optical power of the VR wearable device 100.
  • processor 110 may include one or more interfaces.
  • The interfaces may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, a serial peripheral interface (SPI), and the like.
  • the processor 110 may perform blurring processing to different degrees on objects at different depths of field, so that objects at different depths of field have different sharpness.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the communication module 170 .
  • the processor 110 communicates with the Bluetooth module in the communication module 170 through the UART interface to realize the Bluetooth function.
  • the MIPI interface can be used to connect the processor 110 with the display screen in the optical display module 1100 , the camera 180 and other peripheral devices.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 180 , the display screen in the optical display module 1100 , the communication module 170 , the sensor module 130 , the microphone 140 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the camera 180 may collect images including real objects
  • the processor 110 may fuse the images collected by the camera with the virtual objects, and display the fused images through the optical display module 1100 .
  • the camera 180 may also collect images including human eyes.
  • the processor 110 performs eye tracking through the images.
  • the USB interface is an interface that conforms to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface can be used to connect a charger to charge the VR wearable device 100, and can also be used to transmit data between the VR wearable device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as mobile phones.
  • the USB interface may be USB 3.0, which is compatible with high-speed DisplayPort (DP) signal transmission and can transmit high-speed video and audio data.
  • the VR wearable device 100 may include a wireless communication function, for example, the VR wearable device 100 may receive images from other electronic devices (such as a VR host) for display.
  • the communication module 170 may include a wireless communication module and a mobile communication module.
  • the wireless communication function can be realized by an antenna (not shown), a mobile communication module (not shown), a modem processor (not shown), and a baseband processor (not shown).
  • Antennas are used to transmit and receive electromagnetic wave signals. Multiple antennas may be included in the VR wearable device 100, and each antenna may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module can provide wireless communication solutions applied on the VR wearable device 100, including second-generation (2G), third-generation (3G), fourth-generation (4G), and fifth-generation (5G) networks.
  • the mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module can receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave and radiate it through the antenna.
  • at least part of the functional modules of the mobile communication module may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speakers, etc.), or displays images or videos through the display screen in the optical display module 1100 .
  • the modem processor may be a stand-alone device. In some other embodiments, the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module or other functional modules.
  • the wireless communication module can provide wireless communication solutions applied on the VR wearable device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on.
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the wireless communication module receives electromagnetic waves through the antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module can also receive the signal to be sent from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic wave through the antenna to radiate out.
  • the antenna of the VR wearable device 100 is coupled to the mobile communication module, so that the VR wearable device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS can include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the VR wearable device 100 realizes the display function through the GPU, the optical display module 1100 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the optical display module 1100 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the memory 120 may be used to store computer-executable program code, including instructions.
  • the processor 110 executes various functional applications and data processing of the VR wearable device 100 by executing instructions stored in the memory 120 .
  • the memory 120 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the VR wearable device 100 .
  • the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the VR wearable device 100 can implement audio functions through an audio module, a speaker, a microphone 140, an earphone interface, and an application processor. Such as music playback, recording, etc.
  • the audio module is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be set in the processor 110 , or some functional modules of the audio module may be set in the processor 110 .
  • Loudspeakers, also called "horns", are used to convert audio electrical signals into sound signals. The user can listen to music or hands-free calls through the speaker of the VR wearable device 100. The microphone 140, also called a "mic", is used to convert sound signals into electrical signals.
  • the VR wearable device 100 may be provided with at least one microphone 140 .
  • the VR wearable device 100 can be provided with two microphones 140, which can also implement a noise reduction function in addition to collecting sound signals.
  • the VR wearable device 100 can also be provided with three, four or more microphones 140 to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the headphone jack is used to connect wired headphones.
  • the headphone interface can be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the VR wearable device 100 may include one or more buttons 150 , and these buttons may control the VR wearable device and provide users with access to functions on the VR wearable device 100 .
  • Keys 150 may be in the form of buttons, switches, dials, and touch or near-touch sensing devices such as touch sensors. Specifically, for example, the user can turn on the optical display module 1100 of the VR wearable device 100 by pressing a button.
  • the keys 150 include a power key, a volume key and the like.
  • the key 150 may be a mechanical key. It can also be a touch button.
  • the VR wearable device 100 can receive key input and generate key signal input related to user settings and function control of the VR wearable device 100 .
  • the VR wearable device 100 may include an input-output interface 160, and the input-output interface 160 may connect other devices to the VR wearable device 100 through suitable components.
  • Components may include, for example, audio/video jacks, data connectors, and the like.
  • the optical display module 1100 is used for presenting images to the user under the control of the processor 110 .
  • the optical display module 1100 can convert a real pixel image display into a near-eye projected virtual image display through one or several optical devices such as reflective mirrors, transmissive mirrors, or optical waveguides, so as to realize a virtual interactive experience, or an interactive experience combining virtuality and reality.
  • the optical display module 1100 receives image data information sent by the processor 110 and presents corresponding images to the user.
  • the VR wearable device 100 may further include an eye tracking module 1200, which is used to track the movement of human eyes, and then determine the point of gaze of the human eyes.
  • the position of the pupil can be located by image processing technology, the coordinates of the center of the pupil can be obtained, and then the gaze point of the person can be calculated.
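A minimal sketch of the pupil-center approach described above, assuming a grayscale image from the eye-facing camera and a previously calibrated 2x3 affine mapping from pupil coordinates to display coordinates; the thresholding step, the function names and the calibration format are illustrative assumptions, not details from the application.

```python
import numpy as np

def pupil_center(eye_image: np.ndarray, dark_threshold: int = 40):
    """Locate the pupil as the centroid of the darkest pixels in a grayscale eye image."""
    mask = eye_image < dark_threshold           # pupil pixels are the darkest region
    ys, xs = np.nonzero(mask)
    if xs.size == 0:                            # nothing dark enough: fall back to the image centre
        h, w = eye_image.shape
        return w / 2.0, h / 2.0
    return float(xs.mean()), float(ys.mean())   # pupil centre (cx, cy) in image coordinates

def gaze_point(pupil_xy, calib: np.ndarray) -> np.ndarray:
    """Map the pupil centre to a gaze point on the display with a calibrated 2x3 affine transform."""
    cx, cy = pupil_xy
    return calib @ np.array([cx, cy, 1.0])      # (gx, gy) on the display
```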
  • the following describes the display method of the embodiment of the present application by taking the VR wearable device shown in FIG. 9 as VR glasses as an example.
  • VR glasses display images to users, and different objects on the images have different clarity. For example, an object far from the user's gaze point on the image (referred to as a first object for convenience of description) is blurred, and an object close to the user's gaze point (referred to as a second object for convenience of description) is clear.
  • Embodiment 1 may be applicable to the application scenarios shown in FIG. 7A and FIG. 7B above.
  • the sharpness may include image resolution or scanning resolution, etc. In the following embodiments, the sharpness is described by taking the resolution of a displayed image as an example.
  • FIG. 11 is a schematic flowchart of an image generation method provided in Embodiment 1. As shown in Figure 11, the process includes:
  • the depth of each object can be automatically saved when the rendering pipeline is running, or can be calculated by relying on binocular vision, which is not limited in the embodiment of the present application.
  • S1103. Determine the blurring degree of the object according to the distance between the user's gaze point and the object.
  • When the distance between the depth of field where the object is located and the depth of field where the user's gaze point is located is less than the preset distance, the object may not be blurred; when that distance is greater than the preset distance, the object is blurred.
  • the specific value of the preset distance is not limited in this embodiment of the present application.
  • the degree of blurring of different objects on the image may increase in order according to the distance between the depth of field where the object is located and the depth of field where the user's gaze point is located. For example, suppose the depth of field where the user gazes at is depth 1. The distance between the depth of field where object 1 is located and depth of field 1 is distance 1, the distance between the depth of field where object 2 is and depth 1 is distance 2, and the distance between the depth of field where object 3 is and depth 1 is distance 3. If distance 1 ⁇ distance 2 ⁇ distance 3, then the degree of blur of object 1 ⁇ the degree of blur of object 2 ⁇ the degree of blur of object 3, so that the sharpness of object 1>the sharpness of object 2>the sharpness of object 3. That is to say, in the virtual environment seen by the user's eyes, objects farther from the user's gaze point are more blurred, and objects closer to the user's gaze point are clearer.
  • the VR device may generate an image first, and all objects on the image have the same definition, and then use an image blurring algorithm to perform blurring processing to different degrees on different objects on the image.
  • the image blurring algorithm includes at least one of Gaussian blur, image down-sampling, a defocus-blur algorithm based on deep learning, a level-of-detail (LOD) data structure, and so on, which this application will not describe in detail.
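A minimal sketch of this per-object blurring, assuming the renderer provides an RGB frame, a per-pixel depth map (in metres) and one boolean mask per object, and using Gaussian blur (one of the options listed above) with a strength that grows with the distance between the object's depth of field and the gaze depth. The function and parameter names (`blur_by_gaze_distance`, `min_distance`, `sigma_per_meter`) are illustrative and are not taken from the application.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_by_gaze_distance(frame, depth_map, object_masks, gaze_depth,
                          min_distance=0.5, sigma_per_meter=2.0):
    """Blur each object in proportion to how far its depth is from the gaze depth.

    frame        : HxWx3 float image rendered with uniform sharpness
    depth_map    : HxW depth map (in metres) saved by the rendering pipeline
    object_masks : list of HxW boolean masks, one per object
    gaze_depth   : depth of field at the user's gaze point (metres)
    """
    out = frame.copy()
    for mask in object_masks:
        obj_depth = float(depth_map[mask].mean())            # representative depth of the object
        distance = abs(obj_depth - gaze_depth)
        if distance <= min_distance:                         # close to the gaze point: keep sharp
            continue
        sigma = sigma_per_meter * (distance - min_distance)  # farther objects get stronger blur
        blurred = gaussian_filter(out, sigma=(sigma, sigma, 0))
        out[mask] = blurred[mask]                            # replace only this object's pixels
    return out
```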
  • the user's gaze point may change. As the user's gaze point changes, the sharpness of objects on the image adjusts accordingly. For example, continue to take Figure 10 as an example.
  • the clarity of each object is re-determined based on the new gaze point (i.e., the tree), and objects far away from the new gaze point are blurred.
  • Figure 10 uses VR glasses to display a frame of image as an example. It can be understood that, generally, the VR glasses display an image stream, and the image stream includes multiple frames of images.
  • the VR image generation device uses the image stream generated by the process shown in FIG. 11 by default (that is, objects far away from the user’s gaze point on each frame of image are blurred) .
  • Taking the VR image generation device being a mobile phone as an example, when the mobile phone detects at least one of the connection of the VR glasses, the startup of the VR glasses, or the startup of a VR application (such as a VR game), the mobile phone starts to generate the image stream using the process shown in Figure 11 by default, and then displays it through the VR glasses.
  • the VR image generation device uses the existing way to generate images by default (that is, all objects on the image have the same clarity), and when an instruction for starting the first eye protection mode is detected, the process shown in Figure 11 is used to generate images.
  • For the first eye protection mode, please refer to the previous description. That is to say, all objects on the image displayed by the VR glasses at the beginning have the same clarity, and after the instruction for starting the first eye protection mode is detected, objects far away from the user's gaze point on the displayed image are blurred.
  • As shown in FIG. 12, before frame i+1, all objects on the image have the same definition.
  • the instruction for starting the first eye protection mode includes but is not limited to: detecting that the user triggers an operation for starting the first eye protection mode (for example, a VR application includes a button for starting the first eye protection mode, and the operation may be clicking the button), detecting that the user's viewing time is greater than a preset duration, or detecting that the number of times the user blinks/squints within a preset duration is greater than a preset number of times.
  • the first eye protection mode is activated to relieve user fatigue.
  • A prompt message may also be output to ask the user whether to switch to the first eye protection mode; after an indication that the user confirms the switch is detected, the device switches to the first eye protection mode.
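A minimal sketch of how the triggering conditions above could be checked (viewing time exceeding a preset duration, or the blink/squint count in a sliding window exceeding a preset number). The class name and all thresholds are illustrative assumptions.

```python
import time

class EyeProtectionTrigger:
    """Decide when to prompt the user to switch to the first eye-protection mode (illustrative)."""

    def __init__(self, max_viewing_s=30 * 60, window_s=60, max_blinks=25):
        self.session_start = time.monotonic()
        self.max_viewing_s = max_viewing_s
        self.window_s = window_s
        self.max_blinks = max_blinks
        self.blink_times = []

    def on_blink(self):
        now = time.monotonic()
        self.blink_times.append(now)
        # keep only the blinks that fall inside the sliding window
        self.blink_times = [t for t in self.blink_times if now - t <= self.window_s]

    def should_prompt(self) -> bool:
        viewing_too_long = time.monotonic() - self.session_start > self.max_viewing_s
        blinking_too_much = len(self.blink_times) > self.max_blinks
        return viewing_too_long or blinking_too_much
```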
  • Objects far away from the user's gaze point in the image stream generated by the VR image generation device are blurred, which can relieve human-brain fatigue, but it is easy to lose the details of those objects: whether the first manner or the second manner is used, the object far away from the user's gaze point is always blurred, and the user cannot obtain its details.
  • Therefore, in the first manner or the second manner, the sharpness of objects far away from the user's gaze point in the image stream generated by the VR image generation device may instead alternate between high and low (see Figure 15 below for the specific generation process).
  • the image stream includes multiple cycles, and each cycle includes multiple frames of images, and in each cycle, the sharpness of objects far away from the user's gaze point on the image increases first and then decreases.
  • For example, the sharpness of the tree (an object far away from the user's gaze point) on the i-th frame image is lower than that of the tree on the j-th frame image, and the sharpness of the tree on the j-th frame image is higher than that of the tree on the k-th frame image. That is, the sharpness of an object far away from the user's gaze point (that is, the tree) shows a "blurry-clear-blurry" change trend within a cycle. The sharpness of the tree on the k-th frame image is lower than that of the tree on the p-th frame image, and the sharpness of the tree on the p-th frame image is higher than that of the tree on the q-th frame image; that is, during the next cycle, the sharpness of the object far from the user's gaze point (that is, the tree) also shows a "blurry-clear-blurry" change trend.
  • the two periods in FIG. 13 may be the same or different, without limitation.
  • On the one hand, this sharpness change trend can alleviate the fatigue of the human brain; on the other hand, it can prevent the user from losing image details of objects far away from the gaze point.
  • The j-th frame is the n-th frame after the i-th frame, the k-th frame is the m-th frame after the j-th frame, the p-th frame is the w-th frame after the k-th frame, and the q-th frame is the s-th frame after the p-th frame. n, m, w and s are integers greater than or equal to 1. n, m, w and s can be determined according to the user's visual dwell time and the image refresh interval. Assume that the user's visual dwell time is T and the image refresh interval (the duration of one frame) is P; the visual dwell time T may be any value within the range of 0.1 s to 3 s, or may be set by the user, which is not limited in this embodiment of the present application. Then, within time T, T/P frames of images can be displayed, so n, m, w and s are each less than or equal to T/P.
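A small worked calculation of this bound, assuming the dwell time T is given in seconds and the display refresh rate in hertz (so the frame interval P is its reciprocal); the function name is an illustrative assumption.

```python
def max_frame_offset(dwell_time_s: float, refresh_rate_hz: float) -> int:
    """Upper bound on n, m, w, s: number of frames shown within the visual dwell time (T / P)."""
    frame_interval = 1.0 / refresh_rate_hz            # P, the duration of one frame
    return int(dwell_time_s / frame_interval + 1e-9)  # guard against floating-point rounding

# Example: a 0.5 s dwell time on a 90 Hz display allows offsets of up to 45 frames,
# so the "clear" frame j may be at most 45 frames after the blurred frame i.
print(max_frame_offset(0.5, 90))  # 45
```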
  • the time difference between the display moment of the j-th frame image and the display moment of the i-th frame image is less than the user's visual dwell time.
  • In this way, the image information of the j-th frame image and the i-th frame image can be superimposed; since the object far away from the user's gaze point is blurred on the i-th frame image and clear on the j-th frame image, the superposition of the two images can ensure that sufficient details of the object far away from the user's gaze point are obtained.
  • the time difference between the display moment of the jth frame of image and the display moment of the kth frame of image is less than the user's visual dwell time, and will not be repeated here.
  • The following gives a specific example in which the object far from the user's gaze point is a lighthouse. Referring to Figure 14, the object far away from the user's gaze point (i.e., the lighthouse) is blurred on the i-th frame image and clear on the j-th frame image.
  • The sharpness of objects close to the user's gaze point in the image stream may not change; for example, the sharpness of the object close to the gaze point (that is, the mountain) can remain unchanged.
  • In the process shown in FIG. 15, the GPU outputs N frames of images (for convenience of description, the images output by the GPU are referred to as original images), and the sharpness of all objects on the N original images is the same; the N original images are processed into N new images in which the sharpness of the first object (that is, the object far away from the user's gaze point, such as the tree in Fig. 13) changes alternately. The new image of frame i is the image obtained by blurring the first object on the original image of frame i; the new image of frame j is the original image of frame j, or an image obtained by processing the first object of the original image of frame j as described below; the new image of frame k is the image obtained by blurring the first object on the original image of frame k; and the new image of frame p is the original image of frame p, or an image obtained in a similar way to the new image of frame j.
  • the first object in the i-th frame of the original image output by the GPU is blurred to obtain the i-th frame of the new image.
  • more details of the first object may be included in the new image of the jth frame.
  • One feasible way is to superimpose (or fuse) the new image of frame i with the original image of frame j output by the GPU to obtain the new image of frame j; the sharpness of the first object on the new image of frame j is then higher than that on the new image of frame i.
  • Alternatively, only the image block in the region where the first object is located on the new image of frame i may be superimposed with the image block in the region where the first object is located on the original image of frame j.
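A minimal sketch of this region-wise superposition, assuming the first object's region is given as a boolean mask; the blend weight and the function name are illustrative assumptions rather than values from the application.

```python
import numpy as np

def fuse_first_object(prev_new_frame: np.ndarray,
                      curr_original: np.ndarray,
                      first_object_mask: np.ndarray,
                      weight: float = 0.5) -> np.ndarray:
    """Build the new image of frame j: keep the original frame j everywhere except in the
    first object's region, where it is superimposed with the frame-i new (blurred) image."""
    fused = curr_original.astype(np.float32)            # copy of the original frame j
    prev = prev_new_frame.astype(np.float32)
    region = first_object_mask                          # HxW boolean mask of the first object
    fused[region] = weight * fused[region] + (1.0 - weight) * prev[region]
    return fused
```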
  • VR glasses display images to users, and different objects on the images have different clarity. For example, an object with a larger depth of field (called a first object for convenience of description) on an image is blurred, and an object with a smaller depth of field (called a second object for convenience of description) is clear.
  • Embodiment 2 may be applicable to the application scenarios shown in FIG. 8A and FIG. 8B above.
  • the second depth of field where the mountain is located is greater than the first depth of field where the tree is located, so the mountain is blurred and the tree is clear; in this way, in the virtual environment that the user sees, the mountain is blurred and the tree is clear.
  • FIG. 17 is a schematic flow diagram of the image generation method provided in the second embodiment. As shown in FIG. 17, the flow of the method includes:
  • the preset depth of field may be a specific depth of field value or a depth of field range, which is not limited in this embodiment of the present application.
  • the preset depth of field is used to judge which objects are distant objects and which objects are near objects. For example, an object whose depth of field is greater than the preset depth of field is a distant object, and an object whose depth of field is smaller than the preset depth of field is a near object.
  • distant objects may be blurred, but close-range objects may not be blurred.
  • There are multiple ways to determine the preset depth of field including but not limited to at least one of the following ways.
  • the preset depth of field may be determined according to a VR scene, and the preset depth of field varies with different VR scenes.
  • the VR scene includes but is not limited to at least one of VR games, VR viewing, VR teaching and the like.
  • the VR game includes game characters, and the preset depth of field can be determined according to the game characters.
  • the preset depth of field can be the depth of field of the game character corresponding to the user in the game scene, or the depth of field of the body parts of the game character corresponding to the user, or the depth of field of the game equipment currently held by the game character corresponding to the user .
  • the game character's arm is holding a gun, and the depth of field where the arm or gun is located can be determined as the preset depth of field.
  • the depth of field of the game character controlling the game can be used as the preset depth of field.
  • VR viewing includes a display screen, and the depth of field where the display screen is located can be determined as the preset depth of field.
  • VR teaching includes teaching equipment such as blackboards, display screens, and projections, and the depth of field where the teaching equipment is located can be determined as the preset depth of field.
  • the preset depth of field can be set by the user.
  • the user can set the preset depth of field on the VR glasses or an electronic device (such as a mobile phone) connected to the VR glasses.
  • the electronic device includes various VR applications, and different preset depths of field may be set for different VR applications.
  • the user can set the preset depth of field of the VR applications on the electronic device in batches, or can set individually for each VR application.
  • the preset depth of field can also be the default depth of field, which can be understood as the default setting of the VR glasses, or the default setting of the electronic device (such as a mobile phone) connected to the VR glasses, or the electronic device connected to the VR glasses ( For example, the VR application currently running on the mobile phone) is set by default, etc., which are not limited in this embodiment of the present application.
  • the preset depth of field may also be the depth of field where the virtual image plane is located. Taking FIG. 6C as an example, the depth of field where the dotted line surface is located is depth 1, so the preset depth of field may be depth 1.
  • the preset depth of field can also be based on the depth of field of the main object in the picture currently being displayed by the VR glasses.
  • the main object may include an object occupying the largest area in the screen, an object located in the center of the screen, or a virtual object (such as a UI interface) in the screen, and the like.
  • the image includes a tree, a house, and the sun. Assuming that the house is in the center, then the house is determined to be the main object, and the preset depth of field may be the depth of field where the house is located. Because the depth of field of the mountain is greater than that of the house, the mountain is blurred, and the depth of field of the tree is smaller than that of the house, so the tree is clear.
  • the preset depth of field is the depth of field where the user gazes.
  • the VR glasses may include an eye tracking module, through which the user's gaze point can be determined, and the depth of field at which the user's gaze point is determined is the preset depth of field.
  • the image includes trees, houses and the sun, assuming that the user's focus is on the house, then the preset depth of field may be the depth of field where the house is located.
  • the depth of field of the mountain is greater than that of the house, so the mountain is blurred, and the depth of field of the tree is smaller than that of the house, so the tree is clear.
  • the depth of each object can be automatically saved when the rendering pipeline is running, or can be calculated by relying on binocular vision, which is not limited in the embodiment of the present application.
  • S1703. Determine the blurring degree of the object according to the distance between the depth of the object and a preset depth of field.
  • When the depth of field of an object is less than or equal to the preset depth of field, the object may not be blurred; when the depth of field of the object is greater than the preset depth of field, the object needs to be blurred.
  • the blurring degrees of different objects on the image may increase sequentially from small to large depth of field.
  • For example, if the depth of field 1 where object 1 is located < the depth of field 2 where object 2 is located < the depth of field 3 where object 3 is located, then the degree of blurring of object 1 < the degree of blurring of object 2 < the degree of blurring of object 3.
  • That is, the sharpness of objects in the foreground, the middle ground, and the background decreases in turn.
  • the VR device can first generate an image, and then use an image blurring algorithm to blur different objects on the image to different degrees.
  • the image blurring algorithm includes at least one of Gaussian blur, image down-sampling, a defocus-blur algorithm based on deep learning, a level-of-detail (LOD) data structure, and so on, which is not limited in this embodiment of the present application.
  • LOD is a multi-layer data structure.
  • the data structure can be understood as an image processing algorithm, and the multi-layer data structure includes a multi-layer image processing algorithm.
  • LOD0 to LOD3 are included; wherein, each layer in LOD0 to LOD3 corresponds to an image processing algorithm.
  • different layers in LOD0 to LOD3 correspond to different algorithms, specifically, the higher the layer, the simpler the corresponding image processing algorithm. For example, LOD3 has the highest level, and the corresponding image processing algorithm is the simplest; LOD0 has the lowest level, and the corresponding image processing algorithm is the most complex.
  • LODs can be used to generate 3D images.
  • Taking LOD0 to LOD3 as an example, each layer in LOD0 to LOD3 can be used to generate a layer of a 3D image, and then the different layers are used to synthesize the 3D image, where different layers correspond to different depth ranges.
  • the image depth can be divided according to the number of LOD levels, for example, there are four layers LOD0 to LOD3, and the image depth can be divided into four ranges.
  • For example, LOD0 corresponds to depth range 1, that is, the image processing algorithm corresponding to LOD0 is used to process layers in depth range 1; LOD1 corresponds to depth range 2, that is, the image processing algorithm corresponding to LOD1 is used to process layers in depth range 2; LOD2 corresponds to depth range 3, that is, the image processing algorithm corresponding to LOD2 is used to process layers in depth range 3; and LOD3 corresponds to depth range 4, that is, the image processing algorithm corresponding to LOD3 is used to process layers in depth range 4. This is because the present application considers that objects with greater depth should be more blurred.
  • Therefore, a layer with a larger depth corresponds to a higher-level LOD layer (the higher the LOD level, the simpler the algorithm, as described above), and a layer with a smaller depth corresponds to a lower-level LOD layer, because the lower the LOD level, the more complex the corresponding algorithm and the clearer the processed layer.
  • the depth range 1 is 0-0.3m, which corresponds to LOD0 (because the layer generated by LOD0 has the highest clarity).
  • the depth range 2 is 0.3-0.5m, which corresponds to LOD1 (the layer generated by LOD1 has a lower resolution than the layer generated by LOD0).
  • Depth range 3 is 0.5-0.8m, corresponding to LOD2 (the definition of the layer generated by LOD2 is lower than that of LOD1 layer).
  • the depth range 4 is 0.8-1m, corresponding to LOD3 (the definition of the layer generated by LOD3 is lower than that of the LOD2 layer). That is, as the depth increases, the sharpness of the layer becomes lower and lower.
  • Finally, the layers corresponding to the different depth ranges are combined into one image, and the image is displayed on the VR display device.
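A minimal sketch of the depth-range-to-LOD mapping in the example above, where deeper layers are assigned higher (simpler, blurrier) LOD levels; the function name is an illustrative assumption.

```python
def lod_level_for_depth(depth_m: float) -> int:
    """Map a layer's depth (metres) to an LOD level; deeper layers get higher, simpler LODs."""
    depth_ranges = [          # (max_depth, LOD level) as in the example above
        (0.3, 0),             # 0-0.3 m   -> LOD0, most detailed
        (0.5, 1),             # 0.3-0.5 m -> LOD1
        (0.8, 2),             # 0.5-0.8 m -> LOD2
        (1.0, 3),             # 0.8-1 m   -> LOD3, simplest / most blurred
    ]
    for max_depth, level in depth_ranges:
        if depth_m <= max_depth:
            return level
    return 3                  # anything beyond the last range uses the coarsest level

# Example: a layer at 0.6 m depth is processed with the LOD2 algorithm.
print(lod_level_for_depth(0.6))  # 2
```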
  • Figure 16 uses VR glasses to display a frame of image as an example. It can be understood that, generally, the VR glasses display an image stream, and the image stream includes multiple frames of images.
  • the VR image generating device uses the image stream generated by the process shown in FIG. 17 by default (that is, the distant objects on each frame of image are blurred).
  • Taking the VR image generation device being a mobile phone as an example, when the mobile phone detects at least one of the connection of the VR glasses, the startup of the VR glasses, or the startup of a VR application (such as a VR game), the mobile phone starts to generate the image stream using the process shown in Figure 17 by default, and then displays it through the VR glasses.
  • the VR image generation device uses the existing method to generate images by default (that is, all objects on the image have the same clarity), and when an indication for starting the second eye protection mode is detected, uses the process shown in Figure 17 to generate images.
  • For the second eye protection mode, please refer to the previous description. That is to say, all objects on the image displayed by the VR glasses at the beginning have the same sharpness, and after the indication for starting the second eye protection mode is detected, distant objects on the displayed image are blurred.
  • As shown in FIG. 18, before the (i+1)-th frame, all objects on the image have the same definition; from the (i+1)-th frame onward, distant objects (such as the mountains and the sun) are blurred.
  • the indication for starting the second eye protection mode includes but is not limited to: detecting that the user triggers an operation for starting the second eye protection mode (for example, a VR application includes a button for starting the second eye protection mode, and the operation may be clicking the button), detecting that the user's viewing time is greater than a preset duration, or detecting that the number of times the user blinks/squints within a preset duration is greater than a preset number of times.
  • Prompt information may also be output to ask the user whether to switch to the second eye protection mode; after the user confirms, the device switches to the second eye protection mode. After the second eye protection mode is activated, distant objects (such as mountains) on the image are blurred, so the fatigue of the human brain is relieved and the user experience is better.
  • The distant objects in the image stream generated by the VR image generation device are blurred, which can relieve the fatigue of the human brain, but it is easy to lose the details of the distant objects: whether the first manner or the second manner is used, a distant object is always blurred, so the user cannot obtain its details.
  • Therefore, in the first manner or the second manner, the sharpness of distant objects in the image stream generated by the VR image generating device may instead alternate between high and low.
  • the image stream includes multiple periods, and each period includes multiple frames of images, and in each period, the definition of the first object on the image increases first and then decreases.
  • For example, the sharpness of a distant object (such as the mountain) on the i-th frame image is lower than that of the mountain on the j-th frame image, and the sharpness of the mountain on the j-th frame image is higher than that of the mountain on the k-th frame image. That is, the sharpness of distant objects shows a "blurry-clear-blurry" change trend within a cycle.
  • The sharpness of the mountain on the k-th frame image is lower than that on the p-th frame image, and the sharpness of the mountain on the p-th frame image is higher than that on the q-th frame image; that is, during the next cycle, the sharpness of distant objects also shows a "blurry-clear-blurry" change trend.
  • the two periods in Fig. 19 may be the same or different, without limitation.
  • On the one hand, this sharpness change trend can alleviate the fatigue of the human brain; on the other hand, it can prevent the user from losing image details of distant objects.
  • Similarly to Embodiment 1, the j-th frame is the n-th frame after the i-th frame, the k-th frame is the m-th frame after the j-th frame, the p-th frame is the w-th frame after the k-th frame, and the q-th frame is the s-th frame after the p-th frame, where n, m, w and s are integers greater than or equal to 1. For example, n, m, w and s may all be 1; that is, the j-th frame image is the next frame of the i-th frame image, the k-th frame image is the next frame of the j-th frame image, the p-th frame is the next frame of the k-th frame, and the q-th frame is the next frame of the p-th frame. n, m, w and s can be determined according to the user's visual dwell time and the image refresh interval, in the same way as in Embodiment 1, and will not be repeated.
  • the definition of objects in the foreground in the image stream may not change.
  • the clarity of the tree may not change.
  • Embodiment 1 and Embodiment 2 can be implemented independently or in combination.
  • The VR image generation device may use the technical solution of Embodiment 1 by default (such as the first manner or the second manner in Embodiment 1), or use the technical solution of Embodiment 2 by default (such as the first manner or the second manner in Embodiment 2). Alternatively, the VR image generating device includes a switching button through which it can be set to use the technical solution of Embodiment 1 or that of Embodiment 2; or the VR application includes a button through which the user can set whether the VR application uses the technical solution of Embodiment 1 or that of Embodiment 2.
  • the VR glasses have two display screens, a first display screen and a second display screen.
  • the first display screen is used to present images to the user's left eye
  • the second display screen is used to present images to the user's right eye.
  • the display screen corresponding to the left eye is referred to as the left-eye display screen
  • the display screen corresponding to the right eye is referred to as the right-eye display screen.
  • the left-eye display and the right-eye display are used to display image streams, respectively.
  • the image stream may be an image stream generated using the method of Embodiment 1 (such as the image stream shown in FIG. 13 or 12 ), or an image stream generated using the method of Embodiment 2 (such as FIG. 18 or image stream shown in Figure 19).
  • the image stream displayed on the left-eye display screen and the image stream displayed on the right-eye display screen are both the image streams shown in FIG. 13 in the first embodiment.
  • the images displayed on the left-eye display screen and the right-eye display screen are synchronized.
  • When the i-th frame of image is displayed on the left-eye display screen, the i-th frame of image is also displayed on the right-eye display screen. Since the object far away from the user's gaze point (for example, the tree) on the i-th frame image is blurred, the trees seen by the left eye and the right eye are both blurred at this time.
  • When the left-eye display screen displays the j-th frame of image, the right-eye display screen also displays the j-th frame of image; since the tree on the j-th frame image is clear, the trees seen by the left eye and the right eye are both clear at this time.
  • When the left-eye and right-eye display screens synchronously display the i-th frame of image, objects far away from the user's gaze point on the image obtained by synthesizing the two screens' images in the human brain are blurred, which can relieve fatigue; when the two screens synchronously display the j-th frame of image, the synthesized image is clear, and the details of objects far away from the user's gaze point can be obtained.
  • the sharpness change trend of objects far away from the user's gaze point in the image stream displayed on the left-eye display screen and the image stream displayed on the right-eye display screen is the same, and both are "blurry-clear-blurry-clear" change trends.
  • the sharpness change trends of objects far away from the user's gaze point in the image stream displayed on the left-eye display screen and the image stream displayed on the right-eye display screen may be opposite.
  • For example, the sharpness of objects far away from the user's gaze point in the image stream displayed on the left-eye display screen alternates as "blurry-clear-blurry-clear", while the sharpness of objects far away from the user's gaze point in the image stream displayed on the right-eye display screen alternates as "clear-blurry-clear-blurry".
  • When the i-th frame image is displayed on the left-eye display screen, the i-th frame image is also displayed on the right-eye display screen. The object far away from the user's gaze point (for example, the tree) is blurred on the i-th frame image on the left-eye display screen and clear on the i-th frame image on the right-eye display screen; therefore, at this time, the tree seen by the left eye is blurred and the tree seen by the right eye is clear.
  • When the left-eye display screen displays the j-th frame of image, the right-eye display screen also displays the j-th frame of image. The tree is clear on the j-th frame image on the left-eye display screen and blurred on the j-th frame image on the right-eye display screen; therefore, at this time, the tree seen by the left eye is clear and the tree seen by the right eye is blurred.
  • In this way, although the left-eye and right-eye display screens display images synchronously (the i-th frame image or the j-th frame image), the object far away from the gaze point is always clear on one eye's image and blurred on the other eye's image. This can alleviate the fatigue of the human brain to a certain extent, and the image obtained by superimposing the left-eye image and the right-eye image in the human brain will not be too blurred for the object far away from the gaze point, avoiding the loss of too much of its detail.
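A minimal sketch of the per-eye scheduling described in this section, covering both the synchronized scheme and the opposite (alternating) scheme; the two-frame period and the function name are illustrative assumptions.

```python
def eye_blur_schedule(frame_index: int, period: int = 2, opposite: bool = False):
    """Return (blur_left, blur_right) for the first object on this frame.

    In the synchronized scheme both eyes follow the same blur-clear cycle; in the
    opposite scheme the right eye is clear whenever the left eye is blurred, and
    vice versa, so the image fused in the brain never loses all detail.
    """
    blur_left = (frame_index % period) == 0          # e.g. frame i blurred, frame j clear, ...
    blur_right = (not blur_left) if opposite else blur_left
    return blur_left, blur_right

# Example: with opposite=True, frame i gives (True, False) -> left blurred, right clear,
# and frame j gives (False, True) -> left clear, right blurred.
```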
  • both the image stream displayed on the left-eye display screen and the image stream displayed on the right-eye display screen are the image streams shown in FIG. 19 in the second embodiment.
  • the images displayed on the left-eye display screen and the right-eye display screen are synchronized.
  • When the i-th frame image is displayed on the left-eye display screen, the i-th frame image is also displayed on the right-eye display screen. Since the distant objects (such as the mountains and the sun) are blurred on the i-th frame image, the distant objects seen by the left eye and the right eye are both blurred at this time.
  • When the left-eye display screen displays the j-th frame of image, the right-eye display screen also displays the j-th frame of image; since the distant objects (e.g., the mountain and the sun) are clear on the j-th frame image, the distant objects seen by the left eye and the right eye are both clear at this time.
  • When the left-eye and right-eye display screens synchronously display the i-th frame of image, the distant objects on the image obtained by synthesizing the two screens' images in the human brain are blurred, which can relieve fatigue; when the two screens synchronously display the j-th frame of image, the synthesized image is clear, and the details of the distant objects can be obtained.
  • the image stream displayed on the left-eye display screen and the image stream displayed on the right-eye display screen have the same changing trend of the sharpness of distant objects, both of which are "fuzzy-clear-fuzzy-clear".
  • the sharpness change trend of the distant object in the image stream displayed on the left-eye display screen and the image stream displayed on the right-eye display screen may be opposite.
  • For example, the sharpness of distant objects in the image stream displayed on the left-eye display screen alternates as "blurry-clear-blurry-clear", while the sharpness of distant objects in the image stream displayed on the right-eye display screen alternates as "clear-blurry-clear-blurry".
  • When the i-th frame image is displayed on the left-eye display screen, the i-th frame image is also displayed on the right-eye display screen. The distant objects (such as the mountains and the sun) are blurred on the i-th frame image on the left-eye display screen and clear on the i-th frame image on the right-eye display screen; therefore, at this time, the distant objects seen by the left eye are blurred and those seen by the right eye are clear.
  • When the left-eye display screen displays the j-th frame of image, the right-eye display screen also displays the j-th frame of image. The distant objects (such as the mountains and the sun) are clear on the j-th frame image on the left-eye display screen and blurred on the j-th frame image on the right-eye display screen; therefore, at this time, the distant objects seen by the left eye are clear and those seen by the right eye are blurred.
  • In this way, although the left-eye and right-eye display screens display images synchronously (the i-th frame image or the j-th frame image), a distant object is always clear on one eye's image and blurred on the other eye's image. This can relieve the fatigue of the human brain to a certain extent, and the image obtained by superimposing the left-eye image and the right-eye image in the human brain will not be too blurred for the distant object, avoiding the loss of too much of its detail.
  • FIG. 24 shows an electronic device 2400 provided by this application.
  • the electronic device 2400 may be the aforementioned VR wearable device (eg, VR glasses).
  • the electronic device 2400 may include: one or more processors 2401; one or more memories 2402; a communication interface 2403; and one or more computer programs 2404, where the above components may be connected through one or more buses 2405.
  • The one or more computer programs 2404 are stored in the memory 2402 and configured to be executed by the one or more processors 2401; the one or more computer programs 2404 include instructions, and the instructions can be used to perform the relevant steps of the mobile phone in the corresponding embodiments above.
  • the communication interface 2403 is used to implement communication with other devices, for example, the communication interface may be a transceiver.
  • the methods provided in the embodiments of the present application are introduced from the perspective of an electronic device (such as a mobile phone) as an execution subject.
  • the electronic device may include a hardware structure and/or a software module, and realize the above-mentioned functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above-mentioned functions is executed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • the terms “when” or “after” may be interpreted to mean “if” or “after” or “in response to determining" or “in response to detecting ".
  • the phrases “in determining” or “if detected (a stated condition or event)” may be interpreted to mean “if determining" or “in response to determining" or “on detecting (a stated condition or event)” or “in response to detecting (a stated condition or event)”.
  • relational terms such as first and second are used to distinguish one entity from another, without limiting any actual relationship and order between these entities.
  • references to "one embodiment” or “some embodiments” or the like in this specification means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, all or part of the processes or functions described in this embodiment will be generated.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a display method and an electronic device, which are used to alleviate the sense of fatigue of the human brain that occurs when a user wears VR glasses to watch a VR scene. The method comprises: displaying N frames of images to a user by means of a display device, where, on an i-th frame of image among the N frames of images, the definition of a first object located at a first depth of field is a first definition; on a j-th frame of image among the N frames of images, the definition of the first object located at the first depth of field is a second definition; and on a k-th frame of image among the N frames of images, the definition of the first object located at the first depth of field is a third definition, the first definition being lower than the second definition, the second definition being higher than the third definition, i, j, k all being positive integers less than N, and i < j < k; and the first depth of field is greater than a second depth of field, or the distance between the first depth of field and the depth of field in which the user's gaze point is located is greater than a first distance.
PCT/CN2022/106280 2021-07-21 2022-07-18 Procédé d'affichage et dispositif électronique WO2023001113A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110824187.7 2021-07-21
CN202110824187.7A CN115686181A (zh) 2021-07-21 2021-07-21 一种显示方法与电子设备

Publications (1)

Publication Number Publication Date
WO2023001113A1 true WO2023001113A1 (fr) 2023-01-26

Family

ID=84980024

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/106280 WO2023001113A1 (fr) 2021-07-21 2022-07-18 Procédé d'affichage et dispositif électronique

Country Status (2)

Country Link
CN (1) CN115686181A (fr)
WO (1) WO2023001113A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116850012A (zh) * 2023-06-30 2023-10-10 广州视景医疗软件有限公司 一种基于双眼分视的视觉训练方法及***


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105072436A (zh) * 2015-08-28 2015-11-18 胡东海 虚拟现实及增强现实成像景深自动调节方法以及调节装置
US20170160798A1 (en) * 2015-12-08 2017-06-08 Oculus Vr, Llc Focus adjustment method for a virtual reality headset
CN106484116A (zh) * 2016-10-19 2017-03-08 腾讯科技(深圳)有限公司 媒体文件的处理方法和装置
CN110095870A (zh) * 2019-05-28 2019-08-06 京东方科技集团股份有限公司 光学显示***、显示控制装置和增强现实设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116850012A (zh) * 2023-06-30 2023-10-10 广州视景医疗软件有限公司 一种基于双眼分视的视觉训练方法及***
CN116850012B (zh) * 2023-06-30 2024-03-12 广州视景医疗软件有限公司 一种基于双眼分视的视觉训练方法及***

Also Published As

Publication number Publication date
CN115686181A (zh) 2023-02-03

Similar Documents

Publication Publication Date Title
US11899212B2 (en) Image display method and device for head mounted display
US10009542B2 (en) Systems and methods for environment content sharing
US11024083B2 (en) Server, user terminal device, and control method therefor
EP3862845B1 (fr) Procédé de commande d'écran d'affichage conformément au point de focalisation du globe oculaire et équipement électronique monté sur la tête
JP7408678B2 (ja) 画像処理方法およびヘッドマウントディスプレイデバイス
JP6094190B2 (ja) 情報処理装置および記録媒体
WO2022252924A1 (fr) Procédé de transmission et d'affichage d'image et dispositif et système associés
WO2021013043A1 (fr) Procédé et appareil interactifs dans une scène de réalité virtuelle
CN111103975B (zh) 显示方法、电子设备及***
WO2023001113A1 (fr) Procédé d'affichage et dispositif électronique
CN108989784A (zh) 虚拟现实设备的图像显示方法、装置、设备及存储介质
WO2023082980A1 (fr) Procédé d'affichage et dispositif électronique
WO2022233256A1 (fr) Procédé d'affichage et dispositif électronique
EP3961572A1 (fr) Système et procédé de rendu d'image
WO2023035911A1 (fr) Procédé d'affichage et dispositif électronique
WO2021057420A1 (fr) Procédé d'affichage d'interface de commande et visiocasque
CN116934584A (zh) 一种显示方法与电子设备
US12015758B1 (en) Holographic video sessions
WO2023116541A1 (fr) Appareil de suivi oculaire, dispositif d'affichage et support de stockage
US20230262406A1 (en) Visual content presentation with viewer position-based audio
EP4261768A1 (fr) Système et procédé de traitement d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22845278

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22845278

Country of ref document: EP

Kind code of ref document: A1