CN113534959A - Screen display method, screen display device, virtual reality equipment and program product

Screen display method, screen display device, virtual reality equipment and program product

Info

Publication number
CN113534959A
Authority
CN
China
Prior art keywords
real
time
video
user
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110854543.XA
Other languages
Chinese (zh)
Inventor
刘�东
王志国
张弛
黄刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Music Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202110854543.XA
Publication of CN113534959A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a picture display method, a picture display device, virtual reality equipment and a program product, belonging to the technical field of virtual reality. The method comprises the following steps: when the virtual reality equipment presents a virtual scene corresponding to the current video, determining the real-time focusing point coordinates of the pupil of the user in the virtual scene, wherein the current video comprises at least one video character, each video character having a set of real-time coordinates in the virtual scene determined from the position of each video character in different video frames; determining a target video character concerned by the user from the at least one video character according to the real-time focusing point coordinates and the real-time coordinate sets of all the video characters; and displaying a close-up picture of the target video character in a real-time viewable area of the virtual scene. The invention thus provides a personalized picture close-up service for users.

Description

Screen display method, screen display device, virtual reality equipment and program product
Technical Field
The present invention relates to the field of virtual reality technologies, and in particular, to a method and an apparatus for displaying a screen, a virtual reality device, and a program product.
Background
Virtual Reality (VR) technology combines simulation technology with computer graphics, human-computer interface technology, multimedia technology, sensing technology, network technology, and other technologies. With the continuous development of VR technology, various VR devices such as VR glasses have appeared, which generate virtual scenes through computational simulation.
However, in the related art, close-up enlargement of virtual content is non-targeted, and a personalized virtual content close-up service cannot be provided for the user.
Disclosure of Invention
The invention mainly aims to provide a picture display method, a picture display device, virtual reality equipment and a program product, and aims to solve the technical problem that a personalized virtual content close-up service cannot be provided for a user in the prior art.
In order to achieve the above object, the present invention provides a screen display method for a virtual reality device, the screen display method including the steps of:
when the virtual reality equipment presents a virtual scene corresponding to the current video, determining the real-time focusing point coordinates of the pupil of the user in the virtual scene; wherein the current video comprises at least one video character, each video character having a set of real-time coordinates in the virtual scene, the set of real-time coordinates determined from the position of each video character in a different video frame;
determining a target video role concerned by the user from the at least one video role according to the real-time focusing point coordinate and the real-time coordinate set of all the video roles;
displaying a close-up view of the target video character in a real-time viewable area of the virtual scene.
In one embodiment, the step of determining the real-time focal point coordinates of the user's pupil in the virtual scene includes:
acquiring first real-time offset information of pupils of a user and second real-time offset information of lenses of the virtual reality equipment; the first real-time offset information is offset information of a real-time focusing point of the pupil of the user on the lens relative to a mirror surface central point of the lens; the second real-time offset information is offset information of the mirror surface central point relative to a reference point in the virtual scene, and the reference point is an orthographic projection point of the mirror surface central point in the virtual scene when the head of a user is not offset;
and determining the real-time focusing point coordinates of the pupils of the user in the virtual scene according to the first real-time offset information and the second real-time offset information.
In an embodiment, the determining the real-time focus point coordinates of the pupil of the user in the virtual scene according to the first real-time offset information and the second real-time offset information includes:
determining a real-time longitude coordinate of the real-time focusing point coordinate according to a first longitude coordinate of the first real-time offset information and a second longitude coordinate of the second real-time offset information;
and determining the real-time latitude coordinate of the real-time focusing point coordinate according to the first latitude coordinate of the first real-time offset information and the second latitude coordinate of the second real-time offset information.
In an embodiment, prior to the step of displaying a close-up view of the target video character in the real-time viewable area of the virtual scene, the method further comprises:
recording the focusing time length of the target video role concerned by the user;
the step of displaying a close-up view of the target video character in a real-time viewable area of the virtual scene, comprising:
and if the focusing time length meets a preset condition, displaying a close-up picture of the target video role in the real-time visual area of the virtual scene.
In an embodiment, the step of displaying a close-up picture of the target video character in the real-time visual area of the virtual scene if the focusing duration satisfies a preset condition includes:
if the focusing duration is greater than or equal to a first preset threshold value, displaying a low-magnification close-up picture of an image displayed by the target video character in a currently played video frame in a real-time visual area of the virtual scene;
if the focusing duration is greater than or equal to a second preset threshold, displaying a high-magnification close-up picture of the image displayed by the target video role in the currently played video frame in the real-time visual area of the virtual scene; wherein the second preset threshold is greater than the first preset threshold.
In an embodiment, prior to the step of displaying a close-up view of the target video character in the real-time viewable area of the virtual scene, the method further comprises:
counting the accumulated focusing time length of the target video role concerned by the user;
if the focusing duration meets a preset condition, displaying a close-up picture of the target video character in the real-time visual area of the virtual scene, wherein the step comprises the following steps:
and if the accumulated focusing time length is larger than a third preset threshold value, displaying the display image feature of the target video role in a real-time visual area of the virtual scene.
In an embodiment, the step of determining the target video character focused by the user from the at least one video character according to the real-time focus point coordinate and the real-time coordinate set of all the video characters comprises:
comparing the real-time focusing point coordinates with the real-time coordinate sets of all the video roles, and screening out a target real-time coordinate set comprising the real-time focusing point coordinates;
and determining the target video role concerned by the user according to the target real-time coordinate set.
In a second aspect, the present invention further provides a screen display apparatus for a virtual reality device, where the screen display apparatus includes:
the real-time coordinate determination module is used for determining real-time focusing point coordinates of pupils of a user in a virtual scene when the virtual reality device presents the virtual scene corresponding to the current video; wherein the current video comprises at least one video character, each video character having a set of real-time coordinates in the virtual scene, the set of real-time coordinates determined from the position of each video character in a different video frame;
the target role determination module is used for determining a target video role concerned by the user from the at least one video role according to the real-time focusing point coordinate and the real-time coordinate set of all the video roles;
a screen content display module to display a close-up screen of the target video character in a real-time viewable area of the virtual scene.
In a third aspect, an embodiment of the present invention further provides a virtual reality device, including:
a virtual reality device body;
a memory;
a processor; and
a display program stored on the memory and executable on the processor, the display program configured to implement the steps of the screen display method as described above.
In a fourth aspect, the present invention further provides a computer program product including executable program code, where the program code, when executed by a processor, performs the steps of the screen display method as described above.
According to the picture display method provided by the embodiment of the invention, the user's particular attention to and fondness for a certain video character in the currently played video is determined through the real-time focusing point coordinates of the user's pupil in the virtual scene, and a close-up picture of that video character is displayed in the real-time visible area of the virtual reality device, thereby providing a personalized virtual content close-up service for the user.
Drawings
FIG. 1 is a schematic structural diagram of a virtual reality device for the screen display method of the present invention;
FIG. 2 is a schematic diagram of a VR visual space in the image display method of the present invention;
FIG. 3 is a flowchart illustrating a first exemplary embodiment of a screen displaying method according to the present invention;
FIG. 4 is a schematic view of a ray method according to a first embodiment of the image display method of the present invention;
FIG. 5 is a schematic view showing a close-up view in the first embodiment of the screen displaying method according to the present invention;
FIG. 6 is a schematic coordinate diagram of first real-time offset information according to a first embodiment of a frame display method of the present invention;
FIG. 7 is a schematic coordinate diagram of second real-time offset information according to the first embodiment of the image display method of the present invention;
FIG. 8 is a flowchart illustrating a second exemplary embodiment of a screen displaying method according to the present invention;
FIG. 9 is a flowchart illustrating a screen display method according to a third embodiment of the present invention;
FIG. 10 is a functional block diagram of a screen display apparatus according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Virtual reality devices such as VR glasses simulate a three-dimensional virtual world and provide the user with visual, auditory, tactile, and other sensory simulations, so that the user feels personally present and can freely observe objects in the three-dimensional space in real time. Therefore, some users use VR glasses to watch concert videos or evening gala videos to obtain an immersive experience. In the related art, virtual reality devices such as VR glasses can capture local video frames of a concert or gala video and splice the captured video frame segments into the original video, thereby realizing a local close-up enlargement function.
However, the close-up enlargement function in the related art is non-targeted: it depends on the preference of the video producer, or on the producer's prediction of which segments viewers will like, and therefore cannot provide a personalized virtual content close-up service to the user actually watching the video.
Therefore, an embodiment of the invention provides a picture display method, which determines the user's particular attention to and fondness for a certain video character in the currently played video through the real-time focusing point coordinates of the user's pupil in the virtual scene, and displays a close-up picture of that video character in the real-time visual area of the virtual reality device, thereby providing a personalized virtual content close-up service for the user.
The inventive concept of the present application is further illustrated below with reference to some specific embodiments.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a virtual reality device suitable for the screen display method according to an embodiment of the present invention.
The virtual reality device may be a VR helmet, VR glasses, a VR motion seat, or the like; the present application does not specially limit the virtual reality device body. The virtual reality device body comprises a lens and a display device arranged at an interval, a posture sensor, and an eyeball tracking device.
The display device is used for playing the current video. The attitude sensor is used for acquiring real-time movement information of the lens; it is a high-performance three-dimensional motion attitude measurer based on Micro-Electro-Mechanical System (MEMS) technology and generally comprises auxiliary motion sensors such as a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass, with which it acquires the attitude information of the lens. The eyeball tracking device is used for locating and tracking the user's pupils and collecting the focusing center of the pupils. In some embodiments, the eyeball tracking device may be a built-in camera aimed at the user's eye.
The virtual reality device further comprises: at least one processor 301, a memory 302, and a display program stored on the memory and executable on the processor, the display program configured to implement the steps of the screen display method as follows.
The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement a screen display method provided by method embodiments herein.
The virtual reality device also includes a communication interface 303. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Various peripheral devices may be connected to communication interface 303 via a bus, signal line, or circuit board.
The communication interface 303 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302, and the communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302, and the communication interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the virtual reality device, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
The embodiment of the invention provides a picture display method. Referring to fig. 3, fig. 3 is a flowchart illustrating a first embodiment of a screen display method. In this embodiment, the screen display method is used in a virtual reality device.
Some of the concepts related to this application are shown below:
VR visual space: referring to fig. 2, the VR visual space is a virtual northern hemisphere whose sphere center is the position of the character at the user's viewing angle in the VR device and whose inner spherical surface carries the VR scene. The virtual scene of the currently played video is located on this northern hemisphere, and any pixel point in the virtual scene can express its specific coordinates in the VR visual space by longitude and latitude; the latitude of the northern hemisphere runs from 0° to 180°. The VR visual space has a main stage area. The central longitudinal axis of the main stage area is taken as the prime meridian, i.e. the spherical arc of 0° longitude, and the longitudinal section through the center and the prime meridian divides the hemisphere into a west-longitude arc surface and an east-longitude arc surface: the west-longitude range is −180° to 0°, and the east-longitude range is 0° to 180°. The coordinates of the central point of the main stage area are expressed as (0, hx), and the difference between the coordinates of any point in the VR visual space and the coordinates of the central point of the main stage area is the offset of that point relative to the central point of the main stage area.
Real-time visual area: owing to head movements of the user, the virtual-hemisphere region corresponding to the VR lens area in the VR visual space changes, so the user observes a different virtual scene region in real time as the head posture changes. The partial scene region seen under a given head posture is the real-time visual area.
In this embodiment, the screen display method includes the steps of:
s101, when a virtual scene corresponding to a current video is presented by virtual reality equipment, determining real-time focusing point coordinates of a pupil of a user in the virtual scene; the current video comprises at least one video role, each video role has a real-time coordinate set in a virtual scene, and the real-time coordinate set is determined according to the position of each video role in different video frames.
The current video is a VR video; when it is played on the virtual reality device, a corresponding virtual scene is presented. The VR video comprises a background and a plurality of dynamic character images, which belong to at least one video character. When watching a VR video, a user generally pays special attention to a certain video character that he or she loves; specifically, the user's pupil tracks the area where that video character appears in the virtual scene, so the pupil shifts in real time. Embodied in the virtual scene, this means that in each frame the user's pupil may focus at a different position on the virtual northern hemisphere of the VR visual space, and these positions can be expressed in the longitude-latitude coordinate system established on the virtual northern hemisphere, i.e. as the real-time focusing point coordinates of the user's pupil in the virtual scene.
Within each frame, the real-time image of each video character's dynamic character figure is a static image, and the real-time coordinate set of the closed area that the figure occupies in the VR visual space of the virtual scene can be expressed as a function F(w, j). The function F(w, j) describes a closed, simply connected region and may be pre-extracted and stored in the virtual reality device.
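For illustration only, the storage layout below is an assumption rather than something the patent specifies: the pre-extracted sets F(w, j) could be kept as one closed polygon of (latitude, longitude) vertices per character per frame.

# character id -> frame index -> list of (latitude, longitude) vertices
# describing the closed, simply connected region F(w, j) for that frame.
RegionTable = dict[str, dict[int, list[tuple[float, float]]]]

def region_of(table: RegionTable, character: str, frame: int):
    """Return the stored region F(w, j) of `character` in `frame`,
    or None if the character does not appear in that frame."""
    return table.get(character, {}).get(frame)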
For example, the VR video may be a concert VR video featuring a 7-member idol group, thus including video characters corresponding to the 7 idol stars. It is readily understood that the group's singing is choreographed, so the 7 stars stand at different positions at different times; together with camera movement, the result is that in each frame of the VR video the 7 idol stars usually occupy different coordinate regions in the VR visual space. Generally, a user pays special attention to or loves one of the 7 idol stars. When the user watches the concert VR video, the user's pupil unconsciously and automatically tracks the position area where that idol star appears in the virtual scene. At this time, the real-time focusing point coordinates of the user's pupils express the user's preference for one of the video characters.
It is worth mentioning that, while the user watches the VR video, the real-time focusing point coordinates of the user's pupil can be determined once per frame, with the frame duration as the granularity. In this embodiment, the coordinate at which the user's actual focus point falls in the VR visual space is denoted (J_ua, W_ua).
In one embodiment, step S101 includes:
step A10, acquiring first real-time offset information of a pupil of a user and second real-time offset information of a lens of virtual reality equipment; the first real-time offset information is offset information of a real-time focusing point of a pupil of a user on the lens relative to a mirror surface central point of the lens; the second real-time offset information is offset information of the central point of the mirror surface relative to a reference point in the virtual scene, and the reference point is an orthographic projection point of the central point of the mirror surface in the virtual scene when the head of the user is not offset.
And A20, determining the real-time focusing point coordinates of the pupil of the user in the virtual scene according to the first real-time offset information and the second real-time offset information.
In this embodiment, the first real-time offset information and the second real-time offset information are both expressed in the VR visual space. Since the VR visual space is a virtual northern hemisphere surface, step A20 includes:
(1) determining the real-time longitude coordinate of the real-time focusing point coordinate according to the first longitude coordinate of the first real-time offset information and the second longitude coordinate of the second real-time offset information;
(2) determining the real-time latitude coordinate of the real-time focusing point coordinate according to the first latitude coordinate of the first real-time offset information and the second latitude coordinate of the second real-time offset information.
Specifically, due to head movements of the user, the virtual northern-hemisphere area corresponding to the VR lens area in the VR visual space changes, so the user observes different virtual scene areas in real time with different head postures; that is, the real-time visual area of the virtual scene changes with the user's head movement. In addition, even when the user's head does not move, the user's pupil still shifts; that is, within the same real-time visual area, the user may pay attention to pictures at different positions because of pupil shift. Therefore, in order to accurately determine the focus center of the user's pupil in the virtual scene, the first real-time offset information of the pupil and the second real-time offset information of the lens must be determined jointly.
For example, the virtual reality device may track the first real-time offset information of the pupil through the eyeball tracking device, and acquire the change state of the VR lens relative to the virtual scene through the attitude sensor.
Referring to fig. 6, as the user rotates the eye, the pupil focus is offset relative to the VR lens. A coordinate system is defined on the mirror surface of the VR lens with the center of the VR mirror surface as the origin (0, 0), the horizontal axis being longitude J and the vertical axis being latitude W. The focusing intersection point of the pupil on the VR lens surface has the coordinates (J_eye, W_eye), and the offset of this point from the center of the VR mirror surface can be represented by these coordinates, i.e. the first real-time offset information. J_eye is the first longitude coordinate and W_eye is the first latitude coordinate.
In this step, with the duration of each frame as the granularity, the first real-time offset information of the pupil relative to the VR lens is collected once per frame through the eyeball tracking device.
Taking the central point (0, hx) of the main stage in the virtual scene as an anchor point, and with the duration of each frame as the granularity, the offset of the VR lens relative to the main stage of the virtual scene is collected once per frame; this is the second real-time offset information. The intersection point of the extension line from the viewing-angle central point through the VR lens central point with the VR virtual-scene sphere is called the VR central projection point, whose longitude and latitude on the virtual-scene sphere are (J_vr, W_vr); when the VR central projection point coincides with the central point (0, hx) of the main stage area, the second real-time offset information is (0, 0). J_vr is the second longitude coordinate and W_vr is the second latitude coordinate.
When the user's head movement causes the VR lens to shift, the VR central projection point is located at (J_vr, W_vr) in the VR visual space, and the difference between the longitude and latitude of the VR central projection point and those of the central point of the main stage area is called the second real-time offset information (Δj, Δw):
Δj = (VR central projection point longitude) − (main stage area central point longitude) = J_vr − 0 = J_vr;
Δw = (VR central projection point latitude) − (main stage area central point latitude) = W_vr − hx.
Specifically, referring to fig. 7, the real-time focus point coordinates of the pupil of the user in the virtual scene are determined according to the first real-time offset information and the second real-time offset information.
After the first real-time offset information and the second real-time offset information are integrated, the real-time focusing point coordinates (J_ua, W_ua) of the user's pupil can be obtained with the frame duration as the granularity:
real-time focusing point coordinate = main stage area center + second real-time offset information (VR lens offset) + first real-time offset information (pupil offset on the lens);
longitude of the real-time focusing point coordinate: J_ua = J_vr + J_eye;
latitude of the real-time focusing point coordinate: W_ua = W_vr − hx + W_eye.
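As an illustrative aid rather than the patent's implementation, the coordinate combination above can be written out in a few lines of Python; the function name and parameter names are hypothetical:

def focus_point(j_eye, w_eye, j_vr, w_vr, hx):
    """Combine the pupil offset (J_eye, W_eye) on the lens with the lens
    offset expressed through the VR central projection point (J_vr, W_vr)
    and the main stage center (0, hx) to obtain the real-time focusing
    point (J_ua, W_ua) in the VR visual space."""
    j_ua = j_vr + j_eye          # longitude: J_ua = J_vr + J_eye
    w_ua = (w_vr - hx) + w_eye   # latitude:  W_ua = W_vr - hx + W_eye
    return j_ua, w_ua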
In this embodiment, based on the above steps, the real-time focusing point coordinate of the focus center of the user's pupil in the VR visual space can be determined by combining the first real-time offset information of the pupil with the second real-time offset information of the VR lens caused by the user's head movement. This makes it possible to accurately determine that the user pays particular attention to a certain video character in the currently played video, and then to display a close-up picture of that character in the real-time visual area of the virtual reality device, thereby providing a personalized virtual content close-up service and improving the user's viewing experience.
And S102, determining a target video role focused by the user from at least one video role according to the real-time focusing point coordinate and the real-time coordinate set of all the video roles.
In each frame, the distribution area of each video character's dynamic figure in the VR visual space is determined, namely its real-time coordinate set: the function F(w, j). Thus, by judging whether (J_ua, W_ua) is located within each video character's F(w, j), the target video character concerned by the user can be determined from the at least one video character.
Specifically, step S102 includes:
and step B10, comparing the real-time focusing point coordinates with the real-time coordinate sets of all video roles, and screening out a target real-time coordinate set comprising the real-time focusing point coordinates.
And step B20, determining the target video role concerned by the user according to the target real-time coordinate set.
Specifically, the real-time focusing point coordinate determined at a given moment is (J_ua, W_ua), and the real-time coordinate set of a certain video character in the corresponding playing frame of the VR video is F(w, j). The real-time focusing point coordinate (J_ua, W_ua) is compared with the real-time coordinate set F(w, j) of that video character in the corresponding frame to judge whether (J_ua, W_ua) falls within the character's F(w, j): if it does not, F(w, j) does not contain the real-time focusing point coordinate (J_ua, W_ua); if it does, F(w, j) includes it. In one embodiment, whether the real-time focusing point coordinate (J_ua, W_ua) falls within a certain video character's F(w, j) may be determined by the ray method.
Referring to FIG. 4, to determine whether a point (J_ua, W_ua) falls within F(w, j), a ray is emitted eastward from the point (J_ua, W_ua) along the same latitude and its intersections with F(w, j) are counted: if the number of intersection points is odd, the point is judged to fall inside F(w, j); otherwise it falls outside. For example, if a point P1 (w1, j1) emits a ray eastward at the same latitude and intersects F(w, j) at 2 points, an even number, it falls outside F(w, j); if a point P2 (w2, j2) emits such a ray and intersects F(w, j) at only 1 point, it falls within F(w, j).
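A minimal sketch of the ray method just described, i.e. the even-odd intersection test, assuming the region F(w, j) is approximated by a polygon of (latitude, longitude) vertices; none of the names below come from the patent:

def falls_within(point, polygon):
    """Cast a ray eastward (increasing longitude) from `point` = (w, j)
    and count crossings with the polygon edges; an odd count means the
    point lies inside the region F(w, j)."""
    w, j = point
    inside = False
    n = len(polygon)
    for i in range(n):
        w1, j1 = polygon[i]
        w2, j2 = polygon[(i + 1) % n]
        if (w1 > w) != (w2 > w):  # edge straddles the point's latitude
            # longitude at which the edge crosses this latitude
            j_cross = j1 + (w - w1) * (j2 - j1) / (w2 - w1)
            if j_cross > j:       # crossing lies east of the point
                inside = not inside
    return inside

For instance, falls_within((30.0, 15.0), [(20, 10), (40, 10), (40, 20), (20, 20)]) returns True, since the eastward ray crosses the region boundary exactly once.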
For example, suppose the VR video is a concert VR video featuring 7 idol stars and thus includes 7 video characters, among them two idols with large fan bases, A and B, and the user prefers idol star A. In each frame, idol star A and idol star B have different standing positions and dance gestures in the virtual scene. If at a certain moment the real-time focusing point coordinate of the user's pupil is (J_ua, W_ua), while the real-time coordinate set of idol star A's picture is FA(w, j) and that of idol star B's picture is FB(w, j), then when the real-time focusing point coordinate (J_ua, W_ua) falls within the real-time coordinate set FA(w, j), it is judged that the target video character concerned by the user is idol star A.
And step S103, displaying a close-up picture of the target video character in the real-time visual area of the virtual scene.
Referring to fig. 5, after the user's favorite target video character in a certain video is determined, a personalized close-up picture service can be provided for the user, that is, a close-up picture of the target video character is displayed in the current VR visual space. To ensure that the user can view the close-up picture of the target video character, the close-up picture is presented in the real-time visual area of the VR visual space.
The VR video can be edited in advance, and the dynamic image of each video character in each frame can be extracted to produce close-up pictures. The close-up pictures are stored, and when the user's favorite target video character is determined, the portion of that character's close-up picture matching the currently playing frame is extracted and played.
Based on the above steps, this embodiment determines, from the real-time focusing point coordinates of the user's pupil, that the user pays particular attention to a certain video character in the currently played video, and displays a close-up picture of that character in the real-time visible area of the virtual reality device, thereby providing a personalized virtual content close-up service and improving the user's viewing experience.
As one embodiment, a second embodiment of the screen display method of the present invention is proposed on the basis of the first embodiment of the screen display method of the present invention. Referring to fig. 8, fig. 8 is a flowchart illustrating a second embodiment of a screen display method according to the present invention.
In this embodiment, the screen display method includes the steps of:
step S201, when a virtual reality device presents a virtual scene corresponding to a current video, determining real-time focusing point coordinates of a pupil of a user in the virtual scene; wherein the current video comprises at least one video character, each video character having a set of real-time coordinates in the virtual scene, the set of real-time coordinates being determined according to the position of each video character in a different video frame.
Step S202, determining a target video role concerned by the user from the at least one video role according to the real-time focusing point coordinate and the real-time coordinate set of all the video roles.
And step S203, recording the focusing time length of the target video role continuously concerned by the user.
The virtual reality device can thus use a timer to accumulate, frame by frame, the focusing duration for which the real-time focusing point coordinates of the user's pupil stay on the dynamic character image of the target video character. The focusing duration is the continuously accumulated time during which the user's pupil focuses on the target video character's dynamic image while the timer is not reset.
Specifically, the focusing duration can be obtained in the following manner.
In any group of two consecutive video frames, if the second target video character corresponding to the current frame differs from the first target video character corresponding to the previous frame, timing is started to record the focusing duration for which the user currently pays attention to the second target video character, and timing is stopped when the third target video character corresponding to any subsequent video frame differs from the second target video character.
Specifically, during recording, in any group of two consecutive video frames, if the real-time focusing point coordinate of the user's pupil in the current frame falls into the real-time coordinate set of a certain video character while the coordinate in the previous frame does not, the timer is started. It is worth mentioning that only one timer in the virtual reality device runs at a time; that is, the real-time focusing point coordinates of the user's pupil fall into the real-time coordinate set of only one video character.
When the VR video is continuously played, in any one group of subsequent continuous video frames, if the real-time focusing point coordinate of the pupil of the user in the frame falls into the real-time coordinate set of a certain video role and the real-time focusing point coordinate of the pupil of the user in the previous frame also falls into the real-time coordinate set of the video role, the timer increases the time length of one frame and records the focusing time length.
When the VR video is continuously played, in any one group of subsequent continuous two video frames, if the real-time focusing point coordinate of the pupil of the user in the frame does not fall into the real-time coordinate set of a certain video role and the real-time focusing point coordinate of the pupil of the user in the previous frame falls into the real-time coordinate set of the certain video role, the timer is stopped, and the focusing time length is stored to record the accumulated value of the focusing time length.
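The frame-by-frame timer behaviour described in the preceding paragraphs can be sketched as follows; the helper and its (character, duration) state tuple are hypothetical, not structures prescribed by the patent:

def update_focus_timer(state, char_of_frame, frame_duration):
    """Per-frame update of the focusing-duration timer. `state` is a
    (timed character, elapsed duration) pair; `char_of_frame` is the
    character whose region F(w, j) contains this frame's focus point,
    or None if the focus point falls inside no character's region."""
    timed_char, duration = state
    if char_of_frame is None:
        return (None, 0.0)                              # focus left all characters: stop and reset
    if char_of_frame == timed_char:
        return (timed_char, duration + frame_duration)  # same character: keep timing
    return (char_of_frame, frame_duration)              # new character: restart the timer

When the timer stops, the elapsed duration would be stored so it can contribute to the accumulated focusing duration used in the third embodiment.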
And S204, if the focusing time length meets the preset condition, displaying a close-up picture of the target video character in the real-time visual area of the virtual scene.
In this embodiment, the display precondition of the close-up picture is defined by the focusing duration, so that a close-up picture is displayed in the virtual scene only when the preset condition is met. This prevents close-up pictures, or close-ups of different video characters, from appearing so frequently that they affect the user's viewing experience.
Specifically, step S204 includes:
and C10, if the focusing time length is greater than or equal to the first preset threshold value, displaying a low-magnification close-up picture of the target video character in the currently played video frame in the real-time visible area of the virtual scene.
The first preset threshold is N. If the timer in the virtual reality device reaches N, the virtual reality device is triggered to play a low-magnification close-up shot of the target video character for the current frame, with duration t1.
Step C20, if the focusing duration is greater than or equal to a second preset threshold, displaying a high-magnification close-up picture of the target video role in the currently played video frame in the real-time visible area of the virtual scene; and the second preset threshold is greater than the first preset threshold.
The second preset threshold is M, and M > N. When the timer in the virtual reality device reaches M, the virtual reality device is triggered to play a high-magnification close-up shot of the target video character for the current frame, with duration t2. t1 and t2 may be equal or different; this embodiment is not limited thereto.
In this embodiment, the difference between M and N may be greater than t1. In that case, after the low-magnification close-up finishes playing, if the user continues to pay attention to the target video character, the timer keeps counting until it reaches M, and the high-magnification close-up of the currently played frame can then be played, meeting the user's demand for attention to the target video character.
Alternatively, the difference between M and N may be smaller than t1. In that case, if the user continues to pay attention to the target video character while the low-magnification close-up is playing, the timer keeps counting until it reaches M, and the high-magnification close-up of the currently played frame can then be played.
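An illustrative sketch of the stepped threshold check using N and M described above; the function and parameter names are assumptions:

def close_up_level(duration, n_threshold, m_threshold):
    """Map the current focusing duration to a close-up magnification step;
    assumes m_threshold > n_threshold, as the embodiment requires."""
    if duration >= m_threshold:
        return "high-magnification"
    if duration >= n_threshold:
        return "low-magnification"
    return None  # below both thresholds: keep the normal virtual scene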
It should be noted that the close-up picture is a close-up of the image displayed in the currently playing video frame; that is, for the target video character, the dynamic character image remains continuous during viewing, ensuring the user's viewing experience.
For ease of understanding, the following is exemplified:
if the VR video is a concert VR video composed of 7 idol stars, the VR video includes 7 video characters. The concert video is 5 minutes in total, if the video starts from 1:50, the fact that the user starts to continuously pay attention to the dance gesture picture of the idol star A is monitored, and when the video reaches the 2:05, if the user still continuously pays attention to the dance gesture picture of the idol star A, the virtual display equipment is triggered to start to play the low-magnification close-up picture of the display image of the idol star A in each frame in the time period of 2:05-2:10 in a superposition mode in the virtual scene. At 2:15, the virtual display device is triggered to begin to play in superimposition the high-magnification close-up view of the display image of the even star a in each frame for the time period 2:15-3:00 in the virtual scene.
In this embodiment, based on the above steps, the user's tracking attention to a target video character can be judged from the dwell time of the user's pupil focus on that character's display picture, thereby determining the user's particular attention to and fondness for a certain video character. A personalized, stepped picture close-up service is then provided according to how long the user has paid attention to the video character, meeting the user's viewing demand for the target video character and improving the viewing experience.
As one embodiment, a third embodiment of the screen display method of the present invention is proposed on the basis of the first and second embodiments of the screen display method of the present invention. Referring to fig. 9, fig. 9 is a flowchart illustrating a screen display method according to a third embodiment of the present invention.
In this embodiment, the screen display method includes:
step S301, when the virtual reality device presents a virtual scene corresponding to a current video, determining real-time focusing point coordinates of a pupil of a user in the virtual scene; wherein the current video comprises at least one video character, each video character having a set of real-time coordinates in the virtual scene, the set of real-time coordinates being determined according to the position of each video character in a different video frame.
Step S302, determining a target video role concerned by the user from the at least one video role according to the real-time focusing point coordinate and the real-time coordinate set of all the video roles.
Step S303, counting the accumulated focusing time length of the target video role concerned by the user;
and step S304, if the accumulated focusing time is greater than a third preset threshold, displaying the display image feature of the target video character in a real-time visual area of the virtual scene.
The accumulated focusing duration is the sum of all focusing durations during which the user focuses on the same target video character while watching the current video. For the focusing duration, reference may be made to the description of the above embodiments, which is not repeated here.
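As a non-authoritative sketch of this accumulation step, assuming a per-character running total that is fed by the focusing-duration timer each time it stops:

from collections import defaultdict

class FocusAccumulator:
    """Sums every finished focusing duration per video character and
    reports when the accumulated total exceeds the third preset threshold."""

    def __init__(self, third_threshold):
        self.third_threshold = third_threshold
        self.totals = defaultdict(float)

    def add_duration(self, character, duration):
        """Called whenever a focusing-duration timer stops for `character`;
        returns True once the accumulated focusing duration is exceeded."""
        self.totals[character] += duration
        return self.totals[character] > self.third_threshold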
The display-image close-up may be a single-person close-up produced from all display images of the target video character in the current video. In that case, the virtual reality device can jump to playing the single-person close-up video of the target video character, meeting the user's viewing demand for that character.
For ease of understanding, the following is exemplified:
if the VR video is a concert VR video composed of 7 idol stars, the VR video includes 7 video characters. The concert video is 5 minutes in total, if the video starts from 1:50, the fact that the user starts to continuously pay attention to the dance gesture picture of the idol star A is monitored, and when the video reaches the 2:05, if the user still continuously pays attention to the dance gesture picture of the idol star A, the virtual display equipment is triggered to start to play the low-magnification close-up picture of the display image of the idol star A in each frame in the time period of 2:05-2:10 in a superposition mode in the virtual scene. At 2:15, the virtual display device is triggered to begin to play in superimposition the high-magnification close-up view of the display image of the even star a in each frame for the time period 2:15-3:00 in the virtual scene. If the ratio is 3:05, the virtual display device continuously monitors the accumulated value of the focusing time of the user on the even image star A, namely the accumulated focusing time is more than 1 minute, the VR video can be paused to be played, and the single person feature of the even image star A in the VR video of the concert is played.
In this embodiment, based on the above steps, the user's tracking attention to a target video character can be judged from the dwell time of the user's pupil focus on that character's display picture, thereby determining the user's particular attention to and fondness for a certain video character. A personalized, stepped picture close-up service is then provided according to how long the user has paid attention to the video character, meeting the user's viewing demand for the target video character and improving the viewing experience.
In addition, the embodiment of the invention also provides a first embodiment of the picture display device. Referring to fig. 10, fig. 10 is a functional block diagram of the first embodiment of the screen display device. The picture display device is used for virtual reality equipment.
In this embodiment, the screen display device includes:
the real-time coordinate determination module 10 is configured to determine a real-time focus point coordinate of a pupil of a user in a virtual scene when the virtual reality device presents the virtual scene corresponding to a current video; wherein the current video comprises at least one video character, each video character having a set of real-time coordinates in the virtual scene, the set of real-time coordinates determined from the position of each video character in a different video frame;
a target role determination module 20, configured to determine, according to the real-time focus point coordinate and the real-time coordinate set of all the video roles, a target video role focused by the user from the at least one video role;
a screen content display module 30 for displaying a close-up screen of the target video character in the real-time viewable area of the virtual scene.
In one embodiment, the real-time coordinate determination module 10 includes:
the information acquisition unit is used for acquiring first real-time offset information of pupils of a user and second real-time offset information of lenses of the virtual reality equipment; the first real-time offset information is offset information of a real-time focusing point of the pupil of the user on the lens relative to a mirror surface central point of the lens; the second real-time offset information is offset information of the mirror surface central point relative to a reference point in the virtual scene, and the reference point is an orthographic projection point of the mirror surface central point in the virtual scene when the head of the user is not offset.
And the coordinate determination unit is used for determining the real-time focusing point coordinates of the pupils of the user in the virtual scene according to the first real-time offset information and the second real-time offset information.
In one embodiment, the screen display apparatus further includes:
and the time length recording module is used for recording the focusing time length of the target video role concerned by the user.
The picture content display module 30 is configured to display a close-up picture of the target video character in the real-time visual area of the virtual scene if the focusing duration satisfies a preset condition.
In one embodiment, the screen content display module 30 includes:
a low-magnification close-up unit, configured to display, in the real-time visual area of the virtual scene, a low-magnification close-up picture in which the target video character displays an image in a currently playing video frame if the focusing duration is greater than or equal to a first preset threshold;
a high-magnification close-up unit, configured to display, in the real-time visual area of the virtual scene, a high-magnification close-up picture of the image displayed by the target video character in the currently playing video frame if the focusing duration is greater than or equal to a second preset threshold; wherein the second preset threshold is greater than the first preset threshold.
In one embodiment, the target role determination module 20 includes:
the coordinate comparison unit is used for comparing the real-time focusing point coordinate with the real-time coordinate sets of all the video roles so as to screen out a target real-time coordinate set comprising the real-time focusing point coordinate;
a role determination unit, configured to determine the target video character concerned by the user according to the target real-time coordinate set.
Other embodiments or specific implementations of the image display apparatus of the present invention can refer to the above method embodiments, and are not described herein again.
Furthermore, an embodiment of the present invention also provides a computer program product including executable program code which, when executed by a processor, implements the steps of the screen display method described above. For technical details not disclosed in the embodiments of the computer program product, reference is made to the description of the method embodiments of the present application; the beneficial effects of the same method are likewise not repeated here. By way of example, the program instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site, or distributed across multiple sites and interconnected by a communication network.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where units illustrated as separate components may or may not be physically separate, and components illustrated as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus necessary general-purpose hardware, or by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function performed by a computer program can also be implemented by corresponding hardware, and the specific hardware structure used to implement the same function may take various forms, such as an analog circuit, a digital circuit, or a dedicated circuit. For the present invention, however, a software implementation is the preferred embodiment. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. All equivalent structural or process modifications made by using the contents of the present specification and the accompanying drawings, or applied directly or indirectly in other related technical fields, are likewise included in the scope of the present invention.

Claims (10)

1. A screen display method for a virtual reality device, the screen display method comprising:
when the virtual reality device presents a virtual scene corresponding to the current video, determining the real-time focusing point coordinates of the pupil of the user in the virtual scene; wherein the current video comprises at least one video character, each video character having a set of real-time coordinates in the virtual scene, the set of real-time coordinates being determined from the position of each video character in different video frames;
determining a target video character concerned by the user from the at least one video character according to the real-time focusing point coordinates and the real-time coordinate sets of all the video characters;
displaying a close-up picture of the target video character in a real-time visual area of the virtual scene.
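Taken together, claim 1 describes a three-step per-frame pipeline: locate the focusing point, match it to a character, and display the close-up. The following minimal sketch renders that pipeline with the three steps passed in as callables, since the patent discloses no concrete interfaces:

```python
from typing import Callable, Optional

# Illustrative per-frame pipeline for claim 1; every callable is an
# assumed stand-in for functionality the patent describes only in prose.
def display_close_up_for_frame(
    get_focus_point: Callable[[], tuple[float, float]],
    find_target: Callable[[tuple[float, float]], Optional[str]],
    show_close_up: Callable[[str], None],
) -> None:
    focus = get_focus_point()    # step 1: pupil's real-time focusing point
    target = find_target(focus)  # step 2: match against character coordinate sets
    if target is not None:
        show_close_up(target)    # step 3: close-up in the real-time visual area
```

The earlier find_target_character sketch would serve as the second callable here.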
2. The screen display method according to claim 1, wherein the step of determining real-time focal point coordinates of the user's pupil in the virtual scene comprises:
acquiring first real-time offset information of the pupil of the user and second real-time offset information of a lens of the virtual reality device; wherein the first real-time offset information is offset information of a real-time focusing point of the pupil of the user on the lens relative to a mirror surface central point of the lens; the second real-time offset information is offset information of the mirror surface central point relative to a reference point in the virtual scene, and the reference point is an orthographic projection point of the mirror surface central point in the virtual scene when the head of the user is not offset;
and determining the real-time focusing point coordinates of the pupil of the user in the virtual scene according to the first real-time offset information and the second real-time offset information.
3. The screen display method according to claim 2, wherein the step of determining the real-time focusing point coordinates of the pupil of the user in the virtual scene according to the first real-time offset information and the second real-time offset information comprises:
determining a real-time longitude coordinate of the real-time focusing point coordinate according to a first longitude coordinate of the first real-time offset information and a second longitude coordinate of the second real-time offset information;
and determining the real-time latitude coordinate of the real-time focusing point coordinate according to the first latitude coordinate of the first real-time offset information and the second latitude coordinate of the second real-time offset information.
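Claims 2 and 3 decompose the focusing point computation into two offsets, the pupil's offset on the lens and the lens centre's offset from the scene reference point, combined component-wise in longitude and latitude. A minimal sketch follows; the assumption that "determining according to" the two coordinates means component-wise addition, and that both offsets share the scene's angular units, goes beyond what the claims state:

```python
from dataclasses import dataclass

# Hypothetical sketch of claims 2-3: the real-time focusing point as the
# component-wise combination of the two real-time offsets.
@dataclass
class Offset:
    longitude: float  # angular coordinate in the panoramic virtual scene
    latitude: float

def focus_point(pupil_offset: Offset, lens_offset: Offset) -> Offset:
    """Combine the pupil's offset on the lens (first real-time offset
    information) with the lens centre's offset from the scene reference
    point (second real-time offset information) into scene coordinates."""
    return Offset(
        longitude=pupil_offset.longitude + lens_offset.longitude,
        latitude=pupil_offset.latitude + lens_offset.latitude,
    )

# Example: head turned 30 degrees right while the pupil looks 5 degrees further right.
print(focus_point(Offset(5.0, -2.0), Offset(30.0, 10.0)))  # Offset(longitude=35.0, latitude=8.0)
```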
4. The screen display method according to claim 1, wherein, prior to the step of displaying the close-up picture of the target video character in the real-time visual area of the virtual scene, the method further comprises:
recording the focusing duration for which the user pays attention to the target video character;
the step of displaying a close-up picture of the target video character in the real-time visual area of the virtual scene comprises:
and if the focusing duration satisfies a preset condition, displaying a close-up picture of the target video character in the real-time visual area of the virtual scene.
5. The screen display method according to claim 4, wherein the step of displaying a close-up picture of the target video character in the real-time visual area of the virtual scene if the focusing duration satisfies a preset condition comprises:
if the focusing duration is greater than or equal to a first preset threshold, displaying a low-magnification close-up picture of the image displayed by the target video character in the currently played video frame in the real-time visual area of the virtual scene;
if the focusing duration is greater than or equal to a second preset threshold, displaying a high-magnification close-up picture of the image displayed by the target video character in the currently played video frame in the real-time visual area of the virtual scene; wherein the second preset threshold is greater than the first preset threshold.
6. The screen display method according to claim 1, wherein, prior to the step of displaying the close-up picture of the target video character in the real-time visual area of the virtual scene, the method further comprises:
counting the accumulated focusing duration for which the user pays attention to the target video character;
the step of displaying a close-up picture of the target video character in the real-time visual area of the virtual scene if the focusing duration satisfies a preset condition comprises:
and if the accumulated focusing duration is greater than a third preset threshold, displaying a close-up picture of the display image of the target video character in the real-time visual area of the virtual scene.
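Claims 4 and 6 differ only in how the duration is measured: claim 4 records the current continuous focusing duration, while claim 6 accumulates focusing time across separate glances. A sketch of both counters follows; the per-frame tick structure and the reset-on-look-away behaviour are assumptions:

```python
# Hypothetical sketch contrasting the two duration measures in claims 4 and 6.
class FocusTimer:
    def __init__(self) -> None:
        self.continuous_s = 0.0   # claim 4: current uninterrupted focusing duration
        self.accumulated_s = 0.0  # claim 6: total focusing duration across glances

    def tick(self, focused_on_target: bool, dt_s: float) -> None:
        """Advance the counters by one frame of dt_s seconds."""
        if focused_on_target:
            self.continuous_s += dt_s
            self.accumulated_s += dt_s
        else:
            # Assumption: looking away resets only the continuous counter.
            self.continuous_s = 0.0

# Example: four one-second frames, with one glance away in between.
timer = FocusTimer()
for focused in [True, True, False, True]:
    timer.tick(focused, dt_s=1.0)
# timer.continuous_s == 1.0, timer.accumulated_s == 3.0
```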
7. The screen display method according to any one of claims 1 to 6, wherein the step of determining a target video character concerned by the user from the at least one video character according to the real-time focusing point coordinates and the real-time coordinate sets of all the video characters comprises:
comparing the real-time focusing point coordinates with the real-time coordinate sets of all the video characters, and screening out a target real-time coordinate set comprising the real-time focusing point coordinates;
and determining the target video character concerned by the user according to the target real-time coordinate set.
8. A screen display apparatus for a virtual reality device, the screen display apparatus comprising:
a real-time coordinate determination module, configured to determine real-time focusing point coordinates of the pupil of a user in a virtual scene when the virtual reality device presents the virtual scene corresponding to a current video; wherein the current video comprises at least one video character, each video character having a set of real-time coordinates in the virtual scene, the set of real-time coordinates being determined from the position of each video character in different video frames;
a target character determination module, configured to determine a target video character concerned by the user from the at least one video character according to the real-time focusing point coordinates and the real-time coordinate sets of all the video characters; and
a screen content display module, configured to display a close-up picture of the target video character in a real-time visual area of the virtual scene.
9. A virtual reality device, comprising:
a virtual reality device body;
a memory;
a processor; and
a screen display program stored on the memory and executable on the processor, the screen display program being configured to implement the steps of the screen display method according to any one of claims 1 to 7.
10. A computer program product comprising executable program code, wherein the program code, when executed by a processor, implements the steps of the screen display method according to any one of claims 1 to 7.
CN202110854543.XA 2021-07-27 2021-07-27 Screen display method, screen display device, virtual reality equipment and program product Pending CN113534959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110854543.XA CN113534959A (en) 2021-07-27 2021-07-27 Screen display method, screen display device, virtual reality equipment and program product

Publications (1)

Publication Number Publication Date
CN113534959A true CN113534959A (en) 2021-10-22

Family

ID=78121110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110854543.XA Pending CN113534959A (en) 2021-07-27 2021-07-27 Screen display method, screen display device, virtual reality equipment and program product

Country Status (1)

Country Link
CN (1) CN113534959A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160085301A1 (en) * 2014-09-22 2016-03-24 The Eye Tribe Aps Display visibility based on eye convergence
US20170285737A1 (en) * 2016-03-31 2017-10-05 Verizon Patent And Licensing Inc. Methods and Systems for Gaze-Based Control of Virtual Reality Media Content
CN107957775A (en) * 2016-10-18 2018-04-24 阿里巴巴集团控股有限公司 Data object exchange method and device in virtual reality space environment
CN112578911A (en) * 2016-12-06 2021-03-30 美国景书公司 Apparatus and method for tracking head and eye movements
CN109613982A (en) * 2018-12-13 2019-04-12 叶成环 Wear-type AR shows the display exchange method of equipment
CN111068309A (en) * 2019-12-04 2020-04-28 网易(杭州)网络有限公司 Display control method, device, equipment, system and medium for virtual reality game
CN111131904A (en) * 2019-12-31 2020-05-08 维沃移动通信有限公司 Video playing method and head-mounted electronic equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114546118A (en) * 2022-02-21 2022-05-27 国网河北省电力有限公司保定供电分公司 Safety prompting method, device, medium and equipment based on VR technology

Similar Documents

Publication Publication Date Title
TWI669635B (en) Method and device for displaying barrage and non-volatile computer readable storage medium
JP6518582B2 (en) Information processing apparatus and operation reception method
CN109741463B (en) Rendering method, device and equipment of virtual reality scene
CN106162203B (en) Panoramic video playback method, player and wear-type virtual reality device
US20200241731A1 (en) Virtual reality vr interface generation method and apparatus
US20150331242A1 (en) Head mounted display device displaying thumbnail image and method of controlling the same
CN103765346A (en) Eye gaze based location selection for audio visual playback
CN106774821B (en) Display method and system based on virtual reality technology
US11521346B2 (en) Image processing apparatus, image processing method, and storage medium
JP2014095853A (en) Image processor, projector, image processing method, and program
WO2017181588A1 (en) Method and electronic apparatus for positioning display page
US20190347864A1 (en) Storage medium, content providing apparatus, and control method for providing stereoscopic content based on viewing progression
WO2020264149A1 (en) Fast hand meshing for dynamic occlusion
CN107065164B (en) Image presentation method and device
US11831853B2 (en) Information processing apparatus, information processing method, and storage medium
US11477433B2 (en) Information processor, information processing method, and program
CN113534959A (en) Screen display method, screen display device, virtual reality equipment and program product
US20200082603A1 (en) Information processing apparatus, information processing method and storage medium
CN114401362A (en) Image display method and device and electronic equipment
WO2021015035A1 (en) Image processing apparatus, image delivery system, and image processing method
JP2017097854A (en) Program, recording medium, content providing device, and control method
CN106921890A (en) A kind of method and apparatus of the Video Rendering in the equipment for promotion
US11778155B2 (en) Image processing apparatus, image processing method, and storage medium
JP2018190380A (en) Program, system, and method for providing stereoscopic video of virtual space
Meijers Panoramic Perspectives-Evaluating spatial widgets in immersive video through heuristic evaluation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination