CN113888725A - Virtual reality interface loading method, device, equipment and medium - Google Patents

Virtual reality interface loading method, device, equipment and medium

Info

Publication number
CN113888725A
CN113888725A
Authority
CN
China
Prior art keywords
target object
interface
virtual reality
mask
reality interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111164196.4A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Urban Network Neighbor Information Technology Co Ltd
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd filed Critical Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202111164196.4A priority Critical patent/CN113888725A/en
Publication of CN113888725A publication Critical patent/CN113888725A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/16Real estate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Tourism & Hospitality (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a loading method, apparatus, device, medium, and computer program product for a virtual reality interface. The loading method of the virtual reality interface comprises the following steps: receiving a display request of a virtual reality interface of a target object; in response to receiving the display request, acquiring a plurality of images of the target object; generating a mask interface including a stereoscopic view of the target object based on the plurality of images, and displaying the mask interface; loading and rendering a virtual reality interface of the target object based on the plurality of images during the presentation of the mask interface; and after the rendering of the virtual reality interface is completed, hiding the mask interface and displaying the virtual reality interface of the target object.

Description

Virtual reality interface loading method, device, equipment and medium
Technical Field
The present disclosure relates to the field of virtual reality, and more particularly, to a method, an apparatus, a device, a medium, and a computer program product for loading a virtual reality interface.
Background
Nowadays, Virtual Reality (VR) technology is widely used in many fields. For example, in real estate brokerage, users can employ virtual reality technology to view a three-dimensional panorama of a housing listing and virtually roam from room to room, which brings a near-real viewing experience to the user.
In a conventional device or application that supports VR presentation, when a user chooses to view the VR interface of a target object (such as a certain room), a web page view (webview) must first be created, and the VR interface of the target object is then loaded and rendered in that webview. Both operations take processing time, and in a poor network environment this time grows further. The result is a long white-screen period before the VR interface is finally presented to the user, which seriously degrades the user experience.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides a loading method, an apparatus, a device, a medium, and a computer program product for a virtual reality interface.
According to an aspect of the embodiments of the present disclosure, a loading method of a virtual reality interface is provided, including: receiving a display request of a virtual reality interface of a target object; in response to receiving the display request, obtaining a plurality of images of the target object; generating a mask interface including a stereoscopic view of the target object based on the plurality of images and displaying the mask interface; loading and rendering the virtual reality interface of the target object based on the plurality of images during presentation of the mask interface; and after the rendering of the virtual reality interface is completed, hiding the mask interface and displaying the virtual reality interface of the target object.
According to an example of an embodiment of the present disclosure, the plurality of images represent views of the target object in different orientations.
According to an example of an embodiment of the present disclosure, the mask interface may be rotated to show views of the target object in different orientations.
According to an example of an embodiment of the present disclosure, the virtual reality interface is scalable and rotatable to present virtual reality views of the target object in different orientations at different field angles, and has one or more user interactable modules.
According to an example of an embodiment of the present disclosure, wherein generating a mask interface including a stereoscopic view of the target object based on the plurality of images comprises: generating each of the stereoscopic views of the mask interface using each of the plurality of images.
According to an example of an embodiment of the present disclosure, wherein hiding the mask interface and revealing the virtual reality interface of the target object after the rendering of the virtual reality interface is completed comprises: after the rendering of the virtual reality interface is completed, acquiring visual parameters of the mask interface of the target object; and hiding the mask interface, and displaying the virtual reality interface of the target object based on the visual parameters, so that the visual effects of the target object in the displayed virtual reality interface and the target object in the mask interface are consistent.
According to an example of an embodiment of the present disclosure, the visual parameters comprise at least a current field angle, a current azimuth angle, a current rotation direction, and a current rotation speed of the target object in the mask interface.
According to an example of the embodiment of the present disclosure, presenting the virtual reality interface of the target object based on the visual parameters such that the visual effects of the target object in the presented virtual reality interface and in the mask interface are consistent comprises: displaying the target object in the virtual reality interface with the current field angle, the current azimuth angle, the current rotation direction, and the current rotation speed of the target object in the mask interface, so that the field angle, azimuth angle, rotation direction, and rotation speed of the target object in the virtual reality interface are consistent with those of the target object in the mask interface.
According to an example of the embodiment of the present disclosure, acquiring the plurality of images of the target object in response to receiving the presentation request comprises: in response to receiving the presentation request, preloading the plurality of images of the target object and storing them in a cache; and loading and rendering the virtual reality interface of the target object based on the plurality of images during presentation of the mask interface comprises: acquiring the plurality of images from the cache, and loading and rendering the virtual reality interface of the target object based on the plurality of images.
According to an example of an embodiment of the present disclosure, wherein the target object is a room of a building, the mask interface is a hexahedral interface representing a panoramic stereoscopic view of the target object.
According to another aspect of the embodiments of the present disclosure, there is provided a loading apparatus for a virtual reality interface, including: a receiving unit configured to receive a presentation request for a virtual reality interface of a target object; an acquisition unit configured to acquire a plurality of images of the target object in response to receiving the presentation request; a presentation unit configured to generate a mask interface including a stereoscopic view of the target object based on the plurality of images and present the mask interface; and a rendering unit configured to load and render the virtual reality interface of the target object based on the plurality of images during presentation of the mask interface, the presentation unit further configured to hide the mask interface and present the virtual reality interface of the target object after rendering of the virtual reality interface is completed.
According to an example of an embodiment of the present disclosure, wherein the presentation unit is further configured to: after the rendering of the virtual reality interface is completed, acquiring visual parameters of the mask interface of the target object; and hiding the mask interface, and displaying the virtual reality interface of the target object based on the visual parameters, so that the visual effects of the target object in the displayed virtual reality interface and the target object in the mask interface are consistent.
According to an example of an embodiment of the present disclosure, the visual parameters comprise at least a current field angle, a current azimuth angle, a current rotation direction, and a current rotation speed of the target object in the mask interface.
According to an example of an embodiment of the present disclosure, the presentation unit is further configured to: display the target object in the virtual reality interface with the current field angle, the current azimuth angle, the current rotation direction, and the current rotation speed of the target object in the mask interface, so that the field angle, azimuth angle, rotation direction, and rotation speed of the target object in the virtual reality interface are consistent with those of the target object in the mask interface.
According to another aspect of the embodiments of the present disclosure, there is provided a loading device of a virtual reality interface, including: one or more processors; and one or more memories having computer-readable instructions stored therein, which when executed by the one or more processors, cause the one or more processors to perform the methods of the various aspects described above.
According to another aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-readable instructions, which, when executed by a processor, cause the processor to perform the method according to any one of the above aspects of the present disclosure.
According to another aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer readable instructions which, when executed by a processor, cause the processor to perform the method according to any one of the above aspects of the present disclosure.
With the loading method, apparatus, device, medium, and computer program product of the virtual reality interface according to the above aspects of the embodiments of the present disclosure, the white-screen problem during loading and rendering of the virtual reality interface can be effectively solved: a mask interface for the target object is quickly generated and presented to the user first, the virtual reality interface of the target object is loaded and rendered in the background while the mask interface is displayed, and once rendering is complete the mask interface is hidden and the virtual reality interface is shown. At the same time, the smooth, seamless transition from the mask interface to the virtual reality interface means the user does not perceive the switch, which improves the user experience to the greatest extent.
Drawings
The above and other objects, features and advantages of the embodiments of the present disclosure will become more apparent by describing in more detail the embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 illustrates a flow diagram of a method of loading a virtual reality interface according to an embodiment of the disclosure;
FIG. 2 shows a schematic diagram of an example mask interface and virtual reality interface, in accordance with an embodiment of the present disclosure;
fig. 3 shows a schematic structural diagram of a loading device of a virtual reality interface according to an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of a loading device of a virtual reality interface according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a computer-readable storage medium according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. It is to be understood that the described embodiments are merely exemplary of some, and not all, of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without any inventive step, are intended to be within the scope of the present disclosure.
In a conventional device or application that supports VR presentation, when a user chooses to view the VR interface of a target object (such as a certain room), there is often a white-screen period before the VR interface is finally presented, during which the user sees only a blank screen while waiting. This white-screen time typically consists of two parts: the time required to create a web page view (webview), and the time required to load and render the VR interface of the target object within the webview. The white-screen time seriously affects the experience of a user viewing the VR interface; in a poor network environment it grows even longer, increasing the time the user must wait and further degrading the experience.
Therefore, the present disclosure provides a loading method, device, equipment, medium and computer program product for a virtual reality interface, which can effectively solve the problem of white screen time and greatly improve user experience.
Next, a loading method of a virtual reality interface according to an embodiment of the present disclosure is described with reference to fig. 1. Fig. 1 shows a flowchart of a loading method 100 of a virtual reality interface according to an embodiment of the present disclosure. The loading method 100 may be implemented in software, hardware, firmware, or any combination thereof, and may be performed by, for example, a smartphone, a tablet computer, a notebook computer, a desktop computer, a web server, a Personal Digital Assistant (PDA), a smart wearable device, or the like; the embodiments of the present disclosure impose no specific limitation here.
As shown in fig. 1, in step S110, a presentation request for a virtual reality interface of a target object is received. In the disclosed embodiment, the target object may be any object whose virtual reality interface is to be presented, for example, a room of a building or the like. For example, if a user desires to view a virtual reality interface of a building room, a presentation request may be sent through a user interface or control located on the device performing the loading method 100. It should be understood that the target object is taken as a building room as an example, but the embodiments of the present disclosure are not limited thereto, and the target object may be any other object that can use virtual reality technology to show a virtual reality interface thereof.
In step S120, in response to receiving the presentation request, a plurality of images of the target object are acquired. In the disclosed embodiments, the plurality of images of the target object may be a plurality of images representing views of the target object in different orientations; for example, the plurality of images may include a left view, a right view, a front view, a rear view, a top view, and a bottom view of the target object. In an example where the target object is a building room, the plurality of images may comprise six views of the room (left, right, front, rear, top, and bottom), which together cover all orientations of the room. In the embodiment of the present disclosure, the plurality of images of the target object may be acquired by a camera, for example, and stored in advance in a storage device or on a server; in response to receiving the presentation request, the images may then be acquired directly from the storage device or quickly loaded from the server.
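As an illustrative sketch of how the six orientation images of step S120 might be addressed on a server (the URL scheme, the `.jpg` extension, and names such as `buildImageRequests` are assumptions for illustration, not part of the patent):

```typescript
// Hypothetical sketch: one image URL per orientation of the target object.
type OrientationView = "left" | "right" | "front" | "back" | "top" | "bottom";

interface MaskImageSet {
  [view: string]: string;
}

// Build the set of image URLs to request for a target object (e.g. a room).
// The patent only says the images may be pre-stored on a storage device or
// server and fetched on request; the path layout here is assumed.
function buildImageRequests(baseUrl: string, targetId: string): MaskImageSet {
  const views: OrientationView[] = ["left", "right", "front", "back", "top", "bottom"];
  const set: MaskImageSet = {};
  for (const v of views) {
    set[v] = `${baseUrl}/${targetId}/${v}.jpg`;
  }
  return set;
}
```

The six URLs can then be fetched in parallel as soon as the presentation request arrives, before any webview exists.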
Then, in step S130, a mask interface including a stereoscopic view of the target object is generated based on the acquired plurality of images, and the mask interface is displayed. In embodiments of the present disclosure, each of the plurality of images of the target object may constitute one of the stereoscopic views of the mask interface; because the images represent views of the target object in different orientations, the mask interface can present a stereoscopic view of the target object. For example, where the plurality of images comprises six views, the mask interface may be a hexahedral interface representing a panoramic stereoscopic view of the target object, in which each of the six views constitutes one face of the hexahedron. The mask interface may rotate, by itself or under user manipulation, in different directions and at different speeds to present views of the target object in different orientations.
It should be noted that, although the above description is given by taking an example that the multiple views of the target object include 6 views and the mask interface is a hexahedral interface, the embodiments of the present disclosure are not limited thereto, the multiple views of the target object may include more or fewer views, and the mask interface generated based on the multiple views may also be any polyhedron or any interface with a three-dimensional structure.
Taking the example in which the target object is a building room, the plurality of images acquired in step S120 may include six views of the room: a left view, a right view, a front view, a rear view, a top view, and a bottom view. Based on these six views, a mask interface including a panoramic stereoscopic view of the room may be generated; the mask interface is a hexahedron, and the hexahedron may be rotated to show views of the room in various orientations.
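A minimal sketch of the hexahedron idea is mapping the six room views onto cube faces. The face ordering below follows the three.js `BoxGeometry` material convention (+x, -x, +y, -y, +z, -z); the patent does not prescribe any particular library or ordering, so this mapping is an assumption:

```typescript
// Hypothetical sketch: arrange six orientation views onto cube faces.
interface SixViews {
  left: string;
  right: string;
  front: string;
  back: string;
  top: string;
  bottom: string;
}

// Return the views in the order a cube renderer would consume them.
// Order assumed here: +x (right), -x (left), +y (top), -y (bottom),
// +z (front), -z (back), matching the three.js BoxGeometry convention.
function cubeFaceOrder(views: SixViews): string[] {
  return [views.right, views.left, views.top, views.bottom, views.front, views.back];
}
```

In an actual renderer, each entry would become a texture on one face of the rotating hexahedral mask.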
After the user sends a request for showing the virtual reality interface of the target object, for example, through a user interface or a control located on the device executing the loading method 100, a mask interface of the target object may be quickly generated through operations of steps S120 and S130 and presented to the user first, so as to avoid that only a white screen interface can be presented to the user before the virtual reality interface of the target object is rendered.
Meanwhile, in step S140, during the presentation of the mask interface, the virtual reality interface of the target object is loaded and rendered. As mentioned above, generating the virtual reality interface of the target object may include two processes, namely establishing a webview and then loading and rendering the virtual reality interface of the target object in the webview; both may be performed in step S140, for example by a processor of the device executing the loading method 100. Moreover, during the display of the mask interface, the loading and rendering of the virtual reality interface proceed in the background, so the display of the mask interface is not affected and the user does not perceive the loading and rendering process.
According to an example of the embodiment of the present disclosure, after the plurality of images of the target object are acquired in step S120, they may be stored in a cache; in step S140, the images may then be retrieved directly from the cache, and the virtual reality interface of the target object loaded and rendered based on them. By reusing the already-acquired images, the speed of generating the virtual reality interface can be further increased and loading efficiency improved.
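The cache-and-reuse idea can be sketched as follows. The class and method names are hypothetical, and the synchronous fetcher stands in for a real network request, purely to keep the sketch self-contained:

```typescript
// Hypothetical sketch: images fetched for the mask interface (step S120)
// are kept in a cache so the VR renderer (step S140) can read the same
// bytes back without a second network round-trip.
class PreloadCache {
  private store = new Map<string, Uint8Array>();
  loads = 0; // counts real fetches, for illustration only

  // Fetch and store an image unless it is already cached.
  preload(url: string, fetchBytes: (u: string) => Uint8Array): void {
    if (!this.store.has(url)) {
      this.loads++;
      this.store.set(url, fetchBytes(url));
    }
  }

  // The VR renderer asks for the same URLs; a warm cache means zero fetches.
  getCached(url: string): Uint8Array | undefined {
    return this.store.get(url);
  }
}
```

Usage: preload all six view URLs when the presentation request arrives; when the webview later renders the VR interface, every lookup is a cache hit.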
In step S150, after the rendering of the virtual reality interface of the target object is completed, the mask interface is hidden and the virtual reality interface of the target object is displayed.
As previously described, the mask interface of the target object may be rotated to show the view of the target object in different orientations. Likewise, the virtual reality interface of the target object may also be rotated to present a virtual reality view of the target object in different orientations. In addition, the virtual reality interface of the target object can also be zoomed and the like so as to show virtual reality views of the target object in different orientations at different angles of view. And the virtual reality interface of the target object can also be provided with one or more user interactive modules, so that a user can initiate interactive operation based on the virtual reality interface through the one or more user interactive modules.
After the user sends a display request for the virtual reality interface of the target object, a mask interface of the target object is quickly generated and displayed to the user, the virtual reality interface is loaded and rendered in the background while the mask interface is displayed, and after rendering completes the mask interface is hidden and the virtual reality interface is displayed. This effectively solves the white-screen problem during loading and rendering of the virtual reality interface. In addition, to further improve the user experience, it is desirable that the user not perceive the switch from the mask interface to the virtual reality interface; the mask interface should therefore connect seamlessly with the virtual reality interface displayed afterwards, making the transition smooth.
Specifically, after rendering of the virtual reality interface of the target object is complete, an instruction indicating that rendering is complete may be issued, for example by the webview. In response to receiving the instruction, the current visual parameters of the mask interface of the target object may be obtained, where the visual parameters may include at least a current field angle, a current azimuth angle, a current rotation direction, a current rotation speed, and so on of the target object in the mask interface. The current field angle may represent the field size or zoom scale at which the user is currently viewing the target object. The current azimuth angle may represent the specific orientation in which the user is currently viewing the target object; for example, where the mask interface is a hexahedral interface, it may indicate which face of the hexahedron the user is viewing. As previously mentioned, the mask interface may be rotating, and the current rotation direction and current rotation speed represent those of the mask interface, or more specifically, of the target object in the mask interface.
After the visual parameters of the mask interface of the target object are acquired, the mask interface can be hidden and the virtual reality interface of the target object displayed based on those visual parameters, so that the visual effects of the target object in the displayed virtual reality interface and in the mask interface remain consistent. That is, the target object in the virtual reality interface may be presented with the current field angle, the current azimuth angle, the current rotation direction, and the current rotation speed of the target object in the mask interface, keeping the field angle, azimuth angle, rotation direction, and rotation speed of the target object in the virtual reality interface consistent with those in the mask interface.
Through these operations, the field angle, azimuth angle, rotation direction, rotation speed, and so on of the target object remain consistent before and after the switch from the mask interface to the virtual reality interface; the visual effect of the target object is identical for the user, and both interfaces respond to the same user operations. The user therefore does not perceive the switch from the mask interface to the virtual reality interface, yielding an optimal user experience.
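The seamless-handoff step can be sketched as snapshotting the mask interface's visual parameters at switch time and starting the VR view from exactly that state. All names below are illustrative, not from the patent:

```typescript
// Hypothetical sketch of the visual-parameter snapshot used at switch time.
interface VisualParams {
  fieldOfViewDeg: number;  // current field angle (zoom level)
  azimuthDeg: number;      // which orientation the user is looking at
  direction: "cw" | "ccw"; // current rotation direction
  speedDegPerSec: number;  // current rotation speed
}

// The VR interface adopts the mask's parameters unchanged, so the user sees
// no jump in view angle, orientation, or rotation at the moment of switch.
function handoff(mask: VisualParams): VisualParams {
  return { ...mask }; // copy, so later VR-side changes do not touch the mask state
}
```

At the moment the webview signals that rendering is done, `handoff` would be called with the mask's live state, the mask hidden, and the VR camera initialized from the returned copy.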
The example in which the target object is a building room is further explained with reference to fig. 2. Fig. 2 shows a schematic diagram of an example mask interface and virtual reality interface according to an embodiment of the present disclosure. In this example, after the user issues a virtual reality presentation request for a certain room, a mask interface for the room is first generated according to the method described in steps S110-S130 above; the mask interface is a hexahedron representing a stereoscopic view of the room. The hexahedron may rotate on its own at a set rotation speed, or the user may rotate it to view the room from a certain orientation. For example, the user may rotate the hexahedron to see the view of the room at azimuth β, as shown in fig. 2 (a).
During the display of the room's mask interface, the virtual reality interface of the room may be loaded and rendered in the background according to step S140, and after rendering completes the mask interface is hidden and the virtual reality interface displayed. As described in step S150 above, before switching from the mask interface to the virtual reality interface, the visual parameters of the target object in the mask interface, such as the current field angle, current azimuth angle, current rotation direction, and current rotation speed, may be obtained, so that the virtual reality interface displayed afterwards matches the visual effect in the mask interface. For example, assume that before the switch the user views the room with field angle α, azimuth β, a clockwise rotation direction, and rotation speed r, as shown in fig. 2 (a); after switching to the virtual reality interface, the field angle of the room is also α, the azimuth angle also β, the rotation direction also clockwise, and the rotation speed also r, as shown in fig. 2 (b).
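If the mask is rotating at the moment of switch, the VR view should also continue that rotation. One way to express the continuity (a formula inferred from the description; the patent itself only requires the parameters to stay consistent) is that a view at azimuth β rotating clockwise at r deg/s should, t seconds later, sit at β + r·t modulo 360:

```typescript
// Hypothetical sketch: continue the mask's rotation into the VR interface.
// Clockwise rotation advances the azimuth; counter-clockwise decreases it.
function continuedAzimuth(
  betaDeg: number,
  speedDegPerSec: number,
  direction: "cw" | "ccw",
  elapsedSec: number,
): number {
  const signed = direction === "cw" ? speedDegPerSec : -speedDegPerSec;
  const a = (betaDeg + signed * elapsedSec) % 360;
  return a < 0 ? a + 360 : a; // normalize into [0, 360)
}
```

The sign convention (clockwise = increasing azimuth) is an assumption; what matters is that both interfaces use the same one, so the picture never jumps.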
Thus, because of the smooth transition from the mask interface to the virtual reality interface, the visual effect of the room seen by the user is seamless before and after the switch, and the user does not perceive the change of picture from the mask interface to the real virtual reality interface. The difference is that in the virtual reality interface of the room, the user can perform more operations: for example, the user can zoom the room, or initiate a guided-viewing call from the room's virtual reality interface to communicate with the housing broker, and so on.
In addition, according to an example of the embodiment of the present disclosure, the presentation request for the virtual reality interface of the target object received in step S110 may optionally include a configuration protocol parameter indicating whether the loading method described in steps S120-S150 is to be used, for example to address the white-screen problem. If the configuration protocol parameter indicates that the loading method of steps S120-S150 is to be used, the operations of steps S120-S150 may be performed; otherwise, steps S120-S130 may be omitted, i.e., no mask interface is generated for the target object, and its virtual reality interface is loaded and rendered directly.
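The configuration-protocol branch can be sketched as a simple plan over the request. The parameter name `useMaskFlow` and the step names are hypothetical; the patent only says the request may carry a parameter selecting between the two flows:

```typescript
// Hypothetical sketch of the configuration-protocol branch in step S110.
interface PresentationRequest {
  targetId: string;
  useMaskFlow?: boolean; // configuration protocol parameter (assumed name)
}

type Step = "showMask" | "loadVr";

// With the flag on (or absent, defaulting to the mask flow), show the mask
// interface first and load the VR interface in the background; with the flag
// explicitly off, load and render the VR interface directly.
function planSteps(req: PresentationRequest): Step[] {
  return req.useMaskFlow === false ? ["loadVr"] : ["showMask", "loadVr"];
}
```

Defaulting to the mask flow when the parameter is absent is a design choice made here for illustration; a real protocol could equally default the other way.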
With the loading method of the virtual reality interface according to the embodiments of the present disclosure, a mask interface of the target object is quickly generated and displayed to the user, the virtual reality interface of the target object is loaded and rendered in the background while the mask interface is displayed, and after rendering completes the mask interface is hidden and the virtual reality interface is displayed. This effectively solves the white-screen problem caused by establishing the webview and by loading and rendering the virtual reality interface within the webview. At the same time, the smooth, seamless transition from the mask interface to the virtual reality interface means the user does not perceive the switch, improving the user experience to the greatest extent. In particular, the loading method according to the embodiments of the present disclosure is especially suitable for the case where a user loads the virtual reality interface of the target object for the first time (i.e., first-screen loading), so that the user no longer faces the white-screen problem.
A loading apparatus of a virtual reality interface according to an embodiment of the present disclosure is described below with reference to fig. 3. Fig. 3 shows a schematic structural diagram of a loading device 300 of a virtual reality interface according to an embodiment of the present disclosure. As shown in fig. 3, the loading device 300 includes a receiving unit 310, an obtaining unit 320, a presentation unit 330, and a rendering unit 340. The loading device 300 may include other units or components in addition to these four units, but a detailed description thereof is omitted here since those units or components are not related to the embodiments of the present disclosure. In addition, since some functions of the loading device 300 are the same as the steps of the loading method 100 described above with reference to fig. 1, repeated descriptions are omitted here for the sake of simplicity.
The receiving unit 310 is configured to receive a presentation request for a virtual reality interface of a target object. In the disclosed embodiment, the target object may be any object whose virtual reality interface is to be presented, for example a room of a building or the like. For example, if a user desires to view the virtual reality interface of a building room, a presentation request may be sent through a user interface or a control on the loading device 300. It should be understood that although a building room is taken as an example of the target object, the embodiments of the present disclosure are not limited thereto; the target object may be any other object whose virtual reality interface can be presented using virtual reality technology.
The obtaining unit 320 is configured to obtain a plurality of images of the target object in response to receiving the presentation request. In the disclosed embodiments, the plurality of images of the target object may be a plurality of images representing views of the target object in different orientations; e.g., the plurality of images may include a left view, a right view, a front view, a rear view, a top view, a bottom view, etc. of the target object. In the example where the target object is a building room, the plurality of images may comprise 6 views of the room, namely a left view, a right view, a front view, a rear view, a top view, and a bottom view, the 6 views together covering all orientations of the room. In the embodiment of the present disclosure, the plurality of images of the target object may be acquired by a camera, for example, and stored in advance in a storage device or a server; in response to receiving the presentation request, the obtaining unit 320 may acquire the plurality of images directly from the storage device or quickly load them from the server.
Thereafter, the presentation unit 330 is configured to generate a mask interface including a stereoscopic view of the target object based on the acquired plurality of images, and to present the mask interface. In embodiments of the present disclosure, each of the plurality of images of the target object may form one face of the stereoscopic view of the mask interface; since the plurality of images can represent views of the target object in different orientations, the mask interface can present a stereoscopic view of the target object. For example, in the case where the plurality of images comprises 6 views, the mask interface may be a hexahedral interface representing a panoramic stereoscopic view of the target object, wherein each of the 6 views of the target object constitutes one face of the hexahedron. The mask interface may rotate in different directions and at different speeds, for example by itself or through user manipulation, to present views of the target object in different orientations.
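The hexahedral mask interface built from six views can be sketched as follows. This is a language-agnostic illustration of the data model; the face names, the degrees-per-second rotation model, and the default speed are assumptions, and a real implementation would render the cube in a graphics layer.

```python
# Illustrative sketch of a hexahedral mask interface: one image per
# cube face, plus a simple self-rotation model.

class HexahedralMask:
    FACES = ("left", "right", "front", "rear", "top", "bottom")

    def __init__(self, views: dict):
        missing = [f for f in self.FACES if f not in views]
        if missing:
            raise ValueError(f"missing views: {missing}")
        self.views = views          # one image per face of the hexahedron
        self.azimuth = 0.0          # current viewing azimuth, degrees
        self.rotation_speed = 10.0  # assumed self-rotation, degrees/second

    def rotate(self, seconds: float) -> float:
        """Advance the self-rotation and return the new azimuth."""
        self.azimuth = (self.azimuth + self.rotation_speed * seconds) % 360.0
        return self.azimuth
```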
It should be noted that, although the above description is given by taking an example that the multiple views of the target object include 6 views and the mask interface is a hexahedral interface, the embodiments of the present disclosure are not limited thereto, the multiple views of the target object may include more or fewer views, and the mask interface generated based on the multiple views may also be any polyhedron or any interface with a three-dimensional structure.
Taking the example in which the target object is a building room, the plurality of images acquired by the obtaining unit 320 may include 6 views of the room, namely a left view, a right view, a front view, a rear view, a top view, and a bottom view; based on these 6 views, a mask interface including a panoramic stereoscopic view of the room may be generated. The mask interface is a hexahedron, and the hexahedron may be rotated to show views of the room in various orientations.
After the user sends a presentation request for the virtual reality interface of the target object, for example through a user interface or a control on the loading device 300, a mask interface of the target object may be quickly generated through the operations of the obtaining unit 320 and the presentation unit 330 and presented to the user, thereby avoiding presenting the user with only a white-screen interface before the virtual reality interface of the target object has been rendered.
Meanwhile, during the presentation of the mask interface, the rendering unit 340 may load and render the virtual reality interface of the target object. As mentioned above, generating the virtual reality interface of the target object may include two processes, namely establishing a webview and loading and rendering the virtual reality interface of the target object within the webview; both processes may be executed by the rendering unit 340, for example by a processor of the loading device 300. Moreover, during the presentation of the mask interface, the loading and rendering of the virtual reality interface of the target object are performed in the background, so that the presentation of the mask interface is not affected and the user does not perceive the loading and rendering process.
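The two background processes named above, run while the mask stays on screen, can be sketched as follows. The function names are placeholders, not a real webview API; only the ordering (mask first, then background webview creation and rendering, then the swap) reflects the described flow.

```python
import threading

# Illustrative sketch: the mask is presented immediately, and the
# webview establishment plus VR loading/rendering run on a
# background thread that signals completion.

events = []

def establish_webview():
    events.append("webview_created")      # placeholder for webview setup

def load_and_render_vr():
    events.append("vr_rendered")          # placeholder for VR rendering

def background_load(on_done):
    """Run both processes in the background; call on_done when finished."""
    def work():
        establish_webview()
        load_and_render_vr()
        on_done()
    t = threading.Thread(target=work)
    t.start()
    return t

events.append("mask_shown")               # mask interface shown first
t = background_load(lambda: events.append("mask_hidden_vr_shown"))
t.join()                                  # wait for the swap (demo only)
```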
According to an example of the embodiment of the present disclosure, after the obtaining unit 320 acquires the plurality of images of the target object, the plurality of images may be stored in a cache; the rendering unit 340 may then acquire the plurality of images directly from the cache and load and render the virtual reality interface of the target object based on them. By reusing the already-acquired images, the speed of generating the virtual reality interface can be further increased and the loading efficiency improved.
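The image reuse described above can be sketched as a simple read-through cache shared by the obtaining unit and the rendering unit. The fetch counter exists only to make the reuse observable; the face names and identifier are illustrative.

```python
# Illustrative sketch of multiplexing the fetched images: the first
# call fills the cache, later calls (e.g. by the rendering unit)
# return the same objects without fetching again.

fetch_count = 0
_cache: dict = {}

def fetch_from_server(object_id: str) -> list:
    """Stand-in for the remote fetch of the six views."""
    global fetch_count
    fetch_count += 1
    return [f"{object_id}_{face}.jpg" for face in
            ("left", "right", "front", "rear", "top", "bottom")]

def get_images(object_id: str) -> list:
    """Return the views, fetching from the server only once."""
    if object_id not in _cache:
        _cache[object_id] = fetch_from_server(object_id)
    return _cache[object_id]

mask_images = get_images("room42")  # obtaining unit: fills the cache
vr_images = get_images("room42")    # rendering unit: cache hit, no refetch
```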
After the rendering of the virtual reality interface of the target object is completed, the presentation unit 330 may hide the mask interface and present the virtual reality interface of the target object.
As previously described, the mask interface of the target object may be rotated to show views of the target object in different orientations. Likewise, the virtual reality interface of the target object may be rotated to present virtual reality views of the target object in different orientations. In addition, the virtual reality interface of the target object may also be zoomed, among other operations, so as to present virtual reality views of the target object in different orientations at different field angles. The virtual reality interface of the target object may also be provided with one or more user-interactable modules, through which the user can initiate interactive operations based on the virtual reality interface.
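The extra operations available on the virtual reality interface (zoom via a field-angle change, plus rotation) can be sketched as follows. The field-angle limits and defaults are illustrative assumptions.

```python
# Illustrative sketch of the VR interface's zoom and rotation
# operations; MIN_FOV/MAX_FOV are assumed limits, not from the patent.

class VRInterface:
    MIN_FOV, MAX_FOV = 30.0, 120.0  # assumed field-angle limits, degrees

    def __init__(self, field_angle: float = 90.0, azimuth: float = 0.0):
        self.field_angle = field_angle
        self.azimuth = azimuth

    def zoom(self, delta: float) -> float:
        """Zoom by changing the field angle, clamped to the limits
        (a smaller field angle corresponds to zooming in)."""
        self.field_angle = max(self.MIN_FOV,
                               min(self.MAX_FOV, self.field_angle + delta))
        return self.field_angle

    def rotate(self, degrees: float) -> float:
        """Rotate to show the target object in a different orientation."""
        self.azimuth = (self.azimuth + degrees) % 360.0
        return self.azimuth
```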
After a user sends a presentation request for the virtual reality interface of a target object, a mask interface of the target object is quickly generated and presented to the user, the virtual reality interface of the target object is loaded and rendered in the background during presentation of the mask interface, and after rendering is completed the mask interface is hidden and the virtual reality interface is presented, which effectively solves the white screen time problem during loading and rendering of the virtual reality interface. In addition, to further improve the user experience, it is desirable that the user not perceive the switching process from the mask interface to the virtual reality interface; this requires that the mask interface connect seamlessly with the subsequently presented virtual reality interface, so as to make a smooth transition.
Specifically, after the rendering unit 340 completes rendering the virtual reality interface of the target object, an instruction indicating that rendering of the virtual reality interface is completed may be issued, for example by the webview. In response to receiving the instruction, the presentation unit 330 may obtain the current visual parameters of the mask interface of the target object, where the visual parameters may include at least a current field angle, a current azimuth, a current steering, a current rotation speed, and the like of the target object in the mask interface. The current field angle may represent the field-of-view size or zoom scale at which the user currently views the target object. The current azimuth may represent the specific orientation in which the user currently views the target object; for example, in the example where the mask interface is a hexahedral interface, it may indicate which face of the hexahedron the user is viewing. As previously mentioned, the mask interface may rotate, and the current steering and current rotation speed represent the current rotation direction and rotation speed of the mask interface or, more specifically, of the target object in the mask interface.
After the visual parameters of the mask interface of the target object are obtained, the presentation unit 330 may hide the mask interface and present the virtual reality interface of the target object based on the visual parameters, so that the visual effect of the target object in the presented virtual reality interface is kept consistent with that of the target object in the mask interface. That is, the presentation unit 330 may present the target object in the virtual reality interface at the current field angle, current azimuth, current steering, and current rotation speed of the target object in the mask interface, so that the field angle, azimuth, steering, and rotation speed of the target object in the virtual reality interface are consistent with those of the target object in the mask interface.
Through the above operations, the field angle, azimuth, steering, rotation speed, and the like of the target object are kept consistent before and after the switch from the mask interface to the virtual reality interface; that is, the visual effect of the target object is exactly the same for the user, and the mask interface and the virtual reality interface can respond to the same user operations. Thus, the user does not perceive the switch from the mask interface to the virtual reality interface, resulting in an optimal user experience.
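The hand-off just described can be sketched as capturing the mask's visual parameters at switch time and applying them to the virtual reality interface. The parameter container and field names are illustrative assumptions, not the patent's actual data structures.

```python
from dataclasses import dataclass

# Illustrative sketch of the seamless switch: copy the mask's current
# visual parameters onto the VR interface so the picture does not jump.

@dataclass
class VisualParams:
    field_angle: float      # degrees (zoom scale of the current view)
    azimuth: float          # degrees (which orientation is shown)
    steering: str           # "clockwise" or "counterclockwise"
    rotation_speed: float   # degrees per second

class Interface:
    def __init__(self, params: VisualParams):
        self.params = params

def switch_to_vr(mask: Interface) -> Interface:
    """Hide the mask and return a VR interface with identical
    visual parameters, so the visual effect stays consistent."""
    captured = VisualParams(**vars(mask.params))  # snapshot at switch time
    return Interface(captured)

mask = Interface(VisualParams(90.0, 45.0, "clockwise", 10.0))
vr = switch_to_vr(mask)
```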
The example in which the target object is a building room is further explained with reference to fig. 2. In this example, after a user issues a virtual reality presentation request for a certain room, a mask interface for the room, which is a hexahedron representing a stereoscopic view of the room, is first generated by the obtaining unit 320 and the presentation unit 330. The hexahedron may rotate by itself at a set rotation speed, or the user may rotate it to view the room in a certain orientation. For example, the user may rotate the hexahedron to see the view of the room at azimuth β, as shown in fig. 2 (a).
During the presentation of the mask interface of the room, the virtual reality interface of the room may be loaded and rendered in the background by the rendering unit 340; after rendering is completed, the mask interface is hidden by the presentation unit 330 and the virtual reality interface is presented. As described above, before switching from the mask interface to the virtual reality interface, the visual parameters of the target object in the mask interface, such as the current field angle, current azimuth, current steering, and current rotation speed, may be obtained, so that the visual effect of the subsequently presented virtual reality interface is consistent with that of the mask interface. For example, assume that before switching from the mask interface to the virtual reality interface of the room, the user views the room at a field angle α and an azimuth β, with a clockwise rotation direction and a rotation speed r, as shown in fig. 2 (a); after switching to the virtual reality interface, the field angle of the room in the virtual reality interface can also be α, the azimuth also β, the rotation direction also clockwise, and the rotation speed also r, as shown in fig. 2 (b).
Thus, owing to the smooth transition from the mask interface to the virtual reality interface, the visual effect of the room seen by the user is seamless before and after the switch, and the user does not perceive the change of the picture from the mask interface to the real virtual reality interface. The difference is that, in the virtual reality interface of the room, the user can perform more operations based on the virtual reality interface: for example, the user can zoom the room, or initiate a co-viewing call in the room's virtual reality interface to communicate with a house broker, and so on.
In addition, according to an example of the embodiment of the present disclosure, the presentation request for the virtual reality interface of the target object received by the receiving unit 310 may optionally include a configuration protocol parameter, which may indicate, for example, whether to adopt the loading method executed by the obtaining unit 320, the presentation unit 330, and the rendering unit 340 as described above, so as to solve the white screen time problem. If the configuration protocol parameter indicates that this loading method is adopted, the above operations may be performed; otherwise, the virtual reality interface of the target object may be loaded and rendered directly, without generating the mask interface of the target object.
According to the loading device of the virtual reality interface described above, a mask interface of the target object is quickly generated and presented to the user, while the virtual reality interface of the target object is loaded and rendered in the background during presentation of the mask interface; after rendering of the virtual reality interface is completed, the mask interface is hidden and the virtual reality interface is presented. This effectively solves the white screen time problem during loading and rendering of the virtual reality interface, including the white screen time caused by establishing a webview and loading and rendering the virtual reality interface within the webview. Meanwhile, owing to the smooth transition and seamless connection from the mask interface to the virtual reality interface, the user does not perceive the switch, which improves the user experience to the greatest extent. The loading device of the virtual reality interface according to the embodiment of the present disclosure is particularly suitable for the case where a user loads the virtual reality interface of the target object for the first time (i.e., first-screen loading), so that the user is spared the white screen time problem.
Next, a loading device of a virtual reality interface according to an embodiment of the present disclosure is described with reference to fig. 4. Fig. 4 shows a schematic structural diagram of a loading device 400 of a virtual reality interface according to an embodiment of the present disclosure. Since the function of the loading apparatus 400 of the present embodiment is the same as the details of the method described above with reference to fig. 1, a detailed description of the same is omitted here for the sake of simplicity.
The loading device 400 of the disclosed embodiment includes one or more processors 410; and one or more memories 420 in which are stored computer-readable instructions that, when executed by the one or more processors 410, cause the one or more processors 410 to: receiving a display request of a virtual reality interface of a target object; in response to receiving the display request, acquiring a plurality of images of the target object; generating a mask interface including a stereoscopic view of the target object based on the plurality of images, and displaying the mask interface; loading and rendering a virtual reality interface of the target object based on the plurality of images during the presentation of the mask interface; and after the rendering of the virtual reality interface is completed, hiding the mask interface and displaying the virtual reality interface of the target object.
Embodiments of the present disclosure may also be implemented as a computer-readable storage medium. Fig. 5 shows a schematic diagram of a computer-readable storage medium 500 according to an embodiment of the disclosure. Computer-readable storage media 500 according to embodiments of the present disclosure have computer-readable instructions 510 stored thereon. The computer readable instructions 510, when executed by the processor, may cause the processor to: receiving a display request of a virtual reality interface of a target object; in response to receiving the display request, acquiring a plurality of images of the target object; generating a mask interface including a stereoscopic view of the target object based on the plurality of images, and displaying the mask interface; loading and rendering a virtual reality interface of the target object based on the plurality of images during the presentation of the mask interface; and after the rendering of the virtual reality interface is completed, hiding the mask interface and displaying the virtual reality interface of the target object.
The computer-readable storage medium 500 includes, but is not limited to, for example, volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM), cache memory, and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, and the like.
There is also provided, in accordance with an embodiment of the present disclosure, a computer program product or computer program, including computer readable instructions, the computer readable instructions being stored in a computer readable storage medium. The computer readable instructions may be read by a processor of a computer device from a computer readable storage medium, and the computer readable instructions executed by the processor cause the computer device to perform a method of loading a virtual reality interface as described above with reference to fig. 1.
The individual operations of the methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software components and/or modules including, but not limited to, a circuit, an Application Specific Integrated Circuit (ASIC), or a processor.
The various illustrative logical blocks, modules, and circuits described may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an ASIC, a Field Programmable Gate Array (FPGA) or other Programmable Logic Device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in any form of tangible storage medium. Some examples of storage media that may be used include Random Access Memory (RAM), Read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, and the like. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
Those skilled in the art will appreciate that the disclosure of the present disclosure is susceptible to numerous variations and modifications. For example, the various devices or components described above may be implemented in hardware, or may be implemented in software, firmware, or a combination of some or all of the three.
Further, while the present disclosure makes various references to certain elements of a system according to embodiments of the present disclosure, any number of different elements may be used and run on a client and/or server. The units are illustrative only, and different aspects of the systems and methods may use different units.
Furthermore, flow charts are used in this disclosure to illustrate operations performed by systems according to embodiments of the present disclosure. It should be understood that the preceding and following operations are not necessarily performed in the exact order in which they are performed. Rather, various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or one or more operations may be removed from the processes.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present disclosure and is not to be construed as limiting thereof. Although a few exemplary embodiments of this disclosure have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this disclosure. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the claims. It is to be understood that the foregoing is illustrative of the present disclosure and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The present disclosure is defined by the claims and their equivalents.

Claims (17)

1. A loading method of a virtual reality interface comprises the following steps:
receiving a display request of a virtual reality interface of a target object;
in response to receiving the display request, obtaining a plurality of images of the target object;
generating a mask interface including a stereoscopic view of the target object based on the plurality of images and displaying the mask interface;
loading and rendering the virtual reality interface of the target object based on the plurality of images during presentation of the mask interface; and
after the rendering of the virtual reality interface is completed, hiding the mask interface and displaying the virtual reality interface of the target object.
2. A loading method according to claim 1, wherein the plurality of images are a plurality of images representing views of the target object in different orientations.
3. A loading method according to claim 1 or 2, wherein the mask interface is rotatable to show views of the target object in different orientations.
4. A loading method according to claim 1, wherein the virtual reality interface is zoomable and rotatable to present virtual reality views of the target object in different orientations at different field angles, and has one or more user-interactable modules.
5. The loading method of claim 1, wherein generating, based on the plurality of images, a mask interface including a stereoscopic view of the target object comprises:
generating each of the stereoscopic views of the mask interface using each of the plurality of images.
6. The loading method of claim 1, wherein hiding the mask interface and revealing the virtual reality interface of the target object after rendering of the virtual reality interface is completed comprises:
after the rendering of the virtual reality interface is completed, acquiring visual parameters of the mask interface of the target object; and
hiding the mask interface, and displaying the virtual reality interface of the target object based on the visual parameters so as to keep the visual effects of the target object in the displayed virtual reality interface and the target object in the mask interface consistent.
7. The loading method according to claim 6, wherein the visual parameters include at least a current field angle, a current azimuth angle, a current steering, and a current rotation speed of the target object in the mask interface.
8. The loading method according to claim 7, wherein presenting the virtual reality interface of the target object based on the visual parameters such that the visual effect of the target object in the presented virtual reality interface and the target object in the mask interface remain consistent comprises:
displaying the target object in the virtual reality interface with the current field angle, the current azimuth angle, the current steering and the current rotation speed of the target object in the mask interface, so that the field angle, the azimuth angle, the steering and the rotation speed of the target object in the virtual reality interface are consistent with those of the target object in the mask interface.
9. The loading method of claim 1, wherein, in response to receiving the presentation request, obtaining a plurality of images of the target object comprises:
in response to receiving the presentation request, preloading the plurality of images of the target object and storing the plurality of images in a cache;
and wherein loading and rendering the virtual reality interface of the target object based on the plurality of images during presentation of the mask interface comprises:
and acquiring the plurality of images from the cache, and loading and rendering the virtual reality interface of the target object based on the plurality of images.
10. A loading method according to claim 1, wherein the target object is a room of a building and the mask interface is a hexahedral interface representing a panoramic stereoscopic view of the target object.
11. A loading device for a virtual reality interface, comprising:
a receiving unit configured to receive a presentation request for a virtual reality interface of a target object;
an acquisition unit configured to acquire a plurality of images of the target object in response to receiving the presentation request;
a presentation unit configured to generate a mask interface including a stereoscopic view of the target object based on the plurality of images and present the mask interface; and
a rendering unit configured to load and render the virtual reality interface of the target object based on the plurality of images during presentation of the mask interface,
the presentation unit is further configured to hide the mask interface and present the virtual reality interface of the target object after rendering of the virtual reality interface is completed.
12. The loading device of claim 11, wherein the presentation unit is further configured to:
after the rendering of the virtual reality interface is completed, acquiring visual parameters of the mask interface of the target object; and
hiding the mask interface, and displaying the virtual reality interface of the target object based on the visual parameters so as to keep the visual effects of the target object in the displayed virtual reality interface and the target object in the mask interface consistent.
13. The loading device according to claim 12, wherein the visual parameters include at least a current field angle, a current azimuth angle, a current steering, and a current rotation speed of the target object in the mask interface.
14. The loading device of claim 13, wherein the presentation unit is further configured to:
displaying the target object in the virtual reality interface with the current field angle, the current azimuth angle, the current steering and the current rotation speed of the target object in the mask interface, so that the field angle, the azimuth angle, the steering and the rotation speed of the target object in the virtual reality interface are consistent with those of the target object in the mask interface.
15. A loading device for a virtual reality interface, comprising:
one or more processors; and
one or more memories having computer-readable instructions stored therein, which when executed by the one or more processors, cause the one or more processors to perform the method of any one of claims 1-10.
16. A computer readable storage medium having computer readable instructions stored thereon, which, when executed by a processor, cause the processor to perform the method of any one of claims 1-10.
17. A computer program product comprising computer readable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-10.
CN202111164196.4A 2021-09-30 2021-09-30 Virtual reality interface loading method, device, equipment and medium Pending CN113888725A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111164196.4A CN113888725A (en) 2021-09-30 2021-09-30 Virtual reality interface loading method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN113888725A true CN113888725A (en) 2022-01-04

Family

ID=79004930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111164196.4A Pending CN113888725A (en) 2021-09-30 2021-09-30 Virtual reality interface loading method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113888725A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975637A (en) * 2016-06-26 2016-09-28 乐视控股(北京)有限公司 Display method and device for page loading
CN106254940A (en) * 2016-09-23 2016-12-21 北京疯景科技有限公司 Method and device for playing panoramic content
CN106408631A (en) * 2016-09-30 2017-02-15 厦门亿力吉奥信息科技有限公司 Three-dimensional macro display method and system
CN108304450A (en) * 2017-12-12 2018-07-20 杭州先临三维云打印技术有限公司 Online preview method and device for three-dimensional models
CN108897468A (en) * 2018-05-30 2018-11-27 链家网(北京)科技有限公司 Method and system for entering a house listing in a virtual three-dimensional panoramic space
CN109885783A (en) * 2019-01-17 2019-06-14 广州城投发展研究院有限公司 Loading method and device for a three-dimensional building model
CN110136807A (en) * 2019-05-22 2019-08-16 图兮深维医疗科技(苏州)有限公司 Medical image preloading method and equipment
CN110442316A (en) * 2019-08-02 2019-11-12 Oppo广东移动通信有限公司 Image display method, device and computer readable storage medium
CN112057849A (en) * 2020-09-15 2020-12-11 网易(杭州)网络有限公司 Game scene rendering method and device and electronic equipment
CN112631689A (en) * 2021-01-04 2021-04-09 北京字节跳动网络技术有限公司 Application program loading method and device and computer storage medium


Similar Documents

Publication Publication Date Title
US10643300B2 (en) Image display method, custom method of shaped cambered curtain, and head-mounted display device
KR101545387B1 (en) System and method for display mirroring
CN109741463B (en) Rendering method, device and equipment of virtual reality scene
JP6672315B2 (en) Image generation device and image display control device
CN111161173B (en) Image correction information acquisition method, image correction information acquisition device, image correction information model construction method, image correction information model construction device, and medium
JP2001527662A (en) Parameterized image orientation for computer displays
JP7378243B2 (en) Image generation device, image display device, and image processing method
EP3857499A1 (en) Panoramic light field capture, processing and display
CN110999307A (en) Display apparatus, server, and control method thereof
CN115328309A (en) Interaction method, device, equipment and storage medium for virtual object
WO2023125362A1 (en) Image display method and apparatus, and electronic device
CN113206993A (en) Method for adjusting display screen and display device
CN113286138A (en) Panoramic video display method and display equipment
CN106845477B (en) Method and device for establishing region of interest based on multiple reconstructed images
CN110889384A (en) Scene switching method and device, electronic equipment and storage medium
CN113888725A (en) Virtual reality interface loading method, device, equipment and medium
CN111710315A (en) Image display method, image display device, storage medium and electronic equipment
CN112862981B (en) Method and apparatus for presenting a virtual representation, computer device and storage medium
CN115454250A (en) Method, apparatus, device and storage medium for augmented reality interaction
US20220060801A1 (en) Panoramic Render of 3D Video
US11869137B2 (en) Method and apparatus for virtual space constructing based on stackable light field
CN110837297B (en) Information processing method and AR equipment
CN115918094A (en) Server device, terminal device, information processing system, and information processing method
US20230168510A1 (en) Head-mounted display device, control method, and non-transitory computer readable storage medium
CN113126836B (en) Picture display method, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination