CN111352670B - Virtual reality scene loading method and device, virtual reality system and equipment - Google Patents

Virtual reality scene loading method and device, virtual reality system and equipment

Info

Publication number
CN111352670B
CN111352670B (application number CN202010159574.9A)
Authority
CN
China
Prior art keywords
rendering
thread
control
scene
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010159574.9A
Other languages
Chinese (zh)
Other versions
CN111352670A (en)
Inventor
简吉波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Sunflower Digital Technology Co ltd
Original Assignee
Liaoning Sunflower Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Sunflower Education Technology Co., Ltd.
Priority to CN202011092319.3A (CN112181530A)
Priority to CN202011092320.6A (CN112181531A)
Priority to CN202010159574.9A (CN111352670B)
Publication of CN111352670A
Application granted
Publication of CN111352670B

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 9/00: Arrangements for program control, e.g. control units
            • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
              • G06F 9/44: Arrangements for executing specific programs
                • G06F 9/445: Program loading or initiating
                  • G06F 9/44521: Dynamic linking or loading; link editing at or after load time, e.g. Java class loading
                • G06F 9/448: Execution paradigms, e.g. implementations of programming paradigms
                  • G06F 9/4482: Procedural
          • G06F 18/00: Pattern recognition
            • G06F 18/20: Analysing
              • G06F 18/23: Clustering techniques
              • G06F 18/24: Classification techniques
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 19/00: Manipulating 3D models or images for computer graphics
            • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present application provides a virtual reality scene loading method and device, a virtual reality system, and a virtual reality device. The rendering scene control set and the virtual object control set of each model file in the model rendering strategy corresponding to the corresponding virtual reality model resource to be loaded are extracted separately, so that the application environment scene and the virtual reality objects are loaded and rendered independently, using the separately generated rendering controls. A complete rendering effect can thus be achieved without considering the interaction calls between the virtual reality objects and the application environment scene, which reduces the instantaneous amount of data loaded and makes it convenient to perform partial extension and customization operations on the virtual reality objects within the application environment scene.

Description

Virtual reality scene loading method and device, virtual reality system and equipment
Technical Field
The present application relates to the technical field of virtual reality, and in particular to a virtual reality scene loading method and device, a virtual reality system, and a virtual reality device.
Background
In a typical virtual reality scene loading process, the entire virtual reality model is loaded at one time, and the interaction calls between the virtual reality objects and the application environment scene must be considered during rendering and loading. On the one hand, the instantaneous amount of data loaded is huge, which may cause rendering stalls; on the other hand, partial extension and customization operations cannot be performed on the virtual reality objects within the application environment scene. Once extension or customization is needed, the entire virtual reality model has to be adjusted, which involves a large adjustment workload.
Disclosure of Invention
In order to overcome at least the above disadvantages in the prior art, an object of the present application is to provide a virtual reality scene loading method and device, a virtual reality system, and a device. The rendering scene control set and the virtual object control set of each model file in the model rendering strategy corresponding to the virtual reality model resource to be loaded are extracted separately, so that the application environment scene and the virtual reality objects are loaded and rendered independently using the separately generated rendering controls. A complete rendering effect can thus be achieved without considering the interaction calls between the virtual reality objects and the application environment scene, which reduces the instantaneous amount of data loaded and makes it convenient to perform partial extension and customization operations on the virtual reality objects within the application environment scene.
In a first aspect, the present application provides a virtual reality scene loading method, which is applied to a virtual reality device, where the virtual reality device is in communication connection with a server, and the method includes:
determining a plurality of virtual reality model resources to be loaded according to a loading request aiming at a target virtual reality scene;
obtaining model files corresponding to the virtual reality model resources to be loaded from the server, and extracting a rendering scene control set and a virtual object control set of each model file in a model rendering strategy corresponding to the corresponding virtual reality model resources to be loaded, wherein the rendering scene control set comprises scene rendering controls corresponding to each visual rendering unit of the model files in the model rendering strategy, and the virtual object control set comprises object rendering controls corresponding to each visual rendering unit of the model files in the model rendering strategy;
generating a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set, wherein the rendering node set of the visual rendering unit comprises rendering nodes used for representing that the visual rendering unit renders corresponding scene rendering controls and object rendering controls;
and loading each model file according to the rendering scene control set, the virtual object control set, and the rendering node set of each visual rendering unit of the model file, to obtain the loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene; and, after the loading of the target virtual reality scene and of each target virtual reality object in it is completed, executing an object control operation on each target virtual reality object. A minimal sketch of this overall flow follows.
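For orientation, the four steps of the first aspect can be read as a small pipeline. The following Python sketch is illustrative only; every name in it (ModelFile, build_render_node_sets, load_target_scene, and the toy data) is an assumption made for this sketch, since the patent defines no concrete API.

```python
# Hypothetical sketch of the four-step loading pipeline in the first aspect.
# All names and data shapes are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelFile:
    name: str
    scene_controls: List[str]          # rendering scene control set
    object_controls: List[str]         # virtual object control set
    render_nodes: Dict[str, List[str]] = field(default_factory=dict)

def build_render_node_sets(model: ModelFile) -> Dict[str, List[str]]:
    # Step 3: one rendering node set per visual rendering unit, pairing the
    # unit with the scene/object controls it renders (simplified 1:1 pairing).
    return {
        f"unit_{i}": [scene_ctrl, obj_ctrl]
        for i, (scene_ctrl, obj_ctrl)
        in enumerate(zip(model.scene_controls, model.object_controls))
    }

def load_target_scene(models: List[ModelFile]):
    scene_parts, objects = [], []
    for model in models:              # Step 2 is assumed done: controls extracted
        model.render_nodes = build_render_node_sets(model)
        # Step 4: scene and objects are loaded independently, so no interaction
        # calls between them are required during loading.
        scene_parts.extend(model.scene_controls)
        objects.extend(model.object_controls)
    return scene_parts, objects

# Step 1 (resource determination) is mocked with two prebuilt model files.
models = [ModelFile("room",   ["sky", "floor"], ["table", "chair"]),
          ModelFile("avatar", ["light"],        ["player"])]
scene, objs = load_target_scene(models)
print(scene, objs)  # object control operations would run only after this point
```

The point of the structure is that scene controls and object controls travel through the pipeline separately, so loading one never has to call into the other.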
In a possible design of the first aspect, the step of extracting a rendering scene control set and a virtual object control set of each model file in a model rendering policy corresponding to a virtual reality model resource to be loaded includes:
extracting a plurality of scene cameras of each model file in a model rendering strategy corresponding to the corresponding virtual reality model resource to be loaded;
inputting the model file into the model rendering strategy according to the scene cameras to perform graphical interface calling processing, and obtaining a plurality of graphical interface calling parameters of different rendering categories;
for each scene camera of the plurality of scene cameras, selecting, from the graphical interface calling parameters of the different rendering categories, the graphical interface calling parameter corresponding to the rendering category of that scene camera; judging, by using the graphical interface calling information corresponding to the selected parameter, whether the scene camera matches a preset graphical interface calling template; and reserving the scene camera when it matches the preset graphical interface calling template, so as to obtain a plurality of reserved scene cameras;
inputting the model file into the model rendering strategy through the plurality of reserved scene cameras for rendering and segmentation, acquiring a corresponding rendering and segmentation virtual object control set and rendering and segmentation process information of a target object rendering control in the rendering and segmentation virtual object control set, and determining a rendering and segmentation area of the target object rendering control according to the rendering and segmentation process information of the target object rendering control;
acquiring the rendering segmentation position of each target object rendering control within its rendering segmentation area; recombining the target object rendering controls according to the position confidence of each rendering segmentation position to obtain corresponding candidate visual rendering units; acquiring the scene rendering control corresponding to each candidate visual rendering unit of the model file; establishing, according to the scene rendering control, object rendering control information between the candidate visual rendering unit and the object rendering control; and processing each candidate visual rendering unit with the object rendering control as a reference, so as to determine the rendering segmentation process information corresponding to the object rendering control information of each candidate visual rendering unit;
and determining each visual rendering unit among the candidate visual rendering units according to the rendering segmentation process information corresponding to the object rendering control information of each candidate visual rendering unit, so as to obtain the scene rendering control and the object rendering control corresponding to each visual rendering unit of the model file in the model rendering strategy. A sketch of the camera-filtering part of this flow follows.
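As a rough illustration of the camera-filtering part of this extraction flow, the sketch below reserves only the scene cameras whose graphical interface calling information matches a preset template. The matching rule (set containment) and all names are assumptions made for illustration; the patent does not prescribe how the match is performed.

```python
# Hypothetical sketch: filter scene cameras by matching their graphical
# interface calling information against a preset template. The template and
# the containment rule are illustrative assumptions only.
PRESET_CALL_TEMPLATE = {"draw_scene", "bind_viewport"}  # assumed template

def reserve_scene_cameras(cameras):
    reserved = []
    for cam in cameras:
        # cam["calls"] stands in for the graphical interface calling
        # information derived from the camera's rendering-category parameter.
        if PRESET_CALL_TEMPLATE.issubset(cam["calls"]):
            reserved.append(cam)
    return reserved

cameras = [
    {"id": "cam0", "calls": {"draw_scene", "bind_viewport", "blit"}},
    {"id": "cam1", "calls": {"draw_scene"}},  # missing a template call: dropped
]
print([c["id"] for c in reserve_scene_cameras(cameras)])  # ['cam0']
```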
In a possible design of the first aspect, the step of generating a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set includes:
determining rendering thread information of the model file for each visual rendering unit according to the rendering scene control set and the virtual object control set, and converting the rendering thread information of the model file for each visual rendering unit into a rendering thread queue;
constructing a corresponding rendering thread topological graph according to the rendering thread queue, and analyzing the constructed rendering thread topological graph to obtain a plurality of rendering thread topological clusters;
obtaining an average value of the number of rendering threads in each rendering thread topological cluster, and calculating a difference value between the number of rendering threads in the rendering thread topological cluster and the average value to obtain a rendering thread number determination value of the rendering thread topological cluster;
classifying the plurality of rendering thread topological clusters according to their associated rendering thread number determination values, to generate a corresponding classified rendering thread topological cluster map composed of the classified combinations of the rendering thread topological clusters of the plurality of classifications;
selecting, from the classified rendering thread topological cluster map, the rendering thread topological cluster with the largest number of rendering threads and the rendering thread topological cluster with the smallest number of rendering threads as a first rendering thread topological cluster and a second rendering thread topological cluster, respectively; calculating, for each rendering thread topological cluster in the map, the difference between its rendering thread number determination value and those of the first and second rendering thread topological clusters; allocating each rendering thread topological cluster whose determination value difference from the first cluster is less than or equal to its determination value difference from the second cluster to a first rendering thread topological cluster sequence; and allocating each rendering thread topological cluster whose determination value difference from the first cluster is greater than its determination value difference from the second cluster to a second rendering thread topological cluster sequence;
calculating the average value of the rendering thread number determination values of all rendering thread topological clusters in the first rendering thread topological cluster sequence, to obtain an updated first rendering thread topological cluster whose region is that average value; and calculating the average value of the rendering thread number determination values of all rendering thread topological clusters in the second rendering thread topological cluster sequence, to obtain an updated second rendering thread topological cluster whose region is that average value;
determining a plurality of corresponding third rendering thread topological clusters and a plurality of corresponding fourth rendering thread topological clusters according to the coincidence degrees of the first rendering thread topological cluster and the second rendering thread topological cluster with each rendering thread topological cluster in the classified rendering thread topological cluster map;
according to the plurality of third rendering thread topological clusters and the plurality of fourth rendering thread topological clusters, determining rendering thread parameter information corresponding to the rendering thread information, and analyzing the rendering thread parameter information to obtain rendering thread process control information in the rendering thread parameter information;
and determining the process control range of the rendering thread process control information as the process control range corresponding to the visual rendering unit, and determining the rendering node set of each visual rendering unit according to the process control range corresponding to the visual rendering unit and the rendering thread process control information. A sketch of the clustering step above follows.
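The sub-steps above amount to a two-center partition over per-cluster thread counts. The sketch below is one assumed reading of that procedure, treating the "rendering thread number determination value" as the deviation of a cluster's thread count from the mean and assigning every cluster to whichever extreme cluster it is closer to; none of this is normative.

```python
# Hypothetical sketch of the two-sequence partition of rendering thread
# topological clusters. Reading "determination value" as deviation from the
# mean thread count is an interpretation, not a definition from the patent.
def partition_clusters(thread_counts):
    mean = sum(thread_counts) / len(thread_counts)
    det = [count - mean for count in thread_counts]          # determination values

    first_center = det[thread_counts.index(max(thread_counts))]   # largest cluster
    second_center = det[thread_counts.index(min(thread_counts))]  # smallest cluster

    first_seq, second_seq = [], []
    for count, d in zip(thread_counts, det):
        # Assign to whichever extreme cluster's determination value is closer.
        if abs(d - first_center) <= abs(d - second_center):
            first_seq.append(count)
        else:
            second_seq.append(count)
    return first_seq, second_seq

print(partition_clusters([12, 3, 9, 5, 14]))  # ([12, 9, 14], [3, 5])
```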
In a possible design of the first aspect, the step of loading each model file according to the rendering scene control set, the virtual object control set, and the rendering node set of each visual rendering unit of each model file to obtain a loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene includes:
determining loading frame information of each model file according to the rendering scene control set and the virtual object control set of each model file, and determining a loading control set corresponding to each model file according to the loading frame information, wherein the loading control set comprises a scene rendering control and an object rendering control;
obtaining rendering thread parameters of a loading control set corresponding to each model file from a rendering node set of each visual rendering unit, wherein the rendering thread parameters are used for representing arrangement information of rendering threads of the loading control set;
establishing a mapping relation between the loading control set and the arranged rendering threads according to the rendering thread parameters; determining a plurality of rendering states of each model directory resource in each model file according to the established mapping relation; for each rendering state, acquiring the matching relation between the state identifications of the corresponding rendering states of the model directory resources of other model files matched with the model directory resource and the state identifications of the corresponding rendering states of a preset number of model directory resources following the model directory resource; and acquiring, according to the matching relation, the rendering arrangement state configuration information of each model directory resource in that rendering state, so as to obtain the rendering arrangement state configuration information of each model directory resource;
obtaining a rendering control state of each model directory resource in the rendering process according to the rendering arrangement state configuration information of each model directory resource;
obtaining a plurality of rendering and loading control nodes corresponding to each model directory resource according to the rendering control state, and determining first control characteristic information and second control characteristic information of the plurality of rendering and loading control nodes, wherein the first control characteristic information comprises control characteristic information when the rendering control state of each model directory resource is in an open state, and the second control characteristic information comprises control characteristic information when the rendering control state of each model directory resource is in a closed state;
determining the position sequence of the first control feature information in the rendering loading control nodes; determining the rendering arrangement position of the first control feature information according to the position sequence; determining, according to the rendering arrangement position, the rendering branch sequence corresponding to the rendering loading control nodes of the arranged rendering thread of each model directory resource; and calculating the rendering branch characteristics corresponding to the rendering branch sequence, wherein the rendering branch sequence is determined according to the rendering branch record information generated, based on the renderable parts, for the rendering loading control nodes of the arranged rendering thread of the model directory resource during rendering, and the rendering branch record information corresponding to different renderable parts is different;
acquiring the thread sequence corresponding to the arranged rendering thread of each model directory resource; determining the mapping relation between the rendering branch sequence and the thread sequence according to the rendering branch characteristics; and determining, from the rendering branch sequence according to that mapping relation, a plurality of rendering branches of each model directory resource and the rendering sequence corresponding to each rendering branch;
screening, from the plurality of rendering branches, the rendering branches whose rendering sequence is identical to a preset rendering sequence to obtain a plurality of first rendering branches, and taking the remaining rendering branches as second rendering branches;
determining the number of rendering intervals for each model directory resource according to a first ratio between the number of the first rendering branches and the number of the second rendering branches, a second ratio between the number of the first rendering sequences and the number of the second rendering sequences in all the first rendering branches, and a third ratio between the number of the first rendering sequences and the number of the second rendering sequences in all the second rendering branches;
and loading each model file according to the rendering interval number and the rendering branch characteristics of each model directory resource in each model file to obtain a loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene.
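The three ratios feeding the rendering-interval count are plain arithmetic. The sketch below computes them under assumed data shapes; how the ratios actually combine into an interval count is not specified in the text, so the weighted-sum rule at the end is purely an illustrative assumption.

```python
# Hypothetical sketch of the three-ratio computation behind the number of
# rendering intervals. Each branch is modeled as a list of rendering
# sequences tagged "first" or "second"; this shape is assumed, not given.
def rendering_interval_count(first_branches, second_branches):
    r1 = len(first_branches) / max(len(second_branches), 1)

    firsts_in_first = sum(seq == "first" for b in first_branches for seq in b)
    seconds_in_first = sum(seq == "second" for b in first_branches for seq in b)
    r2 = firsts_in_first / max(seconds_in_first, 1)

    firsts_in_second = sum(seq == "first" for b in second_branches for seq in b)
    seconds_in_second = sum(seq == "second" for b in second_branches for seq in b)
    r3 = firsts_in_second / max(seconds_in_second, 1)

    return max(1, round(r1 + r2 + r3))  # assumed combination rule

first = [["first", "first", "second"], ["first"]]
second = [["second", "first"], ["second", "second"]]
print(rendering_interval_count(first, second))  # 4 with the sample data above
```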
In one possible design of the first aspect, the step of performing an object control operation on the target virtual reality objects includes:
acquiring first control information of an object service corresponding to each target virtual reality object and second control information of a scene service corresponding to the target virtual reality scene;
comparing the control node association sets between the first control information and the second control information, and when the control behaviors of any at least two associated control nodes in the control node association sets conflict, determining a display object adaptation strategy of the scene service for the conflicting target associated control nodes;
determining a target object adaptation region for performing display control on the target virtual reality object and a control instruction sequence aiming at each target object adaptation region according to the display object adaptation strategy;
and executing object control operation on the target virtual reality object according to the determined target object adaptation areas and the control instruction sequence aiming at each target object adaptation area.
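A minimal sketch of the conflict check between the two sets of control information follows. Representing each side as a mapping from control node to control behavior, and treating differing behaviors on a shared node as a conflict, are both assumptions made for illustration.

```python
# Hypothetical sketch: detect conflicting control behaviors between the
# object service control information and the scene service control
# information. Modeling each side as {control_node: behavior} is assumed.
def conflicting_nodes(first_control, second_control):
    shared = first_control.keys() & second_control.keys()   # associated nodes
    return {n for n in shared if first_control[n] != second_control[n]}

object_ctrl = {"grab": "enable", "teleport": "enable"}
scene_ctrl  = {"grab": "disable", "teleport": "enable"}
print(conflicting_nodes(object_ctrl, scene_ctrl))  # {'grab'}
```

Any node reported here would then be handed to the display object adaptation strategy described above.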
In a second aspect, an embodiment of the present application further provides a virtual reality scene loading apparatus, which is applied to a virtual reality device, where the virtual reality device is in communication connection with a server, and the apparatus includes:
the determining module is used for determining a plurality of virtual reality model resources to be loaded according to the loading request aiming at the target virtual reality scene;
an extraction module, configured to obtain, from the server, model files corresponding to the multiple virtual reality model resources to be loaded, respectively, and extract a rendering scene control set and a virtual object control set of each model file in a model rendering policy corresponding to the corresponding virtual reality model resource to be loaded, where the rendering scene control set includes scene rendering controls corresponding to each visual rendering unit of the model file in the model rendering policy, and the virtual object control set includes object rendering controls corresponding to each visual rendering unit of the model file in the model rendering policy;
a generating module, configured to generate a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set, where the rendering node set of the visual rendering unit includes rendering nodes used for representing that the visual rendering unit renders the corresponding scene rendering control and object rendering control;
and the loading module is used for loading each model file according to the rendering scene control set, the virtual object control set, and the rendering node set of each visual rendering unit of the model file, to obtain the loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene, and for executing an object control operation on each target virtual reality object after the loading of the target virtual reality scene and of each target virtual reality object in it is completed.
In a third aspect, an embodiment of the present application further provides a virtual reality system, where the virtual reality system includes a virtual reality device and a server in communication connection with the virtual reality device;
the virtual reality equipment is used for determining a plurality of virtual reality model resources to be loaded according to a loading request aiming at a target virtual reality scene;
the virtual reality equipment is used for acquiring model files corresponding to the virtual reality model resources to be loaded from the server respectively, and extracting a rendering scene control set and a virtual object control set of each model file in a model rendering strategy corresponding to the corresponding virtual reality model resources to be loaded, wherein the rendering scene control set comprises scene rendering controls corresponding to each visual rendering unit of the model files in the model rendering strategy, and the virtual object control set comprises object rendering controls corresponding to each visual rendering unit of the model files in the model rendering strategy;
the virtual reality equipment is used for generating a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set, wherein the rendering node set of the visual rendering unit comprises rendering nodes used for representing that the visual rendering unit renders corresponding scene rendering controls and object rendering controls;
the virtual reality device is used for loading each model file according to the rendering scene control set, the virtual object control set, and the rendering node set of each visual rendering unit of the model file, to obtain the loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene, and for executing an object control operation on each target virtual reality object after the loading of the target virtual reality scene and of each target virtual reality object in it is completed.
In a fourth aspect, an embodiment of the present application further provides a virtual reality device. The virtual reality device includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured for communication connection with at least one other device, the machine-readable storage medium is configured to store programs, instructions, or code, and the processor is configured to execute the programs, instructions, or code in the machine-readable storage medium, so as to perform the virtual reality scene loading method in the first aspect or any one of the possible designs of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, the computer is caused to perform the virtual reality scene loading method in the first aspect or any one of the possible designs of the first aspect.
Based on any one of the above aspects, the rendering scene control set and the virtual object control set of each model file in the model rendering strategy corresponding to the corresponding virtual reality model resource to be loaded are extracted separately, so that the application environment scene and the virtual reality objects are loaded and rendered independently using the separately generated rendering controls. A complete rendering effect can be achieved without considering the interaction calls between the virtual reality objects and the application environment scene, which reduces the instantaneous amount of data loaded and makes it convenient to perform partial extension and customization operations on the virtual reality objects within the application environment scene.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of a virtual reality system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a virtual reality scene loading method provided in an embodiment of the present application;
fig. 3 is a functional module schematic diagram of a virtual reality scene loading device according to an embodiment of the present application;
fig. 4 is a block diagram schematically illustrating a structure of a virtual reality device for implementing the virtual reality scene loading method according to the embodiment of the present application.
Detailed Description
The present application will now be described in detail with reference to the drawings, and the specific operations in the method embodiments may also be applied to the apparatus embodiments or the system embodiments.
Fig. 1 is an interaction diagram of a virtual reality system 10 according to an embodiment of the present application. The virtual reality system 10 may include a server 200 and a virtual reality device 100 communicatively connected to the server 200, and a processor executing instruction operations may be included in the virtual reality device 100. The virtual reality system 10 shown in fig. 1 is only one possible example, and in other possible embodiments, the virtual reality system 10 may include only a portion of the components shown in fig. 1 or may include other components.
In some embodiments, the server 200 may be a single server or a group of servers. The set of operating servers may be centralized or distributed (e.g., server 200 may be a distributed system). In some embodiments, the server 200 may be local or remote to the virtual reality device 100. For example, the server 200 may access information stored in the virtual reality device 100 and a database, or any combination thereof, via a network. As another example, the server 200 may be directly connected to at least one of the virtual reality device 100 and a database to access information and/or data stored therein. In some embodiments, the server 200 may be implemented on a cloud platform; by way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud (community cloud), a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof.
In some embodiments, the server 200 may include a processor. The processor may process information and/or data related to the service request to perform one or more of the functions described herein. A processor may include one or more processing cores (e.g., a single-core processor or a multi-core processor). Merely by way of example, a processor may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physical Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
The network may be used for the exchange of information and/or data. In some embodiments, one or more components in the virtual reality system 10 (e.g., the server 200, the virtual reality device 100, and the database) may send information and/or data to other components. In some embodiments, the network may be any type of wired or wireless network, or a combination thereof. Merely by way of example, the network may include a wired network, a wireless network, a fiber optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network may include one or more network access points. For example, the network may include wired or wireless network access points, such as base stations and/or network switching nodes, through which one or more components of the virtual reality system 10 may connect to the network to exchange data and/or information.
The aforementioned database may store data and/or instructions. In some embodiments, the database may store data assigned to the virtual reality device 100. In some embodiments, the database may store data and/or instructions for the exemplary methods described herein. In some embodiments, the database may include mass storage, removable storage, volatile read-write memory, or Read-Only Memory (ROM), among others, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid state drives, and the like; removable memory may include flash drives, floppy disks, optical disks, memory cards, zip disks, tapes, and the like; volatile read-write memory may include Random Access Memory (RAM); the RAM may include Dynamic RAM (DRAM), Double Data Rate Synchronous Dynamic RAM (DDR SDRAM), Static RAM (SRAM), Thyristor-Based Random Access Memory (T-RAM), Zero-capacitor RAM (Z-RAM), and the like. By way of example, ROMs may include Mask ROMs (MROMs), Programmable ROMs (PROMs), Erasable Programmable ROMs (EPROMs), Electrically Erasable Programmable ROMs (EEPROMs), Compact Disc ROMs (CD-ROMs), Digital Versatile Disc ROMs (DVD-ROMs), and the like. In some embodiments, the database may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, the database may be connected to a network to communicate with one or more components in the virtual reality system 10 (e.g., the server 200, the virtual reality device 100, etc.). One or more components in the virtual reality system 10 may access data or instructions stored in the database via the network. In some embodiments, the database may be directly connected to one or more components in the virtual reality system 10 (e.g., the server 200, the virtual reality device 100, etc.); or, in some embodiments, the database may be part of the server 200.
To solve the technical problem in the foregoing background art, fig. 2 is a schematic flowchart of a virtual reality scene loading method provided in an embodiment of the present application, where the virtual reality scene loading method provided in this embodiment may be executed by the virtual reality device 100 shown in fig. 1, and the virtual reality scene loading method is described in detail below.
Step S110, according to the loading request aiming at the target virtual reality scene, a plurality of virtual reality model resources to be loaded are determined.
Step S120, obtaining model files corresponding to a plurality of virtual reality model resources to be loaded from the server 200, and extracting a rendering scene control set and a virtual object control set of each model file in a model rendering policy corresponding to the corresponding virtual reality model resource to be loaded.
And step S130, generating a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set.
Step S140, loading each model file according to the rendering scene control set, the virtual object control set, and the rendering node set of each visual rendering unit of the model file, to obtain the loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene, and, after the loading of the target virtual reality scene and of each target virtual reality object in it is completed, executing an object control operation on each target virtual reality object.
In this embodiment, the loading request for the target virtual reality scene may be triggered by the user operating the virtual reality device 100, or may be automatically triggered, which is not specifically limited herein.
In this embodiment, the rendering scene control set may include the scene rendering controls corresponding to each visual rendering unit of the model file in the model rendering strategy; the virtual object control set may include the object rendering controls corresponding to each visual rendering unit of the model file in the model rendering strategy; and the rendering node set of a visual rendering unit may include rendering nodes for representing that the visual rendering unit renders the corresponding scene rendering controls and object rendering controls. In this way, the rendering scene control set and the virtual object control set of each model file in the model rendering strategy corresponding to the corresponding virtual reality model resource to be loaded are extracted separately, so that the application environment scene and the virtual reality objects are loaded and rendered independently using the separately generated rendering controls. A complete rendering effect can be achieved without considering the interaction calls between the virtual reality objects and the application environment scene, which reduces the instantaneous amount of data loaded and makes it convenient to perform partial extension and customization operations on the virtual reality objects within the application environment scene.
In a possible design, for step S120, in order to effectively determine the scene rendering control corresponding to each visual rendering unit of the model file in the model rendering policy and the object rendering control corresponding to each visual rendering unit of the model file in the model rendering policy, this embodiment may extract a plurality of scene cameras of each model file in the model rendering policy corresponding to the virtual reality model resource to be loaded, and input the model file into the model rendering policy according to the plurality of scene cameras to perform the graphics interface calling processing, so as to obtain the graphics interface calling parameters of a plurality of different rendering categories.
Then, for each scene camera of the plurality of scene cameras, a graphical interface calling parameter corresponding to the rendering category of the scene camera is selected from the graphical interface calling parameters of the plurality of different rendering categories; whether the scene camera matches a preset graphical interface calling template is judged by using the graphical interface calling information corresponding to the selected parameter; and the scene camera is reserved when it matches the preset graphical interface calling template, so as to obtain a plurality of reserved scene cameras.
Then, the model file is input into the model rendering strategy through the plurality of reserved scene cameras for rendering segmentation, and the corresponding rendering segmentation virtual object control set and the rendering segmentation process information of the target object rendering controls in that set are obtained. The rendering segmentation area of each target object rendering control is determined according to its rendering segmentation process information, and the rendering segmentation position of each target object rendering control within its rendering segmentation area is acquired. The target object rendering controls are then recombined according to the position confidence of each rendering segmentation position to obtain the corresponding candidate visual rendering units. The scene rendering control corresponding to each candidate visual rendering unit of the model file is acquired; object rendering control information between the candidate visual rendering unit and the object rendering control is established according to the scene rendering control; and each candidate visual rendering unit is processed with the object rendering control as a reference, so as to determine the rendering segmentation process information corresponding to the object rendering control information of each candidate visual rendering unit.
Therefore, each visual rendering unit in each candidate visual rendering unit can be determined according to the rendering segmentation process information corresponding to the object rendering control information of each candidate visual rendering unit, so as to obtain the scene rendering control corresponding to each visual rendering unit of the model file in the model rendering strategy and the object rendering control corresponding to each visual rendering unit of the model file in the model rendering strategy.
In a possible design, for step S130, in order to effectively determine a rendering node set of each visual rendering unit, in this embodiment, rendering thread information of a model file for each visual rendering unit may be determined according to a rendering scene control set and a virtual object control set, rendering thread information of the model file for each visual rendering unit is converted into a rendering thread queue, then a corresponding rendering thread topological graph is constructed according to the rendering thread queue, and the constructed rendering thread topological graph is analyzed, so as to obtain a plurality of rendering thread topological clusters.
Next, an average value of the number of rendering threads in each rendering thread topology cluster may be obtained, a difference value between the number of rendering threads in the rendering thread topology cluster and the average value is calculated to obtain a rendering thread number determination value of the rendering thread topology cluster, and a plurality of rendering thread topology clusters associated with the rendering thread number determination value are classified to generate a corresponding classified rendering thread topology cluster map composed of classification combinations of the rendering thread topology clusters of the plurality of classifications.
Then, the rendering thread topological cluster with the largest number of rendering threads and the rendering thread topological cluster with the smallest number of rendering threads in the classified rendering thread topological cluster map are selected as a first rendering thread topological cluster and a second rendering thread topological cluster, respectively. For each rendering thread topological cluster in the map, the difference between its rendering thread number determination value and those of the first and second rendering thread topological clusters is calculated. Each rendering thread topological cluster whose determination value difference from the first cluster is less than or equal to its determination value difference from the second cluster is allocated to a first rendering thread topological cluster sequence, and each rendering thread topological cluster whose determination value difference from the first cluster is greater than its determination value difference from the second cluster is allocated to a second rendering thread topological cluster sequence.
On this basis, the average value of the rendering thread number determination values of all rendering thread topological clusters in the first rendering thread topological cluster sequence can be calculated to obtain an updated first rendering thread topological cluster whose region is that average value, and likewise for the second rendering thread topological cluster sequence to obtain an updated second rendering thread topological cluster. The corresponding plurality of third rendering thread topological clusters and plurality of fourth rendering thread topological clusters are then determined according to the coincidence degree of the first and second rendering thread topological clusters with each rendering thread topological cluster in the classified rendering thread topological cluster map.
For example, the rendering thread topological clusters in the classified rendering thread topological cluster map whose coincidence degree with the first rendering thread topological cluster or with the second rendering thread topological cluster is greater than a set coincidence degree may be determined as the corresponding plurality of third rendering thread topological clusters and plurality of fourth rendering thread topological clusters, respectively.
Then, according to the plurality of third rendering thread topological clusters and the plurality of fourth rendering thread topological clusters, rendering thread parameter information corresponding to the rendering thread information is determined, and the rendering thread parameter information is analyzed to obtain rendering thread process control information in the rendering thread parameter information, so that the process control range of the rendering thread process control information is determined as the process control range corresponding to the visual rendering unit, and the rendering node set of each visual rendering unit is determined according to the process control range corresponding to the visual rendering unit and the rendering thread process control information.
As an example, the rendering node set of each visual rendering unit may be determined according to the rendering thread process control nodes corresponding to the rendering thread process control information within the process control range corresponding to the visual rendering unit.
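Read literally, this example amounts to collecting, per visual rendering unit, the rendering thread process control nodes that fall inside the unit's process control range. The snippet below illustrates that reading under assumed data shapes (numeric node positions and a numeric range); none of it is taken from the patent.

```python
# Hypothetical sketch: a rendering node set is the set of thread process
# control nodes falling within the unit's process control range. Numeric
# positions and ranges are assumptions made for illustration.
def render_node_set(control_nodes, control_range):
    lo, hi = control_range
    return [n for n in control_nodes if lo <= n["pos"] <= hi]

nodes = [{"id": "n1", "pos": 2}, {"id": "n2", "pos": 7}, {"id": "n3", "pos": 4}]
print([n["id"] for n in render_node_set(nodes, (1, 5))])  # ['n1', 'n3']
```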
In a possible design, for step S140, in this embodiment, loading frame information of each model file may be determined according to the rendering scene control set and the virtual object control set of each model file, and a loading control set corresponding to each model file may be determined according to the loading frame information, where the loading control set includes a scene rendering control and an object rendering control, and then a rendering thread parameter of the loading control set corresponding to each model file is obtained from a rendering node set of each visual rendering unit, and the rendering thread parameter is used to represent arrangement information of rendering threads of the loading control set.
On this basis, a mapping relation between the loading control set and the arranged rendering threads can be established according to the rendering thread parameters, and a plurality of rendering states of each model directory resource in each model file are determined according to the established mapping relation. For each rendering state, the matching relation between the state identifications of the corresponding rendering states of the model directory resources of other model files matched with the model directory resource and the state identifications of the corresponding rendering states of a preset number of model directory resources following the model directory resource is acquired, and the rendering arrangement state configuration information of each model directory resource in that rendering state is acquired according to the matching relation, so as to obtain the rendering arrangement state configuration information of each model directory resource.
Then, the rendering control state of each model directory resource in the rendering process can be obtained according to the rendering arrangement state configuration information of each model directory resource, and a plurality of rendering loading control nodes corresponding to each model directory resource are obtained according to the rendering control state. For example, the rendering loading control nodes whose rendering control state during rendering is a preset control state may be determined as the plurality of rendering loading control nodes corresponding to each model directory resource.
On this basis, the first control characteristic information and the second control characteristic information of the plurality of rendering load control nodes may be further determined. It should be noted that the first control characteristic information includes control characteristic information when the rendering control state of each model directory resource is in the on state, and the second control characteristic information includes control characteristic information when the rendering control state of each model directory resource is in the off state.
Then, the position sequence of the first control feature information in the rendering loading control nodes can be determined, and the rendering arrangement position of the first control feature information is determined according to the position sequence. The rendering branch sequence corresponding to the rendering loading control nodes of the arranged rendering thread of each model directory resource is determined according to the rendering arrangement position, and the rendering branch characteristics corresponding to the rendering branch sequence are calculated. The rendering branch sequence is determined according to the rendering branch record information generated, based on the renderable parts, for the rendering loading control nodes of the arranged rendering thread of the model directory resource during rendering, and the rendering branch record information corresponding to different renderable parts is different.
Then, the thread sequence corresponding to the arranged rendering threads of each model directory resource can be obtained, and the mapping relation between the rendering branch sequence and the thread sequence is determined according to the rendering branch characteristics. According to that mapping relation, a plurality of rendering branches of each model directory resource and the rendering sequence corresponding to each rendering branch are determined from the rendering branch sequence. The rendering branches identical to a preset rendering sequence are then screened from these rendering branches to obtain a plurality of first rendering branches, with the remaining rendering branches serving as second rendering branches. The number of rendering intervals for each model directory resource can then be determined according to a first ratio between the number of first rendering branches and the number of second rendering branches, a second ratio between the number of first rendering sequences and the number of second rendering sequences in all the first rendering branches, and a third ratio between the number of first rendering sequences and the number of second rendering sequences in all the second rendering branches. Finally, each model file is loaded according to the number of rendering intervals and the rendering branch characteristics of each model directory resource in the model file, to obtain the loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene.
In a possible design, still referring to step S140, in order to avoid the situation in which the virtual reality object cannot be effectively adapted to the application environment scene because the control manners of the application environment scene and the virtual reality object are not uniform, and to improve the rendering effect of the virtual reality object and its adaptation to the application environment scene, this embodiment may further obtain the first control information of the object service corresponding to each target virtual reality object and the second control information of the scene service corresponding to the target virtual reality scene. The control node association sets between the first control information and the second control information are then compared, and when the control behaviors of any at least two associated control nodes in the control node association sets conflict, a display object adaptation strategy of the scene service for the conflicting target associated control nodes is determined. The target object adaptation areas for performing display control on the target virtual reality object, and the control instruction sequence for each target object adaptation area, can then be determined according to the display object adaptation strategy, and the object control operation is executed on the target virtual reality object according to the determined target object adaptation areas and the control instruction sequence for each target object adaptation area.
It should be noted that the first control information and the second control information are respectively used to represent the control instructions of each control link, and may be configured by the provider of the virtual reality model. In addition, a target object adaptation area can be understood as an area used for performing adaptation control on the target virtual reality object during a specific presentation process.
In a possible design, in order to effectively determine the above display object adaptation strategy and improve the rendering effect of the virtual reality object and its adaptation to the application environment scene, this embodiment may obtain, from the scene service, the current scene space corresponding to the conflicting target associated control nodes; then calculate, according to an object adaptation matrix, a first scene linkage space in which the current scene space is located; and perform a simulated update on the range of the first scene linkage space to obtain a second scene linkage space in which the current scene space is located, so as to use the second scene linkage space as the initial scene linkage space of the next scene space.
Then, with the next scene space taken as the current scene space, the object adaptation matrix is updated to obtain an updated object adaptation matrix, and the initial scene linkage space corresponding to the current scene space is updated in linkage according to the updated object adaptation matrix to obtain the initial scene linkage space corresponding to the next scene space, until all scene objects in the scene space have undergone simulated linkage, so as to obtain a simulated linkage result.
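A minimal sketch of this iteration follows, assuming the scene spaces are vectors, the object adaptation matrix acts on them by matrix-vector product, and the matrix update rule is supplied by the caller (all three are assumptions made only for illustration).

```python
import numpy as np

def simulate_linkage(scene_spaces, adaptation_matrix, update_matrix):
    """Iterate the scheme above: each step derives the next initial scene
    linkage space from the current scene space and the updated object
    adaptation matrix, yielding one linkage space per scene space."""
    linkage_space = adaptation_matrix @ scene_spaces[0]  # first scene linkage space
    results = [linkage_space]
    for space in scene_spaces[1:]:
        adaptation_matrix = update_matrix(adaptation_matrix, space)  # update the matrix
        linkage_space = adaptation_matrix @ space                    # linkage update
        results.append(linkage_space)
    return results  # simulated linkage result

# Usage with a trivial (hypothetical) update rule that leaves the matrix unchanged:
spaces = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(simulate_linkage(spaces, np.eye(2), lambda m, s: m))
```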
On this basis, the corresponding dynamic adaptation function can be calculated according to the initial simulated linkage parameters, the number of linkage times of each scene object in the scene space, the total number of linkage times of all scene objects, and the regional configuration parameters of the initial scene linkage space.
As a possible example, this embodiment may obtain a plurality of simulated linkage spatial coordinates according to the initial simulated linkage parameters, and obtain the linkage coordinate value of each of the plurality of simulated linkage spatial coordinates. The simulated linkage set header information of each simulated linkage spatial coordinate is then obtained according to the linkage coordinate value of each simulated linkage spatial coordinate and the simulated linkage interval of each simulated linkage spatial coordinate before simulated linkage.
It should be noted that the simulated linkage set header information includes the number of linkage times between the simulated linkage interval and each corresponding scene object, as well as the total number of linkage times.
Then, an initial value of the simulated linkage interval of each simulated linkage spatial coordinate is calculated according to the simulated linkage type and the simulated linkage interval of each simulated linkage spatial coordinate. For example, each simulated linkage type may correspond to an interval coefficient; on this basis, the simulated linkage interval of each simulated linkage spatial coordinate may be multiplied by the interval coefficient to obtain the initial value of the simulated linkage interval of that coordinate.
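The multiplication in this example can be stated in two lines; the coefficient table below is hypothetical.

```python
# Hypothetical interval coefficients keyed by simulated linkage type.
INTERVAL_COEFFICIENTS = {"rotation": 0.5, "translation": 1.0, "scale": 2.0}

def initial_interval(linkage_type, linkage_interval):
    # Initial value = simulated linkage interval x type-specific coefficient.
    return linkage_interval * INTERVAL_COEFFICIENTS[linkage_type]

print(initial_interval("rotation", 8.0))  # 4.0
```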
Then, according to the initial value of the simulated linkage interval of each simulated linkage spatial coordinate and the corresponding number of linkage times and total number of linkage times of each scene object, a simulated linkage information table is queried to obtain coordinate offset adaptation parameters for the plurality of simulated linkage spatial coordinates. Parameter fusion information between these coordinate offset adaptation parameters and the regional configuration parameters of the initial scene linkage space is then determined to obtain a plurality of pieces of parameter fusion information. The simulated linkage results and corresponding simulated linkage control parameters of the plurality of pieces of parameter fusion information are calculated, and the simulated linkage control parameters are processed according to the simulated linkage process node sequence in the simulated linkage results to obtain a plurality of simulated linkage control parameter sets.
Then, the simulated linkage adaptation processes in the plurality of simulated linkage control parameter sets are extracted in sequence, the matching targets in the plurality of simulated linkage adaptation processes are taken as simulated linkage targets, and a simulated linkage set corresponding to each simulated linkage target is generated in sequence according to the simulated linkage adaptation processes, so that the linkage amplitude between the matching targets in the simulated linkage adaptation processes can be matched with each simulated linkage set respectively. It should be noted that the linkage amplitude corresponds to the absolute value of the difference between the sequence maximum and the sequence minimum of the simulated linkage set.
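The linkage amplitude defined in the preceding note is directly computable, as in this one-function sketch:

```python
def linkage_amplitude(linkage_set):
    # Amplitude = |sequence maximum - sequence minimum| of the simulated linkage set.
    return abs(max(linkage_set) - min(linkage_set))

print(linkage_amplitude([0.2, 1.5, -0.3, 0.9]))  # 1.8
```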
Then, a corresponding simulated linkage adaptation node can be set for each simulated linkage set according to the linkage amplitude matched with that set, the simulated linkage sets provided with simulated linkage adaptation nodes are associated and configured according to the simulated linkage adaptation process, and the associated and configured simulated linkage sets are applied to the corresponding simulated linkage controls according to the types of their corresponding simulated linkage control parameter sets to obtain target simulated linkage controls. The dynamic adaptation functions of the target simulated linkage controls are then combined to obtain the corresponding dynamic adaptation function.
On this basis, the dynamic adaptation function, the simulated linkage result, and the scene parameters of the scene space can be output as the display object adaptation strategy of the control node association set.
For example, function substitution may be performed on each simulated linkage node in the simulated linkage result according to the dynamic adaptation function, and the simulated linkage adaptation coordinate point of each simulated linkage node is determined. The main linkage adaptation configuration information of each simulated linkage node is determined according to its simulated linkage adaptation coordinate point, the slave linkage adaptation configuration information corresponding to the simulated linkage node is found based on the main linkage adaptation configuration information, and the simulated linkage nodes are combined into at least one spatial configuration relationship chain according to the slave linkage adaptation configuration information.
Then, based on each spatial configuration relationship chain, a chain simulated linkage parameter representing the simulated linkage of that chain can be obtained. The linkage control information of each spatial configuration relationship chain when its simulated linkage nodes are in linkage is determined according to the chain simulated linkage parameter, and the spatial configuration relationship chains are spliced according to the linkage control logic relationship of each piece of linkage control information to obtain a spliced spatial configuration relationship chain.
Then, the corresponding splicing space point adaptation information can be extracted according to the splicing space points on the spliced spatial configuration relationship chain, the splicing space point adaptation information is grouped according to different object display labels, the adaptation adjustment information of each object display label is calculated, and a splicing space point adaptation interval is selected according to the adaptation adjustment information. When an update process for updating the display object adaptation strategy is generated in the splicing space point adaptation information according to the splicing space point adaptation interval, the adaptation mapping script corresponding to the splicing space point adaptation interval is obtained according to the update process, a mapping bit space for recording the adaptation mapping script is generated at the same time, the adaptation mapping script is mapped to the mapping bit space, and the mapping association information of the adaptation mapping script is set according to the object display labels of the splicing space point adaptation information.
In this way, whether the splicing space point adaptation information is adapted to the display object adaptation strategy can be judged according to the mapping association information. When the splicing space point adaptation information is adapted to the display object adaptation strategy, at least one update parameter for updating the display object adaptation strategy is determined according to the update process, so that the display object adaptation strategy is updated according to the at least one update parameter, wherein the display object adaptation strategy includes a control instruction corresponding to each unit area.
In a possible design, this embodiment may locate each object fluctuation range related to the target virtual reality object and determine the object control coordinate system corresponding to the target virtual reality object. A fluctuation distance segment set is then determined according to the object control coordinate system, the dense control region of the object control coordinate system is extracted, and, with a set threshold as the fluctuation interval, the centralized control range of the fluctuation distance segment set associated with the dense control region is extracted.
Here, the dense control region may be used to represent a region of the object control coordinate system in which the number of controllable coordinate points per unit coordinate cell is greater than a set number (e.g., 50).
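A sketch of one plausible extraction of such dense control regions follows, assuming the controllable coordinate points are bucketed into unit cells of the object control coordinate system; the grid bucketing itself is an assumption for illustration.

```python
from collections import Counter

def dense_control_regions(points, cell_size=1.0, min_points=50):
    """Bucket controllable coordinate points into unit cells and keep the
    cells whose point count exceeds the set number (50 in the example above)."""
    cells = Counter((int(x // cell_size), int(y // cell_size)) for x, y in points)
    return {cell for cell, count in cells.items() if count > min_points}
```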
Then, according to at least two related fluctuation distance segment coordinates in the centralized control range, a plurality of visual moving units are generated according to the visual moving directions in the visual areas corresponding to the fluctuation distance segment coordinates, the overlapping area between each visual area in the next fluctuation distance segment coordinate and each visual area in the previous fluctuation distance segment coordinate is calculated, and a corresponding visual moving direction table is obtained from the resulting overlapping areas. From the visual moving direction table, the visual moving units whose visual moving directions match and whose visual-area overlapping area is smaller than the maximum contiguous overlapping area of the object control coordinate system can then be obtained to form the coordinate space of each fluctuation distance segment.
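A sketch of the matching rule just described, with visual areas modeled as axis-aligned rectangles and a visual moving unit as a (direction, area) pair; both encodings are assumptions made only for illustration.

```python
def overlap_area(a, b):
    # Overlap of two axis-aligned rectangles given as (x0, y0, x1, y1).
    width = min(a[2], b[2]) - max(a[0], b[0])
    height = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, width) * max(0.0, height)

def matched_units(prev_units, next_units, max_contiguous_overlap):
    """Pair visual moving units whose visual moving directions match and
    whose visual-area overlap stays below the coordinate system's maximum
    contiguous overlapping area, per the selection rule above."""
    return [(p, n) for p in prev_units for n in next_units
            if p[0] == n[0] and overlap_area(p[1], n[1]) < max_contiguous_overlap]
```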
On this basis, the coordinate spaces of the fluctuation distance segments can be adapted to one another to obtain an adaptation interval for each adapted fluctuation distance segment coordinate space, and the target object adaptation areas for performing display control on the target virtual reality object are determined according to the adaptation interval of each fluctuation distance segment coordinate space. The control instruction sequence for each target object adaptation area is then determined according to the control instruction of the unit area corresponding to that target object adaptation area in the display object adaptation strategy.
Fig. 3 is a schematic functional module diagram of a virtual reality scene loading apparatus 300 according to an embodiment of the present application; the virtual reality scene loading apparatus 300 may be divided into functional modules according to the foregoing method embodiments. For example, each functional module may be divided according to its corresponding function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of the modules in the present application is schematic and is merely a division by logical function; other division manners are possible in actual implementation. With each functional module divided according to its corresponding function, the virtual reality scene loading apparatus 300 shown in fig. 3 is only a schematic apparatus diagram. The virtual reality scene loading apparatus 300 may include a determining module 310, an extracting module 320, a generating module 330, and a loading control module 340; the functions of these functional modules are described in detail below.
A determining module 310, configured to determine, according to the loading request for the target virtual reality scene, a plurality of virtual reality model resources to be loaded.
The extracting module 320 is configured to obtain, from the server 200, model files corresponding to a plurality of virtual reality model resources to be loaded, and extract a rendering scene control set and a virtual object control set of each model file in a model rendering policy corresponding to the corresponding virtual reality model resource to be loaded, where the rendering scene control set includes scene rendering controls corresponding to each visual rendering unit of the model file in the model rendering policy, and the virtual object control set includes object rendering controls corresponding to each visual rendering unit of the model file in the model rendering policy.
The generating module 330 is configured to generate a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set, where the rendering node set of the visual rendering unit includes rendering nodes used for representing that the visual rendering unit renders the corresponding scene rendering control and the corresponding object rendering control.
The loading control module 340 is configured to load each model file according to the rendering scene control set, the virtual object control set, and the rendering node set of each visual rendering unit of each model file to obtain the loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene, and to perform an object control operation on each target virtual reality object after the loading of the target virtual reality scene and of each target virtual reality object in it is completed.
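For orientation, a skeleton mirroring the four modules of apparatus 300 could be arranged as follows; the class and method bodies are placeholders only and do not reproduce the patented logic.

```python
class VirtualRealitySceneLoader:
    """Hypothetical skeleton of apparatus 300: one method per functional module."""

    def determine_resources(self, load_request):
        # Determining module 310: resources to load for the target scene.
        return load_request.get("resources", [])

    def extract_control_sets(self, model_file):
        # Extracting module 320: rendering scene control set and virtual
        # object control set of a model file.
        return (model_file.get("scene_controls", []),
                model_file.get("object_controls", []))

    def generate_render_nodes(self, scene_controls, object_controls):
        # Generating module 330: rendering node set per visual rendering unit.
        return [{"scene": s, "object": o} for s, o in zip(scene_controls, object_controls)]

    def load_scene(self, model_files):
        # Loading control module 340: load every model file into the scene.
        return [self.generate_render_nodes(*self.extract_control_sets(f))
                for f in model_files]
```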
Further, fig. 4 is a schematic structural diagram of a virtual reality device 100 for executing the virtual reality scene loading method according to the embodiment of the present application. As shown in fig. 4, the virtual reality device 100 may include a network interface 110, a machine-readable storage medium 120, a processor 130, and a bus 140. There may be one or more processors 130; one processor 130 is illustrated in fig. 4 as an example. The network interface 110, the machine-readable storage medium 120, and the processor 130 may be connected by the bus 140 or in another manner; connection by the bus 140 is taken as an example in fig. 4.
The machine-readable storage medium 120 is a computer-readable storage medium that can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the virtual reality scene loading method in the embodiment of the present application (for example, the determining module 310, the extracting module 320, the generating module 330, and the loading control module 340 shown in fig. 3). The processor 130 implements the above virtual reality scene loading method by running the software programs, instructions, and modules stored in the machine-readable storage medium 120, thereby executing the various functional applications and data processing of the terminal device; details are not repeated here.
The machine-readable storage medium 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the machine-readable storage medium 120 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, without being limited to, these and any other suitable types of memory. In some examples, the machine-readable storage medium 120 may further include memory located remotely from the processor 130, which may be connected to the virtual reality device 100 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 130 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor 130 or by instructions in the form of software. The processor 130 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. The steps of the methods disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor.
The virtual reality device 100 can exchange information with other devices (e.g., the server 200) through the network interface 110. The network interface 110 may be a circuit, a bus, a transceiver, or any other apparatus that can be used to exchange information, and the processor 130 may send and receive information using it.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, virtual reality device, or data center to another website, computer, virtual reality device, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a virtual reality device or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (10)

1. A virtual reality scene loading method is applied to virtual reality equipment, the virtual reality equipment is in communication connection with a server, and the method comprises the following steps:
determining a plurality of virtual reality model resources to be loaded according to a loading request aiming at a target virtual reality scene;
obtaining model files corresponding to the virtual reality model resources to be loaded from the server, and extracting a rendering scene control set and a virtual object control set of each model file in a model rendering strategy corresponding to the corresponding virtual reality model resources to be loaded, wherein the rendering scene control set comprises scene rendering controls corresponding to each visual rendering unit of the model files in the model rendering strategy, and the virtual object control set comprises object rendering controls corresponding to each visual rendering unit of the model files in the model rendering strategy;
generating a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set, wherein the rendering node set of the visual rendering unit comprises rendering nodes used for representing that the visual rendering unit renders corresponding scene rendering controls and object rendering controls;
and loading each model file according to the rendering scene control set, the virtual object control set and the rendering node set of each visual rendering unit of each model file to obtain a loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene, and executing object control operation on each target virtual reality object after completing the loading of each target virtual reality object in the target virtual reality scene and the target virtual reality scene.
2. The virtual reality scene loading method according to claim 1, wherein the step of extracting the rendering scene control set and the virtual object control set of each model file in the model rendering policy corresponding to the corresponding virtual reality model resource to be loaded comprises:
extracting a plurality of scene cameras of each model file in a model rendering strategy corresponding to the corresponding virtual reality model resource to be loaded;
inputting the model file into the model rendering strategy according to the scene cameras to perform graphical interface calling processing, and obtaining a plurality of graphical interface calling parameters of different rendering categories;
selecting a graphical interface calling parameter corresponding to the rendering category for each scene camera in the plurality of scene cameras from the graphical interface calling parameters of the different rendering categories, judging whether the scene camera is matched with a preset graphical interface calling template by using the graphical interface calling information corresponding to the graphical interface calling parameter, and reserving the scene camera when the scene camera is matched with the preset graphical interface calling template to obtain a plurality of reserved scene cameras;
inputting the model file into the model rendering strategy through the plurality of reserved scene cameras for rendering and segmentation, acquiring a corresponding rendering and segmentation virtual object control set and rendering and segmentation process information of a target object rendering control in the rendering and segmentation virtual object control set, and determining a rendering and segmentation area of the target object rendering control according to the rendering and segmentation process information of the target object rendering control;
acquiring a rendering segmentation position of each target object rendering control in a rendering segmentation area of the target object rendering control, recombining the target object rendering controls according to position confidence of the rendering segmentation position of each target object rendering control to obtain a corresponding candidate visual rendering unit, acquiring a scene rendering control corresponding to the candidate visual rendering unit of the model file, establishing object rendering control information between the candidate visual rendering unit and the object rendering control according to the scene rendering control, and processing each candidate visual rendering unit by taking the object rendering control as a reference to determine rendering segmentation process information corresponding to the object rendering control information of each candidate visual rendering unit;
and determining each visual rendering unit in each candidate visual rendering unit according to the rendering segmentation process information corresponding to the object rendering control information of each candidate visual rendering unit so as to obtain the scene rendering control corresponding to each visual rendering unit of the model file in the model rendering strategy and the object rendering control corresponding to each visual rendering unit of the model file in the model rendering strategy.
3. The virtual reality scene loading method according to claim 1, wherein the step of generating a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set comprises:
determining rendering thread information of the model file for each visual rendering unit according to the rendering scene control set and the virtual object control set, and converting the rendering thread information of the model file for each visual rendering unit into a rendering thread queue;
constructing a corresponding rendering thread topological graph according to the rendering thread queue, and analyzing the constructed rendering thread topological graph to obtain a plurality of rendering thread topological clusters;
obtaining an average value of the number of rendering threads in each rendering thread topological cluster, and calculating a difference value between the number of rendering threads in the rendering thread topological cluster and the average value to obtain a rendering thread number determination value of the rendering thread topological cluster;
classifying a plurality of rendering thread topological clusters associated with the rendering thread quantity determination value to generate a corresponding classified rendering thread topological cluster map composed of classified combinations of the rendering thread topological clusters of the plurality of classifications;
selecting, from the classified rendering thread topological cluster map, the rendering thread topological cluster with the largest number of rendering threads and the rendering thread topological cluster with the smallest number of rendering threads as a first rendering thread topological cluster and a second rendering thread topological cluster respectively, calculating the rendering thread quantity determination value difference between each rendering thread topological cluster in the classified rendering thread topological cluster map and each of the first rendering thread topological cluster and the second rendering thread topological cluster, allocating the rendering thread topological clusters whose rendering thread quantity determination value difference from the first rendering thread topological cluster is less than or equal to their rendering thread quantity determination value difference from the second rendering thread topological cluster to a first rendering thread topological cluster sequence, and allocating the rendering thread topological clusters whose rendering thread quantity determination value difference from the first rendering thread topological cluster is greater than their rendering thread quantity determination value difference from the second rendering thread topological cluster to a second rendering thread topological cluster sequence;
calculating the average value of the rendering thread quantity determination values of all the rendering thread topology clusters in the first rendering thread topology cluster sequence, obtaining a first rendering thread topology cluster taking the average value of the actual rendering thread quantity determination values as a region, calculating the average value of the rendering thread quantity determination values of all the rendering thread topology clusters in the second rendering thread topology cluster sequence, and obtaining a second rendering thread topology cluster taking the average value of the actual rendering thread quantity determination values as a region;
determining a plurality of corresponding third rendering thread topological clusters and a plurality of corresponding fourth rendering thread topological clusters according to the coincidence degrees of the first rendering thread topological cluster and the second rendering thread topological cluster with each rendering thread topological cluster in the classified rendering thread topological cluster map;
according to the plurality of third rendering thread topological clusters and the plurality of fourth rendering thread topological clusters, determining rendering thread parameter information corresponding to the rendering thread information, and analyzing the rendering thread parameter information to obtain rendering thread process control information in the rendering thread parameter information;
and determining the process control range of the rendering thread process control information as the process control range corresponding to the visual rendering unit, and determining the rendering node set of each visual rendering unit according to the process control range corresponding to the visual rendering unit and the rendering thread process control information.
4. The virtual reality scene loading method according to any one of claims 1 to 3, wherein the step of loading each model file according to the rendering scene control set, the virtual object control set and the rendering node set of each visual rendering unit of each model file to obtain the loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene comprises:
determining loading frame information of each model file according to the rendering scene control set and the virtual object control set of each model file, and determining a loading control set corresponding to each model file according to the loading frame information, wherein the loading control set comprises a scene rendering control and an object rendering control;
obtaining rendering thread parameters of a loading control set corresponding to each model file from a rendering node set of each visual rendering unit, wherein the rendering thread parameters are used for representing arrangement information of rendering threads of the loading control set;
establishing a mapping relation between the loading control set and the arranged rendering threads according to the rendering thread parameters, determining a plurality of rendering states of each model directory resource in each model file according to the established mapping relation between the loading control set and the arranged rendering threads, acquiring a matching relation between state identifications of corresponding rendering states of model directory resources of other model files matched with each model directory resource and state identifications of corresponding rendering states of a preset number of model directory resources behind each model directory resource for each rendering state, and acquiring rendering arrangement state configuration information of each model directory resource at the rendering state according to the matching relation to acquire rendering arrangement state configuration information of each model directory resource;
obtaining a rendering control state of each model directory resource in the rendering process according to the rendering arrangement state configuration information of each model directory resource;
obtaining a plurality of rendering and loading control nodes corresponding to each model directory resource according to the rendering control state, and determining first control characteristic information and second control characteristic information of the plurality of rendering and loading control nodes, wherein the first control characteristic information comprises control characteristic information when the rendering control state of each model directory resource is in an open state, and the second control characteristic information comprises control characteristic information when the rendering control state of each model directory resource is in a closed state;
determining the position sequence of the first control feature information in the rendering loading control node, determining the rendering arrangement position of the first control feature information according to the position sequence, respectively determining the rendering branch sequence corresponding to the rendering loading control node of the arranged rendering thread corresponding to each model directory resource according to the rendering arrangement position, and calculating the rendering branch feature corresponding to the rendering branch sequence, wherein the rendering branch sequence is determined according to the rendering branch record information generated based on the renderable part and corresponding to the rendering loading control node of the arranged rendering thread corresponding to the model directory resource in the rendering process, and the rendering branch record information corresponding to different renderable parts is different;
acquiring a thread sequence corresponding to an arranged rendering thread corresponding to each model directory resource, determining a mapping relation between the rendering branch sequence and the thread sequence according to the rendering branch characteristics, and determining a plurality of rendering branches of each model directory resource and a rendering sequence corresponding to each rendering branch from the rendering branch sequence according to the determined mapping relation between the rendering branch sequence and the thread sequence;
screening a plurality of rendering branches with the same preset rendering sequence from the plurality of rendering branches to obtain a plurality of first rendering branches, and taking the rendering branches except the first rendering branches from the plurality of rendering branches as second rendering branches;
determining the number of rendering intervals for each model directory resource according to a first ratio between the number of the first rendering branches and the number of the second rendering branches, a second ratio between the number of the first rendering sequences and the number of the second rendering sequences in all the first rendering branches, and a third ratio between the number of the first rendering sequences and the number of the second rendering sequences in all the second rendering branches;
and loading each model file according to the rendering interval number and the rendering branch characteristics of each model directory resource in each model file to obtain a loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene.
5. The virtual reality scene loading method according to claim 1, wherein the step of performing object control operations on the respective target virtual reality objects comprises:
acquiring first control information of an object service corresponding to each target virtual reality object and second control information of a scene service corresponding to the target virtual reality scene;
comparing the control node association sets between the first control information and the second control information, and when the control behaviors of any at least two associated control nodes in the control node association sets conflict, determining a display object adaptation strategy of the scene service for the conflicting target associated control nodes;
determining a target object adaptation region for performing display control on the target virtual reality object and a control instruction sequence aiming at each target object adaptation region according to the display object adaptation strategy;
and executing object control operation on the target virtual reality object according to the determined target object adaptation areas and the control instruction sequence aiming at each target object adaptation area.
6. A virtual reality scene loading apparatus, applied to virtual reality equipment, the virtual reality equipment being in communication connection with a server, the apparatus comprising:
the determining module is used for determining a plurality of virtual reality model resources to be loaded according to the loading request aiming at the target virtual reality scene;
an extraction module, configured to obtain, from the server, model files corresponding to the multiple virtual reality model resources to be loaded, respectively, and extract a rendering scene control set and a virtual object control set of each model file in a model rendering policy corresponding to the corresponding virtual reality model resource to be loaded, where the rendering scene control set includes scene rendering controls corresponding to each visual rendering unit of the model file in the model rendering policy, and the virtual object control set includes object rendering controls corresponding to each visual rendering unit of the model file in the model rendering policy;
a generating module, configured to generate a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set, where the rendering node set of the visual rendering unit includes rendering nodes used for representing that the visual rendering unit renders the corresponding scene rendering control and object rendering control;
and the loading module is used for loading each model file according to the rendering scene control set of each model file, the virtual object control set and the rendering node set of each visual rendering unit to obtain a loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene, and executing object control operation on each target virtual reality object after the loading of each target virtual reality object in the target virtual reality scene and the target virtual reality scene is completed.
7. The virtual reality scene loading device of claim 6, wherein the extracting module is configured to extract the rendering scene control set and the virtual object control set of each model file in the model rendering policy corresponding to the corresponding virtual reality model resource to be loaded by:
extracting a plurality of scene cameras of each model file in a model rendering strategy corresponding to the corresponding virtual reality model resource to be loaded;
inputting the model file into the model rendering strategy according to the scene cameras to perform graphical interface calling processing, and obtaining a plurality of graphical interface calling parameters of different rendering categories;
selecting a graphical interface calling parameter corresponding to the rendering category for each scene camera in the plurality of scene cameras from the graphical interface calling parameters of the different rendering categories, judging whether the scene camera is matched with a preset graphical interface calling template by using the graphical interface calling information corresponding to the graphical interface calling parameter, and reserving the scene camera when the scene camera is matched with the preset graphical interface calling template to obtain a plurality of reserved scene cameras;
inputting the model file into the model rendering strategy through the plurality of reserved scene cameras for rendering and segmentation, acquiring a corresponding rendering and segmentation virtual object control set and rendering and segmentation process information of a target object rendering control in the rendering and segmentation virtual object control set, and determining a rendering and segmentation area of the target object rendering control according to the rendering and segmentation process information of the target object rendering control;
acquiring a rendering segmentation position of each target object rendering control in a rendering segmentation area of the target object rendering control, recombining the target object rendering controls according to position confidence of the rendering segmentation position of each target object rendering control to obtain a corresponding candidate visual rendering unit, acquiring a scene rendering control corresponding to the candidate visual rendering unit of the model file, establishing object rendering control information between the candidate visual rendering unit and the object rendering control according to the scene rendering control, and processing each candidate visual rendering unit by taking the object rendering control as a reference to determine rendering segmentation process information corresponding to the object rendering control information of each candidate visual rendering unit;
and determining each visual rendering unit in each candidate visual rendering unit according to the rendering segmentation process information corresponding to the object rendering control information of each candidate visual rendering unit so as to obtain the scene rendering control corresponding to each visual rendering unit of the model file in the model rendering strategy and the object rendering control corresponding to each visual rendering unit of the model file in the model rendering strategy.
8. The virtual reality scene loading device of claim 6, wherein the generating module is configured to generate the rendering node sets of the respective visual rendering units by:
determining rendering thread information of the model file for each visual rendering unit according to the rendering scene control set and the virtual object control set, and converting the rendering thread information of the model file for each visual rendering unit into a rendering thread queue;
constructing a corresponding rendering thread topological graph according to the rendering thread queue, and analyzing the constructed rendering thread topological graph to obtain a plurality of rendering thread topological clusters;
obtaining an average value of the number of rendering threads in each rendering thread topological cluster, and calculating a difference value between the number of rendering threads in the rendering thread topological cluster and the average value to obtain a rendering thread number determination value of the rendering thread topological cluster;
classifying a plurality of rendering thread topological clusters associated with the rendering thread quantity determination value to generate a corresponding classified rendering thread topological cluster map composed of classified combinations of the rendering thread topological clusters of the plurality of classifications;
selecting, from the classified rendering thread topological cluster map, the rendering thread topological cluster with the largest number of rendering threads and the rendering thread topological cluster with the smallest number of rendering threads as a first rendering thread topological cluster and a second rendering thread topological cluster respectively, calculating the rendering thread quantity determination value difference between each rendering thread topological cluster in the classified rendering thread topological cluster map and each of the first rendering thread topological cluster and the second rendering thread topological cluster, allocating the rendering thread topological clusters whose rendering thread quantity determination value difference from the first rendering thread topological cluster is less than or equal to their rendering thread quantity determination value difference from the second rendering thread topological cluster to a first rendering thread topological cluster sequence, and allocating the rendering thread topological clusters whose rendering thread quantity determination value difference from the first rendering thread topological cluster is greater than their rendering thread quantity determination value difference from the second rendering thread topological cluster to a second rendering thread topological cluster sequence;
calculating the average value of the rendering thread quantity determination values of all the rendering thread topology clusters in the first rendering thread topology cluster sequence, obtaining a first rendering thread topology cluster taking the average value of the actual rendering thread quantity determination values as a region, calculating the average value of the rendering thread quantity determination values of all the rendering thread topology clusters in the second rendering thread topology cluster sequence, and obtaining a second rendering thread topology cluster taking the average value of the actual rendering thread quantity determination values as a region;
determining a plurality of corresponding third rendering thread topological clusters and a plurality of corresponding fourth rendering thread topological clusters according to the coincidence degrees of the first rendering thread topological cluster and the second rendering thread topological cluster with each rendering thread topological cluster in the classified rendering thread topological cluster map;
according to the plurality of third rendering thread topological clusters and the plurality of fourth rendering thread topological clusters, determining rendering thread parameter information corresponding to the rendering thread information, and analyzing the rendering thread parameter information to obtain rendering thread process control information in the rendering thread parameter information;
and determining the process control range of the rendering thread process control information as the process control range corresponding to the visual rendering unit, and determining the rendering node set of each visual rendering unit according to the process control range corresponding to the visual rendering unit and the rendering thread process control information.
9. A virtual reality system is characterized by comprising virtual reality equipment and a server in communication connection with the virtual reality equipment;
the virtual reality equipment is used for determining a plurality of virtual reality model resources to be loaded according to a loading request aiming at a target virtual reality scene;
the virtual reality equipment is used for acquiring model files corresponding to the virtual reality model resources to be loaded from the server respectively, and extracting a rendering scene control set and a virtual object control set of each model file in a model rendering strategy corresponding to the corresponding virtual reality model resources to be loaded, wherein the rendering scene control set comprises scene rendering controls corresponding to each visual rendering unit of the model files in the model rendering strategy, and the virtual object control set comprises object rendering controls corresponding to each visual rendering unit of the model files in the model rendering strategy;
the virtual reality equipment is used for generating a rendering node set of each visual rendering unit according to the rendering scene control set and the virtual object control set, wherein the rendering node set of the visual rendering unit comprises rendering nodes used for representing that the visual rendering unit renders corresponding scene rendering controls and object rendering controls;
the virtual reality equipment is used for loading each model file according to the rendering scene control set, the virtual object control set and the rendering node set of each visual rendering unit of each model file to obtain a loaded target virtual reality scene and each target virtual reality object in the target virtual reality scene, and executing object control operation on each target virtual reality object after the loading of each target virtual reality object in the target virtual reality scene and the target virtual reality scene is completed.
10. A virtual reality device, comprising a processor, a machine-readable storage medium, and a network interface, wherein the machine-readable storage medium, the network interface, and the processor are connected via a bus system, the network interface is configured to be communicatively connected with at least one object fluctuation range, the machine-readable storage medium is configured to store a program, instructions, or code, and the processor is configured to execute the program, instructions, or code in the machine-readable storage medium to perform the virtual reality scene loading method according to any one of claims 1 to 5.
CN202010159574.9A 2020-03-10 2020-03-10 Virtual reality scene loading method and device, virtual reality system and equipment Active CN111352670B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011092319.3A CN112181530A (en) 2020-03-10 2020-03-10 Virtual reality scene loading method based on cloud platform and virtual reality system
CN202011092320.6A CN112181531A (en) 2020-03-10 2020-03-10 Virtual reality scene loading method and virtual reality system
CN202010159574.9A CN111352670B (en) 2020-03-10 2020-03-10 Virtual reality scene loading method and device, virtual reality system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010159574.9A CN111352670B (en) 2020-03-10 2020-03-10 Virtual reality scene loading method and device, virtual reality system and equipment

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202011092320.6A Division CN112181531A (en) 2020-03-10 2020-03-10 Virtual reality scene loading method and virtual reality system
CN202011092319.3A Division CN112181530A (en) 2020-03-10 2020-03-10 Virtual reality scene loading method based on cloud platform and virtual reality system

Publications (2)

Publication Number Publication Date
CN111352670A CN111352670A (en) 2020-06-30
CN111352670B true CN111352670B (en) 2021-01-08

Family

ID=71196192

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202011092319.3A Withdrawn CN112181530A (en) 2020-03-10 2020-03-10 Virtual reality scene loading method based on cloud platform and virtual reality system
CN202011092320.6A Withdrawn CN112181531A (en) 2020-03-10 2020-03-10 Virtual reality scene loading method and virtual reality system
CN202010159574.9A Active CN111352670B (en) 2020-03-10 2020-03-10 Virtual reality scene loading method and device, virtual reality system and equipment

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202011092319.3A Withdrawn CN112181530A (en) 2020-03-10 2020-03-10 Virtual reality scene loading method based on cloud platform and virtual reality system
CN202011092320.6A Withdrawn CN112181531A (en) 2020-03-10 2020-03-10 Virtual reality scene loading method and virtual reality system

Country Status (1)

Country Link
CN (3) CN112181530A (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529995B (en) * 2020-12-28 2023-03-31 Oppo(重庆)智能科技有限公司 Image rendering calculation method and device, storage medium and terminal
WO2023185476A1 (en) * 2022-03-29 2023-10-05 北京字跳网络技术有限公司 Object rendering method and apparatus, electronic device, storage medium, and program product
CN114870407B (en) * 2022-04-29 2023-02-14 深圳市中视典数字科技有限公司 Digital human body data acquisition system and method based on virtual reality
CN117635787A (en) * 2022-08-11 2024-03-01 华为云计算技术有限公司 Image rendering method, device and equipment
CN115795342B (en) * 2022-11-15 2024-02-06 支付宝(杭州)信息技术有限公司 Method and device for classifying business scenes, storage medium and electronic equipment
CN116681869B (en) * 2023-06-21 2023-12-19 西安交通大学城市学院 Cultural relic 3D display processing method based on virtual reality application

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069020A (en) * 2015-07-14 2015-11-18 国家信息中心 3D visualization method and system of natural resource data

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9323055B2 (en) * 2006-05-26 2016-04-26 Exelis, Inc. System and method to display maintenance and operational instructions of an apparatus using augmented reality
CN103076919B (en) * 2012-12-29 2016-04-13 广东志成冠军集团有限公司 A kind of wireless touch remote control thereof and system
CN103106680B (en) * 2013-02-16 2015-05-06 赞奇科技发展有限公司 Implementation method for three-dimensional figure render based on cloud computing framework and cloud service system
US9947139B2 (en) * 2014-06-20 2018-04-17 Sony Interactive Entertainment America Llc Method and apparatus for providing hybrid reality environment
CN106502654B (en) * 2016-09-30 2019-09-17 歌尔科技有限公司 Virtual reality scenario loading method and equipment
US10423632B2 (en) * 2017-07-19 2019-09-24 Facebook, Inc. Systems and methods for incrementally downloading augmented-reality effects
US10565802B2 (en) * 2017-08-31 2020-02-18 Disney Enterprises, Inc. Collaborative multi-modal mixed-reality system and methods leveraging reconfigurable tangible user interfaces for the production of immersive, cinematic, and interactive content
CN108319478B (en) * 2018-01-22 2020-11-10 腾讯科技(深圳)有限公司 Object attribute adjusting method and device, storage medium and electronic device
CN108479067B (en) * 2018-04-12 2019-09-20 网易(杭州)网络有限公司 The rendering method and device of game picture
CN109388245A (en) * 2018-11-06 2019-02-26 杭州炽橙文创科技有限公司 A kind of shared resource real-time interaction system of virtual reality applications
CN110060341B (en) * 2019-04-29 2023-11-17 网易(杭州)网络有限公司 Game model rendering method and device
CN110363839B (en) * 2019-06-19 2023-08-11 深圳市彬讯科技有限公司 Model rendering method, device, computer equipment and storage medium
CN110570505B (en) * 2019-09-11 2020-11-17 Tencent Technology (Shenzhen) Co., Ltd. Image rendering method, device, equipment and storage medium
CN110665219A (en) * 2019-10-14 2020-01-10 NetEase (Hangzhou) Network Co., Ltd. Operation control method and device for virtual reality game
CN110743168B (en) * 2019-10-21 2023-10-20 Tencent Technology (Shenzhen) Co., Ltd. Virtual object control method in virtual scene, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111352670A (en) 2020-06-30
CN112181530A (en) 2021-01-05
CN112181531A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN111352670B (en) Virtual reality scene loading method and device, virtual reality system and equipment
CN111104291B (en) Environment monitoring method, device and system based on Internet of things and monitoring server
CN111371856B (en) Cloud computing task scheduling method and device, cloud computing system and server
CN112867006A (en) Network communication system, network communication safety monitoring method and server
CN111209336B (en) Data distribution method and device based on block chain and server
CN111367414B (en) Virtual reality object control method and device, virtual reality system and equipment
CN111209074B (en) Browser view loading method, device, system and server
CN111105076A (en) Weather forecast data generation method and device, computer equipment and storage medium
CN111367528B (en) Compiling method and device of software development kit, software development system and server
CN111327706B (en) Remote meter reading data processing method and device, gas meter system and gas cloud platform
CN112417668B (en) Ecological protection intelligent early warning method and device and server
CN111107162B (en) Indoor positioning data processing method, device and system based on Internet of things
CN111476886A (en) Smart building three-dimensional model rendering method and building cloud server
CN111209509B (en) Information display method and device based on big data platform and big data platform
CN111340683B (en) Image data processing method, image data processing device, image processing system and server
CN112364515B (en) Ecological environment intelligent monitoring method and device and server
CN111988187B (en) Internet connection method and device of central management server
CN112380493A (en) Intelligent analysis method and device for soil component content detection and server
CN112613535A (en) Water quality detection control method, device and platform
CN111178209A (en) Nuclear magnetic resonance interaction processing method and device and nuclear magnetic resonance interaction system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building B2, Yingxiangxin City, Third City, Kunming Economic and Technological Development Zone, Yunnan Province

Applicant after: Jian Jibo

Address before: 264004, No. 1607, Building B3, 1861 Cultural and Creative Industry Park, No. 7 Tongshi South Road, Zhifu District, Yantai City, Shandong Province

Applicant before: Jian Jibo

TA01 Transfer of patent application right

Effective date of registration: 20201221

Address after: Room C418, No. 860-1, Shangshengou Village, Dongling District, Shenyang City, Liaoning Province, 110000

Applicant after: Liaoning Sunflower Education Technology Co., Ltd.

Address before: No. 650, B2, Xindong, Kunming Economic and Technological Development Zone, Yunnan Province

Applicant before: Jian Jibo

GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Virtual reality scene loading method, device, virtual reality system and equipment

Effective date of registration: 20220601

Granted publication date: 20210108

Pledgee: China Construction Bank, Shenyang Hunnan Sub-branch

Pledgor: Liaoning Sunflower Education Technology Co., Ltd.

Registration number: Y2022210000053

CP03 Change of name, title or address

Address after: No. 868-3, Shangshengou Village, Hunnan District, Shenyang City, Liaoning Province, 110000 (101, 201)

Patentee after: Liaoning Sunflower Digital Technology Co., Ltd.

Address before: Room C418, No. 860-1, Shangshengou Village, Dongling District, Shenyang City, Liaoning Province, 110000

Patentee before: Liaoning Sunflower Education Technology Co., Ltd.
