CN113852841A - Visual scene establishing method, device, equipment, medium and system - Google Patents
- Publication number
- CN113852841A (application number CN202011536104.6A)
- Authority
- CN
- China
- Prior art keywords
- scene
- data
- real
- rendering
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N21/234309—Reformatting of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
- G06F3/012—Head tracking input arrangements
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/366—Image reproducers using viewer tracking
- H04N13/398—Synchronisation thereof; Control thereof
- H04N21/440218—Reformatting of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Abstract
Embodiments of the invention disclose a method, apparatus, device, medium and system for establishing a visual scene. The method includes: acquiring, over a 5G network, real-time data of a target scene and action data captured by a target VR device; analyzing the action data and performing real-scene rendering of the target scene according to the analysis result and the real-time data; and encoding the real-scene rendering result and sending it to the target VR device so that the target scene is visually displayed. The technical solution of the embodiments addresses the high cost caused by low utilization of locally configured hardware resources in the prior art, improves the hardware resource utilization of scene rendering, makes the application scene extensible, and reduces the deployment cost of a transparent visual scene.
Description
Technical Field
Embodiments of the invention relate to the field of computer technology, and in particular to a method, apparatus, device, medium and system for establishing a visual scene.
Background
A visualized transparent factory is an industrial-internet scenario based on virtual reality (VR): a three-dimensional data model of a real factory is built, live data drives the scene, and the factory's production and operation status is rendered and presented on VR glasses, so that the interaction between VR technology and the production site makes the factory more intelligent.
At present, existing VR transparent factories mainly rely on local rendering, with data transmitted over an HDMI cable or a wireless LAN. As transparent-factory technology develops, factory scenes become ever more complex and the demands on scene rendering grow; VR additionally requires high resolution, so the requirements on computing performance, network bandwidth and data rendering keep rising, and a wireless LAN gradually becomes insufficient. Meeting these conditions with locally deployed computing resources makes the deployment increasingly large while resource utilization efficiency keeps falling.
Disclosure of Invention
Embodiments of the invention provide a method, apparatus, device, medium and system for establishing a visual scene, which improve the hardware resource utilization of scene rendering, make the application scene extensible, and reduce the deployment cost of a transparent visual scene.
In a first aspect, an embodiment of the present invention provides a method for establishing a visual scene, which is applied to a server, and the method includes:
acquiring real-time data of a target scene and motion data captured by a target VR device based on a 5G network;
analyzing the action data, and performing real-scene rendering on the target scene according to an analysis result and the real-time data;
and encoding the real-scene rendering result and sending it to the target VR device, so that the target scene is visually displayed.
Optionally, the analyzing the motion data and performing real-scene rendering on the target scene according to an analysis result and the real-time data includes:
analyzing the action data, and determining real-time data corresponding to scene rendering according to an analysis result, wherein the analysis result comprises an angle and a range of a user sight line;
matching graphics processor resources according to the data volume of real-time data required by the scene rendering;
and performing real-scene rendering on the target scene through the matched graphics processor resources.
Optionally, the target scene includes a factory, and the real-time data includes data acquired by positioning sensors, mechanical encoding sensors or cameras correspondingly arranged on staff, tools, products, production equipment and automated guided vehicles in the target scene.
Optionally, the time taken for analyzing the motion data and performing real-scene rendering on the target scene according to the analysis result and the real-time data is less than 25 milliseconds.
In a second aspect, an embodiment of the present invention further provides a scene visualization display method, which is applied to a VR terminal, and the method includes:
collecting user action data and coding the action data;
sending the encoded action data to a scene rendering server through a 5G network so that the rendering server analyzes the action data and performs real-scene rendering on the target scene according to an analysis result and real-time data of the target scene;
and receiving the real-scene rendering result data sent by the scene rendering server, and decoding and displaying it.
Optionally, the process of collecting the user action data and the process of decoding and displaying the real-scene rendering result data take less than 20 milliseconds.
In a third aspect, an embodiment of the present invention further provides a visual scene creating device, configured in a server, where the visual scene creating device includes:
the data acquisition module is used for acquiring real-time data of a target scene and action data captured by the target VR device based on a 5G network;
the data processing module is used for analyzing the action data and performing real-scene rendering on the target scene according to an analysis result and the real-time data;
and the data feedback module is used for encoding the real-scene rendering result and sending it to the target VR device, so that the target scene is visually displayed.
Optionally, the data processing module is specifically configured to:
analyzing the action data, and determining real-time data corresponding to scene rendering according to an analysis result, wherein the analysis result comprises an angle and a range of a user sight line;
matching graphics processor resources according to the data volume of real-time data required by the scene rendering;
and performing real-scene rendering on the target scene through the matched graphics processor resources.
Optionally, the target scene includes a factory, and the real-time data includes data acquired by positioning sensors, mechanical encoding sensors or cameras correspondingly arranged on staff, tools, products, production equipment and automated guided vehicles in the target scene.
Optionally, the time taken for analyzing the motion data and performing real-scene rendering on the target scene according to the analysis result and the real-time data is less than 25 milliseconds.
In a fourth aspect, an embodiment of the present invention further provides a scene visualization display apparatus, configured on a VR terminal, where the apparatus includes:
the data acquisition module is used for acquiring user action data and coding the action data;
the data sending module is used for sending the coded action data to a scene rendering server through a 5G network so that the rendering server can analyze the action data and perform real-scene rendering on the target scene according to an analysis result and the real-time data of the target scene;
and the data display module is used for receiving the real-scene rendering result data sent by the scene rendering server, and decoding and displaying it.
Optionally, the process of collecting the user action data and the process of decoding and displaying the real-scene rendering result data take less than 20 milliseconds.
In a fifth aspect, an embodiment of the present invention further provides a rendering server, where the rendering server includes:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the visual scene establishing method according to any embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention further provides a VR device, where the VR device includes:
the 5G communication module is used for establishing data connection with the rendering server;
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the scene visualization display method according to any embodiment of the present invention.
In a seventh aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a visual scene creating method or a scene visual display method as provided in any embodiment of the present invention.
In an eighth aspect, an embodiment of the present invention further provides a visualization scenario establishing system, where the system includes:
the scene data acquisition module comprises a positioning sensor and a camera and is used for acquiring dynamic data in a target scene;
a VR device for implementing the scene visualization display method provided by any embodiment of the invention;
and the rendering server is used for realizing the visual scene establishing method provided by any embodiment of the invention.
The embodiment of the invention has the following advantages or beneficial effects:
User action data captured by the VR device and real-time data of the target scene are acquired over a 5G network; the action data are analyzed, and the target scene is rendered as a real-scene view according to the analysis result and the real-time data. In other words, the angle and direction of the user's line of sight are judged, the required amount of real-time data is determined, and graphics processor resources are allocated according to that data volume for processing. Finally, the real-scene rendering result is encoded and sent to the target VR device so that the target scene is visually displayed. This solves the prior-art problems of low hardware resource utilization and high cost, improves the hardware resource utilization of scene rendering, makes the application scene extensible, and reduces the deployment cost of a transparent visual scene.
Drawings
Fig. 1 is a flowchart of a visual scene creating method according to an embodiment of the present invention;
fig. 2 is a flowchart of a scene visualization display method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a visual scene creating apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a scene visualization display apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a rendering server device according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a VR device according to a sixth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a visualization scene creating system according to an eighth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a visual scene establishing method applied to a rendering server according to a first embodiment of the present invention. The embodiment is applicable to real-time virtual-reality display of a target scene. The method may be performed by a visual scene creating apparatus configured in the rendering server and implemented in software and/or hardware on the server device.
As shown in fig. 1, the visual scene establishing method specifically includes the following steps:
and S110, acquiring real-time data of a target scene and motion data captured by the target VR device based on the 5G network.
The target scene may be any indoor or outdoor space. The indoor scenes comprise restaurants, exhibition halls, gymnasiums, shopping malls, factories, production workshops and the like, and the outdoor scenes can be scenes such as parks, squares and the like.
Within the target scene, real-time data, including the position, activity state and sound of objects or persons in the scene, can be collected by devices such as sensors and cameras and then sent to the rendering server over a 5G network, where the server performs the data processing, rather than transmitting the data over an HDMI (high-definition multimedia interface) cable or a wireless local area network and rendering it locally at the target scene.
For example, in this embodiment the target scene is a factory. Accordingly, the real-time data may be data acquired in real time by positioning sensors, mechanical encoding sensors or cameras correspondingly arranged on staff, tools, products, production equipment and automated guided vehicles in the factory. The acquisition devices send the data to the rendering server over a 5G network, and the data are processed in the cloud to establish a transparent factory, so that a user can remotely view the production status in the factory as it updates in real time. It can be understood that a corresponding number and variety of sensors can be arranged in the target scene according to the scene layout and line-of-sight requirements, to ensure that the rendered data restores the on-site production and operation status with high precision. Further, the target VR device may be any VR device connected to the rendering server that wishes to receive real-time display data of the target scene. As a head-mounted display, the target VR device can capture the user's head movements, such as the angle or direction of head rotation, from which the rendering server can determine the user's sight range and the scene data falling within that range.
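As a concrete illustration of how a server might turn head-rotation data into a sight range, the sketch below treats only the horizontal (yaw) component; the function names, the 110-degree field-of-view default and the yaw-only simplification are assumptions of this sketch, not details given by the embodiment:

```python
def sight_range(yaw_deg, fov_deg=110.0):
    """Derive the horizontal sight interval from the head yaw reported by
    the HMD and the device's field of view (hypothetical 110-degree default).
    Returns (min_yaw, max_yaw) in degrees, normalized to [-180, 180)."""
    half = fov_deg / 2.0
    lo = (yaw_deg - half + 180.0) % 360.0 - 180.0
    hi = (yaw_deg + half + 180.0) % 360.0 - 180.0
    return lo, hi

def in_sight(object_yaw_deg, sight):
    """True if an object's bearing falls inside the sight interval,
    handling the wrap-around case where the interval crosses 180 degrees."""
    lo, hi = sight
    if lo <= hi:
        return lo <= object_yaw_deg <= hi
    return object_yaw_deg >= lo or object_yaw_deg <= hi
```

The wrap-around branch matters for a user facing close to 180 degrees, where the interval's lower bound is numerically larger than its upper bound.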
And S120, analyzing the action data, and performing real-scene rendering on the target scene according to an analysis result and the real-time data.
Specifically, upon receiving the action data, the rendering server analyzes it and determines the target scene corresponding to the data from the identifier carried in the action data. It then analyzes the motion and visual field range of the user's head and matches the corresponding scene data. Graphics processor resources are allocated according to the computation required for the matched data, and the real-scene image of the target scene is rendered with the matched graphics processor resources.
Generally, the more complex the target scene and the higher the resolution requirement of the VR device, the larger the amount of data to compute, and the more graphics processor resources must be allocated to render the scene image data. This ensures that rendering completes within a bounded time, reducing latency and improving the real-time effect. Furthermore, different computing resources (graphics processor, GPU, resources) can be allocated for different scenes, making full use of the computing resources, supporting dynamic allocation among multiple users, and meeting the low-latency requirement of VR scenes. It can be understood that the GPU products used by the cloud rendering server need to support virtualization, i.e., the cloud rendering platform must provide a three-dimensional rendering engine that can be invoked by different usage scenarios.
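A minimal sketch of matching graphics processor resources to the required data volume, as described above; the pool contents, the "render unit" cost model and the best-fit policy are illustrative assumptions rather than the embodiment's actual scheduler:

```python
def required_units(data_mb, resolution_scale):
    # Illustrative cost model: rendering work grows with the matched data
    # volume and with the VR device's resolution requirement.
    return max(1, round(data_mb / 100.0 * resolution_scale))

def match_gpu(pool, units):
    """Best-fit allocation: pick the smallest GPU that still fits the job,
    keeping larger GPUs free for more complex scenes.  pool maps a GPU
    name to its free capacity in render units; returns the chosen GPU
    name (and debits its capacity) or None when nothing fits."""
    candidates = [(cap, name) for name, cap in pool.items() if cap >= units]
    if not candidates:
        return None  # no capacity left: queue the job or scale the pool out
    cap, name = min(candidates)
    pool[name] = cap - units
    return name
```

Because capacity is debited per request and returned per pool entry, the same pool can serve several users dynamically, which is the multi-user allocation behaviour the paragraph describes.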
S130, encoding the real-scene rendering result and sending it to the target VR device, so that the target scene is visually displayed.
The result data rendered by the rendering server is encoded and sent to the target VR device, which decodes the data and presents it in three dimensions, thereby visually displaying the target scene.
In one embodiment, to ensure timely scene display without affecting the user's viewing experience, the end-to-end delay of the whole display process is kept below 70 ms. This delay may be budgeted as follows: step S120 executes in under 25 ms, network transmission over the 5G link takes no more than 15 ms, data decoding on the VR device takes 10 ms, and motion capture plus display presentation take 20 ms.
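The latency budget above can be expressed as a simple runtime check; the stage names and dictionary layout are assumptions of this sketch, while the millisecond figures come from the embodiment:

```python
# Per-stage budget in milliseconds, taken from the embodiment:
# motion capture + display 20, rendering (step S120) 25,
# 5G network transport 15, decoding on the VR device 10 -- 70 ms in total.
BUDGET_MS = {"capture_display": 20, "render": 25, "network": 15, "decode": 10}

def check_latency(measured_ms):
    """Return the stages that exceeded their budget, plus whether the
    measured end-to-end total stays below the 70 ms target."""
    over = {s: t for s, t in measured_ms.items() if t > BUDGET_MS[s]}
    total_ok = sum(measured_ms.values()) < sum(BUDGET_MS.values())
    return over, total_ok
```

A deployment might log `over` per frame and fall back to a lower resolution or a bigger GPU allocation when the render stage repeatedly exceeds its 25 ms share.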
According to the technical solution of this embodiment, the user action data captured by the VR device and the real-time data of the target scene are acquired over a 5G network; the action data are analyzed, and the target scene is rendered as a real-scene view according to the analysis result and the real-time data, i.e., the angle and direction of the user's line of sight are judged, the required amount of real-time data is determined, and graphics processor resources are allocated according to that data volume for processing. Finally, the real-scene rendering result is encoded and sent to the target VR device so that the target scene is visually displayed. In this embodiment, data and resources are deployed in the cloud, where they can be uniformly managed and scheduled and data can be shared, avoiding unnecessary duplication. This solves the prior-art problems of low hardware resource utilization and high cost, improves the hardware resource utilization of scene rendering, makes the application scene extensible, and reduces the deployment cost of a transparent visual scene.
Example two
Fig. 2 is a flowchart of a scene visualization display method applied to a VR device according to a second embodiment of the present invention. The embodiment is applicable to viewing a target scene through a VR device. The method may be performed by a scene visualization display apparatus configured in the VR device and implemented in software and/or hardware on the VR device.
As shown in fig. 2, the scene visualization display method specifically includes the following steps:
s210, collecting user action data and coding the action data.
The VR device is a head-mounted display worn by the user; it connects to the rendering server and joins a specific scene. The VR device can capture the user's head movements, such as the angle or direction of head rotation, which enables the rendering server to determine the user's sight range in the specific scene and the scene data falling within that range.
After capturing the user's action data, the VR device encodes it so that, when processing the data, the rendering server can classify, verify, aggregate and retrieve the data information efficiently, improving the server's data processing speed.
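One way the client-side encoding might look; the fixed wire layout, field names and 24-byte record size are assumptions of this sketch, not a format defined by the embodiment:

```python
import struct

# Hypothetical wire format for one head-pose sample: little-endian
# sequence number (uint32), timestamp (double), then yaw/pitch/roll
# as 32-bit floats -- 24 bytes per record, cheap to send over 5G.
PACKET = struct.Struct("<Id3f")

def encode_motion(seq, ts, yaw, pitch, roll):
    """Pack one motion sample into a fixed-size record for upload."""
    return PACKET.pack(seq, ts, yaw, pitch, roll)

def decode_motion(payload):
    """Inverse of encode_motion, as the rendering server would apply it."""
    seq, ts, yaw, pitch, roll = PACKET.unpack(payload)
    return {"seq": seq, "ts": ts, "yaw": yaw, "pitch": pitch, "roll": roll}
```

A fixed binary record also gives the server the sequence number and timestamp it needs to classify, order and retrieve samples, which is the processing benefit the paragraph attributes to encoding.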
S220, sending the coded motion data to a scene rendering server through a 5G network so that the rendering server can analyze the motion data and perform real-scene rendering on the target scene according to an analysis result and the real-time data of the target scene.
The VR device communicates with the cloud computer (i.e., the rendering server) over a high-speed, low-latency 5G network and uploads the action data, and the rendering server renders the scene from the action data and the real-time data of the corresponding target scene. VR scenes (such as a transparent factory) are therefore no longer tied to a fixed location: the VR device can interact with the target scene anywhere within 5G coverage.
Furthermore, the action data need not be limited to head movements: input captured by a handle, a wearable sensor or a similar device may trigger options on the VR display interface, for example selecting to view a VR video provided by the scene rendering and display system, whose content may include introductory information about the target scene.
And S230, receiving the real-scene rendering result data sent by the scene rendering server, and decoding and displaying it.
Upon receiving the image rendering data fed back by the rendering server, the VR device decodes and displays it.
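A sketch of this decode-and-display step with stale-frame dropping; the helper names and the drop policy are assumptions of this sketch (the embodiment only specifies that received results are decoded and displayed):

```python
def latest_frame(pending):
    """From the encoded frames received since the last refresh, keep only
    the newest: in VR, presenting a stale frame adds perceived latency,
    so older frames are dropped rather than queued."""
    return pending[-1] if pending else None

def show(pending, decode, present):
    """Decode and present the newest pending frame, then clear the backlog.
    decode/present stand in for the device's codec and display calls."""
    frame = latest_frame(pending)
    if frame is not None:
        present(decode(frame))
        pending.clear()
```

Preferring the newest frame over completeness is one way a client could help hold the 70 ms end-to-end delay target described in the first embodiment.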
According to the technical solution of this embodiment, the VR device sends the captured user action data to the rendering server over a 5G network; the server analyzes the action data and renders the target scene as a real-scene view according to the analysis result and the real-time data, and the received real-scene rendering result is finally displayed visually. This removes the locational limitation on VR display of a target scene in the prior art, makes the VR scene extensible, and reduces the deployment cost of the transparent visual scene.
Example Three
Fig. 3 is a schematic structural diagram of a visual scene creating device configured in a rendering server according to a third embodiment of the present invention, which is applicable to a situation where a target scene is displayed in real time in a virtual reality manner.
As shown in fig. 3, the visualization scenario establishing apparatus includes a data acquiring module 310, a data processing module 320, and a data feedback module 330.
The data acquisition module 310 is configured to acquire real-time data of a target scene and motion data captured by a target VR device based on a 5G network; the data processing module 320 is configured to analyze the motion data and perform live-action rendering on the target scene according to an analysis result and the real-time data; and the data feedback module 330 is configured to encode the real-scene rendering result and send the encoded real-scene rendering result to the target VR device, so that the target scene is visually displayed.
In the technical solution of this embodiment, the user action data captured by the VR device and the real-time data of the target scene are acquired over a 5G network. The action data is parsed, and the target scene is rendered in live action according to the parsing result and the real-time data; that is, the angle and direction of the user's line of sight are determined, the required volume of real-time data is calculated, and graphics processor resources are allocated according to that data volume for processing. Finally, the live-action rendering result is encoded and sent to the target VR device to achieve a visual display of the target scene. This solves the problems of low hardware resource utilization and high cost in the prior art, improves the hardware resource utilization of scene rendering, makes the application scene extensible, and reduces the deployment cost of the transparent visual scene.
Optionally, the data processing module 320 is specifically configured to:
analyzing the action data, and determining real-time data corresponding to scene rendering according to an analysis result, wherein the analysis result comprises an angle and a range of a user sight line;
matching graphics processor resources according to the data volume of real-time data required by the scene rendering;
and performing real-scene rendering on the target scene through the matched graphics processor resources.
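The three steps above might be sketched as follows. The gaze model, the sensor bearings, and the proportional GPU-matching rule are illustrative assumptions, not the patent's actual allocation policy.

```python
from dataclasses import dataclass

@dataclass
class Gaze:
    angle: float   # viewing direction in degrees (from the parsed action data)
    range_: float  # half-width of the field of view in degrees

def select_visible_data(gaze: Gaze, sensors: dict) -> dict:
    """Step 1: keep only real-time data whose bearing falls inside the user's view."""
    lo, hi = gaze.angle - gaze.range_, gaze.angle + gaze.range_
    return {name: bearing for name, bearing in sensors.items() if lo <= bearing <= hi}

def match_gpu_resources(data_volume: int, units_per_item: int = 1) -> int:
    """Step 2: allocate GPU units proportionally to the required data volume."""
    return max(1, data_volume * units_per_item)

# Step 3 (rendering with the matched resources) is stubbed as a count.
sensors = {"agv_1": 10.0, "lathe_2": 45.0, "camera_3": 170.0}
visible = select_visible_data(Gaze(angle=30.0, range_=30.0), sensors)
gpus = match_gpu_resources(len(visible))
assert list(visible) == ["agv_1", "lathe_2"] and gpus == 2
```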
Optionally, the target scene includes a factory, and the real-time data includes data acquired by a positioning sensor, a mechanical coding sensor or a camera, which are correspondingly arranged on the staff, the tool, the product, the production equipment and the automatic guided vehicle in the target scene.
Optionally, the time taken for analyzing the motion data and performing real-scene rendering on the target scene according to the analysis result and the real-time data is less than 25 milliseconds.
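To make the 25-millisecond bound concrete, a server could time the parse-and-render stage against a fixed budget, as in this sketch; the rendering stage here is a trivial stand-in, not the patent's renderer.

```python
import time

RENDER_BUDGET_S = 0.025  # the 25 ms bound claimed for parse + render

def timed(stage, *args):
    """Run one pipeline stage and report its wall-clock duration."""
    start = time.perf_counter()
    result = stage(*args)
    return result, time.perf_counter() - start

def parse_and_render(action: dict) -> str:
    """Trivial stand-in for parsing the action data and rendering a frame."""
    return f"frame@yaw={action['yaw']}"

frame, elapsed = timed(parse_and_render, {"yaw": 30.0})
assert elapsed < RENDER_BUDGET_S  # trivially true for the stub stage
```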
The visual scene establishing device configured in the rendering server provided by this embodiment of the present invention can execute the visual scene establishing method applied to a rendering server provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example Four
Fig. 4 is a schematic structural diagram of a scene visualization display apparatus according to a fourth embodiment of the present invention, which is applicable to a situation where a VR device is used to view a stereoscopic real scene of a target scene.
As shown in fig. 4, the scene visualization display apparatus includes a data acquisition module 410, a data transmission module 420, and a data presentation module 430.
The data acquisition module 410 is configured to acquire user motion data and encode the motion data; a data sending module 420, configured to send the encoded motion data to a scene rendering server through a 5G network, so that the rendering server parses the motion data, and performs live-action rendering on the target scene according to a parsing result and the real-time data of the target scene; and a data display module 430, configured to receive the live-action rendering result data sent by the scene rendering server, decode and display the live-action rendering result data.
In the technical solution of this embodiment, the VR device sends the captured user action data to the rendering server over a 5G network; the server parses the action data and performs live-action rendering of the target scene according to the parsing result and the real-time data; finally, the received live-action rendering result is displayed visually. This solves the limitations of target-scene VR display in the prior art, makes the VR scene extensible, and reduces the deployment cost of the transparent visual scene.
Optionally, the process of collecting the user action data and the process of decoding and displaying the real-scene rendering result data take less than 20 milliseconds.
The scene visualization display device configured in the VR device provided in the embodiment of the present invention can execute the scene visualization display method applied to the VR device provided in any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example Five
Fig. 5 is a schematic structural diagram of a rendering server device according to a fifth embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary computer device 12 suitable for implementing embodiments of the present invention. The computer device 12 shown in FIG. 5 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention. The rendering server may also be a server cluster composed of multiple computer devices 12, used respectively for scheduling and allocating server resources, storing data, or providing connection interfaces for other service systems to implement service interaction, where service interaction includes scene display, information sharing, and the like.
As shown in FIG. 5, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The processing unit 16 executes various functional applications and performs data processing by running programs stored in the system memory 28, for example implementing the visual scene establishing method provided by the embodiments of the present invention, which includes:
acquiring real-time data of a target scene and motion data captured by a target VR device based on a 5G network;
analyzing the action data, and performing real-scene rendering on the target scene according to an analysis result and the real-time data;
and coding the real scene rendering result and then sending the real scene rendering result to the target VR equipment so as to realize visual display of the target scene.
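A toy end-to-end version of these three server-side steps is sketched below, with stubbed scene data and a stand-in renderer; all function names and the base64 result encoding are assumptions for illustration only.

```python
import json
import base64

def acquire(scene_id, action_payload):
    """Step 1: real-time scene data (stubbed) plus the VR device's action data."""
    realtime = {"scene": scene_id, "agv_positions": [[1.0, 2.0]]}
    action = json.loads(action_payload)
    return realtime, action

def render(realtime, action):
    """Step 2: live-action rendering stub; a real server would rasterize frames."""
    return f"frame[{realtime['scene']}@yaw={action['yaw']}]".encode()

def encode_and_send(frame):
    """Step 3: encode the rendering result before returning it to the VR device."""
    return base64.b64encode(frame)

realtime, action = acquire("factory-A", '{"yaw": 30.0}')
packet = encode_and_send(render(realtime, action))
assert base64.b64decode(packet).decode().startswith("frame[factory-A")
```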
Example Six
Fig. 6 is a schematic structural diagram of a VR device according to a sixth embodiment of the present invention.
As shown in fig. 6, a VR device generally includes a motion tracking module, a data processing module, a communication module, and a display module.
Specifically, the VR device may capture the user's head movement, e.g., the angle or direction of head rotation, through the motion tracking module. The captured motion data is then encoded by the data processing module so that the rendering server can perform operations such as classification, checking, aggregation and retrieval on the data during processing, improving the server's data processing speed.
The VR device exchanges data with the rendering server through the communication module, uploading the captured action data to the rendering server and receiving rendered scene data from it. The communication module supports 5G network communication. The data processing module can further process the received rendered scene data and send the processed data to the display module for presentation.
Through the cooperation of the modules, the VR device scene visualization display method in any embodiment of the invention can be realized.
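The cooperation of the four modules could be sketched as a single round trip, with each class standing in for one module; the 5G link is simulated by an echoing stub, and all class names and payloads are illustrative assumptions.

```python
import json

class MotionTracker:
    """Captures head rotation (stubbed with a fixed pose)."""
    def capture(self) -> dict:
        return {"yaw": 30.0, "pitch": -5.0}

class DataProcessor:
    """Encodes outgoing action data and decodes incoming rendered frames."""
    def encode(self, pose: dict) -> bytes:
        return json.dumps(pose).encode()
    def decode(self, packet: bytes) -> str:
        return packet.decode()

class CommModule:
    """Stands in for the 5G link: uploads action data, returns a rendered frame."""
    def exchange(self, payload: bytes) -> bytes:
        pose = json.loads(payload)
        return f"rendered@yaw={pose['yaw']}".encode()

class DisplayModule:
    """Presents the decoded frame to the user."""
    def __init__(self):
        self.shown = None
    def show(self, frame: str) -> None:
        self.shown = frame

tracker, proc, comm, display = MotionTracker(), DataProcessor(), CommModule(), DisplayModule()
display.show(proc.decode(comm.exchange(proc.encode(tracker.capture()))))
assert display.shown == "rendered@yaw=30.0"
```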
Example Seven
The seventh embodiment provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a visual scene creating method applied to a rendering server according to any embodiment of the present invention, where the method includes:
acquiring real-time data of a target scene and motion data captured by a target VR device based on a 5G network;
analyzing the action data, and performing real-scene rendering on the target scene according to an analysis result and the real-time data;
and coding the real scene rendering result and then sending the real scene rendering result to the target VR equipment so as to realize visual display of the target scene.
Alternatively, the program, when executed by a processor, implements a scene visualization display method applied to a VR device as provided in any embodiment of the present invention, the method including: collecting user action data and encoding the action data;
sending the encoded action data to a scene rendering server through a 5G network so that the rendering server analyzes the action data and performs real-scene rendering on the target scene according to an analysis result and real-time data of the target scene;
and receiving the live-action rendering result data sent by the scene rendering server, and decoding and displaying the live-action rendering result data.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Example Eight
Fig. 7 provides a schematic structural diagram of a visualization scene creating system in an eighth embodiment of the present invention, and as shown in fig. 7, the system includes: the system comprises a scene data acquisition module, VR equipment and a rendering server.
The scene data acquisition module comprises a positioning sensor and a camera and is used for acquiring dynamic data in a target scene; VR means for implementing a scene visualization display method as provided by any of the embodiments of the invention; and the rendering server is used for realizing the visual scene establishing method provided by any embodiment of the invention.
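As an illustration of the scene data acquisition module, the sketch below aggregates one snapshot of dynamic data from hypothetical positioning sensors and cameras; the structure of the readings is an assumption, since the patent does not define the data schema.

```python
import random
import time

class PositioningSensor:
    """Reports a tagged position for one tracked object in the target scene."""
    def __init__(self, tag: str):
        self.tag = tag
    def read(self) -> dict:
        return {"tag": self.tag,
                "xy": (round(random.uniform(0, 50), 2),
                       round(random.uniform(0, 50), 2))}

class Camera:
    """Yields a frame identifier standing in for a captured image."""
    def read(self) -> dict:
        return {"frame_id": int(time.time() * 1000)}

def collect_dynamic_data(sensors, cameras) -> dict:
    """Aggregate one snapshot of the target scene's dynamic data."""
    return {
        "positions": [s.read() for s in sensors],
        "frames": [c.read() for c in cameras],
    }

snapshot = collect_dynamic_data([PositioningSensor("agv_1")], [Camera()])
assert len(snapshot["positions"]) == 1 and "frame_id" in snapshot["frames"][0]
```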
In the above technical solution, data communication based on the 5G network meets the bandwidth and latency requirements of the VR scene, enabling real-time interaction, and scene data is transmitted to the cloud computer over the 5G network. The VR scene is therefore not limited to a fixed location, and the VR terminal can be used anywhere within the 5G coverage area. Meanwhile, based on cloud rendering technology, hardware resources can be fully scheduled to render different scenes, improving hardware resource utilization; cloud rendering also reduces the computing-performance requirements on the VR terminal itself, allowing a smaller VR terminal and improving the comfort of the VR experience. Moreover, with data and resources deployed in the cloud, unified management and scheduling can be implemented, data can be shared and reused, unnecessary duplicate construction is avoided, and system operation and maintenance costs are reduced.
It will be understood by those skilled in the art that the modules or steps of the invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; alternatively, they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (12)
1. A visual scene establishing method is applied to a server and is characterized by comprising the following steps:
acquiring real-time data of a target scene and motion data captured by a target VR device based on a 5G network;
analyzing the action data, and performing real-scene rendering on the target scene according to an analysis result and the real-time data;
and coding the real scene rendering result and then sending the real scene rendering result to the target VR equipment so as to realize visual display of the target scene.
2. The method of claim 1, wherein the parsing the motion data and rendering the target scene according to the parsed result and the real-time data comprises:
analyzing the action data, and determining real-time data corresponding to scene rendering according to an analysis result, wherein the analysis result comprises an angle and a range of a user sight line;
matching graphics processor resources according to the data volume of real-time data required by the scene rendering;
and performing real-scene rendering on the target scene through the matched graphics processor resources.
3. The method according to claim 1, wherein the target scene comprises a factory, and the real-time data comprises data collected by positioning sensors, mechanical coding sensors or cameras correspondingly arranged on workers, tools, products, production equipment and automatic guided vehicles in the target scene.
4. The method according to any one of claims 1 to 3, wherein the parsing of the motion data and the live-action rendering of the target scene according to the parsing result and the real-time data takes less than 25 milliseconds.
5. A scene visualization display method is applied to a VR terminal and is characterized by comprising the following steps:
collecting user action data and coding the action data;
sending the encoded action data to a scene rendering server through a 5G network so that the rendering server analyzes the action data and performs real-scene rendering on the target scene according to an analysis result and real-time data of the target scene;
and receiving the live-action rendering result data sent by the scene rendering server, and decoding and displaying the live-action rendering result data.
6. The method of claim 5, wherein the process of collecting user motion data and decoding and presenting the real scene rendering result data takes less than 20 milliseconds.
7. A visual scene creation device configured in a server, comprising:
the data acquisition module is used for acquiring real-time data of a target scene and action data captured by the target VR device based on a 5G network;
the data processing module is used for analyzing the action data and performing real-scene rendering on the target scene according to an analysis result and the real-time data;
and the data feedback module is used for coding the real scene rendering result and then sending the real scene rendering result to the target VR equipment so as to realize visual display of the target scene.
8. A scene visualization display device provided in a VR terminal, comprising:
the data acquisition module is used for acquiring user action data and coding the action data;
the data sending module is used for sending the coded action data to a scene rendering server through a 5G network so that the rendering server can analyze the action data and perform real-scene rendering on the target scene according to an analysis result and the real-time data of the target scene;
and the data display module is used for receiving the live-action rendering result data sent by the scene rendering server, decoding and displaying the live-action rendering result data.
9. A rendering server, characterized in that the rendering server comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the visualization scenario establishment method of any of claims 1-4.
10. A VR device, comprising:
the 5G communication module is used for establishing data connection with the rendering server;
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the scene visualization display method as recited in claim 5 or 6.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a visualization scene creation method as claimed in any one of claims 1 to 4, or a scene visualization display method as claimed in claim 5 or 6.
12. A visual scene creation system, comprising:
the scene data acquisition module comprises a positioning sensor and a camera and is used for acquiring dynamic data in a target scene;
a VR device for implementing the scene visualization display method according to claim 5 or 6;
rendering server for implementing the visualization scene creation method of any of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011536104.6A CN113852841A (en) | 2020-12-23 | 2020-12-23 | Visual scene establishing method, device, equipment, medium and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113852841A true CN113852841A (en) | 2021-12-28 |
Family
ID=78972137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011536104.6A Pending CN113852841A (en) | 2020-12-23 | 2020-12-23 | Visual scene establishing method, device, equipment, medium and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113852841A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114513647A * | 2022-01-04 | 2022-05-17 | 聚好看科技股份有限公司 | Method and device for transmitting data in three-dimensional virtual scene |
CN114513647B * | 2022-01-04 | 2023-08-29 | 聚好看科技股份有限公司 | Method and device for transmitting data in three-dimensional virtual scene |
CN114422819A * | 2022-01-25 | 2022-04-29 | 纵深视觉科技(南京)有限责任公司 | Video display method, device, equipment, system and medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6559846B1 (en) * | 2000-07-07 | 2003-05-06 | Microsoft Corporation | System and process for viewing panoramic video |
US20160101356A1 (en) * | 2014-01-02 | 2016-04-14 | Ubitus Inc. | System And Method For Delivering Media Over Network |
CN106127844A (en) * | 2016-06-22 | 2016-11-16 | 民政部零研究所 | Mobile phone users real-time, interactive access long-range 3D scene render exchange method |
US20170068312A1 (en) * | 2015-09-04 | 2017-03-09 | Sony Computer Entertainment Inc. | Apparatus and method for dynamic graphics rendering based on saccade detection |
CN106713889A (en) * | 2015-11-13 | 2017-05-24 | 中国电信股份有限公司 | 3D frame rendering method and system and mobile terminal |
US20170163958A1 (en) * | 2015-12-04 | 2017-06-08 | Le Holdings (Beijing) Co., Ltd. | Method and device for image rendering processing |
CN109801353A (en) * | 2019-01-16 | 2019-05-24 | 北京七鑫易维信息技术有限公司 | A kind of method of image rendering, server and terminal |
CN110322526A (en) * | 2019-07-05 | 2019-10-11 | 武汉魅客科技有限公司 | A kind of real-time three-dimensional interactive rendering method based on cloud architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |