CN117710611A - Display processing method, device, equipment and medium based on virtual reality space - Google Patents

Display processing method, device, equipment and medium based on virtual reality space

Info

Publication number
CN117710611A
CN117710611A (application CN202211088460.5A)
Authority
CN
China
Prior art keywords
virtual reality
canvas
rendering
video frame
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211088460.5A
Other languages
Chinese (zh)
Inventor
许邦存
张
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211088460.5A
Publication of CN117710611A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the disclosure relate to a display processing method, apparatus, device and medium based on a virtual reality space. The method includes: when it is detected that a first canvas is created in a virtual reality scene, creating a second canvas through a preset rendering component in the virtual reality device, where canvas display parameters of the second canvas are consistent with those of the first canvas; in response to a video stream display request in the virtual reality scene, rendering a real-time video frame on the second canvas through the preset rendering component; and synchronizing the real-time video frame to the first canvas for display, so as to realize display processing of the video stream in the virtual reality scene. In this way, the rendering component of the virtual reality device is multiplexed to render video frames in the virtual reality scene, normal display of the video stream in the virtual reality scene running on the virtual reality device is realized, and display cost is reduced while the display effect is ensured.

Description

Display processing method, device, equipment and medium based on virtual reality space
Technical Field
The disclosure relates to the technical field of virtual reality, and in particular relates to a display processing method, device, equipment and medium based on virtual reality space.
Background
With the progress of computer technology, Virtual Reality (VR) technology, as a technology for creating and experiencing a virtual world, uses computation to generate a virtual environment, realizing a fused, interactive, three-dimensional dynamic view of that environment together with simulation of physical behaviors. It immerses the user in the simulated virtual reality environment, enabling applications in virtual environments such as maps, games, videos, education, medical treatment, simulation, collaborative training, sales, assisted manufacturing, maintenance and repair.
In the related art, the virtual reality effect is achieved through a virtual reality device acting on a virtual environment: the virtual reality scene displayed on the display screen is continuously updated to follow the user's line of sight. For example, a video frame displayed on the display screen is updated (the video frame may be understood as a frame of a video stream displayed in the virtual reality scene, such as a concert frame in an online concert).
However, the rendering engine that displays the virtual reality scene is generally not consistent with the rendering logic of the underlying rendering engine of the virtual reality device (for example, their rendering languages usually differ), while functions such as video stream playback are generally developed based on the rendering logic of the virtual reality device. As a result, the scene's rendering engine cannot, based on its own rendering logic, parse the video frame rendering information and render the video frame, which affects the display effect of the video frame.
Disclosure of Invention
In order to solve the above technical problems or at least partially solve the above technical problems, the present disclosure provides a display processing method, apparatus, device, and medium based on a virtual reality space, so as to implement rendering processing of video frames based on rendering logic of a virtual reality device, and ensure a display effect of video frames corresponding to a virtual reality scene on the virtual reality device.
The embodiments of the disclosure provide a display processing method based on a virtual reality space, which comprises the following steps: when it is detected that a first canvas is created in a virtual reality scene, creating a second canvas through a preset rendering component in the virtual reality device, wherein canvas display parameters of the second canvas are consistent with those of the first canvas; in response to a video stream display request in the virtual reality scene, generating a real-time video frame by rendering on the second canvas through the preset rendering component; and synchronizing the real-time video frame to the first canvas for display, so as to realize display processing of the video stream in the virtual reality scene.
The embodiments of the disclosure also provide a display processing apparatus based on a virtual reality space, which comprises: a canvas creation module, used for creating a second canvas through a preset rendering component in the virtual reality device when it is detected that a first canvas is created in the virtual reality scene, wherein canvas display parameters of the second canvas are consistent with those of the first canvas; a rendering module, used for generating a real-time video frame by rendering on the second canvas through the preset rendering component in response to a video stream display request in the virtual reality scene; and a display processing module, used for synchronizing the real-time video frame to the first canvas for display, so as to realize display processing of the video stream in the virtual reality scene.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing the processor-executable instructions; the processor is configured to read the executable instructions from the memory and execute the instructions to implement a virtual reality space based display processing method according to an embodiment of the disclosure.
The embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program for executing the virtual reality space-based display processing method as provided by the embodiments of the present disclosure.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
According to the display processing scheme based on the virtual reality space provided by the embodiments of the disclosure, when it is detected that a first canvas is created in the virtual reality scene, a second canvas is created through a preset rendering component in the virtual reality device, where the display parameters of the second canvas are consistent with those of the first canvas; in response to a video stream display request in the virtual reality scene, a real-time video frame is rendered on the second canvas through the preset rendering component; and the real-time video frame is then synchronized to the first canvas for display, so that display processing of the video stream in the virtual reality scene is achieved. In this way, the rendering component of the virtual reality device is multiplexed to render video frames in the virtual reality scene, normal display of the video stream in the virtual reality scene running on the virtual reality device is realized, and display cost is reduced while the display effect is ensured.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic view of an application scenario of a virtual reality device according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a display processing method based on a virtual reality space according to an embodiment of the disclosure;
fig. 3 is a schematic view of a scene of a virtual reality space based display process according to an embodiment of the disclosure;
fig. 4 is a schematic structural diagram of a display processing device based on a virtual reality space according to an embodiment of the disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they are to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Some technical terms and concepts referred to herein are described below:
the virtual reality device, the terminal for realizing the virtual reality effect, may be provided in the form of glasses, a head mounted display (Head Mount Display, HMD), or a contact lens for realizing visual perception and other forms of perception, but the form of the virtual reality device is not limited to this, and may be further miniaturized or enlarged as needed.
The virtual reality device described in the embodiments of the present disclosure may include, but is not limited to, the following types:
a computer-side virtual reality (PCVR) device performs related computation of a virtual reality function and data output by using a PC side, and an external computer-side virtual reality device realizes a virtual reality effect by using data output by the PC side.
A mobile virtual reality device supports mounting a mobile terminal (such as a smartphone) in various ways (for example, in a head-mounted display provided with a dedicated card slot). Through a wired or wireless connection with the mobile terminal, the mobile terminal performs the computation related to the virtual reality function and outputs the data to the mobile virtual reality device, for example for watching a virtual reality video through an APP of the mobile terminal.
An integrated (standalone) virtual reality device has its own processor for performing the computation related to the virtual reality function, and therefore has independent virtual reality input and output capabilities; it needs no connection to a PC or a mobile terminal and offers a high degree of freedom in use.
Virtual reality objects are objects that interact in a virtual reality scene: objects that are stationary, move, and perform various actions in the scene, such as a virtual person corresponding to a user in a live-streaming scene. They are controlled by a user or by a robot program (for example, an artificial-intelligence-based robot program).
As shown in fig. 1, HMDs are relatively light, ergonomically comfortable, and provide high-resolution content with low latency. A sensor for detecting posture (such as a nine-axis sensor) is arranged in the virtual reality device and detects posture changes of the device in real time. When a user wears the virtual reality device and the posture of the user's head changes, the real-time posture of the head is transmitted to the processor, which calculates the gaze point of the user's line of sight in the virtual environment. From the gaze point, the image within the user's gaze range (i.e., the virtual field of view) is computed from the three-dimensional model of the virtual environment and displayed on the display screen, so that the experience is as if the user were watching in a real environment.
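Purely as an illustrative sketch of this pose-to-gaze step (not part of the disclosed method; the Vec3 type and the function name are assumptions), the gaze direction can be obtained by rotating the default forward vector (0, 0, -1) by the unit head-pose quaternion reported by the sensor:

```kotlin
// Illustrative sketch only: derive a gaze direction from a unit head-pose
// quaternion (w, x, y, z). Vec3 and gazeDirection are hypothetical names.
data class Vec3(val x: Float, val y: Float, val z: Float)

fun gazeDirection(w: Float, x: Float, y: Float, z: Float): Vec3 =
    // Third column of the quaternion rotation matrix applied to (0, 0, -1).
    Vec3(
        -2f * (x * z + w * y),
        -2f * (y * z - w * x),
        -(1f - 2f * (x * x + y * y))
    )
```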
In this embodiment, when a user wears the HMD device and opens a predetermined application program, for example a live video application program, the HMD device runs the corresponding virtual reality scene. The virtual reality scene may be a simulation of the real world, a semi-simulated and semi-fictitious scene, or a purely fictitious scene, and it may be two-dimensional, 2.5-dimensional, or three-dimensional; the embodiments of the present disclosure do not limit the dimensionality of the virtual reality scene. For example, the virtual reality scene may include persons, sky, land, sea, etc., where the land may include environmental elements such as deserts and cities. The user may control a virtual object to move in the virtual reality scene, and may also interactively control controls, models, presented content, persons, etc. in the scene by means of a handle device, bare-hand gestures, and the like.
In this embodiment, a virtual reality scene in a virtual reality space may be watched by wearing a virtual reality device, for example watching an online concert in the virtual reality space. For some virtual reality scenes in which video streams exist, the rendering logic of the virtual reality scene may differ from that of the preset rendering component of the virtual reality device itself: for example, the virtual reality scene is rendered based on the Unity engine, while the preset rendering component is based on Android's Open Graphics Library (OpenGL), and the related functions of the video player are usually developed on the Android-based preset rendering component. Rendering of the video frames based on the Unity engine therefore cannot be realized. For this reason, in order to realize normal display of video frames in the virtual reality scene, the display processing method based on virtual reality space provided by the present disclosure multiplexes the preset rendering component of the virtual reality device to render the video frames.
The method is described below in connection with specific examples.
Fig. 2 is a flow chart of a display processing method based on a virtual reality space according to an embodiment of the disclosure. The method may be performed by a display processing apparatus based on a virtual reality space, which may be implemented by software and/or hardware and may generally be integrated in a virtual reality device worn by a user. As shown in fig. 2, the method includes:
in step 201, when the creation and generation of the first canvas in the virtual reality scene are detected, a second canvas is created and generated through a preset rendering component in the virtual reality device, wherein the second canvas is consistent with the canvas display parameters of the first canvas.
In one embodiment of the present disclosure, a first canvas is created in the virtual reality scene in response to a canvas creation request in the scene, and content is displayed in that first canvas. In this embodiment, the virtual reality scene is built and rendered by a rendering engine corresponding to the scene; the rendering engine may be a Unity engine or the like. The rendering component corresponding to this rendering engine is different from the rendering component pre-packaged in the virtual reality device: both their rendering logic and their rendering languages differ.
For some virtual reality scenes in which a video stream is played, the rendering logic of the virtual reality scene may differ from that of the rendering component in the virtual reality device. For example, the virtual reality scene is rendered based on the Unity engine, while the virtual reality device is based on Android's Open Graphics Library (OpenGL), and the related functions of the video player are usually developed on the Android-based rendering component, so rendering of the video frames based on the Unity engine cannot be realized. Therefore, in order to realize normal display of video frames in the virtual reality scene, the embodiments of the present disclosure multiplex the rendering logic of the virtual reality device to render the video frames in the virtual reality scene.
In the embodiments of the disclosure, when it is detected that a first canvas is created in the virtual reality scene, a second canvas is created through the preset rendering component in the virtual reality device, where the canvas display parameters of the second canvas are consistent with those of the first canvas. The canvas display parameters include the canvas display size, the canvas refresh frequency, and the like; that is, the logical size of a video frame rendered on the second canvas is consistent with the logical size of a video frame corresponding to the first canvas, and so on. This guarantees that a video frame rendered on the second canvas is effectively identical to one rendered on the first canvas, so that a video frame rendered on the second canvas can be displayed on the first canvas with an exact fit.
In this embodiment, whether a first canvas is created in the virtual reality scene may be monitored, and if the first canvas is created, a second canvas corresponding to a preset rendering component may be synchronously generated at any stage of the first canvas generation, so as to facilitate the subsequent rendering of a generated video frame on the second canvas.
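As a minimal sketch of how such a second canvas might be created, assuming an Android virtual reality device whose preset rendering component exposes an off-screen canvas as a SurfaceTexture/Surface pair (the function name and parameters below are illustrative, not the disclosed implementation):

```kotlin
import android.graphics.SurfaceTexture
import android.view.Surface

// Sketch: build a "second canvas" from an OpenGL texture id, sized to match
// the first canvas so that frames rendered on it fit the first canvas 1:1.
fun createSecondCanvas(texId: Int, widthPx: Int, heightPx: Int): Pair<SurfaceTexture, Surface> {
    val texture = SurfaceTexture(texId)
    // Copy the first canvas's display size (one of the canvas display parameters).
    texture.setDefaultBufferSize(widthPx, heightPx)
    return texture to Surface(texture)
}
```

On Android, SurfaceTexture(texId) binds the off-screen canvas to a GL texture that the scene side can later sample, which is one plausible route for rendered frames to reach the first canvas.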
In step 202, in response to a video stream display request in the virtual reality scene, a real-time video frame is rendered on the second canvas through the preset rendering component.

In one embodiment of the present disclosure, in response to a video stream display request in the virtual reality scene, real-time video frames are rendered on the second canvas through the preset rendering component, and these real-time video frames are generated continuously so as to achieve frame-by-frame rendering of the video stream.
It should be noted that, in different application scenarios, the manner of rendering the real-time video frame on the second canvas through the preset rendering component differs. In some possible embodiments, in response to the video stream display request in the virtual reality scene, the video frame identifier of the real-time video frame to be displayed currently is determined, and a video frame rendering request carrying the video frame identifier is sent to a preset server, where the preset server may be understood as the cloud server of the corresponding player or the like.

The preset server responds to the video frame rendering request with the video frame rendering information corresponding to the video frame to be displayed currently. The video frame rendering information includes the element content information and display position information of each display element (content elements and background elements) contained in the video frame, the brightness information of the corresponding video frame, and the like; the corresponding video frame can be rendered based on this information. In this embodiment, the video frame rendering information fed back by the preset server according to the video frame rendering request is obtained, and the real-time video frame is generated on the second canvas by the preset rendering component according to the video frame rendering information.
In one embodiment of the disclosure, the rendering engine directly sends the video stream rendering request to the corresponding preset server. Further, because the video stream functions are essentially developed on the rendering logic of the rendering component of the virtual reality device, when the virtual reality device obtains the video frame rendering information fed back by the preset server, it calls the preset rendering component to render the real-time video frame on the second canvas according to that information.
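A hedged sketch of this step on Android follows; FrameRenderInfo and renderFrameOnSecondCanvas are hypothetical names, and the fields merely mirror the element content, position and brightness information described above rather than any actual server protocol:

```kotlin
import android.graphics.Color
import android.graphics.Paint
import android.view.Surface

// Hypothetical shape of the rendering information fed back by the preset server.
data class FrameRenderInfo(
    val frameId: Long,   // video frame identifier carried in the rendering request
    val text: String,    // stand-in for "element content information"
    val x: Float,        // display position of the element
    val y: Float,
    val brightness: Int  // 0..255 brightness of the frame
)

// Sketch: the preset rendering component draws the real-time frame on the
// second canvas (here, the Surface created in the earlier sketch).
fun renderFrameOnSecondCanvas(surface: Surface, info: FrameRenderInfo) {
    val canvas = surface.lockCanvas(null)
    try {
        val b = info.brightness.coerceIn(0, 255)
        canvas.drawColor(Color.argb(255, b, b, b)) // background from brightness info
        canvas.drawText(info.text, info.x, info.y,
            Paint().apply { color = Color.WHITE; textSize = 48f })
    } finally {
        surface.unlockCanvasAndPost(canvas) // commit: the real-time frame is ready
    }
}
```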
In step 203, the real-time video frame is synchronized to the first canvas for display, so as to realize display processing of the video stream in the virtual reality scene.
In one embodiment of the disclosure, after the real-time video frame has been rendered, it is synchronized to the first canvas for display, thereby realizing display processing of the video stream in the virtual reality scene. That is, the rendering component of the virtual reality device is multiplexed to render the video frame, which avoids the situation in which the video frame in the virtual reality scene cannot be displayed at all.
It should be noted that, in different application scenarios, the manner of synchronizing the real-time video frame to the first canvas for display differs; this is illustrated by the following examples:
in one embodiment of the present disclosure, whether the current real-time video frame is rendered is detected, for example, a rendering function corresponding to the rendering component may be detected, when it is detected that the rendering function is released or suspended, a rendering completion message of the current real-time video frame is obtained, and a video frame refresh instruction carrying the real-time video frame is sent to the rendering component, so that the rendering engine displays the real-time video frame on the first canvas in response to the video frame refresh instruction.
In some possible examples, the preset rendering component is controlled to send the video frame refresh instruction carrying the real-time video frame to the rendering engine through a preset communication link; the communication link is constructed in advance by determining a first communication interface of the preset rendering component and a second communication interface of the rendering engine and building a link between the two interfaces.
In some possible examples, in response to detecting the rendering completion message of the real-time video frame, a service rendering interface of the first canvas is invoked, and the current real-time video frame is displayed on the first canvas through that interface. Because the current real-time video frame has been rendered by the rendering component of the virtual reality device, normal display of the video frame on the device is guaranteed, which resolves the mismatch between the rendering logic of the virtual scene and that of the virtual reality device.
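One plausible shape for this completion-then-refresh hand-off, assuming the Android SurfaceTexture path from the earlier sketches (FrameSync is a hypothetical helper; how the rendering engine samples the shared texture on the first canvas is left to the engine side):

```kotlin
import android.graphics.SurfaceTexture
import java.util.concurrent.atomic.AtomicBoolean

// Sketch: detect that a real-time frame finished rendering on the second
// canvas, then latch it so the engine can refresh the first canvas.
class FrameSync(private val texture: SurfaceTexture) {
    private val frameAvailable = AtomicBoolean(false)

    init {
        // The listener may fire on an arbitrary thread; just record completion.
        texture.setOnFrameAvailableListener { frameAvailable.set(true) }
    }

    // Call from the render thread that owns the shared GL texture. Returns
    // true when a new frame was latched and the first canvas should refresh.
    fun latchLatestFrame(): Boolean =
        if (frameAvailable.compareAndSet(true, false)) {
            texture.updateTexImage()
            true
        } else {
            false
        }
}
```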
In some possible embodiments, if the virtual scene includes multiple first canvases, a rendering thread corresponding to each first canvas may be constructed in the virtual reality device; the current video frame for each first canvas is rendered by its corresponding rendering thread, and the multiple rendering threads work in parallel, flexibly meeting the requirements of the related scenes.
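A minimal sketch of this one-thread-per-canvas arrangement (class and method names are illustrative, not part of the disclosure):

```kotlin
import android.os.Handler
import android.os.HandlerThread

// Sketch: a dedicated rendering worker per first canvas; several workers
// run in parallel, one per canvas in the virtual scene.
class CanvasRenderWorker(name: String) {
    private val thread = HandlerThread(name).apply { start() }
    private val handler = Handler(thread.looper)

    fun submit(task: Runnable) { handler.post(task) }
    fun shutdown() { thread.quitSafely() }
}

// Usage sketch: one worker per canvas, each rendering its own frames.
// val workers = listOf(CanvasRenderWorker("canvas-1"), CanvasRenderWorker("canvas-2"))
```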
In the embodiment of the present disclosure, as shown in fig. 3, the rendering engine is Unity, the operating system of the virtual reality device is Android, the first canvas is canvas 1, the second canvas is canvas 2, and the current video frame is video frame a. The rendering component in the Android system obtains the video frame rendering information of video frame a pushed by the preset server, renders video frame a on the pre-built canvas 2 based on its own rendering logic, transmits the rendered video frame a to Unity's canvas 1 over the pre-built communication link, and displays video frame a in canvas 1. Thus the video frame is rendered by the Android system, which reduces the rendering cost of the video frame and ensures that the video frame can be displayed normally on the virtual reality device.
In summary, according to the display processing method based on a virtual reality space of the embodiments, when it is detected that a first canvas is created in the virtual reality scene, a second canvas whose canvas display parameters are consistent with those of the first canvas is created through a preset rendering component in the virtual reality device; in response to a video stream display request in the virtual reality scene, a real-time video frame is rendered on the second canvas through the preset rendering component; and the real-time video frame is then synchronized to the first canvas for display, so that display processing of the video stream in the virtual reality scene is achieved. In this way, the rendering component of the virtual reality device is multiplexed to render video frames in the virtual reality scene, normal display of the video stream in the virtual reality scene running on the virtual reality device is realized, and display cost is reduced while the display effect is ensured.
In order to achieve the above embodiments, the present disclosure proposes a display processing apparatus based on a virtual reality space.
Fig. 4 is a schematic structural diagram of a display processing device based on a virtual reality space according to an embodiment of the present disclosure, where the device may be implemented by software and/or hardware, and the device is applied in a rendering engine of a virtual reality scene in the virtual reality space. As shown in fig. 4, the apparatus includes: a canvas creation module 410, a rendering module 420, and a display processing module 430, wherein,
the canvas creation module 410 is configured to create and generate a second canvas through a preset rendering component in the virtual reality device when it is detected that the first canvas is created and generated in the virtual reality scene, where the second canvas is consistent with canvas display parameters of the first canvas;
a rendering module 420, configured to generate a real-time video frame by rendering on the second canvas through a preset rendering component in response to a video stream display request in the virtual reality scene;
and the display processing module 430 is used for synchronizing the real-time video frames to the first canvas for display so as to realize the display processing of the video stream in the virtual reality scene.
The display processing device based on the virtual reality space provided by the embodiment of the disclosure can execute the display processing method based on the virtual reality space provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
To achieve the above embodiments, the present disclosure also proposes a computer program product comprising a computer program/instruction which, when executed by a processor, implements the virtual reality space based display processing method in the above embodiments.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Referring now in particular to fig. 5, a schematic diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device 500 may include a processor (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various suitable actions and processes in accordance with programs stored in a Read Only Memory (ROM) 502 or loaded from a memory 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processor 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; memory 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the memory 508, or from the ROM 502. When executed by the processor 501, the computer program performs the functions defined above in the virtual reality space based display processing method of an embodiment of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: when it is detected that a first canvas is created in the virtual reality scene, create a second canvas through a preset rendering component in the virtual reality device, where the canvas display parameters of the second canvas are consistent with those of the first canvas; in response to a video stream display request in the virtual reality scene, render a real-time video frame on the second canvas through the preset rendering component; and synchronize the real-time video frame to the first canvas for display, so as to realize display processing of the video stream in the virtual reality scene. In this way, the rendering component of the virtual reality device is multiplexed to render video frames in the virtual reality scene, normal display of the video stream in the virtual reality scene running on the virtual reality device is realized, and display cost is reduced while the display effect is ensured.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. In some cases, the names of the units do not constitute a limitation of the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the features above or their equivalents without departing from the spirit of the disclosure, for example embodiments formed by substituting the features described above with technical features having similar functions disclosed in this disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. A virtual reality space-based display processing method, the method comprising the steps of:
when it is detected that a first canvas is created in a virtual reality scene, creating a second canvas through a preset rendering component in a virtual reality device, wherein canvas display parameters of the second canvas are consistent with those of the first canvas;
in response to a video stream display request in the virtual reality scene, generating a real-time video frame by rendering on the second canvas through the preset rendering component;
and synchronizing the real-time video frame to the first canvas for display, so as to realize display processing of the video stream in the virtual reality scene.
2. The method of claim 1, wherein the generating, by the preset rendering component, real-time video frames on the second canvas in response to a video stream display request in a virtual reality scene comprises:
in response to a video stream display request in the virtual reality scene, determining a video frame identifier of a real-time video frame to be displayed currently;
sending a video frame rendering request carrying the video frame identifier to a preset server, and acquiring video frame rendering information fed back by the preset server according to the video frame rendering request;
and generating the real-time video frame on the second canvas according to the video frame rendering information through the preset rendering component.
3. The method of claim 1, wherein the synchronizing the real-time video frame to the first canvas for display comprises:
and in response to detecting the rendering completion message of the real-time video frame, sending a video frame refreshing instruction carrying the real-time video frame to the rendering engine, so that the rendering engine responds to the video frame refreshing instruction to display the real-time video frame on the first canvas.
4. The method of claim 3, wherein the sending, to the rendering engine, a video frame refresh instruction carrying the real-time video frame comprises:
and controlling the preset rendering component to send a video frame refreshing instruction carrying the real-time video frame to the rendering engine through a preset communication link.
5. The method of claim 4, wherein, prior to the controlling the preset rendering component to send the video frame refresh instruction carrying the real-time video frame to the rendering engine through a preset communication link, the method further comprises:
determining a first communication interface of the preset rendering component and a second communication interface of the rendering engine;
constructing a communication link between the first communication interface and the second communication interface.
6. The method of claim 1, prior to the detecting that the first canvas is created in the virtual reality scene, further comprising:
in response to a canvas creation request in the virtual reality scene, creating the first canvas in the virtual reality scene through a rendering engine corresponding to the virtual reality scene, wherein
a rendering component corresponding to the rendering engine is different from the preset rendering component in the virtual reality device.
7. The method of any of claims 1-6, wherein the canvas display parameters comprise:
a canvas display size and a canvas refresh frequency.
8. A virtual reality space-based display processing apparatus, the apparatus comprising:
the canvas creation module is used for creating a second canvas through a preset rendering component in the virtual reality device when it is detected that a first canvas is created in the virtual reality scene, wherein canvas display parameters of the second canvas are consistent with those of the first canvas;
the rendering module is used for responding to a video stream display request in the virtual reality scene, and generating real-time video frames through rendering on the second canvas by the preset rendering component;
and the display processing module is used for synchronizing the real-time video frames to the first canvas for display so as to realize the display processing of the video stream in the virtual reality scene.
9. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the virtual reality space based display processing method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the virtual reality space based display processing method of any of the preceding claims 1-7.
CN202211088460.5A, filed 2022-09-07: Display processing method, device, equipment and medium based on virtual reality space. Status: Pending. Published as CN117710611A (en).

Priority Applications (1)

Application Number: CN202211088460.5A (CN117710611A, en). Priority/Filing Date: 2022-09-07. Title: Display processing method, device, equipment and medium based on virtual reality space.

Publications (1)

CN117710611A (published 2024-03-15)

Family

ID=90148490

Family Applications (1)

Application Number: CN202211088460.5A (pending). Priority/Filing Date: 2022-09-07. Title: Display processing method, device, equipment and medium based on virtual reality space.

Country Status (1)

Country Link
CN (1) CN117710611A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination