CN115082610A - Multi-user cooperation method based on 3D cloud rendering, cloud rendering server and cloud rendering system - Google Patents

Info

Publication number
CN115082610A
Authority
CN
China
Prior art keywords
instruction
virtual camera
cloud
cloud rendering
corresponding virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210830788.3A
Other languages
Chinese (zh)
Inventor
毛智睿
周舟
陈虹旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Smart Yunzhou Technology Co ltd
Original Assignee
Beijing Smart Yunzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Smart Yunzhou Technology Co ltd filed Critical Beijing Smart Yunzhou Technology Co ltd
Priority to CN202210830788.3A priority Critical patent/CN115082610A/en
Publication of CN115082610A publication Critical patent/CN115082610A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a multi-user cooperation method based on 3D cloud rendering, a cloud rendering server and a cloud rendering system, belonging to the technical field of cloud rendering and aimed at solving the problem in the related art that cloud rendering occupies excessive cloud server resources. In the method, the cloud rendering server and the cloud rendering system, a plurality of virtual cameras in one-to-one correspondence with a plurality of clients are configured in 3D scene data, so that each client receives the sub-scene picture acquired by its corresponding virtual camera; a control instruction sent by a client is acquired, and the content of the sub-scene picture acquired by the corresponding virtual camera is changed according to the control instruction. With this technical scheme, neither multiple 3D application instances nor data synchronization between them is needed on the cloud server, so the occupation of cloud server resources is effectively reduced.

Description

Multi-user cooperation method based on 3D cloud rendering, cloud rendering server and cloud rendering system
Technical Field
The application relates to the technical field of cloud rendering, in particular to a multi-user cooperation method based on 3D cloud rendering, a cloud rendering server and a cloud rendering system.
Background
Cloud rendering is a method of rendering a 3D application in a cloud server. A client accesses the 3D application in the cloud server and sends control instructions; the cloud server executes the corresponding rendering task according to each instruction, obtains the rendered result picture, and transmits it back to the client for display.
Fig. 1 illustrates a schematic diagram of a conventional 3D cloud rendering technique. As shown in fig. 1, a 3D application is configured in a cloud server, in which a 3D original scene and a main virtual camera are preconfigured. A client connects to the cloud server, can be operated, and generates control instructions. After a control instruction is sent to the cloud server, the 3D application controls the main virtual camera to move in the 3D original scene according to the instruction; as the main virtual camera moves, the content within its projection range changes correspondingly. While the main virtual camera moves, the 3D application renders the image within the camera's projection range and performs continuous frame extraction, compression encoding and other operations to form a video stream, which is transmitted back to the client for display.
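The single-camera pipeline just described (control instruction in, rendered and encoded video packet out) can be sketched as follows. This is an illustrative toy, not any real rendering API: every name in it (`Camera`, `render`, `encode`, `serve_one_instruction`) is invented for the sketch.

```python
# Toy sketch of the conventional cloud-rendering loop: apply the client's
# instruction to the main virtual camera, render its projection range,
# encode the frame, and return the packet sent back to the client.

class Camera:
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]

    def apply(self, instruction):
        # A "move" instruction shifts the camera; the content of the
        # camera's projection range changes along with its position.
        if instruction["type"] == "move":
            delta = instruction["delta"]
            self.position = [p + d for p, d in zip(self.position, delta)]

def render(camera):
    # Stand-in for rendering the image within the camera's projection range.
    return {"frame_of": tuple(camera.position)}

def encode(frame):
    # Stand-in for continuous frame extraction + compression encoding.
    return ("encoded", frame["frame_of"])

def serve_one_instruction(camera, instruction):
    """One pass of the loop: apply instruction, render, encode, stream back."""
    camera.apply(instruction)
    return encode(render(camera))

cam = Camera()
packet = serve_one_instruction(cam, {"type": "move", "delta": [1.0, 0.0, 0.0]})
print(packet)  # -> ('encoded', (1.0, 0.0, 0.0))
```

In the multi-client mode of fig. 2, this single loop is shared by all clients, which is exactly why one client's instruction preempts the others.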
Fig. 2 illustrates a schematic diagram of a multi-client mode 3D cloud rendering technique. As shown in fig. 2, the basic principle is similar to the conventional 3D cloud rendering technique of fig. 1, except that multiple clients are connected to the cloud server: all of them can send control instructions, and the video streams generated by the cloud server are transmitted back to each of them for display. However, there is only one 3D application, and it can receive and execute only one control instruction at a time. Once the 3D application has received and is executing a control instruction issued by one client, it is effectively preempted by that instruction, and the control instructions issued by the other clients are invalid because they cannot be executed.
FIG. 3 illustrates a schematic diagram of a multi-client collaborative mode 3D cloud rendering technique. As shown in fig. 3, the difference from the technique of fig. 2 is that the cloud server is configured with a plurality of identical 3D application instances. Each instance corresponds to one client, receives the control instructions sent by that client, and operates the preconfigured 3D original scene based on them; the cloud server then synchronizes the operation results across the instances. Although this removes the defect of the technique in fig. 2, the cloud server must run multiple 3D application instances simultaneously, which occupies substantial resources, and the required data synchronization increases the resource occupation further.
Disclosure of Invention
The application provides a multi-user cooperation method based on 3D cloud rendering, a cloud rendering server and a cloud rendering system, which can reduce the occupation of cloud server resources.
In a first aspect, the application provides a multi-user collaboration method based on 3D cloud rendering.
The multi-user cooperation method based on 3D cloud rendering specifically adopts the following technical scheme:
a multi-user cooperation method based on 3D cloud rendering is applied to a cloud server, and the cloud server is connected with a plurality of clients; the method comprises the following steps:
configuring a plurality of virtual cameras in one-to-one correspondence with a plurality of clients in 3D scene data so that the clients receive sub-scene pictures acquired by the corresponding virtual cameras;
and acquiring a control instruction sent by the client to change the content of the sub-scene picture acquired by the corresponding virtual camera according to the control instruction.
By adopting the technical scheme, multiple 3D application program examples do not need to be configured on the cloud server, and the cloud server does not need to perform data synchronization, so that the occupation of cloud server resources is effectively reduced.
Further, the control instructions include motion instructions; the motion instruction is used for controlling the corresponding virtual camera to move in the 3D scene data.
Further, the control instructions include operational instructions; the operation instruction is used for changing the 3D object in the sub-scene picture acquired by the corresponding virtual camera.
Further, the control instructions comprise view frustum adjustment instructions and/or range adjustment instructions; the view frustum adjustment instruction is used for adjusting the view frustum of the corresponding virtual camera, and the range adjustment instruction is used for adjusting the projection area range of the corresponding virtual camera.
Further, the changing the content of the sub-scene picture acquired by the corresponding virtual camera according to the control instruction includes:
for a 3D object, if one operation instruction for the 3D object has already been acquired and another operation instruction for the same 3D object is then acquired, the priorities of the two operation instructions are determined based on the pre-stored priorities of the clients, and the operation instruction with the higher priority is executed on the 3D object.
In a second aspect, the present application provides a cloud rendering server.
The cloud rendering server specifically adopts the following technical scheme:
a cloud rendering server, comprising:
the resource allocation module is used for allocating a plurality of virtual cameras which are in one-to-one correspondence with a plurality of clients in 3D scene data so that the clients receive sub-scene pictures acquired by the corresponding virtual cameras; and
and the picture determining module is used for acquiring the control instruction sent by the client so as to change the content of the sub-scene picture acquired by the corresponding virtual camera according to the control instruction.
Further, the picture determination module is further configured to: the control instructions comprise motion instructions; the motion instruction is used for controlling the corresponding virtual camera to move in the 3D scene data.
Further, the picture determination module is further configured to: the control instructions comprise operation instructions; the operation instruction is used for changing the 3D object in the sub-scene picture acquired by the corresponding virtual camera.
Further, the picture determination module is further configured to:
the control instructions comprise view frustum adjustment instructions and/or range adjustment instructions; the view frustum adjustment instruction is used for adjusting the view frustum of the corresponding virtual camera, and the range adjustment instruction is used for adjusting the projection area range of the corresponding virtual camera;
or, alternatively,
for a 3D object, if one operation instruction for the 3D object has already been acquired and another operation instruction for the same 3D object is then acquired, the priorities of the two operation instructions are determined based on the pre-stored priorities of the clients, and the operation instruction with the higher priority is executed on the 3D object.
In a third aspect, the present application provides a cloud rendering system.
The cloud rendering system provided by the application specifically adopts the following technical scheme:
a cloud rendering system comprising any one of the cloud rendering servers of the second aspect above and a plurality of clients connected to the cloud server.
In summary, the present application at least includes the following beneficial effects:
1. the multi-user cooperation method based on the 3D cloud rendering, the cloud rendering server and the cloud rendering system are provided, and the occupation of cloud server resources can be reduced;
2. the client can control the view frustum and the projection area range of the corresponding virtual camera, giving high usability;
3. when operation instructions for the same 3D object conflict, the operation to execute is determined preemptively or based on the priority of the operation instructions, giving high reliability.
It should be understood that what is described in this summary section is not intended to limit key or critical features of the embodiments of the application, nor is it intended to limit the scope of the application. Other features of the present application will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present application will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters denote like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of a conventional 3D cloud rendering technique;
FIG. 2 illustrates a schematic diagram of a multi-client mode 3D cloud rendering technique;
FIG. 3 illustrates a schematic diagram of a multi-client collaborative mode 3D cloud rendering technique;
FIG. 4 illustrates an exemplary operating environment in which embodiments of the present application can operate;
FIG. 5 shows a flowchart of a multi-user collaboration method based on 3D cloud rendering in an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a binding principle of a client and a virtual camera;
fig. 7 shows a schematic diagram of a client controlling a sub-scene picture of a corresponding virtual camera;
FIG. 8 shows a control schematic when a 3D object is superimposed on a sub-scene screen of a client;
fig. 9 is a block diagram illustrating a cloud rendering server in an embodiment of the present application;
fig. 10 shows a block diagram of a cloud rendering system in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The application provides a multi-user cooperation method based on 3D cloud rendering, a cloud server and a cloud rendering system, which can reduce the occupation of cloud server resources.
FIG. 4 illustrates an exemplary operating environment 400 in which embodiments of the present application can operate.
Referring to fig. 4, the operating environment 400 includes a cloud server 410 and a plurality of clients 420 connected to it over a network. The cloud server 410 is used by an operator and the clients 420 by users; a user connects a client 420 to the cloud server 410 to use the 3D cloud rendering service.
Fig. 5 shows a flowchart of a multi-user collaboration method 500 based on 3D cloud rendering in an embodiment of the present application. Method 500 may be performed by cloud server 410 in fig. 4.
Referring to fig. 5, the method 500 specifically includes the following steps:
s510: a plurality of virtual cameras corresponding to a plurality of clients 420 one to one are configured in 3D scene data, so that the clients receive sub-scene pictures acquired by the corresponding virtual cameras.
S520: and acquiring a control instruction sent by the client 420, so as to change the content of the sub-scene picture acquired by the corresponding virtual camera according to the control instruction.
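Steps S510 and S520 can be sketched as follows: one shared scene, one virtual camera per client, and no per-client application instance. All names in this sketch (`SceneData`, `VirtualCamera`, the instruction format) are invented for illustration.

```python
# Toy sketch of S510/S520: one shared 3D scene; each client gets its own
# lightweight virtual camera, so no duplicate application instances and no
# cross-instance data synchronization are needed.

class VirtualCamera:
    def __init__(self, client_id):
        self.client_id = client_id
        self.position = [0.0, 0.0, 0.0]

class SceneData:
    """One shared 3D scene; cameras are independent views into it."""
    def __init__(self):
        self.cameras = {}

    def configure_cameras(self, client_ids):
        # S510: one virtual camera per client, in one-to-one correspondence.
        for cid in client_ids:
            self.cameras[cid] = VirtualCamera(cid)

    def handle_instruction(self, client_id, instruction):
        # S520: a client's control instruction only affects its own camera,
        # and therefore only its own sub-scene picture.
        cam = self.cameras[client_id]
        if instruction["type"] == "move":
            cam.position = instruction["to"]
        return cam.position

scene = SceneData()
scene.configure_cameras(["client-A", "client-B"])
scene.handle_instruction("client-A", {"type": "move", "to": [5.0, 0.0, 0.0]})
print(scene.cameras["client-B"].position)  # client-B untouched -> [0.0, 0.0, 0.0]
```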
Specifically, the 3D scene data may be preconfigured in the cloud server 410, and each 3D scene data corresponds to a specific 3D scene. One kind of 3D scene data or multiple kinds may be pre-stored in the cloud server 410. The cloud server 410 also retains the rights and functions to add, delete and modify each 3D scene data, so that the operator can independently select and configure the 3D scene data.
For each 3D scene data, the cloud server 410 configures a plurality of virtual cameras. Each virtual camera is independently controllable and acquires a picture in the 3D scene data to obtain a sub-scene picture. Once a virtual camera is bound to a client 420, it can, based on control instructions sent by the client 420, move in the 3D scene data (changing position and shooting direction), operate the 3D objects in its sub-scene picture, adjust its view frustum, adjust its projection range, and so on. That is, the control instructions may include an operation instruction for changing a 3D object in the sub-scene picture acquired by the corresponding virtual camera, a view frustum adjustment instruction for adjusting the view frustum of the corresponding virtual camera, and a range adjustment instruction for adjusting the projection area range of the corresponding virtual camera.
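Dispatching these instruction kinds to a camera could look like the following sketch. The camera fields (`fov_degrees`, `projection_range`) and the instruction format are assumptions made for illustration, not part of the patented scheme.

```python
# Toy dispatch of the instruction kinds named above: motion, operation,
# view frustum adjustment, and projection-range adjustment.

class VirtualCamera:
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]
        self.direction = [0.0, 0.0, 1.0]
        self.fov_degrees = 60.0        # view frustum opening angle (assumed field)
        self.projection_range = 100.0  # how far the camera "sees" (assumed field)

def apply_instruction(cam, instr):
    kind = instr["type"]
    if kind == "motion":      # change position and/or shooting direction
        cam.position = instr.get("position", cam.position)
        cam.direction = instr.get("direction", cam.direction)
    elif kind == "frustum":   # view frustum adjustment instruction
        cam.fov_degrees = instr["fov"]
    elif kind == "range":     # projection-area-range adjustment instruction
        cam.projection_range = instr["range"]
    elif kind == "operate":   # operation instruction, forwarded to the 3D-object layer
        return ("operate", instr["object_id"], instr["action"])
    return None

cam = VirtualCamera()
apply_instruction(cam, {"type": "frustum", "fov": 90.0})
apply_instruction(cam, {"type": "range", "range": 250.0})
print(cam.fov_degrees, cam.projection_range)  # -> 90.0 250.0
```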
It will be appreciated that a virtual camera can be analogized to a person with a real camera, and the 3D scene data to a real scene. The virtual camera moving in the 3D scene data is analogous to the person carrying the camera through the actual scene, changing the real camera's position and shooting direction; the virtual camera operating 3D objects within the scene is analogous to the person manipulating objects in the real scene, e.g. changing an object's state or location; and the virtual camera adjusting its view frustum and projection range is analogous to the person adjusting the real camera's field of view and shooting range.
Of course, additional conditions may be attached to the virtual camera operating a 3D object in the sub-scene picture according to an operation instruction of the client 420. For example, the virtual camera may be allowed to operate a 3D object only when the distance between the virtual camera and the object to be operated is less than a distance set value. The operable actions of each 3D object are stored in the cloud server 410 in advance and bound one by one to the operation keys of the client 420, so as to implement operations on the 3D object. A distance set value may be configured for each operable action of the same 3D object, and only when the distance from the virtual camera to the 3D object is less than that value can the corresponding client 420 successfully perform the action on the 3D object.
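The per-action distance condition can be sketched in a few lines. The action names and distance values below are invented examples of the "distance set value per operable action" configuration described above.

```python
# Toy check of the distance condition: each operable action of a 3D object
# carries its own distance set value, and the operation succeeds only when
# the camera is closer than that value.

import math

# Assumed per-action distance set values for one 3D object.
ACTION_MAX_DISTANCE = {"open": 3.0, "inspect": 10.0}

def can_operate(camera_pos, object_pos, action):
    dist = math.dist(camera_pos, object_pos)  # Euclidean camera-object distance
    return dist < ACTION_MAX_DISTANCE[action]

cam_pos, obj_pos = (0.0, 0.0, 0.0), (5.0, 0.0, 0.0)
print(can_operate(cam_pos, obj_pos, "open"))     # -> False (5.0 is not < 3.0)
print(can_operate(cam_pos, obj_pos, "inspect"))  # -> True  (5.0 < 10.0)
```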
Fig. 6 shows a schematic diagram of the binding principle of the client 420 and the virtual camera.
Referring to fig. 6, a virtual camera shoots the 3D scene data to obtain a sub-scene picture reflecting its captured content. Each sub-scene picture passes through off-screen rendering, frame capture and compression encoding, and the streaming media service before entering the network layer. Each sub-scene picture corresponds to an ID at the network layer; by specifying an ID, a client 420 can bind the designated virtual camera so as to receive and display the designated sub-scene picture, and also obtains control rights over that virtual camera and sub-scene picture. In other words, by sending control instructions the client 420 can control the designated virtual camera, the designated sub-scene picture and the 3D objects within it.
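The ID-based binding can be sketched as a small registry. `StreamRegistry` and its methods are invented names; the point is only that a network-layer ID maps a client to one camera's stream and control rights.

```python
# Toy sketch of fig. 6's binding: each virtual camera's stream is published
# under a network-layer ID, and a client binds to a camera by that ID.

class StreamRegistry:
    def __init__(self):
        self._by_id = {}  # stream ID -> camera name
        self._bound = {}  # stream ID -> client that holds display/control rights

    def publish(self, stream_id, camera_name):
        self._by_id[stream_id] = camera_name

    def bind(self, client, stream_id):
        """Give `client` display and control rights over the camera behind `stream_id`."""
        if stream_id not in self._by_id:
            raise KeyError(f"unknown stream {stream_id}")
        self._bound[stream_id] = client
        return self._by_id[stream_id]

reg = StreamRegistry()
reg.publish("stream-17", "camera-17")
camera = reg.bind("client-A", "stream-17")
print(camera)  # -> camera-17
```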
Fig. 7 shows a schematic diagram of the client 420 controlling the sub-scene picture of the corresponding virtual camera.
Referring to fig. 7, when the virtual camera moves, the content within its shooting range follows in real time from one area of the 3D scene data to another, so the content of the sub-scene picture changes. When the virtual camera enlarges its projection area range, the content within its shooting range changes in real time from a smaller area of the 3D scene data to a larger one, so the content of the sub-scene picture again changes. When the virtual camera both moves to the right and enlarges its projection area, the content of the sub-scene picture changes correspondingly in real time (as shown in fig. 7).
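A one-dimensional toy makes the effect concrete: treat the sub-scene picture as the interval of a scene axis the camera covers, so moving and enlarging the range both change its content. The numbers are arbitrary illustrations.

```python
# 1-D illustration of fig. 7: the sub-scene picture is the interval of the
# scene covered by the camera, so moving right and enlarging the projection
# range both change what the client sees.

def visible_interval(center, half_range):
    """Interval of a 1-D scene axis covered by the camera."""
    return (center - half_range, center + half_range)

before = visible_interval(center=10.0, half_range=5.0)
after = visible_interval(center=14.0, half_range=8.0)  # moved right + enlarged
print(before)  # -> (5.0, 15.0)
print(after)   # -> (6.0, 22.0)
```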
Fig. 8 shows a control schematic when a 3D object is superimposed on a sub-scene screen of the client 420.
Referring to fig. 8, when the sub-scene pictures of two virtual cameras overlap, the same 3D object can appear in both. When the clients 420 corresponding to the two virtual cameras operate the same 3D object, a preemptive mode may be adopted (the first acquired instruction wins); alternatively, priorities of the virtual cameras or of the clients' operation instructions may be pre-stored in the cloud server 410, and when more than two operation instructions exist for the same 3D object, the priorities of the virtual cameras and/or clients 420 that issued them are compared and the operation instruction with the higher priority is executed.
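The priority rule can be sketched as below. The priority table and client names are invented; the rule itself, executing the higher-priority client's instruction on a contested object, is the one described above.

```python
# Toy resolution of fig. 8's conflict: two operation instructions target the
# same 3D object, and the one from the client with the higher pre-stored
# priority is executed.

CLIENT_PRIORITY = {"client-A": 2, "client-B": 5}  # higher number = higher priority

def resolve(op1, op2):
    """Return the operation instruction that should be executed on the object."""
    p1 = CLIENT_PRIORITY[op1["client"]]
    p2 = CLIENT_PRIORITY[op2["client"]]
    return op1 if p1 >= p2 else op2

op_a = {"client": "client-A", "action": "open_door"}
op_b = {"client": "client-B", "action": "close_door"}
winner = resolve(op_a, op_b)
print(winner["action"])  # -> close_door (client-B has the higher priority)
```

A purely preemptive mode would instead keep whichever instruction arrived first and discard the later one.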
Of course, if the same 3D object has a plurality of operable actions, any two of them may or may not be allowed to exist simultaneously. For actions allowed to exist simultaneously, the clients 420 corresponding to two or more virtual cameras can operate the 3D object at the same time. For actions not allowed to exist simultaneously, the situation is handled the same way as two instructions for the same action: which operation instruction is valid is determined preemptively or according to priority.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the embodiments of the present application are further described below by way of apparatus embodiments.
Fig. 9 shows a block diagram of a cloud rendering server 900 in an embodiment of the present application. Referring to fig. 9, the cloud rendering server 900 includes:
a resource configuration module 910, configured to configure a plurality of virtual cameras in a 3D scene data, where the virtual cameras correspond to a plurality of clients 420 one to one, so that the clients 420 receive sub-scene pictures acquired by the corresponding virtual cameras; and
the picture determining module 920 is configured to obtain the control instruction sent by the client 420, so as to change the content of the sub-scene picture acquired by the corresponding virtual camera according to the control instruction.
Further, the picture determining module 920 is further configured to: the control instructions comprise motion instructions; the motion instruction is used for controlling the corresponding virtual camera to move in the 3D scene data.
Further, the picture determining module 920 is further configured to: the control instructions comprise operation instructions; the operation instruction is used for changing the 3D object in the sub-scene picture acquired by the corresponding virtual camera.
Further, the picture determining module 920 is further configured to:
the control instructions comprise view frustum adjustment instructions and/or range adjustment instructions; the view frustum adjustment instruction is used for adjusting the view frustum of the corresponding virtual camera, and the range adjustment instruction is used for adjusting the projection area range of the corresponding virtual camera;
or, alternatively,
for a 3D object, if one operation instruction for the 3D object has already been acquired and another operation instruction for the same 3D object is then acquired, the priorities of the two operation instructions are determined based on the pre-stored priorities of the clients 420, and the operation instruction with the higher priority is executed on the 3D object.
Fig. 10 shows a block diagram of a cloud rendering system 1000 in an embodiment of the present application. Referring to fig. 10, a cloud rendering system 1000 may include a cloud rendering server 900 in fig. 9 and a plurality of clients 420 in fig. 4.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A multi-user cooperation method based on 3D cloud rendering is applied to a cloud server (410), wherein a plurality of clients (420) are connected to the cloud server (410); characterized in that the method comprises:
configuring a plurality of virtual cameras corresponding to a plurality of clients (420) one by one in 3D scene data so that the clients (420) receive sub-scene pictures acquired by the corresponding virtual cameras;
and acquiring a control instruction sent by the client (420) so as to change the content of the sub-scene picture acquired by the corresponding virtual camera according to the control instruction.
2. The method of claim 1, wherein the control instructions comprise motion instructions; the motion instruction is used for controlling the corresponding virtual camera to move in the 3D scene data.
3. The method according to claim 1 or 2, wherein the control instruction comprises an operation instruction; the operation instruction is used for changing the 3D object in the sub-scene picture acquired by the corresponding virtual camera.
4. The method of claim 3, wherein the control instructions comprise view frustum adjustment instructions and/or range adjustment instructions; the view frustum adjustment instruction is used for adjusting the view frustum of the corresponding virtual camera, and the range adjustment instruction is used for adjusting the projection area range of the corresponding virtual camera.
5. The method according to claim 3, wherein the changing the content of the sub-scene picture acquired by the corresponding virtual camera according to the control instruction comprises:
for a 3D object, if one operation instruction for the 3D object exists and another operation instruction for the 3D object is acquired, the priorities of the two operation instructions are judged based on the pre-stored priority of the client (420), and the operation instruction with higher priority is executed for the 3D object.
6. A cloud rendering server (900), comprising:
the resource allocation module (910) is configured to allocate a plurality of virtual cameras in one-to-one correspondence with a plurality of clients (420) in 3D scene data, so that the clients (420) receive sub-scene pictures acquired by the corresponding virtual cameras; and
and the picture determining module (920) is used for acquiring the control instruction sent by the client (420) so as to change the content of the sub-scene picture acquired by the corresponding virtual camera according to the control instruction.
7. The cloud rendering server (900) of claim 6, wherein the picture determining module (920) is further configured to: the control instructions comprise motion instructions; the motion instruction is used for controlling the corresponding virtual camera to move in the 3D scene data.
8. The cloud rendering server (900) of claim 6 or 7, wherein the picture determining module (920) is further configured to: the control instructions comprise operation instructions; the operation instruction is used for changing the 3D object in the sub-scene picture acquired by the corresponding virtual camera.
9. The cloud rendering server (900) of claim 8, wherein the picture determining module (920) is further configured to:
the control instructions comprise view frustum adjustment instructions and/or range adjustment instructions; the view frustum adjustment instruction is used for adjusting the view frustum of the corresponding virtual camera, and the range adjustment instruction is used for adjusting the projection area range of the corresponding virtual camera;
or, alternatively,
for a 3D object, if one operation instruction for the 3D object exists and another operation instruction for the 3D object is acquired, the priorities of the two operation instructions are judged based on the pre-stored priority of the client (420), and the operation instruction with higher priority is executed for the 3D object.
10. A cloud rendering system comprising the cloud rendering server (900) according to any one of claims 6 to 9 and a plurality of clients (420) connected to the cloud rendering server (900).
CN202210830788.3A 2022-07-15 2022-07-15 Multi-user cooperation method based on 3D cloud rendering, cloud rendering server and cloud rendering system Pending CN115082610A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210830788.3A CN115082610A (en) 2022-07-15 2022-07-15 Multi-user cooperation method based on 3D cloud rendering, cloud rendering server and cloud rendering system

Publications (1)

Publication Number Publication Date
CN115082610A 2022-09-20

Family

ID=83259021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210830788.3A Pending CN115082610A (en) 2022-07-15 2022-07-15 Multi-user cooperation method based on 3D cloud rendering, cloud rendering server and cloud rendering system

Country Status (1)

Country Link
CN (1) CN115082610A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200137057A1 (en) * 2018-10-24 2020-04-30 Servicenow, Inc. Feedback framework
CN111951366A (en) * 2020-07-29 2020-11-17 北京蔚领时代科技有限公司 Cloud native 3D scene game method and system
CN113382221A (en) * 2021-05-14 2021-09-10 异现实科技(成都)有限公司 Single-instance multi-terminal cloud rendering system and method thereof
CN113592992A (en) * 2021-08-09 2021-11-02 郑州捷安高科股份有限公司 Rendering method and device for rail transit simulation driving
CN113625869A (en) * 2021-07-15 2021-11-09 北京易智时代数字科技有限公司 Large-space multi-person interactive cloud rendering system
CN114174993A (en) * 2019-08-16 2022-03-11 思科技术公司 Optimizing cluster applications in a cluster infrastructure


Similar Documents

Publication Publication Date Title
CN103324457B (en) Terminal and multi-task data display method
US10019213B1 (en) Composition control method for remote application delivery
CN113395478B (en) Method, system and storage medium for providing high resolution video stream
US8375301B2 (en) Network displays and method of their operation
US9741316B2 (en) Method and system for displaying pixels on display devices
US7401116B1 (en) System and method for allowing remote users to specify graphics application parameters for generation of interactive images
RU2368940C2 (en) Synchronised graphic data and region data for systems of remote handling graphic data
US11750674B2 (en) Ultra-low latency remote application access
US11240403B2 (en) Compensation for delay in PTZ camera system
EP3657824A2 (en) System and method for multi-user control and media streaming to a shared display
US9258516B2 (en) News production system with display controller
US11445229B2 (en) Managing deep and shallow buffers in a thin-client device of a digital media distribution network
US9456177B2 (en) Video conference data generation
CN115082610A (en) Multi-user cooperation method based on 3D cloud rendering, cloud rendering server and cloud rendering system
KR101877034B1 (en) System and providing method for multimedia virtual system
US8046698B1 (en) System and method for providing collaboration of a graphics session
KR101839542B1 (en) Signage server and method for controlling video using the same
WO2017096377A1 (en) Managing deep and shallow buffers in a thin-client device of a digital media distribution network
Repplinger et al. URay: A flexible framework for distributed rendering and display
Marino et al. Description and performance analysis of a distributed rendering architecture for virtual environments
KR102241240B1 (en) Method and apparatus for image processing
Repplinger et al. A Flexible Adaptation Service for Distributed Rendering.
CN117608431A (en) Control method and device
CN116996629A (en) Multi-video equipment acquisition display method and system thereof
CN117424931A (en) Cloud computing system and method based on image interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination