CN115908670A - Digital twin model distributed rendering method, device and system in production scene - Google Patents

Digital twin model distributed rendering method, device and system in production scene

Info

Publication number
CN115908670A
Authority
CN
China
Prior art keywords
rendering
data
entity
bounding box
digital twin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211421446.2A
Other languages
Chinese (zh)
Inventor
舒亮
高居建
杨艳芳
王奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN202211421446.2A priority Critical patent/CN115908670A/en
Publication of CN115908670A publication Critical patent/CN115908670A/en

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to a digital twin model distributed rendering method, device and system in a production scene, in the technical field of scene rendering. The method comprises the following steps: dividing screen space according to the picture corresponding to each rendering end; obtaining model data of the moving entities in a production workshop; determining the entity range and entity center from the model data and constructing bounding box data; screening, according to the bounding box data, the moving entities that need synchronization; determining the projection matrix and observation matrix of the moving entities to be synchronized; constructing view frustum information from the projection matrix and observation matrix; acquiring the motion state information of the moving entities to be synchronized; and rendering those entities in the corresponding screen space according to the view frustum information and the motion state information. The method achieves real-time, accurate synchronization of dynamic models in a complex production environment, significantly improves the rendering efficiency of the digital twin workshop, and outputs high-resolution pictures while guaranteeing fluency.

Description

Digital twin model distributed rendering method, device and system in production scene
Technical Field
The invention belongs to the technical field of scene rendering, and particularly relates to a digital twin model distributed rendering method, device and system in a production scene.
Background
Digital twinning is one of the key enabling technologies for smart manufacturing. It creates a virtual model of a physical entity in digital form and, relying on technologies such as data analysis, multi-domain simulation and all-round monitoring, builds a multi-dimensional, multi-scale spatio-temporal mapping of the physical object over its whole life cycle, adds or extends new functions for the physical entity, and deepens the integration and application of information technology in manufacturing. In a complex production scene the digital twin models are of many types and in large number, and the total volume of virtual models that must be rendered in real time is very large: the scene contains various operating devices responsible for process actions, clamping and other functions, and each device comprises numerous fine parts. In addition, logic data on production, processes and the like must be introduced into the scene, so many models are in motion; the volume of action-logic information is large, the number of dynamic models is large, and the overhead of rendering and logic calculation is huge, placing great pressure on computer hardware. These problems make it difficult for the system's calculation results to be fed back in real time and greatly degrade the rendering effect of the scene. To address them, the invention provides a digital twin model distributed rendering method, device and system in a production scene, which achieve real-time, accurate synchronization of dynamic models in a complex production environment, significantly improve the rendering efficiency of the digital twin workshop, and output high-resolution pictures while guaranteeing fluency.
Disclosure of Invention
The invention aims to provide a method, a device and a system for rendering a digital twin model in a production scene in a distributed manner, which meet the requirements of real-time and accurate synchronization of a dynamic model in a complex production environment, remarkably improve the rendering efficiency of a digital twin workshop and output a high-resolution picture under the condition of ensuring fluency.
In order to achieve the purpose, the invention provides the following scheme:
a digital twin model distributed rendering method in a production scene comprises the following steps:
dividing a screen space according to a picture corresponding to a rendering end;
acquiring model data of a moving entity in a production workshop;
determining an entity range and an entity center according to the model data;
constructing bounding box data according to the entity range and the entity center;
screening the moving entities needing synchronization according to the bounding box data;
determining a projection matrix and an observation matrix of the moving entity needing synchronization;
constructing view cone information according to the projection matrix and the observation matrix;
acquiring the motion state information of the motion entity needing to be synchronized;
and rendering the moving entity needing to be synchronized in a corresponding screen space according to the view cone information and the motion state information.
Optionally, after the step of "dividing the screen space according to the picture corresponding to the rendering end", and before the step of "obtaining model data of the moving entity in the production workshop", the method further includes:
and numbering the moving entities in the production workshop.
Optionally, the method further includes:
and calculating the relation between the moving entity and the screen space according to the projection matrix and the observation matrix.
Optionally, the screening, according to bounding box data, a moving entity that needs to be synchronized specifically includes:
determining 8 vertexes of a bounding box according to the bounding box data;
determining homogeneous coordinates of corresponding points in a screen space according to the 8 vertexes of the bounding box;
judging whether at least one of the homogeneous coordinates meets a preset judgment condition;
and if so, the motion entities corresponding to the bounding box data need to be synchronized.
A digital twin model distributed rendering device in a production scene comprises: a control end, a transmission end, a rendering end and a display end;
the control end, the rendering end and the display end are connected in sequence;
the control end is used for determining and distributing a rendering task and transmitting the rendering task to the rendering end through a transmission end;
and the rendering end is used for analyzing the rendering task and transmitting the rendered picture to the display end.
Optionally, the rendering task is in a data packet format, where the data packet includes a check data area, a fixed data area, and a variable data area;
the check data area comprises a check identifier and a frame number; the fixed data area comprises a viewpoint observation matrix and a projection matrix of each frame; the variable data area includes a status data type and status data.
Optionally, the rendering end is configured to parse the rendering task, and specifically includes:
checking whether the check identifier is correct, and if not, discarding the data packet corresponding to the rendering task;
and checking whether the frame numbers are matched, and if not, discarding the data packet corresponding to the rendering task.
Optionally, the rendering end is further configured to determine whether the type of the state data is heartbeat data.
Optionally, if the number of the motion entities that need to be synchronized is zero, the variable data area is empty.
A complex production scenario-oriented digital twin model distributed rendering system, comprising:
the screen dividing module is used for dividing the screen space according to the picture corresponding to the rendering end;
the model data acquisition module is used for acquiring model data of a moving entity in a production workshop;
the entity range and entity center determining module is used for determining the entity range and the entity center according to the model data;
the bounding box data construction module is used for constructing bounding box data according to the entity range and the entity center;
the motion entity screening module is used for screening the motion entities needing to be synchronized according to the bounding box data;
the projection matrix and observation matrix determining module is used for determining the projection matrix and the observation matrix of the moving entity needing to be synchronized;
the viewing cone constructing module is used for constructing viewing cone information according to the projection matrix and the observation matrix;
a motion state obtaining module, configured to obtain motion state information of the motion entity that needs to be synchronized;
and the rendering module is used for rendering the moving entity needing to be synchronized in the corresponding screen space according to the view cone information and the motion state information.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the method divides the scene data into the logic data and the real-time rendering data, the logic data is distributed at the control end, and the rendering data is distributed at the rendering end, so that multi-machine parallel rendering is realized. Meanwhile, synchronization of rendering of a dynamic model at a rendering end is used as an entry point, a designed GPU multi-thread mechanism is combined, logic data and motion entity synchronous judgment and calculation are achieved, then a synchronous data packet including workshop state data, observation and projection matrixes and the like is constructed, and a master-slave synchronous data transmission and analysis mechanism is combined, so that rapid and accurate synchronization of motion models is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of a digital twin model distributed rendering method in a production scenario according to the present invention;
FIG. 2 is a view of a cone projective transformation according to the present invention;
FIG. 3 is a schematic view of a single-node rendering canonical view space according to the present invention;
FIG. 4 is a diagram of a distributed rendering four-channel canonical view space of the present invention;
FIG. 5 is a diagram illustrating a synchronous data packet format according to the present invention;
FIG. 6 is a flow chart of transferring data to video memory according to the present invention;
FIG. 7 is a schematic diagram of a multi-threaded fast calculation process according to the present invention;
FIG. 8 is a flowchart illustrating bounding box based synchronization determination in accordance with the present invention;
FIG. 9 is a schematic diagram of a synchronous data transmission and analysis process according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a digital twin model distributed rendering method, device and system in a production scene, which meet the requirements of real-time and accurate synchronization of dynamic models in a complex production environment, remarkably improve the rendering efficiency of a digital twin workshop and output a high-resolution picture under the condition of ensuring fluency.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The overall method scheme of the invention is shown in figure 1, and the specific scheme is as follows:
step 1: and building a distributed hardware architecture. The master-slave structure mainly comprises a control end, a transmission layer, a rendering end and a display end. The specific functions of each component object are as follows:
(1) Control end: holds the logic data of the scene, including dynamic data such as the production line's processes, steps and workflows; performs scene collision detection and dynamic logic calculation; computes the rendering-task data, namely the observation matrix and projection matrix of each frame's viewpoint; transmits the logic results and rendering tasks to the rendering ends over the network; and keeps the rendering ends logically synchronized while managing and distributing the rendering tasks.
(2) Rendering end: holds the rendering data of the scene and is responsible for parsing the synchronization data; completes model rendering through rendering-pipeline stages such as view-frustum culling, vertex shading and geometry shading; and transmits the rendered picture to the display end.
(3) Transmission layer: builds the local area network, connects the control end and the rendering ends, and transmits rendering tasks and logic information.
(4) Display end: displays the stitched rendering result.
Step 2: Divide the rendering-end screens. By the principle of the three-dimensional rendering pipeline, a 3D model in the scene is transformed through a series of coordinate matrices and drawn on a two-dimensional plane. In this process, model vertices are transformed from model space to clip space by the view and projection matrices, then normalized to device coordinates (NDC) and mapped to screen space. In the distributed rendering system, screen space is divided according to the picture each rendering end is responsible for, so that multiple rendering ends cooperatively draw the same frame; by scaling, translating and rotating the projection matrix of the original single-machine system, the distribution of rendering tasks is completed quickly. A multi-screen rendering task allocation method is thus obtained from matrix transformations of the projection matrix.
In the rendering engine, NDC is the canonical volume whose length, width and height each range over [-1, 1], i.e. from (-1, -1, -1) to (1, 1, 1). As shown in fig. 2, a point A(x, y, z) inside the view frustum is projected onto the near clipping plane z = N to obtain the projection point A1(x1, y1, z), where x1 = xN/z, y1 = yN/z and z = N. A1 is then normalized into the canonical view space to obtain the point A2(x', y', z'), where the lower-left corner of the frustum's near clipping plane is (L, B, N) and the upper-right corner is (R, T, N). From this projection principle, the projection matrix is as follows:
Denoting the far-plane distance by F and taking clip-space w = z (consistent with x1 = xN/z above), mapping [L, R] x [B, T] x [N, F] onto the canonical volume gives:

    P = | 2N/(R-L)    0           -(R+L)/(R-L)    0           |
        | 0           2N/(T-B)    -(T+B)/(T-B)    0           |
        | 0           0            (F+N)/(F-N)   -2FN/(F-N)   |
        | 0           0            1               0           |
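As a numerical cross-check of this projection, a minimal sketch with numpy (the function name and parameter values are illustrative, not from the patent; the convention assumed is the one above, with the camera looking down +z and clip-space w = z):

```python
import numpy as np

def frustum_projection(L, R, B, T, N, F):
    """Perspective projection for a frustum whose near plane z = N has
    lower-left corner (L, B) and upper-right corner (R, T); maps the
    frustum to the canonical [-1, 1]^3 volume with clip-space w = z."""
    return np.array([
        [2*N/(R-L), 0.0,       -(R+L)/(R-L),  0.0],
        [0.0,       2*N/(T-B), -(T+B)/(T-B),  0.0],
        [0.0,       0.0,        (F+N)/(F-N), -2*F*N/(F-N)],
        [0.0,       0.0,        1.0,          0.0],
    ])

P = frustum_projection(-1, 1, -1, 1, 1, 100)
clip = P @ np.array([0.5, 0.5, 1.0, 1.0])   # a point on the near plane z = N
ndc = clip / clip[3]                         # perspective divide
# near-plane points land at NDC z = -1, far-plane points at NDC z = +1
```

Points strictly outside [L, R] x [B, T] on the near plane fall outside the [-1, 1] NDC range after the divide, which is what the screen-division step below exploits.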
as shown in fig. 3 and 4, assuming that the original single machine system number is Node0, the rendering ends are numbered according to the rendering spatial position, and taking 4 rendering ends as an example, the numbers are Node1, node2, node3, and Node4.Node1 renders a model positioned at the upper left of the original screen, node2 renders a model positioned at the upper right of the original screen, node3 renders a model positioned at the lower left of the original screen, and Node4 renders a model positioned at the lower right of the original screen. The view space of the original single machine system is compared with the view cone standard view space of the distributed rendering system, wherein the view space range of the model on the left upper side of the original screen in the original system after projection transformation is (-1,0, -1) to (0,1,1), the rendering task can be successfully distributed to the Node1 Node only by transforming to the NDC coordinate, and on the basis of the original p matrix, the projection matrixes corresponding to 4 nodes are as follows: (1) node1 point, R =0, b =0; (2) node2 point, B =0, l =0; (3) node3 point, R =0, t =0; (4) node4 point, T =0, l =0.
Step 3: Design the data structure. The data packet format designed here consists of a check data area, a fixed data area and a variable data area, as shown in fig. 5. The check data area contains the check identifier and the frame number: the check identifier tests the validity of the packet, and the frame number corresponds to the frame sequence number of the current scene; all rendering ends must hold the same frame number at the same moment. The fixed data area carries the rendering-task data, comprising the viewpoint observation matrix and projection matrix of each frame, and is mainly used to synchronize the rendering tasks of the rendering ends. The variable data area consists of state data types and state data: if the number of moving entities to be synchronized is zero, this area is empty; if there are twin dynamic entities requiring synchronization, their state data type and corresponding state data are appended to it. The control end packages the data uniformly into packets and sends them to the rendering ends, which perform the subsequent parsing and execution.
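A minimal sketch of such a packet using Python's struct module. The byte layout, field widths and the MAGIC check value are illustrative assumptions; the patent specifies only the three areas (check, fixed, variable), not a concrete byte encoding:

```python
import struct

MAGIC = 0x44545231   # hypothetical check identifier; not from the patent

def pack_sync_packet(frame_no, view, proj, states):
    """Serialise one synchronisation packet: check area (identifier +
    frame number), fixed area (4x4 view and projection matrices as 32
    floats), variable area (one state record per entity to synchronise;
    empty when no moving entity needs synchronising)."""
    buf = struct.pack("<II", MAGIC, frame_no)                # check data area
    buf += struct.pack("<32f", *(list(view) + list(proj)))   # fixed data area
    buf += struct.pack("<H", len(states))                    # variable data area
    for stype, entity_no in states:       # e.g. (b"T", 3): entity 3 is in
        buf += struct.pack("<cI", stype, entity_no)  # the transport state
    return buf

def read_check_area(buf, expected_frame):
    """Rendering-end validation: reject on bad identifier or frame number."""
    magic, frame_no = struct.unpack_from("<II", buf, 0)
    return magic == MAGIC and frame_no == expected_frame

identity = [1.0 if i % 5 == 0 else 0.0 for i in range(16)]  # 4x4 identity, row-major
pkt = pack_sync_packet(7, identity, identity, [(b"T", 3)])
```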
Table 1 shows the state data and their associated events. The state data of a synchronization target is divided into four states: the transport state (T), at a position on the buffer conveyor; the transition state (S), about to enter a processing unit; the processing state (W), inside a processing unit; and the completion state (D), processing finished and leaving the unit. E is the associated event triggered by the different states, set as follows:
TABLE 1 State data and associated events
Figure BDA0003941105230000061
Where n is a synchronization target number, i represents a processing unit, and j is the number of steps included in the processing unit. Table 2 is a specific semantic interpretation of table 1.
TABLE 2 State data and associated event interpretation
Figure BDA0003941105230000062
Figure BDA0003941105230000071
Step 4: Acquire and transmit data. Number all twin moving entities in the scene, obtain the twin entity model data, and construct a bounding-box data structure from each twin moving entity's center and range. Then obtain the projection matrix P and the observation matrix V, which together form the view-frustum information. For models with dynamic logic attributes, the motion trajectory can be preset in CPU memory, and the motion state information is formed from it. As shown in table 3, buffers for the above three kinds of data are applied for in video memory in advance, together with a result buffer for storing the results of the logic calculation. Fig. 6 shows the flow of transferring data to video memory: the model data is preprocessed, variable space is declared in the buffers, and logic information such as the bounding boxes and view frusta is obtained and transmitted to video memory through the buffer arrays.
TABLE 3 cache data Structure
Figure BDA0003941105230000072
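The pre-allocated buffers can be sketched as plain arrays (a CPU-side stand-in for the video-memory buffers; the field names and widths are illustrative assumptions, since Table 3 is reproduced only as an image in the original):

```python
import numpy as np

def allocate_buffers(n_entities):
    """Pre-allocate one slot per twin moving entity: bounding-box data
    (centre + range), view-frustum information (projection and view
    matrices), motion state, and a result buffer for the logic output."""
    return {
        "bbox":    np.zeros((n_entities, 6), dtype=np.float32),  # cx cy cz rx ry rz
        "frustum": np.zeros((2, 4, 4), dtype=np.float32),        # P and V matrices
        "state":   np.zeros(n_entities, dtype=np.int32),         # motion state code
        "result":  np.zeros(n_entities, dtype=np.bool_),         # needs-sync flags
    }

bufs = allocate_buffers(128)
```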
Step 5: Entity space calculation based on multithreading. Fig. 7 shows the multithreaded fast calculation process: the CPU loads the model data into memory, obtains the logic data such as the bounding boxes, and transmits it to the video-memory buffers; several sub-threads are then started in the GPU, each of which reads the logic data from the buffers, applies the corresponding matrix transformation to a moving entity, determines whether that entity needs to be synchronized, and filters out the objects that do not. Finally, the calculation results are stored in the buffer to await transmission to the rendering ends, completing the multithreaded fast calculation.
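The dispatch-and-filter step can be sketched on the CPU with a thread pool standing in for the GPU sub-threads (the function name and the `check` callable are illustrative; the patent performs this judgment on GPU threads):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sync_filter(entities, check, workers=8):
    """Apply the per-entity synchronisation check in parallel and keep
    only the entities that need synchronising, as would be written to
    the result buffer."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flags = list(pool.map(check, entities))
    return [e for e, keep in zip(entities, flags) if keep]

# toy check: keep entities whose position lies inside the unit box
kept = parallel_sync_filter([(0.2, 0.0, 0.0), (5.0, 0.0, 0.0)],
                            lambda p: all(abs(c) <= 1 for c in p))
```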
As shown in fig. 8, the bounding-box-based synchronization judgment runs in the sub-threads: the 8 vertices of the bounding box are obtained from its center point and range; each vertex is transformed by the state and projection matrices and normalized to obtain the homogeneous coordinate P of the corresponding screen-space point, which is compared against the judgment condition. If at least one vertex of the bounding box satisfies the condition, the moving entity corresponding to that bounding box needs to be synchronized; if none does, the entity contributes nothing to rendering and need not be synchronized.
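A sketch of this judgment, assuming the preset condition is membership in the canonical view volume (tested in clip space as -w <= x, y, z <= w, which avoids the division); the function names are illustrative:

```python
import itertools
import numpy as np

def bbox_vertices(center, extent):
    """The 8 corners of an axis-aligned bounding box, from its centre
    point and half-range along each axis."""
    c, e = np.asarray(center, float), np.asarray(extent, float)
    return [c + np.array(s) * e
            for s in itertools.product((-1.0, 1.0), repeat=3)]

def needs_sync(center, extent, view, proj):
    """An entity must be synchronised if at least one bounding-box
    vertex falls inside the view volume; if all 8 are outside, the
    entity is invisible to this rendering node and is skipped."""
    vp = proj @ view
    for v in bbox_vertices(center, extent):
        x, y, z, w = vp @ np.append(v, 1.0)   # homogeneous clip coordinates
        if w > 0 and -w <= x <= w and -w <= y <= w and -w <= z <= w:
            return True
    return False
```

With identity view and projection matrices, a box around the origin passes the test and a box centred far outside the [-1, 1] volume fails it.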
Step 6: Synchronous data transmission and parsing.
Fig. 9 shows the flow of synchronous data transmission and parsing. The control end calculates and packages the synchronization packet, writes it into the data buffer, and sends it through the network layer; upon receiving the synchronization data, the rendering end performs the following operations: (1) Check whether the check identifier is correct; if not, discard the packet. (2) Check whether the frame numbers match; if not, discard the packet. (3) Read the rendering task and obtain the viewpoint position and projection matrix of the view frustum. (4) Determine the state data type: if it is heartbeat data, all data requiring synchronization in the current frame has been transmitted, and the rendering end waits for the control end to send the swap-frame-buffer instruction; if it is the state information of a synchronized object, the rendering end parses the associated event according to table 1 and updates that object's state accordingly. Meanwhile, the camera projection matrix and the viewpoint in the scene are set according to the rendering-task data, completing the rendering task of the current frame. Finally, once all rendering ends finish drawing, the swap-frame-buffer instruction is transmitted so that all rendering ends display the picture synchronously.
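The four receive-side steps can be sketched as follows, using dict-based stand-ins for the parsed packet and the local scene state (all field names are illustrative assumptions):

```python
def handle_packet(pkt, expected_frame, scene):
    """One rendering end's handling of a synchronisation packet,
    following steps (1)-(4) of the flow above."""
    if pkt["check_id"] != scene["check_id"]:
        return "discard"                    # (1) check identifier wrong
    if pkt["frame_no"] != expected_frame:
        return "discard"                    # (2) frame numbers do not match
    # (3) read the rendering task: viewpoint and projection of the frustum
    scene["view"], scene["proj"] = pkt["view"], pkt["proj"]
    if pkt["state_type"] == "heartbeat":    # (4) all frame data received;
        return "await-swap"                 # wait for the swap-buffer command
    for ev in pkt["events"]:                # otherwise update each synchronised
        scene["objects"][ev["n"]] = ev["state"]   # object from its event
    return "render"

scene = {"check_id": 0xA5, "objects": {}}
ok = {"check_id": 0xA5, "frame_no": 5, "view": "V", "proj": "P",
      "state_type": "state", "events": [{"n": 3, "state": "W"}]}
result = handle_packet(ok, 5, scene)   # entity 3 enters the processing state
```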
It should be noted that the above scheme implements distributed rendering by synchronizing model state. Distributed rendering can also be performed at the level of the model rendering pipeline: for example, during the computer rendering pipeline process, rendering data such as vertices and triangles can be partitioned and transmitted to multiple computers for rendering calculation, after which the results are gathered; this approach likewise provides a reference for distributed rendering.
Based on the above scheme, the present invention further provides a complex production scene-oriented digital twin model distributed rendering system, which includes:
and the screen dividing module is used for dividing the screen space according to the picture corresponding to the rendering end.
And the model data acquisition module is used for acquiring the model data of the moving entity in the production workshop.
And the entity range and entity center determining module is used for determining the entity range and the entity center according to the model data.
And the bounding box data construction module is used for constructing bounding box data according to the entity range and the entity center.
And the moving entity screening module is used for screening the moving entities needing to be synchronized according to the bounding box data.
And the projection matrix and observation matrix determining module is used for determining the projection matrix and the observation matrix of the moving entity needing to be synchronized.
And the viewing cone constructing module is used for constructing viewing cone information according to the projection matrix and the observation matrix.
And the motion state acquisition module is used for acquiring the motion state information of the motion entity needing to be synchronized.
And the rendering module is used for rendering the moving entity needing to be synchronized in the corresponding screen space according to the view cone information and the motion state information.
The invention also discloses the following technical effects:
1. The distributed-rendering digital twin workshop of the invention achieves a seamless splicing effect at the screen seams between nodes, even when a dynamic model with dynamic attributes lies across a seam, with no tearing or distortion of the picture.
2. According to the invention, a plurality of rendering ends in the digital twin workshop of distributed rendering can correctly receive rendering tasks of the same frame under the action of the control end.
3. The logic of each rendering end is consistent with that of the control end, and no logic confusion occurs.
4. The multithreaded calculation technique of the invention accelerates the logic calculation at the master end and improves the rate of synchronous data transmission, while the synchronization judgment technique reduces the scale of the rendering data and accelerates rendering calculation.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A digital twin model distributed rendering method in a production scene is characterized by comprising the following steps:
dividing a screen space according to a picture corresponding to a rendering end;
acquiring model data of a moving entity in a production workshop;
determining an entity range and an entity center according to the model data;
constructing bounding box data according to the entity range and the entity center;
screening the moving entities needing synchronization according to the bounding box data;
determining a projection matrix and an observation matrix of the moving entity needing to be synchronized;
constructing view cone information according to the projection matrix and the observation matrix;
acquiring the motion state information of the motion entity needing to be synchronized;
and rendering the moving entity needing to be synchronized in a corresponding screen space according to the view cone information and the motion state information.
2. The distributed rendering method of the digital twin model in the production scene according to claim 1, wherein after the step of dividing the screen space according to the picture corresponding to the rendering end, and before the step of obtaining the model data of the moving entity in the production workshop, the method further comprises:
the moving entities in the production workshop are numbered.
3. The distributed rendering method for the digital twin model in the production scene as recited in claim 1, further comprising:
and calculating the relation between the moving entity and the screen space according to the projection matrix and the observation matrix.
4. The distributed rendering method of the digital twin model under the production scenario as claimed in claim 1, wherein the screening of the moving entities that need to be synchronized according to the bounding box data specifically comprises:
determining 8 vertices of a bounding box from the bounding box data;
determining homogeneous coordinates of corresponding points of a screen space according to the 8 vertexes of the bounding box;
judging whether at least one of the homogeneous coordinates meets a preset judgment condition;
and if so, the motion entities corresponding to the bounding box data need to be synchronized.
5. A digital twin model distributed rendering device in a production scene, characterized by comprising: a control end, a transmission end, a rendering end and a display end;
the control end, the rendering end and the display end are connected in sequence;
the control end is used for determining and distributing a rendering task and transmitting the rendering task to the rendering end through a transmission end;
and the rendering end is used for analyzing the rendering task and transmitting the rendered picture to the display end.
6. The distributed rendering device for a digital twin model in a production scene according to claim 5, wherein the rendering task takes the form of a data packet, and the data packet comprises a check data area, a fixed data area and a variable data area;
the check data area comprises a check identifier and a frame number; the fixed data area comprises the viewpoint observation matrix and the projection matrix of each frame; and the variable data area comprises a state data type and state data.
7. The distributed rendering device for a digital twin model in a production scene according to claim 6, wherein the rendering end being used for parsing the rendering task specifically comprises:
checking whether the check identifier is correct, and if not, discarding the data packet corresponding to the rendering task;
and checking whether the frame numbers match, and if not, discarding the data packet corresponding to the rendering task.
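The three-area packet of claim 6 and the validation of claim 7 can be sketched with a concrete byte layout. The layout below (a 4-byte magic identifier, 32 little-endian floats for the two 4x4 matrices, a length-prefixed variable area, and the `MAGIC` constant) is entirely an illustrative assumption — the patent does not disclose the actual wire format:

```python
import struct

MAGIC = 0x44545231  # hypothetical check identifier

def pack_render_task(frame_no, view, proj, state_type, state_data):
    """Serialize a rendering task: check area (identifier + frame number),
    fixed area (observation + projection matrices), variable area
    (state data type + length-prefixed state data)."""
    flat = [e for row in view for e in row] + [e for row in proj for e in row]
    head = struct.pack("<II", MAGIC, frame_no)       # check data area
    fixed = struct.pack("<32f", *flat)               # fixed data area
    var = struct.pack("<BI", state_type, len(state_data)) + state_data
    return head + fixed + var

def parse_render_task(packet, expected_frame):
    """Parse and validate; returns None (packet discarded) when the
    check identifier is wrong or the frame number does not match."""
    magic, frame_no = struct.unpack_from("<II", packet, 0)
    if magic != MAGIC:
        return None                  # bad check identifier: discard
    if frame_no != expected_frame:
        return None                  # frame number mismatch: discard
    matrices = struct.unpack_from("<32f", packet, 8)
    state_type, n = struct.unpack_from("<BI", packet, 8 + 128)
    state_data = packet[8 + 128 + 5 : 8 + 128 + 5 + n]
    return frame_no, matrices[:16], matrices[16:], state_type, state_data
```

Per claim 9, when no moving entity needs synchronization the variable area would simply carry a zero length and empty `state_data`.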
8. The distributed rendering device for a digital twin model in a production scene according to claim 6, wherein the rendering end is further used for judging whether the state data type is heartbeat data.
9. The distributed rendering device for a digital twin model in a production scene according to claim 6, wherein if the number of moving entities that need to be synchronized is zero, the data portion of the variable data area is empty.
10. A distributed rendering system for a digital twin model oriented to complex production scenes, characterized by comprising:
a screen division module, used for dividing the screen space according to the pictures corresponding to the rendering ends;
a model data acquisition module, used for acquiring model data of the moving entities in the production workshop;
an entity range and entity center determination module, used for determining the entity range and the entity center according to the model data;
a bounding box data construction module, used for constructing bounding box data according to the entity range and the entity center;
a moving entity screening module, used for screening out the moving entities that need to be synchronized according to the bounding box data;
a projection matrix and observation matrix determination module, used for determining the projection matrix and the observation matrix of the moving entities that need to be synchronized;
a view frustum construction module, used for constructing view frustum information according to the projection matrix and the observation matrix;
a motion state acquisition module, used for acquiring motion state information of the moving entities that need to be synchronized;
and a rendering module, used for rendering the moving entities that need to be synchronized in the corresponding screen space according to the view frustum information and the motion state information.
CN202211421446.2A 2022-11-14 2022-11-14 Digital twin model distributed rendering method, device and system in production scene Pending CN115908670A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211421446.2A CN115908670A (en) 2022-11-14 2022-11-14 Digital twin model distributed rendering method, device and system in production scene

Publications (1)

Publication Number Publication Date
CN115908670A true CN115908670A (en) 2023-04-04

Family

ID=86493339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211421446.2A Pending CN115908670A (en) 2022-11-14 2022-11-14 Digital twin model distributed rendering method, device and system in production scene

Country Status (1)

Country Link
CN (1) CN115908670A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117572771A (en) * 2023-11-22 2024-02-20 Beijing Jingneng Clean Energy Power Co., Ltd. Inner Mongolia Branch Digital twin system parameter control method and system
CN117574691A (en) * 2024-01-17 2024-02-20 Xiangjiang Laboratory Virtual entity data system construction method and related equipment
CN117574691B (en) * 2024-01-17 2024-05-14 Xiangjiang Laboratory Virtual entity data system construction method and related equipment

Similar Documents

Publication Publication Date Title
CN115908670A (en) Digital twin model distributed rendering method, device and system in production scene
WO2020228511A1 (en) Image occlusion processing method, device, apparatus and computer storage medium
Banerjee et al. Virtual manufacturing
JP4902748B2 (en) Computer graphic shadow volume with hierarchical occlusion culling
CN106600672B (en) A kind of network-based distributed synchronization rendering system and method
US20220377432A1 (en) Detecting latency anomalies from pipeline components in cloud-based systems
US6806876B2 (en) Three dimensional rendering including motion sorting
CN107481291B (en) Traffic monitoring model calibration method and system based on physical coordinates of marked dotted lines
CN111932663B (en) Parallel drawing method based on multi-level asymmetric communication management
JPH09244522A (en) Method and device for undergoing virtual building
CN109920043B (en) Stereoscopic rendering of virtual 3D objects
CN106570923A (en) Frame rendering method and device
Luo et al. An Internet-enabled image-and model-based virtual machining system
CN112489225A (en) Method and device for fusing video and three-dimensional scene, electronic equipment and storage medium
CN115174805A (en) Panoramic stereo image generation method and device and electronic equipment
CN115797991A (en) Method for generating recognizable face image according to face side image
Regan et al. An interactive graphics display architecture
US6346939B1 (en) View dependent layer ordering method and system
CN113205599B (en) GPU accelerated video texture updating method in video three-dimensional fusion
US6559844B1 (en) Method and apparatus for generating multiple views using a graphics engine
CN107274449B (en) Space positioning system and method for object by optical photo
CN110211239B (en) Augmented reality method, apparatus, device and medium based on label-free recognition
WO2012173304A1 (en) Graphical image processing device and method for converting a low-resolution graphical image into a high-resolution graphical image in real time
CN112911260A (en) Multimedia exhibition hall sand table projection display system
JP7278720B2 (en) Generation device, generation method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination